patent_id | description | length |
---|---|---|
11943171 | DETAILED DESCRIPTION The subject matter of embodiments of the invention is described with specificity herein to meet statutory requirements. However, the description itself is not intended to limit the scope of this patent. Rather, the inventors have contemplated that the claimed subject matter might be embodied in other ways, to include different steps or combinations of steps similar to the ones described in this document, in conjunction with other present or future technologies. Moreover, although the terms “step” and/or “block” may be used herein to connote different elements of methods employed, the terms should not be interpreted as implying any particular order among or between various steps herein disclosed unless and except when the order of individual steps is explicitly described. Throughout this disclosure, several acronyms and shorthand notations are employed to aid the understanding of certain concepts pertaining to the associated system and services. These acronyms and shorthand notations are intended to help provide an easy methodology of communicating the ideas expressed herein and are not meant to limit the scope of embodiments described in the present disclosure. Various technical terms are used throughout this description. An illustrative resource that fleshes out various aspects of these terms can be found in Newton's Telecom Dictionary, 25th Edition (2009). As used herein, the term “node” is used to refer to network access technology, such as eNode, gNode, etc. In other aspects, the term “node” may be used to refer to one or more antennas being used to communicate with a user device. Embodiments of the present technology may be embodied as, among other things, a method, system, or computer-program product. Accordingly, the embodiments may take the form of a hardware embodiment, or an embodiment combining software and hardware. An embodiment takes the form of a computer-program product that includes computer-useable instructions embodied on one or more computer-readable media. Computer-readable media include both volatile and nonvolatile media, removable and nonremovable media, and contemplate media readable by a database, a switch, and various other network devices. Network switches, routers, and related components are conventional in nature, as are means of communicating with the same. By way of example, and not limitation, computer-readable media comprise computer-storage media and communications media. Computer-storage media, or machine-readable media, include media implemented in any method or technology for storing information. Examples of stored information include computer-useable instructions, data structures, program modules, and other data representations. Computer-storage media include, but are not limited to RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile discs (DVD), holographic media or other optical disc storage, magnetic cassettes, magnetic tape, magnetic disk storage, and other magnetic storage devices. These memory components can store data momentarily, temporarily, or permanently. Communications media typically store computer-useable instructions—including data structures and program modules—in a modulated data signal. The term “modulated data signal” refers to a propagated signal that has one or more of its characteristics set or changed to encode information in the signal. Communications media include any information-delivery media. 
By way of example but not limitation, communications media include wired media, such as a wired network or direct-wired connection, and wireless media such as acoustic, infrared, radio, microwave, spread-spectrum, and other wireless media technologies. Combinations of the above are included within the scope of computer-readable media. By way of background, a traditional telecommunications network employs a plurality of base stations (i.e., cell sites, cell towers) to provide network coverage. The base stations are employed to broadcast and transmit transmissions to user devices of the telecommunications network. An access point may be considered to be a portion of a base station that may comprise an antenna, a radio, and/or a controller. In aspects, an access point is defined by its ability to communicate with a user equipment (UE), such as a wireless communication device (WCD), according to a single protocol (e.g., 3G, 4G, LTE, 5G, and the like); however, in other aspects, a single access point may communicate with a UE according to multiple protocols. As used herein, a base station may comprise one access point or more than one access point. Factors that can affect the telecommunications transmission include, e.g., location and size of the base stations, and frequency of the transmission, among other factors. The base stations are employed to broadcast and transmit transmissions to user devices of the telecommunications network. Traditionally, the base station establishes uplink (or downlink) transmission with a mobile handset over a single frequency that is exclusive to that particular uplink connection (e.g., an LTE connection with an eNodeB). In this regard, typically only one active uplink connection can occur per frequency. The base station may include one or more sectors served by individual transmitting/receiving components associated with the base station (e.g., antenna arrays controlled by an eNodeB). These transmitting/receiving components together form a multi-sector broadcast arc for communication with mobile handsets linked to the base station. As employed herein, a UE (also referenced herein as a user device) or WCD can include any device employed by an end-user to communicate with a wireless telecommunications network. A UE can include a mobile device, a mobile broadband adapter, or any other communications device employed to communicate with the wireless telecommunications network. A UE, as one of ordinary skill in the art may appreciate, generally includes one or more antenna coupled to a radio for exchanging (e.g., transmitting and receiving) transmissions with a nearby base station. In aspects, a UE provides location and channel quality information to the wireless communication network via the access point. Location information may be based on a current or last known position utilizing GPS or other satellite location services, terrestrial triangulation, an access point's physical location, or any other means of obtaining coarse or fine location information. Channel quality information may indicate a realized uplink and/or downlink transmission data rate, observed signal-to-interference-plus-noise ratio (SINR) and/or signal strength at the user device, or throughput of the connection. Channel quality information may be provided via, for example, an uplink pilot time slot, downlink pilot time slot, sounding reference signal, channel quality indicator (CQI), rank indicator, precoding matrix indicator, or some combination thereof. 
Channel quality information may be determined to be satisfactory or unsatisfactory, for example, based on exceeding or being less than a threshold. Location and channel quality information may take into account the user device capability, such as the number of antennas and the type of receiver used for detection. Processing of location and channel quality information may be done locally, at the access point or at the individual antenna array of the access point. In other aspects, the processing of said information may be done remotely. The present disclosure is directed to systems, methods, and computer readable media for scheduling PRBs in non-standard channel sizes. Generally, a scheduling component is configured to perform several operations, including assigning resources to users out of a set of PRBs. The number and size of PRBs available are predetermined based upon the size of bandwidth owned or controlled by an individual carrier. The bandwidths available for selection are based on standardized amounts. For example, the PRBs available may be in increments of 5 MHz (e.g. 5 MHz, 10 MHz, 15 MHz, etc.). A wireless carrier may own or control noncontiguous and contiguous portions of a spectrum. Normally, an operator instructs the scheduler or scheduling component to use a standard size up to the largest contiguous bandwidth that meets one of the standard amounts. For example, if a carrier operates or owns 7 MHz contiguously in a spectrum, the scheduler can use a 5 MHz block of the 7 MHz since 5 MHz is a standard bandwidth size. Therefore, PRBs would be allocated for the 5 MHz standard bandwidth. A larger standard bandwidth, such as 10 MHz, cannot be assigned because 3 MHz that is not owned or controlled by the wireless carrier would then be assigned. Utilizing the 3 MHz not owned or controlled by the wireless carrier would then cause interference or would not be permitted. As discussed herein, the present disclosure also provides the ability to identify PRBs that cannot be used or must be avoided in order to prevent interference. Additionally, the present disclosure provides systems and methods that allow the scheduler to allocate PRBs from non-contiguous parts of a carrier-owned or controlled spectrum at the same time to a given user. This PRB aggregation allows a carrier to utilize more of the spectrum owned or controlled by the carrier, increase speeds, and reduce the need for carrier aggregation. A first aspect of the disclosure is directed to a system for scheduling PRBs in non-standard sizes that comprises a scheduling component and a radio component. The scheduling component or a subcomponent within the scheduling component selects a first segment of the spectrum that is available to a wireless carrier for wirelessly communicating with one or more user devices. The first segment of the spectrum has a first bandwidth that is determined to be between a lower standardized carrier bandwidth and an upper standardized carrier bandwidth. The system then determines a first bandwidth differential that is a difference between the upper standardized carrier bandwidth and the first bandwidth. Based on the determination, a plurality of physical resource blocks are scheduled to be allocated for at least a portion of the first bandwidth while no physical resource blocks are scheduled to be allocated for the first bandwidth differential. The schedule of allocation of the plurality of PRBs is communicated to a radio component. 
The radio component is configured to wirelessly communicate the plurality of resource blocks to one or more user devices. In aspects, the first bandwidth differential corresponds to a second segment of the spectrum that is not available to the wireless carrier. In another aspect, a method for assigning PRBs based on available bandwidth is disclosed. In this aspect, a first amount of bandwidth that is carrier-controlled is determined. Then, a standardized bandwidth that is greater than the first amount of bandwidth is determined. A bandwidth differential that is a difference between the standardized bandwidth and the first amount of bandwidth is computed and then PRBs are assigned corresponding to the first amount of bandwidth without assigning PRBs corresponding to the bandwidth differential. The bandwidth differential corresponds to a second amount of bandwidth that is not carrier-controlled. For example, the standardized bandwidth may be 10 MHz and the first amount of bandwidth that is carrier-controlled is determined to be 7 MHz. As such, the bandwidth differential determined is 3 Mhz. In this aspect, no PRBs will be assigned to the bandwidth differential or the 3 MHz. By contrast, all 7 Mhz of the first amount of bandwidth are assigned PRBs. This is an improvement to prior technology, which would have only allowed the standard 5 MHz of the 7 Mhz of the first amount of bandwidth to be assigned PRBs. As a result, a greater amount of carrier-controlled bandwidth can be utilized. In aspects, the first amount of bandwidth and the second amount of bandwidth are located adjacent to one another within a first spectrum. In other aspects, it is contemplated that the first amount of bandwidth and the second amount of bandwidth may not be located adjacent to one another within the first spectrum and may be separated by portions of bandwidth that are owned or operated by another carrier. In yet another aspect, another method for scheduling PRBs in non-standard channel sizes is disclosed. A first amount and a second amount of bandwidth that are carrier-controlled are determined. For the first amount of bandwidth, a first standardized bandwidth that is greater than the first amount of bandwidth is determined. Then, for the second amount of bandwidth, a second standardized bandwidth that is greater than the second amount of bandwidth is determined. Then, a first bandwidth differential that is a difference between the first standardized bandwidth and the first amount of bandwidth is computed. Similarly, a second bandwidth differential that is a difference between the second standardized bandwidth and the second amount of bandwidth is also computed. Based on these computations, PRBs are assigned corresponding to the first amount and the second amount of bandwidth without assigning PRBs corresponding to the first bandwidth differential and the second bandwidth differential. Additionally, the first amount and the second amount of bandwidth may be aggregated. In aspects, the first amount and the second amount of bandwidth may be noncontiguous portions of bandwidth. In aspects, the first standardized bandwidth and second standardized bandwidths are one or more of 5 MHz, 10 MHz, 15 MHz, and 20 MHz. However, in other aspects, standardized bandwidths may be any increment of bandwidth that is established as standard by the applicable governing body. Additionally, the first bandwidth differential and the second bandwidth differential correspond to bandwidth not controlled by the carrier. 
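As a rough illustration of the single-segment method just described, the sketch below determines the next larger standardized bandwidth, computes the differential, and assigns PRBs only to the carrier-controlled amount. The function names, the standardized-bandwidth list, and the PRB width are illustrative assumptions, not values taken from the disclosure.

```python
# Minimal sketch of the single-segment assignment described above.
# Names, the standardized list, and the PRB width are illustrative assumptions.

STANDARD_BANDWIDTHS_MHZ = [5, 10, 15, 20]  # example standardized carrier bandwidths

def next_standard_bandwidth(carrier_bw_mhz):
    """Return the smallest standardized bandwidth greater than the carrier-controlled amount."""
    for std in sorted(STANDARD_BANDWIDTHS_MHZ):
        if std > carrier_bw_mhz:
            return std
    raise ValueError("carrier bandwidth exceeds the largest standardized bandwidth")

def assign_prbs(carrier_bw_mhz, prb_size_mhz=0.2):
    """Assign PRBs only to the carrier-controlled bandwidth; the differential gets none."""
    std_bw = next_standard_bandwidth(carrier_bw_mhz)
    return {
        "standardized_bandwidth_mhz": std_bw,
        "bandwidth_differential_mhz": std_bw - carrier_bw_mhz,
        "assigned_prbs": round(carrier_bw_mhz / prb_size_mhz),  # PRBs for the carrier-controlled amount
        "differential_prbs": 0,                                 # no PRBs scheduled for the differential
    }

# Example from the text: 7 MHz carrier-controlled -> 10 MHz standardized, 3 MHz differential.
print(assign_prbs(7))
```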
Turning toFIG.1, a diagram is depicted of an exemplary computing environment suitable for use in implementations of the present disclosure. In particular, the exemplary computer environment is shown and designated generally as computing device100. Computing device100is but one example of a suitable computing environment and is not intended to suggest any limitation as to the scope of use or functionality of the invention. Neither should computing device100be interpreted as having any dependency or requirement relating to any one or combination of components illustrated. In aspects, the computing device100may be a UE, or other user device, capable of two-way wireless communications with an access point. Some non-limiting examples of the computing device100include a cell phone, tablet, pager, personal electronic device, wearable electronic device, activity tracker, desktop computer, laptop, PC, and the like. The implementations of the present disclosure may be described in the general context of computer code or machine-useable instructions, including computer-executable instructions such as program components, being executed by a computer or other machine, such as a personal data assistant or other handheld device. Generally, program components, including routines, programs, objects, components, data structures, and the like, refer to code that performs particular tasks or implements particular abstract data types. Implementations of the present disclosure may be practiced in a variety of system configurations, including handheld devices, consumer electronics, general-purpose computers, specialty computing devices, etc. Implementations of the present disclosure may also be practiced in distributed computing environments where tasks are performed by remote-processing devices that are linked through a communications network. With continued reference toFIG.1, computing device100includes bus102that directly or indirectly couples the following devices: memory104, one or more processors106, one or more presentation components108, input/output (I/O) ports110, I/O components112, power supply114, radio116, and transmitter118. Bus102represents what may be one or more busses (such as an address bus, data bus, or combination thereof). Although the devices ofFIG.1are shown with lines for the sake of clarity, in reality, delineating various components is not so clear, and metaphorically, the lines would more accurately be grey and fuzzy. For example, one may consider a presentation component such as a display device to be one of I/O components112. Also, processors, such as one or more processors106, have memory. The present disclosure hereof recognizes that such is the nature of the art, and reiterates thatFIG.1is merely illustrative of an exemplary computing environment that can be used in connection with one or more implementations of the present disclosure. Distinction is not made between such categories as “workstation,” “server,” “laptop,” “handheld device,” etc., as all are contemplated within the scope ofFIG.1and refer to “computer” or “computing device.” Computing device100typically includes a variety of computer-readable media. Computer-readable media can be any available media that can be accessed by computing device100and includes both volatile and nonvolatile media, removable and non-removable media. By way of example, and not limitation, computer-readable media may comprise computer storage media and communication media. 
Computer storage media includes both volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer-readable instructions, data structures, program modules or other data. Computer storage media includes RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile disks (DVD) or other optical disk storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices. Computer storage media does not comprise a propagated data signal. Communication media typically embodies computer-readable instructions, data structures, program modules or other data in a modulated data signal such as a carrier wave or other transport mechanism and includes any information delivery media. The term “modulated data signal” means a signal that has one or more of its characteristics set or changed in such a manner as to encode information in the signal. By way of example, and not limitation, communication media includes wired media such as a wired network or direct-wired connection, and wireless media such as acoustic, RF, infrared and other wireless media. Combinations of any of the above should also be included within the scope of computer-readable media. Memory104includes computer-storage media in the form of volatile and/or nonvolatile memory. Memory104may be removable, nonremovable, or a combination thereof. Exemplary memory includes solid-state memory, hard drives, optical-disc drives, etc. Computing device100includes one or more processors106that read data from various entities such as bus102, memory104or I/O components112. One or more presentation components108presents data indications to a person or other device. Exemplary one or more presentation components108include a display device, speaker, printing component, vibrating component, etc. I/O ports110allow computing device100to be logically coupled to other devices including I/O components112, some of which may be built into computing device100. Illustrative I/O components112include a microphone, joystick, game pad, satellite dish, scanner, printer, wireless device, etc. The radio116represents one or more radios that facilitate communication with a wireless telecommunications network. While a single radio116is shown inFIG.1, it is contemplated that there may be more than one radio116coupled to the bus102. In aspects, the radio116utilizes a transmitter118to communicate with the wireless telecommunications network. It is expressly conceived that a computing device with more than one radio116could facilitate communication with the wireless telecommunications network via both the first transmitter118and an additional transmitters (e.g. a second transmitter). Illustrative wireless telecommunications technologies include CDMA, GPRS, TDMA, GSM, and the like. The radio116may additionally or alternatively facilitate other types of wireless communications including Wi-Fi, WiMAX, LTE, 3G, 4G, LTE, 5G, NR, VoLTE, or other VoIP communications. As can be appreciated, in various embodiments, radio116can be configured to support multiple technologies and/or multiple radios can be utilized to support multiple technologies. A wireless telecommunications network might include an array of devices, which are not shown so as to not obscure more relevant aspects of the invention. Components such as a base station, a communications tower, or even access points (as well as other components) can provide wireless connectivity in some embodiments. 
Next,FIG.2provides an exemplary network environment in which implementations of the present disclosure may be employed. Such a network environment is illustrated and designated generally as network environment200. Network environment200is but one example of a suitable network environment and is not intended to suggest any limitation as to the scope of use or functionality of the invention. Neither should the network environment be interpreted as having any dependency or requirement relating to any one or combination of components illustrated. Network environment200includes UEs202,204, and206, access point214(which may be a cell site, base station, or the like), network208, database210, and dynamic PRB assignment engine212. In network environment200, user devices may take on a variety of forms, such as a personal computer (PC), a user device, a smart phone, a smart watch, a laptop computer, a mobile phone, a mobile device, a tablet computer, a wearable computer, a personal digital assistant (PDA), a server, a CD player, an MP3 player, a global positioning system (GPS) device, a video player, a handheld communications device, a workstation, a router, a hotspot, and any combination of these delineated devices, or any other device (such as the computing device100) that communicates via wireless communications with the access point214in order to interact with a public or private network. In some aspects, the UEs202,204, and206can correspond to computing device100inFIG.1. Thus, a user device can include, for example, a display(s), a power source(s) (e.g., a battery), a data store(s), a speaker(s), memory, a buffer(s), a radio(s) and the like. In some implementations, for example, a UE202comprises a wireless or mobile device with which a wireless telecommunication network(s) can be utilized for communication (e.g., voice and/or data communication). In this regard, the user device can be any mobile computing device that communicates by way of a wireless network, for example, a 3G, 4G, 5G, LTE, CDMA, or any other type of network. In some cases, the UEs202,204, and206in network environment200can optionally utilize network208to communicate with other computing devices (e.g., a mobile device(s), a server(s), a personal computer(s), etc.) through access point214. The network208may be a telecommunications network(s), or a portion thereof. A telecommunications network might include an array of devices or components (e.g., one or more base stations), some of which are not shown. Those devices or components may form network environments similar to what is shown inFIG.2, and may also perform methods in accordance with the present disclosure. Components such as terminals, links, and nodes (as well as other components) can provide connectivity in various implementations. Network208can include multiple networks, as well as being a network of networks, but is shown in more simple form so as to not obscure other aspects of the present disclosure. Network208can be part of a telecommunication network that connects subscribers to their immediate service provider. In some instances, network208can be associated with a telecommunications provider that provides services (e.g., LTE) to user devices, such as UE202. For example, network208may provide voice, SMS, and/or data services to user devices or corresponding users that are registered or subscribed to utilize the services provided by a telecommunications provider. 
Network208can comprise any communication network providing voice, SMS, and/or data service(s), such as, for example, a 1× circuit voice, a 3G network (e.g., CDMA, CDMA2000, WCDMA, GSM, UMTS), a 4G network (WiMAX, LTE, HSDPA), or a 5G network. In some implementations, access point214is configured to communicate with a UE, such as UE202, that is located within the geographical area, or cell, covered by radio antennas of access point214. Cell site or access point214may include one or more base stations, base transmitter stations, radios, antennas, antenna arrays, power amplifiers, transmitters/receivers, digital signal processors, control electronics, GPS equipment, and the like. In particular, access point214may selectively communicate with the user devices using dynamic beamforming. As shown, access point214is in communication with dynamic PRB assignment engine212, which comprises various components that are utilized, in various implementations, to assign PRBs based on available bandwidth where non-standard channel sizes may be present. In some implementations, dynamic PRB assignment engine212comprises components including a carrier bandwidth amount determiner216, a bandwidth analyzer218, a bandwidth differential computer220, a PRB assignor222, an aggregator224, a spectrum selector226, a scheduler228, and a radio communicator230. However, in other implementations, more or less components than those shown inFIG.2may be utilized to carry out aspects of the invention described herein. The carrier bandwidth amount determiner216is configured to determine an amount of bandwidth that is carrier-owned or carrier-controlled. As mentioned, a carrier may own or control multiple sections within a spectrum. However, the portions or sections owned or controlled by a carrier may be contiguous or non-contiguous. Additionally, the un-owned or portions not controlled by the carrier may be controlled by a second carrier. For example, in an example spectrum that is 10 MHz in size, the carrier bandwidth amount determiner216will determine that a first carrier may own or control 7 Mhz of the 10 Mhz spectrum. Additionally, the carrier bandwidth amount determiner216will determine that the remaining 3 MHz is not owned or controlled by the first carrier. In some instances, the carrier bandwidth amount determiner216will determine only a first amount of bandwidth that is carrier-controlled. However, in other instances, the carrier bandwidth amount determiner216can determine additional amounts of bandwidth that are carrier-controlled (e.g. a second amount of bandwidth that is carrier-controlled, a third amount of bandwidth that is carrier-controlled, etc.). For example, if a 100 Mhz spectrum is present, the carrier bandwidth amount determiner216may determine a first amount of carrier-controlled bandwidth of 15 Mhz, a second amount of carrier-controlled bandwidth of 6 Mhz. If more than one amount of bandwidth that is carrier-controlled is present, then the carrier bandwidth amount determiner216will determine additional amounts of bandwidth that are carrier-controlled. In instances, any additional amounts of bandwidth that are carrier controlled and additional to the first amount of bandwidth may be located adjacent one another on the spectrum or may be located non-contiguously within the spectrum. 
Additionally, the carrier bandwidth amount determiner216is capable of also determining bandwidths within the spectrum that are not controlled or owned by the carrier and can identify the entity controlling or owning such additional bandwidths. Once at least a first amount of bandwidth that is carrier-controlled is determined by the carrier bandwidth amount determiner216, a bandwidth analyzer218determines a standardized bandwidth that is greater than the first amount of bandwidth. Standardized bandwidths are predetermined bandwidths that a carrier is provided with for selection. For example, the standard bandwidths may exist in increments of 5 MHz. As such, a list of standardized bandwidths available may include 5 MHz, 10 MHz, 15 Mhz, etc. While this example list includes bandwidths that begin with 5 Mhz and each additional standard bandwidth is in multiples of 5 MHz, it is contemplated that any other combination of standardized bandwidths may be available. For example, in other aspects, the standard bandwidths determined by the bandwidth analyzer218may be in increments of 10 Mhz, beginning with 10 Mhz and ending with 50 MHz depending on the size of the spectrum. The standardized bandwidth determined by the bandwidth analyzer218will be greater than the first amount of bandwidth. As such, if the first amount of bandwidth is determined by the carrier bandwidth amount determiner216to be 7 Mhz, then the standardized bandwidth determined by the bandwidth analyzer218would be the standardized bandwidth available that is larger than 7 Mhz. For example, the next larger standardized bandwidth may be 10 Mhz. In this case, the bandwidth analyzer218would determine that a 10 Mhz standardized bandwidth is greater than the 7 Mhz first amount of bandwidth that is carrier-controlled. In instances in which there is a first amount of bandwidth and a second amount of bandwidth determined by the carrier bandwidth amount determiner216, then the bandwidth analyzer218will determine a first standardized bandwidth that is greater than the first amount of bandwidth and a second standardized bandwidth that is greater than the second amount of bandwidth. For example, when the first amount of bandwidth is determined to be 15 Mhz, the bandwidth analyzer218will determine that the first standardized bandwidth that is greater than the first amount of bandwidth of 15 Mhz is 20 Mhz. Similarly, the bandwidth analyzer218will determine a second standardized bandwidth that is greater than the second amount of bandwidth. If the second amount of bandwidth that is carrier controlled was determined to be 6 Mhz, then the bandwidth analyzer218may determine that the second standardized bandwidth that is greater than the 6 Mhz second amount of bandwidth is 10 Mhz. The first amount of bandwidth, second amount of bandwidth, any additional bandwidth amounts, and any standardized bandwidths discussed herein are for example purposes only and non-limiting. It is contemplated that any combination of standardized bandwidths and carrier-controlled bandwidth amounts may be present depending on the spectrum and determinations by the spectrum governing body. The bandwidth differential computer220computes the bandwidth differential after the first amount of bandwidth and the standardized bandwidth have been determined. The bandwidth differential is the difference between the standardized bandwidth and the first amount of bandwidth. 
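One way the roles of the carrier bandwidth amount determiner216and the bandwidth analyzer218described above might be sketched is shown below. The spectrum-map representation, the function names, and the 5 MHz increments are illustrative assumptions rather than the disclosed implementation.

```python
# Illustrative sketch of the determiner/analyzer roles described above.
# The spectrum-map representation and the standardized increments are assumptions.

def carrier_controlled_amounts(spectrum_map, carrier):
    """Return the sizes (MHz) of the contiguous segments controlled by the carrier.

    spectrum_map is an ordered list of (owner, size_mhz) tuples describing a spectrum.
    """
    return [size_mhz for owner, size_mhz in spectrum_map if owner == carrier]

def standardized_bandwidths(increment_mhz=5, largest_mhz=20):
    """Build a list of standardized bandwidths in fixed increments (e.g. 5, 10, 15, 20 MHz)."""
    return list(range(increment_mhz, largest_mhz + 1, increment_mhz))

def next_larger(standard_list, amount_mhz):
    """Pick the smallest standardized bandwidth greater than a carrier-controlled amount."""
    return min(bw for bw in standard_list if bw > amount_mhz)

# Example: 15 MHz and 6 MHz carrier-controlled segments separated by another operator's block.
spectrum = [("carrier_a", 15), ("carrier_b", 6), ("carrier_a", 6)]
standards = standardized_bandwidths()  # [5, 10, 15, 20]
for amount in carrier_controlled_amounts(spectrum, "carrier_a"):
    print(amount, "->", next_larger(standards, amount))  # 15 -> 20, 6 -> 10, as in the text
```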
Continuing with the same example, if the standardized bandwidth is determined to be 10 Mhz and the first amount of bandwidth was determined to be 7 Mhz, then the bandwidth differential computer220will determine that the bandwidth differential is 3 Mhz (10 Mhz-7 Mhz). The bandwidth differential computed by the bandwidth differential computer220may correspond to a second amount of bandwidth that is not carrier-owned or controlled. In instances, it is contemplated that the first amount of bandwidth (e.g. 7 Mhz) and the second amount of bandwidth (e.g. 3 Mhz) are located adjacent to one another within a first spectrum. However, in other instances, the first amount of bandwidth and the second amount of bandwidth may not be contiguous. In cases where there is more than a first amount of bandwidth that is carrier controlled, the bandwidth differential computer220will determine more than one bandwidth differential. Continuing with the example amounts above, if the first standardized bandwidth is determined to be 20 Mhz and the first amount of bandwidth controlled by the carrier is determined to be 15 Mhz, then the first bandwidth differential will be 5 Mhz. The bandwidth differential computer220would also calculate the bandwidth differential for the second amount of bandwidth that is carrier-controlled. As such, the bandwidth differential computer220would calculate that the difference between the second standardized bandwidth (10 Mhz) and the second amount of bandwidth that is carrier controlled is 4 Mhz. Once the bandwidth differential is computed by the bandwidth differential computer220, the PRB assignor222assigns PRBs corresponding to the first amount of bandwidth. However, the bandwidth differential is not assigned any PRBs. As such, the PRB assignor222would assign PRBs corresponding to the 7 Mhz that is the first amount of bandwidth that is carrier controlled. The determined bandwidth differential of 3 Mhz will not be assigned any PRBs by the PRB assignor222. As mentioned previously, only 5 Mhz of the 7 Mhz of the first amount of bandwidth that is carrier-controlled would have been able to be assigned PRBs since 5 Mhz is one of the standard bandwidths. Because new channel sizes require a lengthy process of approval from standard bodies, are expensive to implement on chipsets and software, and cause other complications, it would have been impossible for all 7 Mhz of the first amount of bandwidth that is carrier-controlled to have been allocated PRBs. This would have limited the carrier to only being able to have PRBs assigned to the 5 Mhz corresponding to the standardized bandwidth, even though the carrier controlled 7 Mhz. By contrast, in the present system, all 7 Mhz of the first bandwidth that is carrier-controlled will be able to have PRB resources allocated, which results in the operator being able to utilize more of the carrier-controlled spectrum, increase speeds, and reduce the need for carrier aggregation. In this instance, bandwidth analyzer218would have determined that the standardized bandwidth should be 10 Mhz, rather than 5 Mhz, since 10 Mhz would have been the next larger standardized bandwidth that is greater than the 7 Mhz first amount of bandwidth. Additionally, no PRBs would be assigned by the PRB assignor222to the bandwidth differential or the 3 Mhz. As mentioned, the present disclosure would allow the carrier to utilize a greater amount of carrier-controlled bandwidth (7 Mhz) than it could have previously. 
Returning to the example where more than a first amount of bandwidth that is carrier controlled is determined, the PRB assignor222will assign PRBs corresponding to both the first amount of bandwidth and the second amount of bandwidth that is carrier-controlled. In this case, the PRB assignor222would assign PRBs to the first amount of bandwidth (15 Mhz) and the second amount of bandwidth (6 Mhz). However, the first bandwidth differential (5 Mhz) and the second bandwidth differential (4 Mhz) would not receive allocation of any PRBs. Additionally, in aspects, aggregator224may also aggregate the first amount and the second amount of bandwidth. This may occur when the first amount of bandwidth and the second amount of bandwidth that are carrier-controlled are non-contiguous portions of bandwidth. In other words, as seen inFIG.3, if the first amount of bandwidth of 15 Mhz and the second amount of bandwidth of 6 Mhz are separated by a first amount of bandwidth that is not carrier-controlled (e.g. the 15 MHz first bandwidth is located adjacent to 6 MHz of non-carrier-controlled bandwidth, which is then located adjacent to the second bandwidth of 6 MHz), the aggregator224will aggregate the first and second amounts of bandwidth that are carrier-controlled even though they are not located adjacent one another on the spectrum. Once again, the aggregation of the first amount and the second amount of bandwidth provides for greater utilization of the portion of the spectrum that is carrier controlled. It also increases speeds. In some aspects, a spectrum selector226may select a first segment of spectrum that is available to a wireless carrier for wirelessly communicating with one or more user devices. The first segment of spectrum will have a first bandwidth, which is determined by the carrier bandwidth amount determiner216. In other words, if the spectrum selector226selects segment A of a 100 Mhz spectrum, the carrier bandwidth amount determiner216will determine the size of the first bandwidth. For example, the first segment of the spectrum is determined to have a first bandwidth of 28 Mhz. In this case, the bandwidth analyzer218will determine that the first bandwidth of 28 Mhz is between a lower standardized carrier bandwidth (e.g. 25 Mhz) and an upper standardized carrier bandwidth (e.g. 30 MHz). The bandwidth differential computer220will determine a first bandwidth differential between the upper standardized carrier bandwidth (e.g. 30 Mhz) and the first amount of bandwidth (28 Mhz). The first bandwidth differential will be determined to be 2 Mhz (the 30 Mhz upper standardized carrier bandwidth minus the 28 Mhz first amount of bandwidth). The first bandwidth differential corresponds to a second segment of spectrum that is not available to the wireless carrier. Once the bandwidth differential computer220computes the first bandwidth differential, the scheduler228schedules a plurality of PRBs for at least a portion of the first bandwidth (e.g. the scheduler may schedule a plurality of PRBs to all 28 Mhz of the first bandwidth amount or a portion such as 25 Mhz of the 28 Mhz of the first bandwidth). Additionally, the first bandwidth differential (2 Mhz) will not have any PRBs scheduled. Once scheduled, the radio communicator230will communicate the schedule of the plurality of PRBs to a radio component. The radio component will then wirelessly communicate the PRBs to one or more user devices. 
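The two-segment case just described, computing a differential per segment, assigning PRBs only to the carrier-controlled amounts, and then aggregating the noncontiguous segments, could be sketched as below. The helper names and the assumed PRB width are illustrative, not taken from the disclosure.

```python
# Hedged sketch of per-segment differential computation, PRB assignment and aggregation.
# Helper names and the PRB granularity are illustrative assumptions.

def plan_segment(amount_mhz, standardized_mhz, prb_size_mhz=0.2):
    """Compute the differential for one carrier-controlled segment and assign PRBs to it only."""
    return {
        "amount_mhz": amount_mhz,
        "standardized_mhz": standardized_mhz,
        "differential_mhz": standardized_mhz - amount_mhz,   # no PRBs are scheduled here
        "assigned_prbs": round(amount_mhz / prb_size_mhz),
    }

def aggregate(segment_plans):
    """Aggregate noncontiguous carrier-controlled segments so one user can draw PRBs from all of them."""
    return {
        "total_carrier_mhz": sum(p["amount_mhz"] for p in segment_plans),
        "total_assigned_prbs": sum(p["assigned_prbs"] for p in segment_plans),
    }

# Example from the text: 15 MHz (-> 20 MHz standard, 5 MHz differential)
# and 6 MHz (-> 10 MHz standard, 4 MHz differential), noncontiguous segments.
first = plan_segment(15, 20)
second = plan_segment(6, 10)
print(first["differential_mhz"], second["differential_mhz"])  # 5, 4
print(aggregate([first, second]))                              # 21 MHz of carrier-controlled bandwidth usable
```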
Next,FIG.3illustrates an example illustration300comparing the improvements of the present disclosure to prior PRB allocation methods. First, Spectrum A illustrates a 10 Mhz spectrum. Of the 10 Mhz of Spectrum A, 7 Mhz is determined to be carrier-owned or controlled and 3 Mhz is un-owned and unusable. As mentioned, in prior PRB allocation methods and systems, only 5 Mhz of the 7 Mhz could have had PRBs allocated since the first standardized bandwidth would have been 5 Mhz. By contrast, when the present systems and methods disclosed herein are implemented as shown at306, the first standardized bandwidth that is greater than the first amount of bandwidth is determined. In this case, the first standardized bandwidth greater than the first amount of bandwidth would be a 10 Mhz standardized bandwidth. The bandwidth differential computer220would compute the bandwidth differential between the first standardized bandwidth (10 Mhz) and the first amount of bandwidth (7 Mhz). In this case, the first bandwidth differential would be 3 Mhz. As shown at306, the PRB assignor222would then assign PRBs to the 7 Mhz, or the first amount of bandwidth, but not assign any PRBs to the first bandwidth differential or the 3 Mhz. As illustrated, this allows a greater amount of the carrier-owned spectrum to be utilized as 7 Mhz can be assigned PRBs instead of being limited to only assigning PRBs to the 5 Mhz. In the previous instances, 2 Mhz of the 7 Mhz first amount of bandwidth that is carrier controlled would have remained wasted and unused. However, the present disclosure provides systems and methods for increasing the utilization of the carrier-owned spectrum such that all of the first amount of bandwidth that is carrier-controlled shown at306will be assigned PRBs. As mentioned, in some instances, there may be more than one amount of bandwidth that is carrier controlled. As shown by Spectrum B303inFIG.3, a first amount310(15 Mhz), a second amount312(6 MHz), and a third amount314(63 Mhz) are designated as carrier-controlled. Unowned portions316(6 Mhz) and318(10 Mhz) are also present in Spectrum B. Additionally, as seen, the first amount of bandwidth310, second amount of bandwidth312, and third amount of bandwidth314that are carrier-controlled are non-contiguous and separated by the unowned portions316and318. Previously, due to the limitations of assigning PRBs, only the third amount of bandwidth314would have been used, since 60 Mhz of it would have corresponded to a standardized bandwidth. As such, 3 Mhz of the 63 Mhz third amount of bandwidth would not have been utilized in any way. This would have left the first amount of bandwidth310and second amount of bandwidth312unused and wasted as well. Therefore, of the 84 Mhz in total that is carrier-controlled in Spectrum B303, only 60 Mhz would have been utilized. By contrast, when the features of the present disclosure are implemented in Spectrum B303, it results in the use of all 84 Mhz in Spectrum B303that are carrier-controlled. More specifically, the carrier bandwidth amount determiner216will determine the first amount of bandwidth310(15 Mhz), second amount of bandwidth312(6 Mhz), and third amount of bandwidth314(63 Mhz) that are carrier controlled. The bandwidth analyzer218will determine a first standardized bandwidth that is greater than the first amount of bandwidth (e.g. 20 Mhz), a second standardized bandwidth that is greater than the second amount of bandwidth (e.g. 
10 Mhz) and a third standardized bandwidth that is greater than the third amount of bandwidth (e.g. 65 MHz). The bandwidth analyzer218may determine the best standardized bandwidth to be 100 MHz, which covers the first amount of bandwidth310, the second amount of bandwidth312, and the third amount of bandwidth314. Of this 100 MHz carrier, the PRBs belonging to the two unowned spectrum blocks would all belong to the bandwidth differential and would not be scheduled. The bandwidth differential includes a determined amount of spectrum (e.g. 3 MHz) and all the resources (e.g. PRBs) associated with the unowned spectrum. By assigning PRBs corresponding only to the first amount of bandwidth310, second amount of bandwidth312, and third amount of bandwidth314that are carrier-controlled, and then aggregating the first amount of bandwidth310, second amount of bandwidth312, and third amount of bandwidth314that are noncontiguous portions within Spectrum B303, all 84 MHz that are carrier controlled will be utilized as shown at320and there will be no wasting of resources as there would have been in prior implementations, where the first amount310, second amount312, and a portion (3 Mhz) of the third amount314would have been wasted, resulting in a waste of 24 Mhz and usage of only 60 Mhz. FIG.4depicts a flow diagram of an exemplary method400for dynamically assigning PRBs based on available bandwidth. Initially, at block410, the first amount of bandwidth that is carrier controlled is determined. Then, at block420, a standardized bandwidth that is greater than the first amount of bandwidth is determined. Based on that, a bandwidth differential is computed at block430. The bandwidth differential is the difference between the standardized bandwidth and the first amount of bandwidth. Following this, at block440, PRBs are assigned corresponding to the first amount of bandwidth and no PRBs are assigned to the bandwidth differential. FIG.5depicts another flow diagram of another example method500for assigning PRBs based on available bandwidth. In method500, a first amount and a second amount of bandwidth that are carrier-controlled are determined at block510. Then, at block520, a first standardized bandwidth that is greater than the first amount of bandwidth is determined for the first amount of bandwidth. Similarly, at block530, a second standardized bandwidth that is greater than the second amount of bandwidth is determined for the second amount of bandwidth. Then, a first bandwidth differential that is a difference between the first standardized bandwidth and the first amount of bandwidth is computed at block540. A second bandwidth differential that is a difference between the second standardized bandwidth and the second amount of bandwidth is also computed at block550. Based on such computations, PRBs corresponding to the first amount and the second amount of bandwidth are assigned without assigning PRBs corresponding to the first bandwidth differential and the second bandwidth differential at block560. Then the first amount and the second amount of bandwidth are aggregated at block570, and the first amount and the second amount of bandwidth are noncontiguous portions of bandwidth. FIG.6illustrates an example flow diagram of another method600for scheduling PRBs in non-standardized channel sizes. As shown, method600begins when a scheduling component selects a first segment of spectrum available to a wireless carrier for wirelessly communicating with one or more user devices at block610. 
Then, it is determined that the first bandwidth is between a lower standardized carrier bandwidth and an upper standardized carrier bandwidth at block620. Following this, a first bandwidth differential that is a difference between the upper standardized carrier bandwidth and the first bandwidth is determined at block630. Based on such determinations, a plurality of physical resource blocks are scheduled for at least a portion of the first bandwidth and no physical resource blocks are scheduled for the first bandwidth differential at block640. The schedule of the plurality of PRBs is communicated to a radio component at block650and then the radio component wirelessly communicates the plurality of resource blocks to one or more user devices at block660. Many different arrangements of the various components depicted, as well as components not shown, are possible without departing from the scope of the claims below. Embodiments of our technology have been described with the intent to be illustrative rather than restrictive. Alternative embodiments will become apparent to readers of this disclosure after and because of reading it. Alternative means of implementing the aforementioned can be completed without departing from the scope of the claims below. Certain features and subcombinations are of utility and may be employed without reference to other features and subcombinations and are contemplated within the scope of the claims. | 46,628 |
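The scheduling flow of blocks610through660could be sketched roughly as follows, using the 28 Mhz example from the description. The names, the standardized-bandwidth list, and the assumed PRB width are illustrative assumptions rather than the disclosed implementation.

```python
# Illustrative sketch of the FIG. 6 flow (blocks 610-660); names and values are assumptions.

STANDARD_BANDWIDTHS_MHZ = [5, 10, 15, 20, 25, 30]  # example standardized list

def bracket(first_bw_mhz):
    """Find the lower and upper standardized carrier bandwidths around the first bandwidth (block 620)."""
    lower = max(bw for bw in STANDARD_BANDWIDTHS_MHZ if bw <= first_bw_mhz)
    upper = min(bw for bw in STANDARD_BANDWIDTHS_MHZ if bw > first_bw_mhz)
    return lower, upper

def schedule_first_segment(first_bw_mhz, prb_size_mhz=0.2):
    """Schedule PRBs for the first bandwidth only; the differential gets no PRBs (blocks 630-640)."""
    lower, upper = bracket(first_bw_mhz)
    return {
        "first_bandwidth_mhz": first_bw_mhz,
        "lower_std_mhz": lower,
        "upper_std_mhz": upper,
        "differential_mhz": upper - first_bw_mhz,
        "scheduled_prbs": round(first_bw_mhz / prb_size_mhz),
    }

def communicate_to_radio(schedule):
    """Stand-in for block 650: hand the PRB schedule to the radio component for transmission."""
    print("radio component transmitting", schedule["scheduled_prbs"], "PRBs")

# Example from the text: a 28 MHz first bandwidth sits between 25 MHz and 30 MHz,
# giving a 2 MHz differential that receives no PRBs.
schedule = schedule_first_segment(28)
communicate_to_radio(schedule)
```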
11943172 | DESCRIPTION OF EMBODIMENTS In the following, technical solutions of embodiments of the present application will be described in combination with the drawings in embodiments of the present application. Obviously, the described embodiments are parts, but not all, of embodiments of the present application. On the basis of embodiments of the present application, all of other embodiments obtained by the ordinary persons skilled in the art without creative effort shall fall into the protection scope of the present application. The technical solutions of embodiments of the present application can be applied to various communication systems, such as: a global system of mobile communication (Global System of Mobile Communication, GSM) system, a code division multiple access (Code Division Multiple Access, CDMA) system, a wideband code division multiple access (Wideband Code Division Multiple Access, WCDMA) system, a general packet radio service (General Packet Radio Service, GPRS), a long term evolution (Long Term Evolution, LTE) system, an LTE frequency division duplex (Frequency Division Duplex, FDD) system, an LTE time division duplex (Time Division Duplex, TDD), a universal mobile telecommunication system (Universal Mobile Telecommunication System, UMTS), a worldwide interoperability for microwave access (Worldwide Interoperability for Microwave Access, WiMAX) communication system or a 5G system, etc. For example, a communication system100applied in an embodiment of the present application is shown inFIG.1. The communication system100may include a network device110, which can be a device in communication with a terminal120(or referred to as a communication terminal or a terminal). The network device110can provide communication coverage for a specific geographic area, and can communicate with terminals located in the coverage area. Optionally, the network device110may be a base station (Base Transceiver Station, BTS) in a GSM system or in a CDMA system, a base station (NodeB, NB) in a WCDMA system, an evolutional base station (Evolutional Node B, eNB or eNodeB) in an LTE system, or a wireless controller in a cloud radio access network (Cloud Radio Access Network, CRAN); or, the network device may be a mobile switching center, a relay station, an access point, an in-vehicle device, a wearable device, a hub, a switch, a bridge, a router, a network side device in the 5G network, or a network device in a public land mobile network (Public Land Mobile Network, PLMN) in future evolution, etc. The communication system100further includes at least one terminal120located within the coverage area of the network device110. The “terminal” used here includes, but is not limited to, connections via a wired line, such as via a public switched telephone network (Public Switched Telephone Network, PSTN), a digital subscriber line (Digital Subscriber Line, DSL), a digital cable, and a direct cable connection; and/or via another data connection/network; and/or via a wireless interface, for instance, for a cellular network, a wireless local area network (Wireless Local Area Network, WLAN), a digital TV network such as a DVB-H network, a satellite network, an AM-FM broadcast transmitter; and/or via a device of another terminal, configured to receive/send communication signals; and/or via an internet of things (Internet of Things, IoT) device. 
The terminal configured to communicate through a wireless interface may be referred to as a “wireless communication terminal”, a “wireless terminal” or a “mobile terminal”. Examples of the mobile terminal include, but are not limited to, a satellite or a cellular phone; a personal communications system (Personal Communications System, PCS) terminal that can combine cellular radio phones with data processing, fax, and data communication capabilities; an PDA that can include a radio phone, a pager, an Internet/intranet access, a Web browser, a memo pad, a calendar and/or a global positioning system (Global Positioning System, GPS) receiver; and a conventional laptop and/or palmtop receiver or other electronic devices including a radio telephone transceiver. The terminal can refer to an access terminal, a user equipment (User Equipment, UE), a user unit, a user station, a mobile station, a MS, a remote station, a remote terminal, a mobile device, a user terminal, a terminal, a wireless communication device, a user agent or a user apparatus. The access terminal can be a cellular phone, a cordless phone, a session initiation protocol (Session Initiation Protocol, SIP) phone, a wireless local loop (Wireless Local Loop, WLL) station, a personal digital assistant (Personal Digital Assistant, PDA), a handheld device with a wireless communication function, a computing device with a wireless communication function, or other processing devices connected to a wireless modem and having a wireless communication function, an in-vehicle device, a wearable device, a terminal in the 5G network, or a terminal in the PLMN in future evolution, etc. Optionally, a device to device (Device to Device, D2D) communication may be performed between terminals120. Optionally, the 5G system or the 5G network may also be referred to as a new radio (New Radio, NR) system or an NR network. FIG.1exemplarily shows one network device and two terminals. Optionally, the communication system100may include a plurality of network devices, and the coverage of each network device may include other numbers of terminals, which are not limited in embodiments of the present application. Optionally, the communication system100may further include other network entities, such as a network controller, a mobility management entity, which is not limited in embodiments of the present application. It should be understood that the device with a communication function in the network/system in embodiments of the present application may be referred to as a communication device. Taking the communication system100shown inFIG.1as an example, the communication device may include the network device110and the terminal120with the communication function, and the network device110and the terminal120may be a specific device described above, which will not be repeated here. The communication device may further include other devices in the communication system100, for example, other network entities such as a network controller, a mobility management entity and the like, which are not limited in embodiments of the present application. It should be understood that the terms “system” and “network” are often used interchangeably herein. The expression “and/or” herein is only an association relationship for describing associated objects, and indicates an existence of three types of relationships. For example, the expression “A and/or B” may indicate three cases including existence of A alone, existence of both A and B at the same time, and existence of B alone. 
In addition, the character “/” herein generally indicates that the relationship between the associated objects before and after the character is the relationship of “or”. To facilitate the understanding of the technical solutions of embodiments of the present application, technologies related to embodiments of the present application are described as follow. An unlicensed spectrum is a spectrum that is obtained through dividing by countries and regions and that can be used for radio equipment communications. The spectrum is usually considered to be a shared spectrum, that is, if communication devices in different communication systems meet regulatory requirements set by a country or region on the spectrum, the communication devices can use the spectrum without applying for a proprietary spectrum license from a government. In order to allow various communication systems using the unlicensed spectrum for wireless communication to friendly coexist on the spectrum, some countries or regions have stipulated the regulatory requirements that must be met when the unlicensed spectrum is used. For example, in Europe, a communication device follows an LBT principle, that is, it is necessary for the communication device to perform channel sensing before sending signals on an unlicensed spectrum channel. Only when a channel sensing result is that the channel is idle, the communication device can send the signals; if the channel sensing result of the communication device on the unlicensed spectrum channel is that the channel is busy, the communication device cannot send the signals. Furthermore, in order to ensure fairness, during one transmission, a duration of the signal transmission by the communication device using the unlicensed spectrum channel cannot exceed a maximum channel occupation time (MCOT, Maximum Channel Occupation Time). In the unlicensed spectrum system, a base station needs to perform LBT when sending a downlink channel, and there is a limitation for time to occupy the channel at one time; as a result, the transmission of the downlink channel and signal may be discontinuous. The terminal does not know when the base station starts to occupy the downlink channel for transmission, so the terminal needs to constantly detect the downlink channel, which will result in power consumption of the terminal. In order to reduce the power consumption of the terminal, after the result of the channel sensing performed by the base station is idle, an indication signal is sent to the terminal, to inform the terminal that the base station obtains a downlink transmission opportunity. After receiving the indication, the terminal starts to receive a corresponding downlink channel and signal, such as a PDCCH, a reference signal and the like. Before receiving the indication, the terminal may not detect channels and signals other than the indication signal, or may detect downlink channels and signals including the indication signal using a longer period. On the other hand, the terminal needs to perform radio resource management (RRM, Radio Resource Management) measurement and radio link monitoring (RLM, Radio Link Monitoring) measurement. 
By measuring a reference signal received power (RSRP, Reference Signal Received Power), a reference signal received quality (RSRQ, Reference Signal Received Quality) or a signal-to-noise and interference ratio (SINR, Signal-to-noise and Interference Ratio) of a channel state information-reference signal (CSI-RS, Channel State Information-Reference Signal) or a synchronization signal (SS, synchronization signal), mobility management and synchronization and out-of-synchronization judgment are performed. Or, the terminal performs channel state measurement through a configured CSI-RS. FIG.2is a first schematic flowchart of a signal transmission method according to an embodiment of the present application. As shown inFIG.2, the signal transmission method includes the following steps: Step201: a terminal receives a first indication signal, and determines, based on the first indication signal, at least one of a time domain, a frequency domain, and a code domain of a measurement reference signal whose measurement result is valid. In embodiments of the present application, the terminal may be any device which is capable of communicating through a network, such as a mobile phone, a tablet computer, an in-vehicle terminal, a notebook computer, etc. In an embodiment of the present application, the terminal receives the first indication signal sent by the base station, and determines, based on the first indication signal, at least one of the time domain, the frequency domain, and the code domain of the measurement reference signal whose measurement result is valid. Here, the base station can be, but is not limited to, a gNB in 5G. In embodiments of the present application, the first indication signal indicates at least one of the time domain, the frequency domain and the code domain of the measurement reference signal whose measurement result is valid. In this way, the terminal can determine the measurement result of the measurement reference signal detected at which position is a valid measurement result. In an implementation, the first indication signal directly indicates at least one of a time domain, a frequency domain and a code domain of a measurement reference signal that needs to be detected, where a measurement result corresponding to the measurement reference signal that needs to be detected is a valid measurement result. The terminal determines, based on the first indication signal, at least one of the time domain, the frequency domain, and the code domain of the measurement reference signal that needs to be detected. Further, the terminal performs measurement reference signal detection only at the position of the measurement reference signal whose measurement result is valid. In an embodiment of the present application, the measurement reference signal includes at least one of a CSI-RS, a TRS, a synchronization signal, and a synchronization signal block. In an implementation, the first indication signal is a physical layer signal or information carried by a PDCCH. Further, the physical layer signal is a reference signal. Here, the information carried by the PDCCH may be pre-emption indication (Pre-emption Indication) information. 
Specifically, the Pre-emption Indication information is carried by the PDCCH, and the Pre-emption Indication information is used to indicate a position of a time-frequency resource occupied by a terminal with ultra-high-reliability and ultra-low-latency communication (URLLC) service, where the position of the time-frequency resource is located in an area occupied by a physical downlink shared channel (PDSCH) that is scheduled to other terminals before the Pre-emption Indication information is sent, and is used for the other terminals to determine the time-frequency resources actually occupied by their PDSCHs, thereby further realizing rate matching. In the embodiment of the present application, the Pre-emption Indication information is used to indicate at least one of the time domain, the frequency domain and the code domain of the measurement reference signal whose measurement result is valid. In embodiments of the present application, considering the LBT principle in the unlicensed spectrum system, it is necessary for the base station to perform the LBT first when sending a downlink channel/signal. After the LBT is successful, the base station obtains a downlink transmission opportunity; correspondingly, there is a downlink reception opportunity on the terminal side. Based on this, the first indication signal is used to indicate a starting position of the downlink transmission opportunity and/or occupancy time of the downlink transmission opportunity; or, the first indication signal is used to indicate a starting position of the downlink reception opportunity and/or downlink reception time. In the above solution, a time domain position of the measurement reference signal whose measurement result is valid includes a time domain position of the measurement reference signal within the occupancy time of the downlink transmission opportunity or within the downlink reception time. Referring to FIG. 3, the base station obtains a downlink transmission opportunity after successfully performing LBT, and informs the terminal of the starting position of the downlink transmission opportunity and/or the occupancy time of the downlink transmission opportunity through the first indication signal, so that the terminal can determine a time range corresponding to the downlink transmission opportunity, and the terminal measures the measurement reference signal within the time range corresponding to the downlink transmission opportunity, where the time range corresponding to the downlink transmission opportunity cannot exceed the MCOT. An example in which the time range corresponding to one downlink transmission opportunity is the MCOT is shown in FIG. 3. After the end of the MCOT, and before receiving the first indication signal again, the terminal does not measure the measurement reference signal, or does not report the measurement result to a higher layer. Since the time domain position of the measurement reference signal is pre-configured on the network side, the terminal detects the measurement reference signal at the time domain position of the measurement reference signal within the occupancy time of the downlink transmission opportunity or within the downlink reception time. Referring to FIG. 3, the MCOT includes the time domain positions of 4 measurement reference signals, and the terminal will detect the measurement reference signals at the time domain positions of these 4 measurement reference signals.
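The rule just described—only measurement reference signal occasions that fall inside the window announced by the first indication signal, bounded by the MCOT, yield valid measurement results—can be illustrated with a small, hypothetical sketch; the slot-based numerology and parameter names are assumptions, not part of any embodiment:

```python
def valid_rs_occasions(rs_period, rs_offset, window_start, window_length, horizon):
    """Return the pre-configured measurement reference signal occasions (slot
    indices) whose measurement results are valid, i.e. those falling inside the
    indicated downlink transmission opportunity [window_start, window_end)."""
    window_end = window_start + window_length
    occasions = range(rs_offset, horizon, rs_period)   # pre-configured positions
    return [t for t in occasions if window_start <= t < window_end]

# Example: a reference signal every 5 slots starting at slot 2; the first
# indication signal announces an opportunity starting at slot 10 lasting 20 slots.
print(valid_rs_occasions(rs_period=5, rs_offset=2,
                         window_start=10, window_length=20, horizon=100))
# -> [12, 17, 22, 27]: four occasions inside the window, analogous to the
#    4 measurement reference signal positions of the FIG. 3 example
```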
In an embodiment of the present application, the terminal reports the measurement result of the measurement reference signal at the time domain position to the higher layer. Further, the measurement reference signal is used for RRM measurement, RLM measurement, synchronization, channel state measurement, or time-frequency tracking. FIG. 4 is a second schematic flowchart of a signal transmission method according to an embodiment of the present application. As shown in FIG. 4, the signal transmission method includes the following steps: Step 401: a base station sends a first indication signal, where the first indication signal is used by a terminal to determine at least one of a time domain, a frequency domain, and a code domain of a measurement reference signal whose measurement result is valid. In embodiments of the present application, the base station can be, but is not limited to, a gNB in 5G. In an embodiment of the present application, the base station sends the first indication signal to the terminal. The terminal may be any device capable of communicating through a network, such as a mobile phone, a tablet computer, an in-vehicle terminal, and a notebook computer. In an embodiment of the present application, the first indication signal indicates at least one of the time domain, the frequency domain and the code domain of the measurement reference signal whose measurement result is valid. In this way, the terminal can determine at which positions the measurement result of a detected measurement reference signal is a valid measurement result. In an implementation, the first indication signal directly indicates at least one of a time domain, a frequency domain and a code domain of a measurement reference signal that needs to be detected, where a measurement result corresponding to the measurement reference signal that needs to be detected is a valid measurement result. It can be seen that the first indication signal is used by the terminal to determine at least one of the time domain, the frequency domain and the code domain of the measurement reference signal that needs to be detected. In an embodiment of the present application, the measurement reference signal includes at least one of a CSI-RS, a TRS, a synchronization signal, and a synchronization signal block. In an implementation, the first indication signal is a physical layer signal or information carried by a PDCCH. Further, the physical layer signal is a reference signal. Here, the information carried by the PDCCH may be pre-emption indication (Pre-emption Indication) information. Specifically, the Pre-emption Indication information is carried by the PDCCH, and the Pre-emption Indication information is used to indicate a position of a time-frequency resource occupied by a terminal with ultra-high-reliability and ultra-low-latency communication (URLLC) service, where the position of the time-frequency resource is located in an area occupied by a PDSCH that is scheduled to other terminals before the Pre-emption Indication information is sent, and is used for the other terminals to determine the time-frequency resources actually occupied by their PDSCHs, thereby further realizing rate matching. In the embodiment of the present application, the Pre-emption Indication information is used to indicate at least one of the time domain, the frequency domain and the code domain of the measurement reference signal whose measurement result is valid.
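As the next paragraph details, the base station only obtains a downlink transmission opportunity after a successful LBT and then announces it through the first indication signal; the following is a minimal, hypothetical sketch of that network-side step, in which the slot-based fields and all names are assumptions rather than the claimed signal format:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class FirstIndication:
    """Illustrative payload: the starting position of the downlink transmission
    opportunity and its occupancy time, both in slots (an assumption)."""
    start_slot: int
    occupancy_slots: int

def maybe_send_first_indication(current_slot: int, lbt_idle: bool,
                                mcot_slots: int) -> Optional[FirstIndication]:
    """Network side: only after a successful LBT does the base station obtain a
    downlink transmission opportunity and announce it; the announced occupancy
    never exceeds the MCOT."""
    if not lbt_idle:
        return None                      # LBT failed: nothing is transmitted
    return FirstIndication(start_slot=current_slot, occupancy_slots=mcot_slots)

print(maybe_send_first_indication(current_slot=10, lbt_idle=True, mcot_slots=20))
```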
In embodiments of the present application, considering the LBT principle in the unlicensed spectrum system, it is necessary for the base station to perform the LBT first when sending a downlink channel/signal. After the LBT is successful, the base station obtains a downlink transmission opportunity; correspondingly, there is a downlink reception opportunity on the terminal side. Based on this, the first indication signal is used to indicate a starting position of the downlink transmission opportunity and/or occupancy time of the downlink transmission opportunity; or, the first indication signal is used to indicate a starting position of the downlink reception opportunity and/or downlink reception time. In the above solution, a time domain position of the measurement reference signal whose measurement result is valid includes a time domain position of the measurement reference signal within the occupancy time of the downlink transmission opportunity or within the downlink reception time. Referring to FIG. 3, the base station obtains a downlink transmission opportunity after successfully performing LBT, and informs the terminal of the starting position of the downlink transmission opportunity and/or the occupancy time of the downlink transmission opportunity through the first indication signal, so that the terminal can determine a time range corresponding to the downlink transmission opportunity, and the terminal measures the measurement reference signal within the time range corresponding to the downlink transmission opportunity, where the time range corresponding to the downlink transmission opportunity cannot exceed the MCOT. An example in which the time range corresponding to one downlink transmission opportunity is the MCOT is shown in FIG. 3. After the end of the MCOT, and before receiving the first indication signal again, the terminal does not measure the measurement reference signal, or does not report the measurement result to a higher layer. Since the time domain position of the measurement reference signal is pre-configured on the network side, the terminal detects the measurement reference signal at the time domain position of the measurement reference signal within the occupancy time of the downlink transmission opportunity or within the downlink reception time. Referring to FIG. 3, the MCOT includes the time domain positions of 4 measurement reference signals, and the terminal will detect the measurement reference signals at the time domain positions of these 4 measurement reference signals. In an embodiment of the present application, the measurement reference signal is used for RRM measurement, RLM measurement, synchronization, channel state measurement, or time-frequency tracking. FIG. 5 is a first schematic structural component diagram of a signal transmission apparatus according to an embodiment of the present application. As shown in FIG. 5, the signal transmission apparatus includes: a receiving unit 501, configured to receive a first indication signal; and a determining unit 502, configured to determine, based on the first indication signal, at least one of a time domain, a frequency domain and a code domain of a measurement reference signal whose measurement result is valid.
In an implementation, the determining unit 502 is configured to determine, based on the first indication signal, at least one of a time domain, a frequency domain and a code domain of a measurement reference signal that needs to be detected, where a measurement result corresponding to the measurement reference signal that needs to be detected is a valid measurement result. In an implementation, the measurement reference signal includes at least one of a CSI-RS, a TRS, a synchronization signal, and a synchronization signal block. In an implementation, the first indication signal is a physical layer signal or information carried by a PDCCH. In an implementation, the physical layer signal is a reference signal. In an implementation, the first indication signal is used to indicate a starting position of a downlink transmission opportunity and/or occupation time of the downlink transmission opportunity; or, the first indication signal is used to indicate a starting position of a downlink reception opportunity and/or downlink reception time. In an implementation, a time domain position of the measurement reference signal includes a time domain position of the measurement reference signal within the occupancy time of the downlink transmission opportunity or within the downlink reception time. In an implementation, the apparatus further includes: a detecting unit 503, configured to detect the measurement reference signal at the time domain position of the measurement reference signal within the occupancy time of the downlink transmission opportunity or within the downlink reception time. In an implementation, the apparatus further includes: a reporting unit 504, configured to report the measurement result of the measurement reference signal at the time domain position to a higher layer. In an implementation, the measurement reference signal is used for RRM measurement, RLM measurement, synchronization, channel state measurement or time-frequency tracking. It should be understood by those skilled in the art that the description related to the foregoing signal transmission apparatus in embodiments of the present application can be understood with reference to the description related to the signal transmission method in embodiments of the present application. FIG. 6 is a second schematic structural component diagram of a signal transmission apparatus according to an embodiment of the present application. As shown in FIG. 6, the signal transmission apparatus includes: a sending unit 601, configured to send a first indication signal, where the first indication signal is used by a terminal to determine at least one of a time domain, a frequency domain and a code domain of a measurement reference signal whose measurement result is valid. In an implementation, the first indication signal is used by the terminal to determine at least one of a time domain, a frequency domain and a code domain of a measurement reference signal that needs to be detected, where a measurement result corresponding to the measurement reference signal that needs to be detected is a valid measurement result. In an implementation, the measurement reference signal includes at least one of a CSI-RS, a TRS, a synchronization signal, and a synchronization signal block. In an implementation, the first indication signal is a physical layer signal or information carried by a PDCCH. In an implementation, the physical layer signal is a reference signal.
In an implementation, the first indication signal is used to indicate a starting position of a downlink transmission opportunity and/or occupation time of the downlink transmission opportunity; or, the first indication signal is used to indicate a starting position of a downlink reception opportunity and/or downlink reception time. In an implementation, a time domain position of the measurement reference signal includes a time domain position of the measurement reference signal within the occupancy time of the downlink transmission opportunity or within the downlink reception time. In an implementation, the measurement reference signal is used for RRM measurement, RLM measurement, synchronization, channel state measurement or time-frequency tracking. It should be understood by those skilled in the art that the description related to the foregoing signal transmission apparatus in embodiments of the present application can be understood with reference to the description related to the signal transmission method in embodiments of the present application. FIG. 7 is a schematic structural diagram of a communication device 600 according to an embodiment of the present application. The communication device may be a terminal or a network device. The communication device 600 shown in FIG. 7 includes a processor 610, which can call and execute a computer program from a memory to implement the methods in embodiments of the present application. Optionally, as shown in FIG. 7, the communication device 600 can further include a memory 620. The processor 610 can call and execute a computer program from the memory 620 to implement the methods in embodiments of the present application. The memory 620 may be a separate component independent of the processor 610, or may be integrated into the processor 610. Optionally, as shown in FIG. 7, the communication device 600 can further include a transceiver 630, and the processor 610 can control the transceiver 630 to communicate with other devices. Specifically, the transceiver can send information or data to other devices, or receive information or data sent by other devices. The transceiver 630 may include a transmitter and a receiver. The transceiver 630 may further include one or more antennas. Optionally, the communication device 600 may specifically be a network device according to embodiments of the present application, and the communication device 600 may implement corresponding processes implemented by the network device in respective methods of the embodiments of the present application. For brevity, the details are not repeated here. Optionally, the communication device 600 may specifically be a mobile terminal/terminal according to embodiments of the present application, and the communication device 600 may implement corresponding processes implemented by the mobile terminal/terminal in respective methods of the embodiments of the present application. For brevity, the details are not repeated here. FIG. 8 is a schematic structural diagram of a chip according to an embodiment of the present application. The chip 700 shown in FIG. 8 includes a processor 710, which can call and execute a computer program from a memory to implement the methods in embodiments of the present application. Optionally, as shown in FIG. 8, the chip 700 can further include a memory 720. The processor 710 can call and execute a computer program from the memory 720 to implement the methods in embodiments of the present application.
The memory 720 may be a separate component independent of the processor 710, or may be integrated into the processor 710. Optionally, the chip 700 can further include an input interface 730. The processor 710 can control the input interface 730 to communicate with other devices or chips, and specifically to obtain information or data sent by other devices or chips. Optionally, the chip 700 can further include an output interface 740. The processor 710 can control the output interface 740 to communicate with other devices or chips, and specifically to output information or data to other devices or chips. Optionally, the chip can be applied to the network device in embodiments of the present application, and the chip can implement corresponding processes implemented by the network device in respective methods of the embodiments of the present application. For brevity, the details are not repeated here. Optionally, the chip can be applied to the mobile terminal/terminal in embodiments of the present application, and the chip can implement corresponding processes implemented by the mobile terminal/terminal in respective methods of the embodiments of the present application. For brevity, the details are not repeated here. It should be understood that the chip described in embodiments of the present application may also be referred to as a system-level chip, a system-on-chip, a chip system, an SoC chip or the like. FIG. 9 is a schematic block diagram of a communication system 900 according to an embodiment of the present application. As shown in FIG. 9, the communication system 900 includes a terminal 910 and a network device 920. The terminal 910 can be configured to implement corresponding functions implemented by the terminal in the foregoing methods, and the network device 920 can be configured to implement corresponding functions implemented by the network device in the foregoing methods. For brevity, the details are not repeated herein. It should be understood that the processor according to embodiments of the present application may be an integrated circuit chip with the capability of processing signals. In the implementation process, the steps of the foregoing method embodiments can be completed by an integrated logic circuit in hardware of the processor or by instructions in the form of software. The above processor may be a general-purpose processor, a digital signal processor (Digital Signal Processor, DSP), an application specific integrated circuit (Application Specific Integrated Circuit, ASIC), a field programmable gate array (Field Programmable Gate Array, FPGA) or other programmable logic devices, discrete gates or transistor logic devices, discrete hardware components, which can implement or perform the methods, steps, and logical block diagrams disclosed in the embodiments of the present application. The general-purpose processor may be a microprocessor, or the processor may also be any conventional processor or the like. The steps of the methods disclosed in combination with embodiments of the present application can be directly embodied as being executed and completed by a hardware decoding processor, or being executed and completed by a combination of hardware and software modules in a decoding processor. The software modules can be located in a storage medium which is mature in the art, such as a random access memory, a flash memory, a read-only memory, a programmable read-only memory, an electrically erasable programmable memory, a register, etc.
The storage medium is located in a memory, and the processor reads information in the memory and completes the steps of the above methods in combination with hardware thereof. It can be understood that the memory in embodiments of the present application may be a volatile memory or a non-volatile memory, or may include both volatile and non-volatile memories. The non-volatile memory may be a read-only memory (Read-Only Memory, ROM), a programmable read-only memory (Programmable ROM, PROM), an erasable programmable read-only memory (Erasable PROM, EPROM), an electrically erasable programmable read-only memory (Electrically EPROM, EEPROM) or a flash memory. The volatile memory may be a random access memory (Random Access Memory, RAM), which is used as an external cache. By way of exemplary but not restrictive description, various RAMS are available, such as a static random access memory (Static RAM, SRAM), a dynamic random access memory (Dynamic RAM, DRAM), a synchronous dynamic random access memory (Synchronous DRAM, SDRAM), a double data rate synchronous dynamic random access memory (Double Data Rate SDRAM, DDR SDRAM), an enhanced synchronous dynamic random access memory (Enhanced SDRAM, ESDRAM), a synch link dynamic random access memory (Synch link DRAM, SLDRAM)) or a direct rambus random access memory (Direct Rambus RAM, DR RAM). It should be noted that the memories of the systems and methods described herein are intended to include, but are not limited to, these and any other suitable types of memories. It should be understood that the above memories are exemplary but not restrictive. For example, the memory in embodiments of the present application may also be a static random access memory (static RAM, SRAM), a dynamic random access memory (dynamic RAM, DRAM), a synchronous dynamic random access memory (synchronous DRAM, SDRAM), a double data rate synchronous dynamic random access memory (double data rate SDRAM, DDR SDRAM), an enhanced synchronous dynamic random access memory (enhanced SDRAM, ESDRAM), a synch link dynamic random access memory (Synch link DRAM, SLDRAM)) or a direct rambus random access memory (Direct Rambus RAM, DR RAM), etc. That is to say, the memories in embodiments of the present application are intended to include, but not limited to, these and any other suitable types of memories. An embodiment of the present application further provides a computer-readable storage medium, configured to store a computer program. Optionally, the computer-readable storage medium can be applied to the network device in embodiments of the present application, and the computer program causes a computer to perform corresponding processes implemented by the network device in respective methods of the embodiments of the present application. For brevity, the details are not repeated herein. Optionally, the computer-readable storage medium can be applied to the mobile terminal/terminal in embodiments of the present application, and the computer program causes the computer to perform corresponding processes implemented by the mobile terminal/terminal in respective methods of the embodiments of the present application. For brevity, the details are not repeated herein. An embodiment of the present application further provides a computer program product, including computer program instructions. 
Optionally, the computer program product can be applied to the network device in embodiments of the present application, and the computer program instructions cause a computer to perform corresponding processes implemented by the network device in respective methods of the embodiments of the present application. For brevity, the details are not repeated herein. Optionally, the computer program product can be applied to the mobile terminal/terminal in embodiments of the present application, and the computer program instructions cause the computer to perform corresponding processes implemented by the mobile terminal/terminal in respective methods of the embodiments of the present application. For brevity, the details are not repeated herein. An embodiment of the present application further provides a computer program. Optionally, the computer program can be applied to the network device in embodiments of the present application. The computer program causes, when executed on a computer, the computer to perform corresponding processes implemented by the network device in respective methods of the embodiments of the present application. For brevity, the details are not repeated herein. Optionally, the computer program can be applied to the mobile terminal/terminal in embodiments of the present application. The computer program causes, when executed on the computer, the computer to perform corresponding processes implemented by the mobile terminal/terminal in respective methods of the embodiments of the present application. For brevity, the details are not repeated herein. Those ordinary persons skilled in the art may realize that the units and algorithm steps of the examples described in combination with embodiments disclosed herein can be implemented by electronic hardware, or a combination of computer software and electronic hardware. Whether these functions are performed by hardware or software depends on specific applications and design constraints for the technical solution. Those skilled persons can use different methods to implement the described functions for each specific application, but such implementations should not be considered beyond the scope of the present application. Those skilled in the art can clearly understand that, for convenience and concise description, the corresponding processes in the foregoing method embodiments may be referred to for the specific operation processes of the system, apparatus, and unit described above, and the details are not repeated here. It should be understood that, the system, the apparatus and the method disclosed in the several embodiments provided in the present application may be implemented in other ways. For example, the apparatus embodiments described above are merely illustrative. For example, the division of the units is merely a logical function division. There may be other divisions in actual implementation, for example, a plurality of units or components may be combined, or may be integrated into another system, or some features can be ignored or not implemented. In addition, the displayed or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, apparatuses or units, and may be in electrical, mechanical or other forms. 
Units described as separate components may or may not be physically separated, and a component displayed as a unit may or may not be a physical unit, that is, it may be located in one place, or may be distributed on a plurality of network units. Some or all of the units may be selected according to actual requirements so as to achieve the objectives of the solutions of the embodiments. In addition, the various functional units in the various embodiments of the present application may be integrated into one processing unit, or each unit may physically exist alone, or two or more units may be integrated into one unit. If the functions are implemented in the form of software functional units and sold or used as an independent product, they can be stored in a computer-readable storage medium. Based on this understanding, the essence, or the portion contributing to the prior art, or part of the technical solutions of the present application can be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions used to cause a computer device (which may be a personal computer, a server, or a network device, etc.) to perform all or part of the steps of the methods described in the various embodiments of the present application. The aforementioned storage medium includes various media that can store program codes, such as a USB flash drive, a mobile hard disk, a read-only memory (Read-Only Memory, ROM), a random access memory (Random Access Memory, RAM), a magnetic disk or an optical disc. The above description is only of specific implementations of the present application, but the protection scope of the present application is not limited thereto. Changes or replacements that can readily be conceived by any person skilled in the art within the technical scope disclosed in the present application shall fall within the protection scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.
11943173 | DETAILED DESCRIPTION OF THE EMBODIMENTS Exemplary embodiments will be described in detail here with the examples thereof expressed in the drawings. Where the following descriptions involve the drawings, like numerals in different drawings refer to like or similar elements unless otherwise indicated. The implementations described in the following examples do not represent all implementations consistent with the present disclosure. Rather, they are merely examples of apparatuses and methods consistent with some aspects of the present disclosure as detailed in the appended claims. The terms used in the present disclosure are for the purpose of describing particular examples only, and are not intended to limit the present disclosure. Terms determined by “a”, “the” and “said” in their singular forms in the present disclosure and the appended claims are also intended to include plurality, unless clearly indicated otherwise in the context. It should also be understood that the term “and/or” as used herein is and includes any and all possible combinations of one or more of the associated listed items. It is to be understood that, although terms “first,” “second,” “third,” and the like may be used in the present disclosure to describe various information, such information should not be limited to these terms. These terms are only used to distinguish one category of information from another. For example, without departing from the scope of the present disclosure, first information may be referred as second information; and similarly, second information may also be referred as first information. Depending on the context, the word “if” as used herein may be interpreted as “when”, “upon”, or “in response to determining”. FIG.1is a flowchart of a channel indication method illustrated according to an example, andFIG.2is a scenario diagram of a channel indication method illustrated according to an example. The channel indication method can be performed by a base station working on an unlicensed spectrum. As illustrated inFIG.1, the channel indication method includes the following steps110-130. At step110, one or more first channel detection subbands that pass a channel detection are determined. In one or more examples of the present disclosure, the base station may perform the channel detection on a plurality of channel detection subbands to obtain a channel detection result. The one or more first channel detection subbands that pass the channel detection may be included in the channel detection result. The first channel detection subband here refers to a channel detection subband that has passed the channel detection. In addition, the one or more first channel detection subbands that pass the channel detection may be a plurality of bandwidth parts configured on one unlicensed carrier, a plurality of unlicensed carriers, or a plurality of bandwidth parts configured on a plurality of unlicensed carriers. At step120, a channel indication signal is generated. The channel indication signal indicates the one or more first channel detection subbands that pass the channel detection. In one or more examples of the present disclosure, the channel indication signal indicates which channel detection subbands have passed the channel detection. In an example, the channel indication signal in the step120may include a first downlink signal and first downlink control signaling. The first downlink signal is configured for instructing a terminal to detect the downlink control signaling transmitted subsequently. 
The first downlink control signaling includes identification information for representing the one or more first channel detection subbands. The detailed realization of this example may refer to an example illustrated in FIG. 3. In an example, the channel indication signal in the step 120 may include a second downlink signal. A sequence value of the second downlink signal indicates the one or more first channel detection subbands that pass the channel detection. The detailed realization of this example may refer to an example illustrated in FIG. 4. In an example, the channel indication signal in the step 120 may include a third downlink signal. One or more positions at which the third downlink signal is transmitted indicate the one or more first channel detection subbands that pass the channel detection. The detailed realization of this example may refer to an example illustrated in FIG. 5. In an example, the channel indication signal in the step 120 may include second downlink control signaling. A designated information field of the second downlink control signaling includes first indication information for explicitly indicating the one or more first channel detection subbands, or a CRC scrambling sequence of the second downlink control signaling includes second indication information for implicitly indicating the one or more first channel detection subbands. The detailed realization of this example may refer to an example illustrated in FIG. 6. At step 130, the channel indication signal is transmitted to the terminal, so that the terminal determines, based on the channel indication signal, the one or more first channel detection subbands that pass the channel detection. In one or more examples of the present disclosure, the base station can inform the terminal which channel detection subbands have passed the channel detection through the channel indication signal, so that the terminal can perform a data transmission on these channel detection subbands that have passed the channel detection. As illustrated in FIG. 2, a base station 11 and a terminal 12 are included in an exemplary scenario. After determining one or more first channel detection subbands that pass a channel detection, the base station 11 may generate a channel indication signal. The channel indication signal indicates the one or more first channel detection subbands that pass the channel detection, and is transmitted to the terminal 12. After receiving the channel indication signal from the base station 11, the terminal 12 can determine, based on the channel indication signal, the one or more first channel detection subbands that pass the channel detection, and perform a data transmission on the one or more first channel detection subbands. In the present disclosure, the base station 11 may be a facility deployed in an access network to provide the terminal 12 with wireless communication functions. The base station 11 may cover various forms of a macro base station, a micro base station, a relay station, an access point and the like. In systems implemented with different wireless access technologies, the facility with base station functions may be named differently. For example, in a 5G NR system, it is called gNodeB or gNB. The name, “base station”, may be changed with the development of communication technologies. In order to simplify the description, in the examples of the present disclosure, the above-mentioned facilities that provide the terminal 12 with the wireless communication functions are collectively referred to as base stations.
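Before the terminal side of the scenario is described further, the base-station flow of steps 110 to 130 can be summarized in a short, hypothetical Python sketch; representing the indication as a plain list of subband indices is only a placeholder for the concrete signal forms of FIG. 3 to FIG. 6, and the random detection outcome is an assumption:

```python
import random

def perform_channel_detection(subbands):
    """Step 110 stand-in: per-subband channel detection with a random outcome
    (an assumption; real detection would sense each subband)."""
    return [sb for sb in subbands if random.random() > 0.4]

def generate_channel_indication(passed_subbands):
    """Step 120 stand-in: here the indication simply lists the subbands that
    passed detection; the concrete signal forms are described in FIG. 3 to FIG. 6."""
    return {"passed_subbands": sorted(passed_subbands)}

def transmit_to_terminal(indication):
    """Step 130 stand-in for the air-interface transmission."""
    print("channel indication sent:", indication)

passed = perform_channel_detection([1, 2, 3, 4])           # step 110
transmit_to_terminal(generate_channel_indication(passed))  # steps 120-130
```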
There are usually a plurality of terminals 12. One or more terminals 12 may be distributed in a cell controlled by one base station 11. The terminal 12 may cover various devices with wireless communication functions, such as handheld devices, in-vehicle devices, wearable devices, computing devices or other processing devices connected to a wireless modem, and may cover various forms of User Equipment (UE), a mobile station (MS), a terminal device and the like. In order to simplify the description, in the examples of the present disclosure, the devices mentioned above are collectively referred to as terminals. According to the above examples, after determining the one or more first channel detection subbands that pass the channel detection, the channel indication signal can be generated to indicate the one or more first channel detection subbands that pass the channel detection, and be transmitted to the terminal, so that based on the channel indication signal, the terminal can accurately determine the one or more first channel detection subbands that pass the channel detection, thereby reducing an energy consumption for the channel detection and improving a data transmission performance. FIG. 3 is a flowchart of another channel indication method illustrated according to an example. The channel indication method can be performed by a base station working on an unlicensed spectrum. On the basis of the method illustrated in FIG. 1, the channel indication signal includes a first downlink signal and first downlink control signaling, the first downlink signal is configured for instructing the terminal to detect the downlink control signaling transmitted subsequently, and the first downlink control signaling includes identification information for representing the one or more first channel detection subbands. As illustrated in FIG. 3, when the step 130 is performed, the following steps 310-330 may be included. At step 310, one or more second channel detection subbands for transmitting the first downlink signal and the first downlink control signaling are determined. The one or more second channel detection subbands are all or a part of the one or more first channel detection subbands. In one or more examples of the present disclosure, the first downlink signal may be a Demodulation Reference Signal (DMRS), a Channel State Information Reference Signal (CSI-RS), or another type of downlink signal. The first downlink control signaling may be control signaling configured to carry common control information. If the one or more second channel detection subbands are all of the one or more first channel detection subbands, it means that the first downlink signal and the first downlink control signaling are to be transmitted on every first channel detection subband. As illustrated in FIG. 3A, a channel detection subband 1 and a channel detection subband 3 are the first channel detection subbands that pass the channel detection, and the first downlink signal and the first downlink control signaling are transmitted on the channel detection subband 1 and the channel detection subband 3. As an example, the first downlink control signaling transmitted on the channel detection subband 1 and the first downlink control signaling transmitted on the channel detection subband 3 include the same identification information: Channel Detection Subband 1 and Channel Detection Subband 3.
As another example, the identification information included in the first downlink control signaling transmitted on the channel detection subband 1 is Channel Detection Subband 1, and the identification information included in the first downlink control signaling transmitted on the channel detection subband 3 is Channel Detection Subband 3. If the one or more second channel detection subbands are a part of the one or more first channel detection subbands, it means that the first downlink signal and the first downlink control signaling are to be transmitted on each of the part of subbands. As illustrated in FIG. 3B, a channel detection subband 1, a channel detection subband 2 and a channel detection subband 3 are the first channel detection subbands that pass the channel detection, and the first downlink signal and the first downlink control signaling are only transmitted on the channel detection subband 1 and the channel detection subband 3. As an example, the first downlink control signaling transmitted on the channel detection subband 1 and the first downlink control signaling transmitted on the channel detection subband 3 include the same identification information: Channel Detection Subband 1, Channel Detection Subband 2, and Channel Detection Subband 3. As another example, the identification information included in the first downlink control signaling transmitted on the channel detection subband 1 is Channel Detection Subband 1 and Channel Detection Subband 2, and the identification information included in the first downlink control signaling transmitted on the channel detection subband 3 is Channel Detection Subband 3. At step 320, the first downlink signal is transmitted at a first position of the one or more second channel detection subbands. At step 330, the first downlink control signaling is transmitted at a second position of the one or more second channel detection subbands. The second position is subsequent to the first position. A time interval between the first position in the step 320 and the second position in the step 330 may be predefined or be informed by the base station in advance through signaling. According to the above example, the first downlink signal and the first downlink control signaling can be transmitted respectively at the first position and the second position of each of the one or more first channel detection subbands, or respectively at the first position and the second position of each of the part of the one or more first channel detection subbands, thereby enriching a channel indication diversity and improving a channel indication reliability. FIG. 4 is a flowchart of another channel indication method illustrated according to an example. The channel indication method can be performed by a base station working on an unlicensed spectrum. On the basis of the method illustrated in FIG. 1, the channel indication signal includes a second downlink signal, and a sequence value of the second downlink signal indicates the one or more first channel detection subbands that pass the channel detection. As illustrated in FIG. 4, when the step 130 is performed, the following step 410 may be included. At step 410, the channel indication signal carrying the second downlink signal is transmitted to the terminal, so that the terminal determines the sequence value of the second downlink signal, and determines the one or more first channel detection subbands based on the sequence value of the second downlink signal.
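Before the sequence-value based example of FIG. 4 is detailed below, the identification-information option of FIG. 3A and FIG. 3B just described can be sketched as follows; this is a hypothetical illustration and the data structures are assumptions, not a claimed format:

```python
def identification_on_second_subbands(first_subbands, second_subbands):
    """Build, per second channel detection subband (a subband on which the first
    downlink signal and the first downlink control signaling are actually sent),
    the identification information carried by the control signaling.  Here every
    second subband carries the full list of first subbands; a per-subband split,
    as in the other examples above, is equally possible."""
    return {sb: list(first_subbands) for sb in second_subbands}

# FIG. 3B style numbers: subbands 1, 2 and 3 passed detection, but the signal and
# control signaling are transmitted only on subbands 1 and 3.
print(identification_on_second_subbands(first_subbands=[1, 2, 3],
                                        second_subbands=[1, 3]))
# -> {1: [1, 2, 3], 3: [1, 2, 3]}
```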
In an example, as illustrated inFIG.4, the channel indication method may further include the following steps420-430. At step420, a first correspondence between preset downlink signal sequence values and preset channel detection subbands passing the channel detection is acquired. In one or more examples of the present disclosure, the first correspondence involves the preset downlink signal sequence values and the preset channel detection subbands passing the channel detection. A specific correspondence may be illustrated inFIG.4Ain detail. As illustrated inFIG.4A, the first downlink signal may be a DMRS, and the preset downlink signal sequence values include a DMRS sequence1, a DMRS sequence2, a DMRS sequence3, a DMRS sequence4, a DMRS sequence5, and so on. The DMRS sequence1corresponds to a channel detection subband1, the DMRS sequence2corresponds to a channel detection subband2, the DMRS sequence3corresponds to a channel detection subband3, the DMRS sequence4corresponds to the channel detection subbands1and2, the DMRS sequence5corresponds to the channel detection subbands1,2,3and4, and so on. At step430, the first correspondence is transmitted to the terminal, so that the terminal determines, based on the first correspondence, the one or more first channel detection subbands corresponding to the sequence value of the second downlink signal. In one or more examples of the present disclosure, there is no restriction on an order of the transmissions in the step410and the step430. The transmission in the step410and the transmission in the step430may be performed at the same time, the transmission in the step410may be performed before that in the step430, or the transmission in the step410may be performed after that in the step430. In addition, if the terminal may know the first correspondence in the steps420-430in advance instead of being informed by the base station, for example, if the first correspondence has been given in a protocol, the base station may not transmit the first correspondence to the terminal. According to the above examples, the sequence value of the second downlink signal can be utilized to indicate the one or more first channel detection subbands that pass the channel detection, and the channel indication signal carrying the second downlink signal can be transmitted to the terminal, so that the terminal can determine the sequence value of the second downlink signal, and determine the one or more first channel detection subbands based on the sequence value of the second downlink signal, thereby saving a signaling overhead for the channel indication and improving a channel indication efficiency. FIG.5is a flowchart of another channel indication method illustrated according to an example. The channel indication method can be performed by a base station working on an unlicensed spectrum. On the basis of the method illustrated inFIG.1, the channel indication signal includes a third downlink signal, and one or more positions at which the third downlink signal is transmitted indicate the one or more first channel detection subbands that pass the channel detection. As illustrated inFIG.5, when the step130is performed, the following step510may be included. 
At step510, the channel indication signal carrying the third downlink signal is transmitted to the terminal, so that the terminal determines the one or more positions at which the third downlink signal is transmitted, and determines the one or more first channel detection subbands based on the one or more positions at which the third downlink signal is transmitted. In an example, as illustrated inFIG.5, the channel indication method may further include the following steps520-530. At step520, a second correspondence between preset downlink signal transmission positions and preset channel detection subbands passing the channel detection is acquired. In one or more examples of the present disclosure, the second correspondence involves acquiring the preset downlink signal transmission positions and the preset channel detection subbands passing the channel detection. For example, the third downlink signal may be a DMRS. If the DMRS is detected at a frequency position x on a channel detection subband, it means that the channel detection subband1has passed the channel detection. If the DMRS is detected at a frequency position y on a channel detection subband, it means that the channel detection subbands1and2have passed the channel detection. Either x or y may correspond to one or more values. The correspondence between the DMRS transmission positions and the channel detection results is pre-defined or informed to the terminal through signaling by the base station. At step530, the second correspondence is transmitted to the terminal, so that the terminal determines, based on the second correspondence, the one or more first channel detection subbands corresponding to the one or more positions at which the third downlink signal is transmitted. In one or more examples of the present disclosure, there is no restriction on an order of the transmissions in the step510and the step530. The transmission in the step510and the transmission in the step530may be performed at the same time, the transmission in the step510may be performed before that in the step530, or the transmission in the step510may be performed after that in the step530. In addition, if the terminal may know the second correspondence in the steps520-530in advance instead of being informed by the base station, for example, if the second correspondence has been given in a protocol, the base station may not transmit the second correspondence to the terminal. According to the above examples, the one or more positions at which the third downlink signal is transmitted can be utilized to indicate the one or more first channel detection subbands that pass the channel detection, and the channel indication signal carrying the third downlink signal can be transmitted to the terminal, so that the terminal can determine the one or more positions at which the third downlink signal is transmitted, and determine the one or more first channel detection subbands based on the one or more positions at which the third downlink signal is transmitted, thereby saving a signaling overhead for the channel indication and extending channel indication forms. FIG.6is a flowchart of another channel indication method illustrated according to an example. The channel indication method can be performed by a base station working on an unlicensed spectrum. On the basis of the method illustrated inFIG.1, the channel indication signal includes second downlink control signaling. 
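Both correspondences described above behave like simple lookup tables on the terminal side: the first correspondence maps a detected sequence value (the FIG. 4A example) and the second correspondence maps a detected transmission position to the subbands that passed detection. A hypothetical sketch of both, using the illustrative values from the examples above, is given here before the FIG. 6 based option is detailed:

```python
# Illustrative lookup tables only; the concrete correspondences are either
# pre-defined in a protocol or signalled by the base station.
FIRST_CORRESPONDENCE = {          # DMRS sequence index -> passed subbands (FIG. 4A)
    1: [1], 2: [2], 3: [3], 4: [1, 2], 5: [1, 2, 3, 4],
}
SECOND_CORRESPONDENCE = {         # detected transmission position -> passed subbands
    "position_x": [1],            # signal found at position x -> subband 1 passed
    "position_y": [1, 2],         # signal found at position y -> subbands 1 and 2 passed
}

def subbands_from_sequence(sequence_index: int):
    """Terminal side, FIG. 4 style: map the detected sequence value of the second
    downlink signal to the first channel detection subbands it indicates."""
    return FIRST_CORRESPONDENCE.get(sequence_index, [])

def subbands_from_position(position: str):
    """Terminal side, FIG. 5 style: map the position at which the third downlink
    signal was detected to the first channel detection subbands it indicates."""
    return SECOND_CORRESPONDENCE.get(position, [])

print(subbands_from_sequence(4))             # -> [1, 2]
print(subbands_from_position("position_y"))  # -> [1, 2]
```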
A designated information field of the second downlink control signaling includes first indication information for explicitly indicating the one or more first channel detection subbands, or a CRC scrambling sequence of the second downlink control signaling includes second indication information for implicitly indicating the one or more first channel detection subbands. As illustrated inFIG.6, when the step130is performed, the following step610may be included. At step610, the channel indication signal carrying the second downlink control signaling is transmitted to the terminal, so that the terminal determines the one or more first channel detection subbands based on the first indication information or the second indication information of the second downlink control signaling. According to the above example, the second downlink control signaling can be utilized to explicitly or implicitly indicate the one or more first channel detection subbands that pass the channel detection, and the channel indication signal carrying the second downlink control signaling can be transmitted to the terminal, so that the terminal can determine the one or more first channel detection subbands based on the first indication information or the second indication information of the second downlink control signaling, thereby improving a channel indication accuracy. FIG.7is a flowchart of a channel indication method illustrated according to an example, andFIG.2is a scenario diagram of a channel indication method illustrated according to an example. The channel indication method can be performed by a terminal working on an unlicensed spectrum. As illustrated inFIG.7, the channel indication method includes the following steps710-720. At step710, a channel indication signal from a base station is received. The channel indication signal indicates one or more first channel detection subbands that pass a channel detection. In one or more examples of the present disclosure, the base station can inform the terminal which channel detection subbands have passed the channel detection through the channel indication signal, so that the terminal can perform a data transmission on these channel detection subbands that pass the channel detection. The first channel detection subband here refers to a channel detection subband that has passed the channel detection. In addition, the one or more first channel detection subbands that pass the channel detection may be a plurality of bandwidth parts configured on one unlicensed carrier, a plurality of unlicensed carriers, or a plurality of bandwidth parts configured on a plurality of unlicensed carriers. At step720, the one or more first channel detection subbands that pass the channel detection are determined based on the channel indication signal. In one or more examples of the present disclosure, in view of different contents included in the channel indication signal, the terminal may determine the one or more first channel detection subbands that pass the channel detection in a corresponding manner. In an example, the channel indication signal in the step710may include a first downlink signal and first downlink control signaling. The first downlink signal is configured for instructing the terminal to detect the downlink control signaling transmitted subsequently. The first downlink control signaling includes identification information for representing the one or more first channel detection subbands. 
Correspondingly, when the step710is performed, it may include:(1-1) performing a receipt action for the first downlink signal; and(1-2) proceeding to receive the first downlink control signaling if the first downlink signal is received. In this way, since it is unknown for the terminal which first channel detection subband is used by the base station to transmit the identification information, the terminal receives the first downlink signal on every channel detection subband. Only after the first downlink signal is received on a channel detection subband, the terminal determines that the base station will transmit the identification information on the first channel detection subband, and thus continues to receive subsequent first downlink control signaling. Correspondingly, when the step720is performed, it may include:(1-3) determining the one or more first channel detection subbands based on the identification information included in the first downlink control signaling. In an example, the channel indication signal in the step710may include a second downlink signal. A sequence value of the second downlink signal indicates the one or more first channel detection subbands that pass the channel detection. Correspondingly, when the step720is performed, it may include:(2-1) determining the sequence value of the second downlink signal; and(2-2) determining the one or more first channel detection subbands based on the sequence value of the second downlink signal. In an example, when the step (2-2) is performed, it may include:(3-1) acquiring a first correspondence between preset downlink signal sequence values and preset channel detection subbands passing the channel detection; and(3-2) determining, based on the first correspondence, the one or more first channel detection subbands corresponding to the sequence value of the second downlink signal. The approach for acquiring the first correspondence in above step (3-1) may include: receiving a notice from the base station; or learning in advance in the terminal, for example, when the first correspondence is given in a protocol. In an example, the channel indication signal in the step710may include a third downlink signal. One or more positions at which the third downlink signal is transmitted indicate the one or more first channel detection subbands that pass the channel detection. Correspondingly, when the step720is performed, it may include:(4-1) determining the one or more positions at which the third downlink signal is transmitted; and(4-2) determining the one or more first channel detection subbands based on the one or more positions at which the third downlink signal is transmitted. In an example, when the step (4-2) is performed, it may include:(5-1) acquiring a second correspondence between preset downlink signal transmission positions and preset channel detection subbands passing the channel detection; and(5-2) determining, based on the second correspondence, the one or more first channel detection subbands corresponding to the one or more positions at which the second downlink signal is transmitted. The approach for acquiring the second correspondence in above step (5-1) may include: receiving a notice from the base station; or learning in advance in the terminal, for example, the second correspondence given in a protocol. In an example, the channel indication signal in the step710may include a second downlink control signaling. 
A designated information field of the second downlink control signaling includes first indication information for explicitly indicating the one or more first channel detection subbands, or a CRC scrambling sequence of the second downlink control signaling includes second indication information for implicitly indicating the one or more first channel detection subbands. Correspondingly, when the step720is performed, it may include:(6-1) determining the one or more first channel detection subbands based on the first indication information or the second indication information. The first indication information is configured for explicitly indicating the one or more first channel detection subbands, and the second indication information is configured for implicitly indicating the one or more first channel detection subbands. According to the above examples, after the channel indication signal, which is transmitted by the base station and indicates the one or more first channel detection subbands that pass the channel detection, is received, the one or more first channel detection subbands that pass the channel detection can be accurately determined based on the channel indication signal, thereby reducing an energy consumption for the channel detection and improving a data transmission performance. In particular, it can adopt corresponding determination schemes according to different contents included in the channel indication signal, thereby enriching a channel indication diversity and improving a channel indication reliability and a channel indication accuracy. Corresponding to the foregoing channel indication method examples, the present disclosure also provides channel indication apparatus examples. FIG.8is a block diagram of a channel indication apparatus illustrated according to an example. The apparatus is configured in a base station working on an unlicensed spectrum and is configured to perform the channel indication method illustrated inFIG.1. As illustrated inFIG.8, the channel indication apparatus may include:a determining module81that is configured to determine one or more first channel detection subbands that pass a channel detection;a generating module82that is configured to generate a channel indication signal to indicate the one or more first channel detection subbands that pass the channel detection; anda first transmitting module83that is configured to transmit the channel indication signal to a terminal, so that the terminal determines, based on the channel indication signal, the one or more first channel detection subbands that pass the channel detection. According to the above example, after determining the one or more first channel detection subbands that pass the channel detection, the channel indication signal can be generated to indicate the one or more first channel detection subbands that pass the channel detection, and can be transmitted to the terminal, so that based on the channel indication signal, the terminal can accurately determine the one or more first channel detection subbands that pass the channel detection, thereby reducing an energy consumption for the channel detection and improving a data transmission performance. In an example, on the basis of the apparatus illustrated inFIG.8, the channel indication signal includes a first downlink signal and first downlink control signaling. The first downlink signal is configured for instructing the terminal to detect the downlink control signaling transmitted subsequently. 
The first downlink control signaling includes identification information for representing the one or more first channel detection subbands. In an example, as illustrated inFIG.9, the first transmitting module83may include:a subband determining submodule91that is configured to determine one or more second channel detection subbands for transmitting the first downlink signal and the first downlink control signaling, where the one or more second channel detection subbands are all or a part of the one or more first channel detection subbands;a first transmitting submodule92that is configured to transmit the first downlink signal at a first position of the one or more second channel detection subbands; anda second transmitting submodule93that is configured to transmit the first downlink control signaling at a second position of the one or more second channel detection subbands, where the second position is subsequent to the first position. According to the above example, the first downlink signal and the first downlink control signaling can be transmitted respectively at the first position and the second position of each of the one or more first channel detection subbands, or respectively at the first position and the second position of each of a part of the one or more first channel detection subbands, thereby enriching a channel indication diversity and improving a channel indication reliability. In an example, on the basis of the apparatus illustrated inFIG.8, the channel indication signal includes a second downlink signal. A sequence value of the second downlink signal indicates the one or more first channel detection subbands that pass the channel detection. In an example, as illustrated inFIG.10, the apparatus further includes:a first acquiring module101that is configured to acquire a first correspondence between preset downlink signal sequence values and preset channel detection subbands passing the channel detection; anda second transmitting module102that is configured to transmit the first correspondence to the terminal, so that the terminal determines, based on the first correspondence, the one or more first channel detection subbands corresponding to the sequence value of the second downlink signal. According to the above example, the sequence value of the second downlink signal can be utilized to indicate the one or more first channel detection subbands that pass the channel detection, and the channel indication signal carrying the second downlink signal can be transmitted to the terminal, so that the terminal can determine the sequence value of the second downlink signal, and determine the one or more first channel detection subbands based on the sequence value of the second downlink signal, thereby saving a signaling overhead for the channel indication and improving a channel indication efficiency. In an example, on the basis of the apparatus illustrated inFIG.8, the channel indication signal includes a third downlink signal. One or more positions at which the third downlink signal is transmitted indicate the one or more first channel detection subbands that pass the channel detection. 
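Returning to the sequence-value indication of FIG.10, a minimal base-station-side sketch might look as follows; the correspondence entries and the transmit() helper are hypothetical placeholders for the actual signaling.

FIRST_CORRESPONDENCE = {
    0: frozenset({"subband_0"}),
    1: frozenset({"subband_1"}),
    2: frozenset({"subband_0", "subband_1"}),
}

def indicate_by_sequence_value(passed_subbands, transmit):
    # Pick the preset sequence value whose entry matches the subbands that passed detection,
    # then transmit the second downlink signal built from that sequence value.
    for sequence_value, subbands in FIRST_CORRESPONDENCE.items():
        if subbands == frozenset(passed_subbands):
            transmit(second_downlink_signal_sequence=sequence_value)
            return sequence_value
    raise ValueError("no preset sequence value matches the detected subbands")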
In an example, as illustrated inFIG.11, the apparatus further includes:a second acquiring module111that is configured to acquire a second correspondence between preset downlink signal transmission positions and preset channel detection subbands passing the channel detection; anda third transmitting module112that is configured to transmit the second correspondence to the terminal to determine, based on the second correspondence, the one or more first channel detection subbands corresponding to the one or more positions at which the third downlink signal is transmitted. According to the above example, the one or more positions at which the third downlink signal is transmitted can be utilized to indicate the one or more first channel detection subbands that pass the channel detection, and the channel indication signal carrying the third downlink signal can be transmitted to the terminal, so that the terminal can determine the one or more positions at which the third downlink signal is transmitted, and determine the one or more first channel detection subbands based on the one or more positions at which the third downlink signal is transmitted, thereby saving a signaling overhead for the channel indication and extending channel indication forms. In an example, on the basis of the apparatus illustrated inFIG.8, the channel indication signal includes second downlink control signaling. A designated information field of the second downlink control signaling includes first indication information for explicitly indicating the one or more first channel detection subbands, or a CRC scrambling sequence of the second downlink control signaling includes second indication information for implicitly indicating the one or more first channel detection subbands. According to the above example, the second downlink control signaling can be utilized to explicitly or implicitly indicate the one or more first channel detection subbands that pass the channel detection, and the channel indication signal carrying the second downlink control signaling can be transmitted to the terminal, so that the terminal can determine the one or more first channel detection subbands based on the first indication information or the second indication information of the second downlink control signaling, thereby improving a channel indication accuracy. FIG.12is a block diagram of a channel indication apparatus illustrated according to an example. The apparatus is configured in a terminal working on an unlicensed spectrum and is configured to perform the channel indication method illustrated inFIG.7. As illustrated inFIG.12, the channel indication apparatus may include:a receiving module121that is configured to receive a channel indication signal from a base station, where the channel indication signal indicates one or more first channel detection subbands that pass a channel detection; anda determining module122that is configured to determine, based on the channel indication signal, the one or more first channel detection subbands that pass the channel detection. In an example, on the basis of the apparatus illustrated inFIG.12, the channel indication signal includes a first downlink signal and first downlink control signaling. The first downlink signal is configured for instructing the terminal to detect the downlink control signaling transmitted subsequently. The first downlink control signaling includes identification information for representing the one or more first channel detection subbands. 
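As a counterpart sketch for the FIG.11 modules above, the transmission positions themselves can carry the indication; the second correspondence and the transmit() routine are again invented for the illustration.

SECOND_CORRESPONDENCE = {"position_0": "subband_0", "position_1": "subband_1"}

def indicate_by_positions(passed_subbands, transmit):
    # Transmit the third downlink signal only at the positions that map to subbands
    # which passed the channel detection; the terminal inverts the same mapping.
    subband_to_position = {sb: pos for pos, sb in SECOND_CORRESPONDENCE.items()}
    for subband in passed_subbands:
        transmit(position=subband_to_position[subband], payload="third_downlink_signal")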
In an example, as illustrated inFIG.13, the receiving module121may include:a first receiving submodule131that is configured to perform a receipt action for the first downlink signal; anda second receiving submodule132that is configured to proceed to receive the first downlink control signaling if the first downlink signal is received. The determining module122may include:a first determining submodule133that is configured to determine the one or more first channel detection subbands based on the identification information included in the first downlink control signaling. In an example as illustrated inFIG.14, on the basis of the apparatus illustrated inFIG.12, the channel indication signal includes a second downlink signal. A sequence value of the second downlink signal indicates the one or more first channel detection subbands that pass the channel detection. The determining module122may include:a second determining submodule141that is configured to determine the sequence value of the second downlink signal; anda third determining submodule142that is configured to determine the one or more first channel detection subbands based on the sequence value of the second downlink signal. In an example as illustrated inFIG.15, on the basis of the apparatus illustrated inFIG.14, the third determining submodule142may include:a first acquiring unit151that is configured to acquire a first correspondence between preset downlink signal sequence values and preset channel detection subbands passing the channel detection; anda first determining unit152that is configured to determine, based on the first correspondence, the one or more first channel detection subbands corresponding to the sequence value of the second downlink signal. In an example as illustrated inFIG.16, on the basis of the apparatus illustrated inFIG.12, the channel indication signal includes a third downlink signal. One or more positions at which the third downlink signal is transmitted indicate the one or more first channel detection subbands that pass the channel detection. The determining module122may include:a fourth determining submodule161that is configured to determine the one or more positions at which the third downlink signal is transmitted; anda fifth determining submodule162that is configured to determine the one or more first channel detection subbands based on the one or more positions at which the third downlink signal is transmitted. In an example as illustrated inFIG.17, on the basis of the apparatus illustrated inFIG.16, the fifth determining submodule162may include:a second acquiring unit171that is configured to acquire a second correspondence between preset downlink signal transmission positions and preset channel detection subbands passing the channel detection; anda second determining unit172that is configured to determine, based on the second correspondence, the one or more first channel detection subbands corresponding to the one or more positions at which the third downlink signal is transmitted. In an example as illustrated inFIG.18, on the basis of the apparatus illustrated inFIG.12, the channel indication signal includes second downlink control signaling. 
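Before detailing that second downlink control signaling case, the sketch below summarizes how the determining module122might dispatch among the variants of FIGS.13-18; every attribute of the received signal object, as well as the correspondence tables reused from the earlier sketches, is an assumption made only for the illustration.

def determine_first_subbands(signal):
    if signal.kind == "first_dl_signal_plus_dci":        # FIG.13: identification information
        return set(signal.identification_info)
    if signal.kind == "second_dl_signal":                 # FIGS.14-15: sequence value
        return set(FIRST_CORRESPONDENCE[signal.sequence_value])
    if signal.kind == "third_dl_signal":                  # FIGS.16-17: transmission positions
        return {SECOND_CORRESPONDENCE[p] for p in signal.positions}
    if signal.kind == "second_dci":                       # FIG.18: explicit field or CRC scrambling
        return set(signal.indicated_subbands)
    raise ValueError("unknown channel indication signal")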
A designated information field of the second downlink control signaling includes first indication information for explicitly indicating the one or more first channel detection subbands, or a CRC scrambling sequence of the second downlink control signaling includes second indication information for implicitly indicating the one or more first channel detection subbands. The determining module122may include:a sixth determining submodule181that is configured to determine the one or more first channel detection subbands based on the first indication information or the second indication information. According to the above example, after the channel indication signal, which is transmitted by the base station and indicates the one or more first channel detection subbands that pass the channel detection, is received, the one or more first channel detection subbands that pass the channel detection can be accurately determined based on the channel indication signal, thereby reducing an energy consumption for the channel detection and improving a data transmission performance. In particular, it can adopt corresponding determination schemes according to different contents included in the channel indication signal, thereby enriching a channel indication diversity and improving a channel indication reliability and a channel indication accuracy. Since the apparatus examples essentially correspond to the method examples, reference may be made to the description of related parts of the method examples. The apparatus examples described above are merely illustrative, where the units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units, that is, may be located in one place or distributed to multiple units in a network. Some or all of the modules may be selected according to actual needs to achieve the objectives of the present disclosure. It can be understood and implemented by those of ordinary skill in the art without any creative effort. The present disclosure also provides a non-transitory computer-readable storage medium having a computer program stored thereon, and the computer program is configured to perform the channel indication method described in any one ofFIGS.1-6. The present disclosure also provides a non-transitory computer-readable storage medium having a computer program stored thereon, and the computer program is configured to perform the channel indication methods described inFIG.7. The present disclosure also provides a channel indication apparatus, configured in a base station working on an unlicensed spectrum, and the apparatus includes:one or more processors and a memory for storing instructions executable by the one or more processors. The one or more processors are configured to:determine one or more first channel detection subbands that pass a channel detection;generate a channel indication signal to indicate the one or more first channel detection subbands that pass the channel detection; andtransmit the channel indication signal to a terminal to determine, based on the channel indication signal, the one or more first channel detection subbands that pass the channel detection. As illustrated inFIG.19, it is a structure schematic diagram of a channel indication apparatus illustrated according to an example. The apparatus1900may be provided as a base station. 
Referring toFIG.19, the apparatus1900includes a processing component1922, a wireless transmission/reception component1924, an antenna component1926, and a signal processing part peculiar to the wireless interface. The processing component1922may further include one or more processors. One of the processors of the processing component1922may be configured to perform any one of the above channel indication methods illustrated inFIGS.1-6. The present disclosure also provides a channel indication apparatus, configured in a terminal working on an unlicensed spectrum, and the apparatus includes:one or more processors and a memory for storing instructions executable by the one or more processors. The one or more processors are configured to:receive a channel indication signal from a base station, where the channel indication signal indicates one or more first channel detection subbands that pass a channel detection; anddetermine, based on the channel indication signal, the one or more first channel detection subbands that pass the channel detection. FIG.20is a structure schematic diagram of a channel indication apparatus illustrated according to an example. As illustrated inFIG.20, the channel indication apparatus2000according to an example may be a terminal, such as a computer, a mobile phone, a digital broadcasting terminal, a messaging device, a game console, a tablet device, a medical device, fitness equipment, and a personal digital assistant. Referring toFIG.20, the apparatus2000may include one or more of the following components: a processing component2001, a memory2002, a power supply component2003, a multimedia component2004, an audio component2005, an input/output (I/O) interface2006, a sensor component2007, and a communication component2008. The processing component2001generally controls the overall operations of the apparatus2000, such as operations associated with display, phone calls, data communications, camera operations, and recording operations. The processing component2001may include one or more processors2009to execute instructions to complete all or part of the steps of the above methods illustrated inFIG.7. In addition, the processing component2001may include one or more modules to facilitate interaction between the processing component2001and other components. For example, the processing component2001may include a multimedia module to facilitate the interaction between the multimedia component2004and the processing component2001. The memory2002is configured to store various types of data to support the operation of the apparatus2000. Examples of such data include instructions for any application or method operated on the apparatus2000, contact data, phonebook data, messages, pictures, videos, and the like. The memory2002may be implemented by any type of volatile or non-volatile storage device or a combination thereof, such as static random access memory (SRAM), electrically erasable programmable read only memory (EEPROM), erasable programmable read only memory (EPROM), programmable read only memory (PROM), read only memory (ROM), magnetic memory, flash memory, disk or optical disk. The power supply component2003provides power to various components of the apparatus2000. The power supply component2003may include a power supply management system, one or more power supplies, and other components associated with generating, managing, and distributing power for the apparatus2000. 
The multimedia component2004includes a screen providing an output interface between the apparatus2000and a user. In some examples, the screen may include a liquid crystal display (LCD) and a touch panel (TP). If the screen includes the TP, the screen may be implemented as a touch screen to receive input signals from the user. The TP may include one or more touch sensors to sense touches, swipes, and gestures on the TP. The touch sensors may not only sense a boundary of a touch or swipe, but also sense a duration and a pressure associated with the touch or swipe. In some examples, the multimedia component2004includes a front camera and/or a rear camera. The front camera and/or rear camera may receive external multimedia data when the apparatus2000is in an operating mode, such as a photographing mode or a video mode. Each of the front camera and the rear camera may use a fixed optical lens system or have focal length and optical zoom capabilities. The audio component2005is configured to output and/or input an audio signal. For example, the audio component2005includes a microphone (MIC) that is configured to receive an external audio signal when the apparatus2000is in an operating mode, such as a call mode, a recording mode, and a voice recognition mode. The received audio signal may be further stored in the memory2002or sent via the communication component2008. In some examples, the audio component2005also includes a speaker for outputting an audio signal. The I/O interface2006provides an interface between the processing component2001and a peripheral interface module. The above peripheral interface module may be a keyboard, a click wheel, buttons, or the like. These buttons may include, but are not limited to, a home button, a volume button, a start button, and a lock button. The sensor component2007includes one or more sensors to provide the apparatus2000with status assessments in various aspects. For example, the sensor component2007may detect an open/closed state of the apparatus2000and a relative positioning of components such as the display and keypad of the apparatus2000, and the sensor component2007may also detect a change in position of the apparatus2000or a component of the apparatus2000, the presence or absence of user contact with the apparatus2000, orientation or acceleration/deceleration of the apparatus2000, and temperature change of the apparatus2000. The sensor component2007may include a proximity sensor configured to detect the presence of a nearby object without any physical contact. The sensor component2007may further include an optical sensor, such as a Complementary Metal-Oxide-Semiconductor (CMOS) or Charge-Coupled Device (CCD) image sensor, which is used in imaging applications. In some examples, the sensor component2007may also include an acceleration sensor, a gyro sensor, a magnetic sensor, a pressure sensor, or a temperature sensor. The communication component2008is configured to facilitate wired or wireless communication between the apparatus2000and other devices. The apparatus2000may access a wireless network based on a communication standard, such as Wi-Fi, 2G, 3G, or a combination thereof. In an example, the communication component2008receives broadcast signals or broadcast-related information from an external broadcast management system via a broadcast channel. In an example, the communication component2008also includes a near field communication (NFC) module to facilitate short-range communication.
For example, the NFC module may be implemented based on a radio frequency identification (RFID) technology, an infrared data association (IrDA) technology, an ultra-wideband (UWB) technology, a Blue Tooth (BT) technology and other technologies. In an example, the apparatus2000may be implemented by one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), controllers, microcontrollers, microprocessors, or other electronic components for performing the above methods. In some examples, there is also provided a non-transitory computer-readable storage medium including instructions, such as the memory2002including instructions executable by the processor2009of the apparatus2000to implement the above methods. For example, the non-transitory computer-readable storage medium may be a read-only memory (ROM), a random access memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, and the like. When instructions in the storage medium are executed by the processor, the apparatus2000can execute any one of the channel indication methods described above. Other implementations of the present disclosure will be readily apparent to those skilled in the art after implementing the disclosure by referring to the specification. The present disclosure is intended to cover any variations, uses, or adaptations of the present disclosure that are in accordance with the general principles thereof and include common general knowledge or conventional technical means in the art that are not disclosed in the present disclosure. The specification and examples therein are only illustrative, and the scope and spirit of the present disclosure are to be indicated by appended claims. It should be understood that the present disclosure is not limited to the above described accurate structures illustrated in the drawings, and various modifications and changes can be made to the present disclosure without departing from the scope thereof. The scope of the present disclosure is to be limited only by the appended claims. | 53,548 |
11943174 | DETAILED DESCRIPTION The detailed description set forth below in connection with the appended drawings is intended as a description of various configurations and is not intended to represent the configurations in which the concepts described herein may be practiced. The detailed description includes specific details for the purposes of providing a thorough understanding of various concepts. However, it will be apparent to those skilled in the art that these concepts may be practiced without these specific details. In some instances, well-known structures and components are shown in block diagram form in order to avoid obscuring such concepts. Several aspects of telecommunication systems will now be presented with reference to various apparatus and methods. These apparatus and methods will be described in the following detailed description and illustrated in the accompanying drawings by various blocks, modules, components, circuits, steps, processes, algorithms, or the like (collectively referred to as “elements”). These elements may be implemented using electronic hardware, computer software, or any combination thereof. Whether such elements are implemented as hardware or software depends upon the particular application and design constraints imposed on the overall system. By way of example, an element, or any portion of an element, or any combination of elements may be implemented with a “processing system” that includes one or more processors. Examples of processors include microprocessors, microcontrollers, digital signal processors (DSPs), field programmable gate arrays (FPGAs), programmable logic devices (PLDs), state machines, gated logic, discrete hardware circuits, and other suitable hardware configured to perform the various functionality described throughout this disclosure. One or more processors in the processing system may execute software. Software shall be construed broadly to mean instructions, instruction sets, code, code segments, program code, programs, subprograms, software modules, applications, software applications, software packages, routines, subroutines, objects, executables, threads of execution, procedures, functions, or the like, whether referred to as software, firmware, middleware, microcode, hardware description language, or otherwise. Accordingly, in one or more example embodiments, the functions described may be implemented in hardware, software, firmware, or any combination thereof. If implemented in software, the functions may be stored on or encoded as one or more instructions or code on a computer-readable medium. Computer-readable media includes computer storage media. Storage media may be any available media that can be accessed by a computer. By way of example, and not limitation, such computer-readable media can include a random-access memory (RAM), a read-only memory (ROM), an electrically erasable programmable ROM (EEPROM), compact disk ROM (CD-ROM) or other optical disk storage, magnetic disk storage or other magnetic storage devices, combinations of the aforementioned types of computer-readable media, or any other medium that can be used to store computer executable code in the form of instructions or data structures that can be accessed by a computer. While aspects may be described herein using terminology commonly associated with a 5G or New Radio (NR) radio access technology (RAT), aspects of the present disclosure can be applied to other RATs, such as a 3G RAT, a 4G RAT, and/or a RAT subsequent to 5G (e.g., 6G). 
FIG.1is a diagram illustrating a wireless network100in which aspects of the present disclosure may be practiced. The wireless network100may be or may include elements of a 5G (e.g., NR) network and/or a 4G (e.g., Long Term Evolution (LTE)) network, among other examples. The wireless network100may include one or more base stations110(shown as a BS110a, a BS110b, a BS110c, and a BS110d), a user equipment (UE)120or multiple UEs120(shown as a UE120a, a UE120b, a UE120c, a UE120d, and a UE120e), and/or other network entities. A base station110is an entity that communicates with UEs120. A base station110(sometimes referred to as a BS) may include, for example, an NR base station, an LTE base station, a Node B, an eNB (e.g., in 4G), a gNB (e.g., in 5G), an access point, and/or a transmission reception point (TRP). Each base station110may provide communication coverage for a particular geographic area. In the Third Generation Partnership Project (3GPP), the term “cell” can refer to a coverage area of a base station110and/or a base station subsystem serving this coverage area, depending on the context in which the term is used. A base station110may provide communication coverage for a macro cell, a pico cell, a femto cell, and/or another type of cell. A macro cell may cover a relatively large geographic area (e.g., several kilometers in radius) and may allow unrestricted access by UEs120with service subscriptions. A pico cell may cover a relatively small geographic area and may allow unrestricted access by UEs120with service subscription. A femto cell may cover a relatively small geographic area (e.g., a home) and may allow restricted access by UEs120having association with the femto cell (e.g., UEs120in a closed subscriber group (CSG)). A base station110for a macro cell may be referred to as a macro base station. A base station110for a pico cell may be referred to as a pico base station. A base station110for a femto cell may be referred to as a femto base station or an in-home base station. In the example shown inFIG.1, the BS110amay be a macro base station for a macro cell102a, the BS110bmay be a pico base station for a pico cell102b, and the BS110cmay be a femto base station for a femto cell102c. A base station may support one or multiple (e.g., three) cells. In some examples, a cell may not necessarily be stationary, and the geographic area of the cell may move according to the location of a base station110that is mobile (e.g., a mobile base station). In some examples, the base stations110may be interconnected to one another and/or to one or more other base stations110or network nodes (not shown) in the wireless network100through various types of backhaul interfaces, such as a direct physical connection or a virtual network, using any suitable transport network. The wireless network100may include one or more relay stations. A relay station is an entity that can receive a transmission of data from an upstream station (e.g., a base station110or a UE120) and send a transmission of the data to a downstream station (e.g., a UE120or a base station110). A relay station may be a UE120that can relay transmissions for other UEs120. In the example shown inFIG.1, the BS110d(e.g., a relay base station) may communicate with the BS110a(e.g., a macro base station) and the UE120din order to facilitate communication between the BS110aand the UE120d. A base station110that relays communications may be referred to as a relay station, a relay base station, a relay, or the like. 
The wireless network100may be a heterogeneous network that includes base stations110of different types, such as macro base stations, pico base stations, femto base stations, relay base stations, or the like. These different types of base stations110may have different transmit power levels, different coverage areas, and/or different impacts on interference in the wireless network100. For example, macro base stations may have a high transmit power level (e.g., 5 to 40 watts) whereas pico base stations, femto base stations, and relay base stations may have lower transmit power levels (e.g., 0.1 to 2 watts). A network controller130may couple to or communicate with a set of base stations110and may provide coordination and control for these base stations110. The network controller130may communicate with the base stations110via a backhaul communication link. The base stations110may communicate with one another directly or indirectly via a wireless or wireline backhaul communication link. The UEs120may be dispersed throughout the wireless network100, and each UE120may be stationary or mobile. A UE120may include, for example, an access terminal, a terminal, a mobile station, and/or a subscriber unit. A UE120may be a cellular phone (e.g., a smart phone), a personal digital assistant (PDA), a wireless modem, a wireless communication device, a handheld device, a laptop computer, a cordless phone, a wireless local loop (WLL) station, a tablet, a camera, a gaming device, a netbook, a smartbook, an ultrabook, a medical device, a biometric device, a wearable device (e.g., a smart watch, smart clothing, smart glasses, a smart wristband, smart jewelry (e.g., a smart ring or a smart bracelet)), an entertainment device (e.g., a music device, a video device, and/or a satellite radio), a vehicular component or sensor, a smart meter/sensor, industrial manufacturing equipment, a global positioning system device, and/or any other suitable device that is configured to communicate via a wireless medium. Some UEs120may be considered machine-type communication (MTC) or evolved or enhanced machine-type communication (eMTC) UEs. An MTC UE and/or an eMTC UE may include, for example, a robot, a drone, a remote device, a sensor, a meter, a monitor, and/or a location tag, that may communicate with a base station, another device (e.g., a remote device), or some other entity. Some UEs120may be considered Internet-of-Things (IoT) devices, and/or may be implemented as NB-IoT (narrowband IoT) devices. Some UEs120may be considered a Customer Premises Equipment. A UE120may be included inside a housing that houses components of the UE120, such as processor components and/or memory components. In some examples, the processor components and the memory components may be coupled together. For example, the processor components (e.g., one or more processors) and the memory components (e.g., a memory) may be operatively coupled, communicatively coupled, electronically coupled, and/or electrically coupled. In general, any number of wireless networks100may be deployed in a given geographic area. Each wireless network100may support a particular RAT and may operate on one or more frequencies. A RAT may be referred to as a radio technology, an air interface, or the like. A frequency may be referred to as a carrier, a frequency channel, or the like. Each frequency may support a single RAT in a given geographic area in order to avoid interference between wireless networks of different RATs. In some cases, 5G RAT networks may be deployed. 
In some examples, two or more UEs120(e.g., shown as UE120aand UE120e) may communicate directly using one or more sidelink channels (e.g., without using a base station110as an intermediary to communicate with one another). For example, the UEs120may communicate using peer-to-peer (P2P) communications, device-to-device (D2D) communications, a vehicle-to-everything (V2X) protocol (e.g., which may include a vehicle-to-vehicle (V2V) protocol, a vehicle-to-infrastructure (V2I) protocol, or a vehicle-to-pedestrian (V2P) protocol), and/or a mesh network. In such examples, a UE120may perform scheduling operations, resource selection operations, and/or other operations described elsewhere herein as being performed by the base station110. Devices of the wireless network100may communicate using the electromagnetic spectrum, which may be subdivided by frequency or wavelength into various classes, bands, channels, or the like. For example, devices of the wireless network100may communicate using one or more operating bands. In 5G, two initial operating bands have been identified as frequency range designations FR1 (410 MHz-7.125 GHz) and FR2 (24.25 GHz-52.6 GHz). It should be understood that although a portion of FR1 is greater than 6 GHz, FR1 is often referred to (interchangeably) as a “Sub-6 GHz” band in various documents and articles. A similar nomenclature issue sometimes occurs with regard to FR2, which is often referred to (interchangeably) as a “millimeter wave” band in documents and articles, despite being different from the extremely high frequency (EHF) band (30 GHz-300 GHz) which is identified by the International Telecommunications Union (ITU) as a “millimeter wave” band. The frequencies between FR1 and FR2 are often referred to as mid-band frequencies. Recent 5G studies have identified an operating band for these mid-band frequencies as frequency range designation FR3 (7.125 GHz-24.25 GHz). Frequency bands falling within FR3 may inherit FR1 characteristics and/or FR2 characteristics, and thus may effectively extend features of FR1 and/or FR2 into mid-band frequencies. In addition, higher frequency bands are currently being explored to extend 5G operation beyond 52.6 GHz. For example, three higher operating bands have been identified as frequency range designations FR4a or FR4-1 (52.6 GHz-71 GHz), FR4 (52.6 GHz-114.25 GHz), and FR5 (114.25 GHz-300 GHz). Each of these higher frequency bands falls within the EHF band. With the above examples in mind, unless specifically stated otherwise, it should be understood that the term “sub-6 GHz” or the like, if used herein, may broadly represent frequencies that may be less than 6 GHz, may be within FR1, or may include mid-band frequencies. Further, unless specifically stated otherwise, it should be understood that the term “millimeter wave” or the like, if used herein, may broadly represent frequencies that may include mid-band frequencies, may be within FR2, FR4, FR4-a or FR4-1, and/or FR5, or may be within the EHF band. It is contemplated that the frequencies included in these operating bands (e.g., FR1, FR2, FR3, FR4, FR4-a, FR4-1, and/or FR5) may be modified, and techniques described herein are applicable to those modified frequency ranges. In some aspects, the UE120may include a communication manager140. 
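Returning to the operating bands enumerated above, their boundaries can be captured in a small lookup; the overlap between FR4 and FR4a/FR4-1 is resolved here by returning every matching designation, which is a choice made only for this illustration.

FREQUENCY_RANGES_HZ = {
    "FR1": (410e6, 7.125e9),
    "FR3": (7.125e9, 24.25e9),
    "FR2": (24.25e9, 52.6e9),
    "FR4a/FR4-1": (52.6e9, 71e9),
    "FR4": (52.6e9, 114.25e9),
    "FR5": (114.25e9, 300e9),
}

def classify(frequency_hz):
    # Return every frequency range designation whose interval contains the carrier frequency.
    return [name for name, (low, high) in FREQUENCY_RANGES_HZ.items() if low <= frequency_hz < high]

assert classify(28e9) == ["FR2"]                   # a 28 GHz millimeter wave carrier falls in FR2
assert classify(60e9) == ["FR4a/FR4-1", "FR4"]     # 60 GHz falls in both FR4a/FR4-1 and FR4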
As described in more detail elsewhere herein, the communication manager140may receive configuration information for a sounding reference signal (SRS) resource set, wherein the configuration information indicates a Doppler tracking usage type for the SRS resource set, that the SRS resource set is an aperiodic SRS resource set, and a configuration for one or more SRS resource identifiers associated with the SRS resource set; receive downlink control information (DCI) triggering a transmission of the SRS resource set associated with the one or more SRS resource identifiers, wherein the DCI indicates one or more parameters for the SRS resource set, and wherein the one or more parameters indicate a modified time gap or a modified number of resources associated with the SRS resource set; and transmit the SRS resource set based at least in part on the one or more parameters. Additionally, or alternatively, the communication manager140may perform one or more other operations described herein. In some aspects, the base station110may include a communication manager150. As described in more detail elsewhere herein, the communication manager150may transmit, to a UE120, configuration information for an SRS resource set, wherein the configuration information indicates a Doppler tracking usage type for the SRS resource set, that the SRS resource set is an aperiodic SRS resource set, and a configuration for one or more SRS resource identifiers associated with the SRS resource set; transmit, to the UE120, DCI triggering a transmission of the SRS resource set associated with the one or more SRS resource identifiers, wherein the DCI indicates one or more parameters for the SRS resource set, and wherein the one or more parameters indicate a modified time gap or a modified number of resources associated with the SRS resource set; and receive, from the UE120, the SRS resource set based at least in part on the one or more parameters. Additionally, or alternatively, the communication manager150may perform one or more other operations described herein. As indicated above,FIG.1is provided as an example. Other examples may differ from what is described with regard toFIG.1. FIG.2is a diagram illustrating an example200of a base station110in communication with a UE120in a wireless network100, in accordance with the present disclosure. The base station110may be equipped with a set of antennas234athrough234t, such as T antennas (T≥1). The UE120may be equipped with a set of antennas252athrough252r, such as R antennas (R≥1). At the base station110, a transmit processor220may receive data, from a data source212, intended for the UE120(or a set of UEs120). The transmit processor220may select one or more modulation and coding schemes (MCSs) for the UE120based at least in part on one or more channel quality indicators (CQIs) received from that UE120. The UE120may process (e.g., encode and modulate) the data for the UE120based at least in part on the MCS(s) selected for the UE120and may provide data symbols for the UE120. The transmit processor220may process system information (e.g., for semi-static resource partitioning information (SRPI)) and control information (e.g., CQI requests, grants, and/or upper layer signaling) and provide overhead symbols and control symbols. 
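Returning to the communication manager140described at the start of this passage, one hypothetical way to model the configured SRS resource set and the DCI-triggered override is sketched below; every field name is an assumption made for the example rather than a definition taken from any specification.

from dataclasses import dataclass, replace
from typing import List, Optional

@dataclass
class SrsResourceSetConfig:
    usage: str                      # e.g., a Doppler tracking usage type
    resource_type: str              # "aperiodic" for the case described here
    srs_resource_ids: List[int]     # configured SRS resource identifiers
    time_gap_slots: int             # default time gap between SRS resources
    num_resources: int              # default number of resources in the set

@dataclass
class SrsTriggerDci:
    modified_time_gap_slots: Optional[int] = None
    modified_num_resources: Optional[int] = None

def apply_dci_trigger(config, dci):
    # Use the DCI-indicated parameters when present; otherwise keep the configured defaults.
    return replace(
        config,
        time_gap_slots=dci.modified_time_gap_slots or config.time_gap_slots,
        num_resources=dci.modified_num_resources or config.num_resources,
    )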
The transmit processor220may generate reference symbols for reference signals (e.g., a cell-specific reference signal (CRS) or a demodulation reference signal (DMRS)) and synchronization signals (e.g., a primary synchronization signal (PSS) or a secondary synchronization signal (SSS)). A transmit (TX) multiple-input multiple-output (MIMO) processor230may perform spatial processing (e.g., precoding) on the data symbols, the control symbols, the overhead symbols, and/or the reference symbols, if applicable, and may provide a set of output symbol streams (e.g., T output symbol streams) to a corresponding set of modems232(e.g., T modems), shown as modems232athrough232t. For example, each output symbol stream may be provided to a modulator component (shown as MOD) of a modem232. Each modem232may use a respective modulator component to process a respective output symbol stream (e.g., for OFDM) to obtain an output sample stream. Each modem232may further use a respective modulator component to process (e.g., convert to analog, amplify, filter, and/or upconvert) the output sample stream to obtain a downlink signal. The modems232athrough232tmay transmit a set of downlink signals (e.g., T downlink signals) via a corresponding set of antennas234(e.g., T antennas), shown as antennas234athrough234t. At the UE120, a set of antennas252(shown as antennas252athrough252r) may receive the downlink signals from the base station110and/or other base stations110and may provide a set of received signals (e.g., R received signals) to a set of modems254(e.g., R modems), shown as modems254athrough254r. For example, each received signal may be provided to a demodulator component (shown as DEMOD) of a modem254. Each modem254may use a respective demodulator component to condition (e.g., filter, amplify, downconvert, and/or digitize) a received signal to obtain input samples. Each modem254may use a demodulator component to further process the input samples (e.g., for OFDM) to obtain received symbols. A MIMO detector256may obtain received symbols from the modems254, may perform MIMO detection on the received symbols if applicable, and may provide detected symbols. A receive processor258may process (e.g., demodulate and decode) the detected symbols, may provide decoded data for the UE120to a data sink260, and may provide decoded control information and system information to a controller/processor280. The term “controller/processor” may refer to one or more controllers, one or more processors, or a combination thereof. A channel processor may determine a reference signal received power (RSRP) parameter, a received signal strength indicator (RSSI) parameter, a reference signal received quality (RSRQ) parameter, and/or a CQI parameter, among other examples. In some examples, one or more components of the UE120may be included in a housing284. The network controller130may include a communication unit294, a controller/processor290, and a memory292. The network controller130may include, for example, one or more devices in a core network. The network controller130may communicate with the base station110via the communication unit294. One or more antennas (e.g., antennas234athrough234tand/or antennas252athrough252r) may include, or may be included within, one or more antenna panels, one or more antenna groups, one or more sets of antenna elements, and/or one or more antenna arrays, among other examples. 
An antenna panel, an antenna group, a set of antenna elements, and/or an antenna array may include one or more antenna elements (within a single housing or multiple housings), a set of coplanar antenna elements, a set of non-coplanar antenna elements, and/or one or more antenna elements coupled to one or more transmission and/or reception components, such as one or more components ofFIG.2. On the uplink, at the UE120, a transmit processor264may receive and process data from a data source262and control information (e.g., for reports that include RSRP, RSSI, RSRQ, and/or CQI) from the controller/processor280. The transmit processor264may generate reference symbols for one or more reference signals. The symbols from the transmit processor264may be precoded by a TX MIMO processor266if applicable, further processed by the modems254(e.g., for DFT-s-OFDM or CP-OFDM), and transmitted to the base station110. In some examples, the modem254of the UE120may include a modulator and a demodulator. In some examples, the UE120includes a transceiver. The transceiver may include any combination of the antenna(s)252, the modem(s)254, the MIMO detector256, the receive processor258, the transmit processor264, and/or the TX MIMO processor266. The transceiver may be used by a processor (e.g., the controller/processor280) and the memory282to perform aspects of any of the methods described herein. At the base station110, the uplink signals from UE120and/or other UEs may be received by the antennas234, processed by the modem232(e.g., a demodulator component, shown as DEMOD, of the modem232), detected by a MIMO detector236if applicable, and further processed by a receive processor238to obtain decoded data and control information sent by the UE120. The receive processor238may provide the decoded data to a data sink239and provide the decoded control information to the controller/processor240. The base station110may include a communication unit244and may communicate with the network controller130via the communication unit244. The base station110may include a scheduler246to schedule one or more UEs120for downlink and/or uplink communications. In some examples, the modem232of the base station110may include a modulator and a demodulator. In some examples, the base station110includes a transceiver. The transceiver may include any combination of the antenna(s)234, the modem(s)232, the MIMO detector236, the receive processor238, the transmit processor220, and/or the TX MIMO processor230. The transceiver may be used by a processor (e.g., the controller/processor240) and the memory242to perform aspects of any of the methods described herein. The controller/processor240of the base station110, the controller/processor280of the UE120, and/or any other component(s) ofFIG.2may perform one or more techniques associated with dynamic parameter adaptation for aperiodic Doppler tracking SRS resource sets, as described in more detail elsewhere herein. For example, the controller/processor240of the base station110, the controller/processor280of the UE120, and/or any other component(s) ofFIG.2may perform or direct operations of, for example, process900ofFIG.9, process1000ofFIG.10, and/or other processes as described herein. The memory242and the memory282may store data and program codes for the base station110and the UE120, respectively. In some examples, the memory242and/or the memory282may include a non-transitory computer-readable medium storing one or more instructions (e.g., code and/or program code) for wireless communication. 
For example, the one or more instructions, when executed (e.g., directly, or after compiling, converting, and/or interpreting) by one or more processors of the base station110and/or the UE120, may cause the one or more processors, the UE120, and/or the base station110to perform or direct operations of, for example, process900ofFIG.9, process1000ofFIG.10, and/or other processes as described herein. In some examples, executing instructions may include running the instructions, converting the instructions, compiling the instructions, and/or interpreting the instructions, among other examples. In some aspects, the UE120includes means for receiving configuration information for an SRS resource set, wherein the configuration information indicates a Doppler tracking usage type for the SRS resource set, that the SRS resource set is an aperiodic SRS resource set, and a configuration for one or more SRS resource identifiers associated with the SRS resource set; means for receiving DCI triggering a transmission of the SRS resource set associated with the one or more SRS resource identifiers, wherein the DCI indicates one or more parameters for the SRS resource set, and wherein the one or more parameters indicate a modified time gap or a modified number of resources associated with the SRS resource set; and/or means for transmitting the SRS resource set based at least in part on the one or more parameters. The means for the UE120to perform operations described herein may include, for example, one or more of communication manager140, antenna252, modem254, MIMO detector256, receive processor258, transmit processor264, TX MIMO processor266, controller/processor280, or memory282. In some aspects, the base station110includes means for transmitting, to a UE, configuration information for an SRS resource set, wherein the configuration information indicates a Doppler tracking usage type for the SRS resource set, that the SRS resource set is an aperiodic SRS resource set, and a configuration for one or more SRS resource identifiers associated with the SRS resource set; means for transmitting, to the UE, DCI triggering a transmission of the SRS resource set associated with the one or more SRS resource identifiers, wherein the DCI indicates one or more parameters for the SRS resource set, and wherein the one or more parameters indicate a modified time gap or a modified number of resources associated with the SRS resource set; and/or means for receiving, from the UE, the SRS resource set based at least in part on the one or more parameters. The means for the base station110to perform operations described herein may include, for example, one or more of communication manager150, transmit processor220, TX MIMO processor230, modem232, antenna234, MIMO detector236, receive processor238, controller/processor240, memory242, or scheduler246. While blocks inFIG.2are illustrated as distinct components, the functions described above with respect to the blocks may be implemented in a single hardware, software, or combination component or in various combinations of components. For example, the functions described with respect to the transmit processor264, the receive processor258, and/or the TX MIMO processor266may be performed by or under the control of the controller/processor280. As indicated above,FIG.2is provided as an example. Other examples may differ from what is described with regard toFIG.2. FIG.3is a diagram illustrating an example300of physical channels and reference signals in a wireless network. 
As shown inFIG.3, downlink channels and downlink reference signals may carry information from a base station110to a UE120, and uplink channels and uplink reference signals may carry information from a UE120to a base station110. As shown, a downlink channel may include a physical downlink control channel (PDCCH) that carries downlink control information (DCI), a physical downlink shared channel (PDSCH) that carries downlink data, or a physical broadcast channel (PBCH) that carries system information, among other examples. In some aspects, PDSCH communications may be scheduled by PDCCH communications. As further shown, an uplink channel may include a physical uplink control channel (PUCCH) that carries uplink control information (UCI), a physical uplink shared channel (PUSCH) that carries uplink data, or a physical random access channel (PRACH) used for initial network access, among other examples. In some aspects, the UE120may transmit acknowledgement (ACK) or negative acknowledgement (NACK) feedback (e.g., ACK/NACK feedback or ACK/NACK information) in UCI on the PUCCH and/or the PUSCH. As further shown, a downlink reference signal may include a synchronization signal block (SSB), a channel state information (CSI) reference signal (CSI-RS), a demodulation reference signal (DMRS), a positioning reference signal (PRS), a phase tracking reference signal (PTRS), and/or a tracking reference signal (TRS), among other examples. As also shown, an uplink reference signal may include an SRS, a DMRS, or a PTRS, among other examples. An SSB may carry information used for initial network acquisition and synchronization, such as a primary synchronization signal (PSS), a secondary synchronization signal (SSS), a PBCH, and a PBCH DMRS. An SSB is sometimes referred to as a synchronization signal/PBCH (SS/PBCH) block. In some aspects, the base station110may transmit multiple SSBs on multiple corresponding beams, and the SSBs may be used for beam selection. A CSI-RS may carry information used for downlink channel estimation (e.g., downlink CSI acquisition), which may be used for scheduling, link adaptation, or beam management, among other examples. The base station110may configure a set of CSI-RSs for the UE120, and the UE120may measure the configured set of CSI-RSs. Based at least in part on the measurements, the UE120may perform channel estimation and may report channel estimation parameters to the base station110(e.g., in a CSI report), such as a channel quality indicator (CQI), a precoding matrix indicator (PMI), a CSI-RS resource indicator (CRI), a layer indicator (LI), a rank indicator (RI), or a reference signal received power (RSRP), among other examples. The base station110may use the CSI report to select transmission parameters for downlink communications to the UE120, such as a number of transmission layers (e.g., a rank), a precoding matrix (e.g., a precoder), a modulation and coding scheme (MCS), or a refined downlink beam (e.g., using a beam refinement procedure or a beam management procedure), among other examples. A DMRS may carry information used to estimate a radio channel for demodulation of an associated physical channel (e.g., PDCCH, PDSCH, PBCH, PUCCH, or PUSCH). The design and mapping of a DMRS may be specific to a physical channel for which the DMRS is used for estimation. DMRSs are UE-specific, can be beamformed, can be confined in a scheduled resource (e.g., rather than transmitted on a wideband), and can be transmitted only when necessary. 
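Returning to the CSI report described above, its contents might be modeled as follows; the field names simply follow the acronyms in the text, and the values are placeholders.

from dataclasses import dataclass

@dataclass
class CsiReport:
    cri: int          # CSI-RS resource indicator
    ri: int           # rank indicator
    pmi: int          # precoding matrix indicator
    cqi: int          # channel quality indicator
    li: int           # layer indicator
    rsrp_dbm: float   # reference signal received power

example_report = CsiReport(cri=0, ri=2, pmi=5, cqi=11, li=0, rsrp_dbm=-85.0)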
As shown, DMRSs are used for both downlink communications and uplink communications. A PTRS may carry information used to compensate for oscillator phase noise. Typically, the phase noise increases as the oscillator carrier frequency increases. Thus, PTRS can be utilized at high carrier frequencies, such as millimeter wave frequencies, to mitigate phase noise. The PTRS may be used to track the phase of the local oscillator and to enable suppression of phase noise and common phase error (CPE). As shown, PTRSs are used for both downlink communications (e.g., on the PDSCH) and uplink communications (e.g., on the PUSCH). A TRS may be a downlink reference signal (not shown inFIG.3) and may carry information used to assist in time domain and frequency domain tracking. The TRS may be used to track transmission path delay spread and/or Doppler spread. A TRS may be UE-specific. In some aspects, a TRS may be transmitted in a TRS burst. A TRS burst may consist of four OFDM symbols in two consecutive slots. In some aspects, a TRS may be associated with one or more CSI-RS configurations. For example, a TRS burst may use one or more CSI-RS resources. A PRS may carry information used to enable timing or ranging measurements of the UE120based on signals transmitted by the base station110to improve observed time difference of arrival (OTDOA) positioning performance. For example, a PRS may be a pseudo-random Quadrature Phase Shift Keying (QPSK) sequence mapped in diagonal patterns with shifts in frequency and time to avoid collision with cell-specific reference signals and control channels (e.g., a PDCCH). In general, a PRS may be designed to improve detectability by the UE120, which may need to detect downlink signals from multiple neighboring base stations in order to perform OTDOA-based positioning. Accordingly, the UE120may receive a PRS from multiple cells (e.g., a reference cell and one or more neighbor cells), and may report a reference signal time difference (RSTD) based on OTDOA measurements associated with the PRSs received from the multiple cells. In some aspects, the base station110may then calculate a position of the UE120based on the RSTD measurements reported by the UE120. An SRS may carry information used for uplink channel estimation, which may be used for scheduling, link adaptation, precoder selection, or beam management, among other examples. The base station110may configure one or more SRS resource sets for the UE120, and the UE120may transmit SRSs on the configured SRS resource sets. An SRS resource set may have a configured usage (e.g., as described in more detail elsewhere herein. In some examples, an SRS may be used for uplink CSI acquisition, downlink CSI acquisition for reciprocity-based operations, and/or uplink beam management, among other examples. The base station110may measure the SRSs, may perform channel estimation based at least in part on the measurements, and may use the SRS measurements to configure communications with the UE120. Reference signals may be used to increase the reliability and efficiency of communications between wireless devices. For example, a base station110may measure an uplink reference signal to select a configuration or other transmission parameters for communications between the base station110and a UE120. For example, the base station110may measure an uplink reference signal to estimate a delay spread, SNR, and/or a Doppler parameter (e.g., Doppler shift or Doppler spread) associated with the uplink channel, among other examples. 
“Doppler shift” refers to a shift or change in a frequency of a signal between a transmitter and a receiver. Doppler shift may sometimes be referred to as a frequency offset. For example, Doppler shift may occur when a transmitter of a signal is moving in relation to the receiver. The relative movement may shift the frequency of the signal, making the frequency of the signal received at the receiver different than the frequency of the signal transmitted at the transmitter. In other words, the frequency of the signal received by the receiver differs from the frequency of the signal that was originally emitted. “Doppler spread” refers to the widening of a spectrum of a narrow-band signal transmitted through a multipath propagation channel. Doppler spread may be caused by different Doppler shifts associated with the multiple propagation paths when there is relative motion between the transmitter and the receiver. For example, when there is no relative motion between the transmitter and the receiver, due to the multipath propagation channel, the receiver can receive the same signal at different times, because one copy of the signal uses a shorter path and arrives quickly, whereas another copy of the signal may user a longer path. Where there is relative motion between the transmitter and the receiver, signals on the different paths may arrive at the receiver at different times and with different frequencies (e.g., due to different Doppler shifts associated with each path). Doppler spread may be a measure of a difference in frequencies of signals on the paths associated with the multipath propagation channel. Doppler spread may sometimes be referred to as a channel time correlation or a channel time coherency characteristic for a multipath propagation channel. In some examples, such as in a high mobility environments (e.g., environments in which the UE120is traveling at high rates of speed, such as 500 kilometers per hour (km/h) or similar speeds), accurate Doppler parameter (e.g. Doppler shift) estimation may be needed for efficient Doppler pre-compensation, such as in a case of a multi TRP transmission to a UE in downlink. For example, in a high speed train (HST) scenario (e.g., where the UE120is mounted on a train or is location inside of a train), the base station110may pre-compensate for a Doppler shift experienced due to the high rate of speed (e.g., the base station110may apply Doppler shift pre-compensation for each TRP based on a Doppler shift reported, indicated to, or measured by the base station110. In some cases, the base station110may apply Doppler shift pre-compensation based on implicit reporting of the Doppler shift by a UE120where the UE120transmits reference signals (e.g., SRSs) using a frequency offset corresponding to, or defined, based on a Doppler shift measured (e.g., by the UE120) using one or more TRSs associated with one or more TRPs involved in a downlink transmission. Therefore, in some cases, the base station110may determine a Doppler shift pre-compensation for multi TRP transmission in downlink based on an SRS transmission (and the corresponding Doppler shift measurements based on the SRS by different TRPs) from the UE120. 
However, Doppler parameters for the uplink channel (“uplink Doppler parameters”), such as a Doppler shift or a Doppler spread, may not be known, which may prevent the base station110from selecting an uplink DMRS configuration that is properly tailored to the conditions of the uplink channel or from accurately pre-compensating for the Doppler parameters experienced by a UE120. Although the base station110may estimate Doppler parameters by measuring uplink reference signals from the UE120, the estimation may be inaccurate or unreliable because the reference signals transmitted by the UE120may be ill-suited for Doppler parameter estimation. For example, the temporal spacing between repetitions of a reference signal may be too large, small, or inconsistent for an accurate Doppler parameter estimation given the channel characteristics, SNR, UE speed range, subcarrier spacing and carrier frequency applicable for the uplink transmissions from the UE120. Moreover, different Doppler parameter estimation may require different temporal spacings between repetitions of a reference signal. For example, in scenarios where Doppler shift estimation is relevant (e.g., in high mobility scenarios, such as an HST scenario) and where meaningful Doppler spread is also experienced, it may be beneficial to estimate the uplink Doppler shift based on a relatively small time gap between repetitions of a reference signal (e.g., to enable the base station110to decorrelate the Doppler shift estimation from the time coherency decay caused by the Doppler spread experienced). Conversely, a reliability of a Doppler spread estimation may be improved by using a time gap of multiple symbols (e.g., based on a channel profile, UE120speed, and/or other parameters). Therefore, in scenarios where a base station110is to estimate both uplink Doppler shift and uplink Doppler spread (such as an HST scenario) using an uplink reference signal (such as an SRS), different time gaps between repetitions of the uplink reference signal may be required for estimating the different Doppler parameters. Additionally, as channel parameters or deployment parameters (e.g., a subcarrier spacing used by the UE120or a carrier frequency used by the UE120) change, a proper time gap for uplink Doppler parameter estimation may change. Therefore, it may be difficult to configure or dynamically update an uplink reference signal resource set only with two SRS resources or symbols with a fixed time gap between them to enable a base station110to properly estimate multiple Doppler parameters in different scenarios using the same uplink reference signal. As indicated above,FIG.3is provided as an example. Other examples may differ from what is described with regard toFIG.3. FIG.4is a diagram illustrating an example400of SRS resource sets. A base station110may configure a UE120with one or more SRS resource sets to allocate resources for SRS transmissions by the UE120. For example, a configuration for SRS resource sets may be indicated in a radio resource control (RRC) message (e.g., an RRC configuration message or an RRC reconfiguration message). At405, an SRS resource set may include one or more resources (e.g., shown as SRS resources), which may include time resources and/or frequency resources (e.g., a slot, a symbol, a resource block, and/or a periodicity for the time resources). At410, an SRS resource may include one or more antenna ports on which an SRS is to be transmitted (e.g., in a time-frequency resource). 
Thus, a configuration for an SRS resource set may indicate one or more time-frequency resources in which an SRS is to be transmitted and may indicate one or more antenna ports on which the SRS is to be transmitted in those time-frequency resources. In some aspects, the configuration for an SRS resource set may indicate a use case or type (e.g., in an SRS-SetUse information element or an SRS-ResourceSet information element) for the SRS resource set. For example, an SRS resource set may have a usage type of antenna switching, codebook, non-codebook, beam management, and/or positioning. An antenna switching SRS resource set may be used to measure downlink CSI with reciprocity between an uplink and downlink channel. For example, when there is reciprocity between an uplink channel and a downlink channel, a base station110may use an antenna switching SRS (e.g., an SRS transmitted using a resource of an antenna switching SRS resource set) to acquire downlink CSI (e.g., to determine a downlink precoder to be used to communicate with the UE120). A codebook SRS resource set may be used to assist in acquiring uplink CSI by the base station when a base station110indicates an uplink precoder to the UE120(e.g., codebook based PUSCH). For example, when the base station110is configured to indicate an uplink precoder to the UE120(e.g., using a precoder codebook), the base station110may use a codebook SRS (e.g., an SRS transmitted using a resource of a codebook SRS resource set) to acquire uplink CSI (e.g., to determine an uplink precoder to be indicated to the UE120and used by the UE120to communicate with the base station110). In some aspects, virtual ports (e.g., a combination of two or more antenna ports) with a maximum transmit power may be supported at least for a codebook SRS. A non-codebook SRS resource set may be used to indicate uplink CSI when the UE120selects an uplink precoder (e.g., instead of the base station110indicating an uplink precoder to be used by the UE120). For example, when the UE120is configured to select an uplink precoder, the base station110may use a non-codebook SRS (e.g., an SRS transmitted using a resource of a non-codebook SRS resource set) to assist in selection of transmission parameters for uplink (e.g. number of layers, precoding and/or MCS). In this case, the non-codebook SRS may be precoded using a precoder selected by the UE120(e.g., which may be implicitly indicated to the base station110through different hypotheses of the precoded uplink layers transmitted over the non-codebook based SRS ports). A beam management SRS resource set may be used to assist in UL beam management decisions for millimeter wave communications. An SRS resource can be configured as periodic, semi-persistent (sometimes referred to as semi-persistent scheduling (SPS)), or aperiodic. A periodic SRS resource may be configured via a configuration message that indicates a periodicity of the SRS resource (e.g., a slot-level periodicity, where the SRS resources occurs every Y slots) and a slot offset. In some cases, a periodic SRS resource may always be activated, and may not be dynamically activated or deactivated. A semi-persistent SRS resource may also be configured via a configuration message that indicates a periodicity and a slot offset for the semi-persistent SRS resource, and may be dynamically activated and deactivated (e.g., using DCI or a medium access control (MAC) control element (CE) (MAC-CE)). 
An aperiodic SRS resource may be triggered dynamically, such as via DCI (e.g., UE-specific DCI or group common DCI) or a MAC-CE. In some aspects, the UE 120 may be configured with a mapping between SRS ports (e.g., antenna ports) and corresponding SRS resources. The UE 120 may transmit an SRS on a particular SRS resource using an SRS port indicated in the configuration. In some aspects, an SRS resource may span N adjacent symbols within a slot (e.g., where N equals 1, 2, or 4). The UE 120 may be configured with X SRS ports (e.g., where X≤4). In some aspects, each of the X SRS ports may be mapped to a corresponding symbol of the SRS resource and used for transmission of an SRS in that symbol. As shown in FIG. 4, in some aspects, different SRS resource sets indicated to the UE 120 (e.g., having different usage configurations) may overlap (e.g., in time and/or in frequency, such as in the same slot). For example, at 415, a first SRS resource set (e.g., shown as SRS Resource Set 1) is shown as having an antenna switching usage or type. As shown, this example antenna switching SRS resource set includes a first SRS resource (shown as SRS Resource A) and a second SRS resource (shown as SRS Resource B). Thus, antenna switching SRSs may be transmitted in SRS Resource A (e.g., a first time-frequency resource) using antenna port 0 and antenna port 1 and may be transmitted in SRS Resource B (e.g., a second time-frequency resource) using antenna port 2 and antenna port 3. At 420, a second SRS resource set (e.g., shown as SRS Resource Set 2) may be associated with a codebook use case. As shown, this example codebook SRS resource set includes only the first SRS resource (shown as SRS Resource A). Thus, codebook SRSs may be transmitted in SRS Resource A (e.g., the first time-frequency resource) using antenna port 0 and antenna port 1. In this case, the UE 120 may not transmit codebook SRSs in SRS Resource B (e.g., the second time-frequency resource) using antenna port 2 and antenna port 3. In some cases, a base station 110 may compensate for a Doppler shift experienced by a UE 120 on signals transmitted by multiple TRPs simultaneously on the same resources in high mobility environments based on estimating the uplink Doppler shift using an SRS transmitted by the UE 120 (e.g., an implicit Doppler shift signaling or indication). A Doppler shift estimation using two repetitions of an SRS with a temporal spacing assumes that the two repetitions experience approximately the same channel (e.g., that the channels experienced by the two repetitions have a time correlation of approximately 1). Therefore, a temporal spacing (e.g., a time gap) between repetitions of an SRS used to estimate uplink Doppler shift may be based on a channel type (e.g., because the channel type may impact time correlation behavior), a Doppler spread (or time coherence) associated with the channel, a speed of the UE 120, an SNR, and/or a possibility of Doppler shift aliasing or phase ambiguity in time, among other examples (as described in more detail below). However, repetitions of SRSs (e.g., the existing SRS configurations) may be improperly spaced for Doppler shift estimation, which may result in an inaccurate Doppler shift estimation that negatively impacts the pre-compensation of the Doppler shift in downlink by the base station 110 based on Doppler shift estimations associated with the channel of each TRP. As indicated above, FIG. 4 is provided as an example. Other examples may differ from what is described with regard to FIG. 4.
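As a non-limiting illustration of the overlapping SRS resource sets of FIG. 4, the following sketch represents the example at 415 and 420 as a simple data structure (the class and field names are illustrative only and do not correspond to actual RRC information elements).

```python
# Minimal sketch of the FIG. 4 example: two SRS resource sets that overlap in one
# time-frequency resource but differ in usage and antenna ports. Names are illustrative.
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class SrsResource:
    name: str
    antenna_ports: Tuple[int, ...]  # ports used for SRS transmission in this resource

@dataclass
class SrsResourceSet:
    usage: str                      # e.g., "antennaSwitching" or "codebook"
    resources: List[SrsResource] = field(default_factory=list)

resource_a = SrsResource("SRS Resource A", antenna_ports=(0, 1))
resource_b = SrsResource("SRS Resource B", antenna_ports=(2, 3))

srs_resource_set_1 = SrsResourceSet("antennaSwitching", [resource_a, resource_b])  # at 415
srs_resource_set_2 = SrsResourceSet("codebook", [resource_a])                      # at 420
```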
FIG. 5 is a diagram illustrating an example 500 of uplink Doppler parameter estimation considerations. As described above, a base station 110 may use uplink Doppler parameters, among other factors, as a basis for selecting an appropriate uplink DMRS configuration that allows an increase in the spectral efficiency of the link. In some examples, the base station may determine the Doppler spread for an uplink channel based on the correlation in time between two repetitions of an uplink reference signal. The correlation in time (also referred to as "correlation" or a "time correlation parameter") between two repetitions of a reference signal may be determined by measuring corresponding aspects of two repetitions of a reference signal. "Repetitions of a reference signal" may refer to repeated transmissions of the same reference signal. In some examples (e.g., when referring to an SRS), repetitions of a reference signal may be referred to herein as pilots or pilot signals, and the resources used to carry the pilot signals (or reserved for carrying the pilot signals) may be referred to as pilot resources or pilot symbols. The temporal spacing between reference signal repetitions may be referred to herein as repetition spacing, pilot spacing (e.g., for SRS), and/or a reference signal spacing configuration, among other examples. To increase the accuracy of a Doppler spread estimation, a base station 110 may average the correlations of multiple pairs of reference signals that share a common repetition spacing. However, such a technique may be ineffective if the repetition spacing between pairs of reference signals is inconsistent. The suitability of repetition spacing for Doppler spread estimation may vary with the communication parameters used by the UE 120. For example, the repetition spacing that is suitable for Doppler spread estimation may vary with the channel type or characteristics (e.g., LOS, non-LOS (NLOS), directional channel, single frequency network (SFN) channel, Rayleigh, and/or Rician), the UE speed range, and the subcarrier spacing and carrier frequency used by the UE 120 to transmit the reference signal. The subcarrier spacing and carrier frequency used by a UE 120 may be configured for the UE 120 by the network based on capabilities of the UE 120. A repetition spacing may be considered suitable for Doppler spread estimation if the resulting correlation (e.g., channel time correlation) between reference signal repetitions is within an acceptable threshold range (e.g., a time correlation parameter may need to be maintained between 0.4 and 1 to enable the resulting Doppler spread estimation to be done reliably). Table 1 provides an example of suitable repetition spacing for a reference signal, given certain non-limiting pairings of subcarrier spacing and carrier frequency and with a Rayleigh channel type assumption. "Subcarrier spacing" may refer to the frequency gap between subcarriers used for communications between a base station and a UE 120. "Carrier frequency" may refer to a frequency band used for communications between a base station 110 and a UE 120.

TABLE 1
Pilot Spacing         Subcarrier Spacing    Carrier Frequency
3-4 OFDM symbols      15 kHz                6 GHz
6-7 OFDM symbols      30 kHz                6 GHz
12-13 OFDM symbols    60 kHz                6 GHz

In some examples, a base station 110 may estimate the uplink Doppler spread for a channel by measuring a DMRS or an SRS.
For example, the base station 110 may measure two DMRS repetitions or symbols to determine the uplink Doppler spread, or the base station 110 may measure two SRS repetitions to determine the uplink Doppler spread. However, Doppler spread estimation using a DMRS may be inaccurate because the spacing between DMRS repetitions varies with PUSCH allocations, the DMRS bandwidth depends on the PUSCH allocation bandwidth, and DMRS availability depends on PUSCH scheduling, which may result in unreliable or inconsistent correlation values. Additionally, or alternatively, there may be only one DMRS symbol in a PUSCH allocation, which prevents correlation altogether. Moreover, the suitability of SRS for accurate Doppler spread estimation may be limited to certain communication scenarios (e.g., limited to a subset of possible subcarrier spacing, carrier frequency, channel type, and UE speed range combinations) because the network may only support a limited quantity of SRS repetition spacing options. As a result, configuring multiple time gaps between SRS resources or SRS symbols may require a high SRS overhead (e.g., may require a high number of SRS resources to be configured for the UE 120). For example, an SRS configuration supported by the network may include four SRS repetitions transmitted consecutively (e.g., transmitted in consecutive SRS symbols). Thus, the maximum spacing between SRS repetitions may be three symbols, which means the SRS configuration may be suitable for reliable Doppler spread estimation (e.g., in the case of a Rayleigh channel assumption) when the UE 120 uses a subcarrier spacing of 15 kHz and a carrier frequency of 6 GHz, but not when the UE 120 uses other combinations of subcarrier spacing and carrier frequency. As another example, an SRS configuration supported by a base station 110 and a UE 120 may include two SRS repetitions transmitted in the last symbol of the first slot or subframe and in the first valid SRS location of the next slot or subframe (e.g., two consecutive uplink slots or subframes). Thus, the minimum spacing between SRS repetitions may be nine symbols (assuming a fourteen-symbol slot), which means that this SRS configuration may be unsuitable for reliable Doppler spread estimation (e.g., in the case of a Rayleigh channel assumption) when the UE 120 uses any combination of subcarrier spacing and carrier frequency in Table 1. Therefore, in some cases, a base station 110 may seek to further improve the accuracy of a Doppler spread estimation (e.g., to improve the DMRS configuration selection). In such scenarios, the base station 110 may estimate the uplink Doppler spread by using an SRS configuration with a repetition spacing that suits the subcarrier spacing and carrier frequency used by the UE 120. The base station 110 may configure the SRS for the Doppler spread estimation instead of the DMRS because the timing of SRS repetitions is independent of PUSCH scheduling and thus more flexible. In some examples, each SRS repetition may occupy a set of resource elements in a symbol, and the resource elements for different repetitions may span the same frequency band. For example, an SRS configuration may include intra-slot SRS repetitions (e.g., where multiple SRS resources are included in the same slot). However, as described above, a temporal spacing (e.g., a time gap) between repetitions of an SRS used to accurately and reliably estimate Doppler spread may be different than a temporal spacing needed to accurately and reliably estimate a Doppler shift.
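One way to reason about a suitable repetition spacing for Doppler spread estimation, under a Jakes/Clarke model assumption for a Rayleigh channel (an assumption made here only for illustration and not mandated by this disclosure), is to evaluate the time autocorrelation ρ(τ) = J0(2π × fd × τ), where fd is the maximum Doppler shift, and to retain spacings for which ρ stays within the 0.4 to 1 range discussed above. A minimal sketch with hypothetical helper names (using the SciPy Bessel function) is shown below; the Doppler shift case, which calls for a different spacing, is addressed next.

```python
# Minimal sketch, assuming a Jakes/Clarke model for a Rayleigh channel: time correlation
# rho(tau) = J0(2*pi*f_d*tau). Spacings with 0.4 <= rho < 1 are candidates for reliable
# Doppler spread estimation per the range discussed above. Names are hypothetical.
import math
from scipy.special import j0  # zeroth-order Bessel function of the first kind

def symbol_duration_s(subcarrier_spacing_hz: float) -> float:
    """Approximate OFDM symbol duration: 14 symbols per slot, slot scales with 15 kHz / SCS."""
    slot_s = 1e-3 * 15e3 / subcarrier_spacing_hz
    return slot_s / 14

def candidate_spacings(f_d_hz: float, scs_hz: float, max_symbols: int = 14) -> list:
    """Return symbol spacings whose modeled time correlation lies in [0.4, 1)."""
    t_sym = symbol_duration_s(scs_hz)
    return [n for n in range(1, max_symbols + 1)
            if 0.4 <= j0(2 * math.pi * f_d_hz * n * t_sym) < 1.0]

# Example: 30 kHz subcarrier spacing; f_d depends on UE speed and carrier frequency.
print(candidate_spacings(f_d_hz=1250.0, scs_hz=30e3))
```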
For example, the base station 110 may estimate or measure a phase difference (Δθ) related to the Doppler shift between repetitions of an SRS. For example, Δθ = 2π × fDoppler shift × ΔT, where fDoppler shift is the frequency offset associated with the Doppler shift and ΔT is the temporal offset (or time gap) between the repetitions of the SRS. The Doppler shift estimation by the base station 110 may assume that the two repetitions experience approximately the same channel. In other words, the equation Δθ = 2π × fDoppler shift × ΔT may assume that the time correlation of the channel between the repetitions of the SRS is equal to 1 (e.g., may assume that the channel experienced by the first repetition and the channel experienced by the second repetition are fully correlated in time). This assumption may hold true when the channel is not experiencing fading, such as in the case of a directional channel with a dominant signal path (for example, associated with a direct line of sight (LOS) between the transmitter and the receiver) or in any other case where there is no significant multipath (e.g., an LOS channel), because time coherency may be preserved for the channel between the two repetitions of the SRS for these channel scenarios. In such examples, it may be beneficial to use a larger time gap between the repetitions in order to minimize a Doppler shift estimation error. For example, this can be concluded by observing an estimation error variance bound (e.g., a Cramer-Rao lower bound (CRLB) expression for frequency offset estimation) that may be improved if a larger time gap between the repetitions is used, such that a larger time gap may allow for a lower estimation error variance bound. However, if the channel is a non-LOS channel, or if the channel is experiencing fading, then the time coherency of the channel may be limited over time (e.g., a time coherency may only be maintained for the channel for a short period of time). For example, if the channel is a multipath channel, the channel may experience a Doppler spread, which may result in a time coherence of the channel not being maintained between repetitions of an SRS. Therefore, to ensure that time coherence between the repetitions is maintained, a small time gap may be needed between the repetitions to accurately and reliably estimate Doppler shift (e.g., to ensure that a Doppler spread of the channel does not impact the estimation of the Doppler shift). A maximum time gap that can be used between the repetitions for Doppler shift estimation may be limited by a phase ambiguity or Doppler shift aliasing. For example, phase ambiguity or Doppler shift aliasing may require that |Δθ| = |2π × fDoppler shift × ΔT| < π, such that fDoppler shift < 1/(2 × ΔT), to ensure that the phase difference is maintained on a single cycle of the phase (e.g., if the phase difference (Δθ) is greater than π, the cycle of phase with which the phase difference is associated may be unclear). Example permissible pilot spacings (e.g., time gaps between repetitions of an SRS) in accordance with the phase ambiguity or Doppler shift aliasing constraint are shown below in Table 2. The examples shown in Table 2 assume an LOS channel, a subcarrier spacing of 30 kHz, and a carrier frequency of 4.5 GHz.

TABLE 2
UE Speed    Doppler Shift    Permissible Pilot Spacing
500 km/h    2083 Hz          6 OFDM symbols
300 km/h    1250 Hz          11 OFDM symbols
200 km/h    833 Hz           16 OFDM symbols

As shown in Table 2, as a speed of the UE 120 increases, the Doppler shift experienced by the UE 120 may also increase.
As a result, as the UE 120 speed increases, permissible pilot spacings (e.g., maximum time gaps between repetitions of an SRS) may decrease, to mitigate phase ambiguity or Doppler shift aliasing. For example, at a UE 120 speed of 500 km/h, a permissible pilot spacing may be limited to 6 OFDM symbols, whereas at a UE 120 speed of 200 km/h, a permissible pilot spacing may be limited to 16 OFDM symbols. Correspondingly, time gap selection for Doppler shift estimation may be done adaptively per scenario in order to achieve an improved estimation accuracy and reliability. For example, at lower UE 120 speeds, a larger time gap between repetitions of an SRS may be used for improved accuracy of Doppler shift estimation by a base station 110. Therefore, in high mobility scenarios, such as an HST scenario, a configuration of the SRS resource set may need to take a speed of the UE 120 into account to ensure that the Doppler shift estimation mitigates a risk of phase ambiguity or Doppler shift aliasing. As described above, a time gap (or pilot spacing) for Doppler spread estimation may be selected to ensure that a time correlation parameter for the channel is maintained between 0.4 and 1 to enable the resulting Doppler spread estimation to be done reliably. The time correlation parameter may be based at least in part on a channel type, channel parameters, a UE 120 speed, and/or deployment parameters (e.g., subcarrier spacing and/or carrier frequency), among other examples. For example, for a Rayleigh channel type, to ensure that the time correlation parameter is maintained between 0.4 and 1, it may be beneficial to use a different time gap (e.g., a different pilot spacing) for different deployment parameters (e.g., as indicated in Table 1). In some cases, a deployment may assume a certain channel type. For example, an HST SFN deployment may assume a clustered delay line (CDL) channel type with a dominant LOS path as a typical case for each TRP (e.g., because the HST SFN deployment may assume a train-mounted UE 120, rather than other possible scenarios). However, an uplink channel experienced in the HST SFN deployment may be similar to a Rician channel type or channel model (e.g., that assumes that a dominant signal may be a phasor sum of two or more dominant signals). Additionally, a UE 120 that is located inside of a train in an HST SFN deployment may experience an uplink channel that may be similar to a Rayleigh channel type or channel model (e.g., that assumes there is no dominant LOS path). In some examples, a Rayleigh channel type or channel model may be a case of a Rician channel when there is no LOS signal. Therefore, a specific fixed time gap (or pilot spacing) for Doppler spread estimation may not be appropriate for all scenarios, or may be limiting, and may need to be selected adaptively per scenario to ensure accurate Doppler spread estimation (e.g., to enable uplink DMRS selection or to be used for other uplink configuration or demodulation/processing aspects). As a result, to accurately measure or estimate both Doppler shift and Doppler spread, two different time gaps (e.g., two different pilot spacings) may be needed. Moreover, to accurately estimate Doppler spread, a Doppler shift estimation may need to be performed first by a base station 110 (e.g., to remove a frequency offset associated with the common Doppler shift when estimating the Doppler spread). Therefore, two different measurements may be needed by the base station 110 to estimate both Doppler shift and Doppler spread.
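As a worked example of the aliasing bound discussed above, the maximum permissible pilot spacing in OFDM symbols may be computed as the largest integer number of symbol durations that keeps ΔT below 1/(2 × fDoppler shift). The following sketch (hypothetical helper names, assuming 14 OFDM symbols per slot) reproduces the permissible pilot spacings of Table 2 for a 30 kHz subcarrier spacing.

```python
# Minimal sketch: largest SRS repetition spacing (in OFDM symbols) that keeps
# |delta_theta| = 2*pi*f_d*delta_T < pi, i.e., delta_T < 1 / (2 * f_d).
# Assumes 14 OFDM symbols per slot; names are hypothetical.
import math

def max_pilot_spacing_symbols(doppler_shift_hz: float, scs_hz: float) -> int:
    slot_s = 1e-3 * 15e3 / scs_hz   # slot duration scales inversely with subcarrier spacing
    t_symbol_s = slot_s / 14        # approximate OFDM symbol duration
    return math.floor(1.0 / (2.0 * doppler_shift_hz * t_symbol_s))

for f_d in (2083.0, 1250.0, 833.0):  # Doppler shifts from Table 2 (30 kHz SCS, 4.5 GHz)
    print(f_d, max_pilot_spacing_symbols(f_d, scs_hz=30e3))
# Prints 6, 11, and 16 OFDM symbols, matching the permissible pilot spacings in Table 2.
```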
Additionally, the most convenient time gap selection for Doppler shift or Doppler spread estimation should be done adaptively per scenario depending on the channel type, SNR, deployment parameters, and UE speed range. For example, as shown in FIG. 5, the table 505 depicts different scenarios and corresponding different time gaps (e.g., for different Doppler parameter estimations). Information depicted in the table 505 assumes a UE speed of 500 km/h, a subcarrier spacing of 30 kHz, and a carrier frequency of 4.5 GHz. The table 505 depicts examples of different possible configurations in different scenarios to optimize uplink Doppler spread and uplink Doppler shift estimations by a base station 110. For example, in an HST scenario where the UE 120 is a train-mounted UE (e.g., is mounted or deployed in a fixed position on the outside of a train), a channel type may be associated with a slow channel time correlation decay per TRP (for each TRP). In other words, a time correlation for the channel may be maintained over a longer period of time (e.g., as the UE 120 may be associated with additional antennas and/or may have an improved LOS to a TRP). Therefore, a time gap for an uplink Doppler shift estimation may be configured to be up to 6 OFDM symbols (e.g., assuming a UE 120 speed of 500 km/h; other possible time gaps for other UE 120 speeds are shown in Table 2). A time gap for an uplink Doppler spread estimation may be configured to be multiple OFDM symbols to ensure improved time correlation resolution for different UE 120 speeds. In an HST scenario where the UE 120 is located inside the train (e.g., is not a train-mounted UE), a channel type may be a non-LOS channel type (e.g., the UE may not have a direct line of sight to a TRP) and may be associated with a fast channel time correlation decay per TRP (for each TRP). In other words, a time correlation for the channel may not be maintained over longer periods of time, such as over multiple symbols or a slot. Therefore, a time gap for an uplink Doppler shift estimation may be configured to be 1 or 2 OFDM symbols to ensure that a time correlation between the repetitions of the SRS is maintained. Additionally, a time gap for an uplink Doppler spread estimation may be configured to be approximately half a slot (e.g., assuming a Rayleigh channel type) to ensure improved time correlation resolution for different UE 120 speeds (e.g., examples of time gaps for different deployment scenarios are shown above in Table 1). Another scenario is one in which the UE 120 is a mobile UE with a single TRP transmission (e.g., not a special HST SFN deployment with TRPs densely distributed along the railway track). For example, the mobile UE may be moving on a highway and experiencing a fading non-LOS channel in an FR1 deployment, and/or may be experiencing a channel type associated with a fast channel time correlation decay. In other words, a time correlation for the channel may not be maintained over longer periods of time, such as over multiple symbols or a slot. Therefore, a time gap for an uplink Doppler shift estimation may be configured to be 1 or 2 OFDM symbols to ensure that a time correlation between the repetitions of the SRS is maintained. Additionally, a time gap for an uplink Doppler spread estimation may be configured to be approximately half a slot (e.g., with a Rayleigh channel type assumption) to ensure improved time correlation resolution for different UE 120 speeds (examples of time gaps for different deployment scenarios are shown above in Table 1).
As a result, different channel types, different channel parameters, SNR conditions, different UE120speeds, and/or different deployment parameters may be associated with different optimal time gaps for uplink Doppler parameter estimation. For example, a different time gap between repetitions of an SRS may be configured, depending on a Doppler parameter to be measured, a channel type, one or more channel parameters, a speed of the UE120, and/or one or more deployment parameters. However, current SRS configurations may be limited in a permissible repetition spacing (e.g., repetitions for an SRS resource may be permissible only on consecutive symbols) and/or a number of SRS resources or symbols that can be configured in each slot in order to allow adaptive SRS repetitions spacing for different channel, UE characteristics, deployment scenario combinations, and/or for different Doppler parameters estimation with a high level of accuracy and robustness. Moreover, a base station110may be unable to configure SRS resource sets that include SRS resources that have different temporal spacings. Therefore, current SRS configurations may be unable to provide a required flexibility needed to enable a base station110to perform different uplink Doppler parameter estimations in a robust and accurate way. For example, to enable different Doppler parameter estimations (e.g., that are associated with different time gaps or pilot spacings), a base station110may need to configure a first SRS resource set for a first Doppler parameter estimation and a second SRS resource set for a second Doppler parameter estimation. However, even with the two SRS resource sets, it may not be possible for each of the SRS resource sets to be configured with an optimal time gap for Doppler parameter estimation. Moreover, this increases overhead associated with transmitting SRSs and performing the Doppler parameter estimations, because additional SRSs will need to be transmitted by the UE120(e.g., using the different SRS resource set configurations, such as transmitting 4 SRS symbols). FIG.6is a diagram illustrating an example600associated with configurations and dynamic signaling for Doppler tracking SRS resource sets. As shown inFIG.6, a base station110and a UE120may communicate with one another in a wireless network, such as the wireless network100. Example600may be associated with configurations for Doppler tracking SRS resource sets that are to be used by the base station110to measure and/or estimate uplink Doppler parameters. “Doppler tracking SRS” may refer to an SRS that can be used by a base station110to measure or estimate uplink Doppler spread and/or uplink Doppler shift or uplink frequency offset. For example, an additional SRS usage type may be defined, associated with a Doppler tracking SRS. For example, a Doppler tracking SRS resource set may be used by a base station110to estimate an uplink Doppler shift and/or an uplink Doppler spread. As described in more detail elsewhere herein, the SRS resource set may include multiple SRS resources (e.g., within a single slot) that have a temporal spacing between the SRS resources (e.g., the multiple SRS resources within the single slot may be non-contiguous). In other words, the multiple SRS resources may be associated with non-consecutive OFDM symbols within a single slot. To enable accurate Doppler parameter estimations, the UE120should transmit the multiple SRS resources using the same (or approximately the same) phase. 
However, in some cases, a UE 120 may not be capable of maintaining a phase coherence or a phase continuity between transmissions of the multiple SRS resources. Therefore, the configurations described herein may be based at least in part on a capability of the UE 120 (e.g., the capability to support one or more SRS configurations as described herein, or the capability to support a repetitive pattern of an SRS signal with a particular spacing between repetitions and phase coherency between the repetitions), the type of UE 120, and/or the mobility status of the UE 120, among other factors and deployment parameters (such as a subcarrier spacing or carrier frequency). At 610, the base station 110 may determine an SRS configuration for a Doppler tracking SRS resource set. In some aspects, the base station 110 may determine the SRS configuration based at least in part on a capability of the UE 120 (e.g., a phase coherency capability or a capability to support Doppler tracking SRS resource sets). In some aspects, the base station 110 may determine the SRS configuration based at least in part on a carrier frequency, a subcarrier spacing, a type of deployment, a channel condition, a channel type, and/or movement information associated with the UE 120, among other examples. In some aspects, the base station 110 may determine the SRS configuration for the SRS resource set. For example, the base station 110 may determine a configuration for one or more SRS resources (or SRS resource identifiers) associated with the SRS resource set. For example, the base station 110 may select or determine an SRS configuration to be associated with SRS resources with optimized time gaps (or pilot spacings) for one or more uplink Doppler parameter estimations. As described in more detail elsewhere herein, an optimized time gap (or pilot spacing) for an uplink Doppler parameter estimation may be based at least in part on the Doppler parameter to be estimated, a carrier frequency, a subcarrier spacing, a channel condition, a channel type, and/or movement information associated with the UE 120, among other examples. For example, in scenarios in which the base station 110 is to estimate both uplink Doppler shift and uplink Doppler spread, the base station 110 may select or determine an SRS configuration to include SRS resources with a first time gap (or pilot spacing) for an uplink Doppler shift estimation and a second time gap (or pilot spacing) for an uplink Doppler spread estimation. For example, within a slot, the base station 110 may determine starting symbol locations (e.g., intra-slot starting locations) for different SRS resources included in the SRS resource set. "Intra-slot start position" may refer to a starting OFDM symbol location within a slot where an SRS corresponding to the SRS resource identifier is to be transmitted by the UE 120. The base station 110 may determine the starting symbol locations for the different SRS resources to optimize one or more time gaps for different uplink Doppler parameter estimations. In this way, the base station 110 may be enabled to select or determine an SRS configuration that is optimized for multiple Doppler parameter estimations (e.g., that includes different time gaps between different SRS resources to enable the base station 110 to perform multiple Doppler parameter measurements and/or estimations using the same SRS resource set). At 615, the base station 110 may transmit, and the UE 120 may receive, configuration information for an SRS resource set.
For example, the base station 110 may transmit the configuration information using an RRC message (e.g., the configuration for the SRS resource set may be an RRC configuration). In some aspects, the configuration information may partially be indicated by another message. For example, for an aperiodic SRS resource set (and aperiodic SRS resources included in the SRS resource set), the base station 110 may transmit DCI triggering for the aperiodic SRS resource set. Therefore, in some cases, some of the configuration information or updated configuration information may be indicated by the DCI (e.g., rather than all of the configuration information being determined based at least in part on an RRC configuration). Similarly, the configuration information may be partially indicated by a MAC-CE message (e.g., for semi-persistent SRS resources). In some aspects, the configuration information may indicate a use type for the SRS resource set that is associated with Doppler tracking (e.g., indicating that the SRS resource set is to be used for uplink Doppler parameter estimation). For example, the configuration information may indicate the Doppler tracking usage in an SRS-SetUse information element or a usage information element. The use type for the SRS resource set may be indicated in an RRC configuration using a higher layer parameter. In some aspects, the configuration information may indicate one or more SRS resource identifiers. For example, the configuration information may indicate one or more SRS resource identifiers in an SRS-ResourceIDList information element. In some aspects, the configuration information may indicate multiple SRS resource identifiers (e.g., for each SRS resource or symbol associated with the SRS resource set). In some other aspects, the configuration information may indicate a single SRS resource identifier that is associated with multiple SRS resources or symbols. For example, in a first configuration type, the configuration information may indicate two or more SRS resource identifiers associated with the SRS resource set. The configuration information may indicate, for each SRS resource identifier of the two or more SRS resource identifiers, an indication of an intra-slot start position for an SRS resource associated with that SRS resource identifier. In other words, the two or more SRS resource identifiers may be configured with the same configuration, excluding the intra-slot location of the SRS resources. For example, the configuration information may indicate the intra-slot start position for each SRS resource identifier using a startPosition information element (e.g., associated with a resourceMapping information element). In the first configuration type, each SRS resource identifier may be configured as a single port transmission (e.g., using a nrofSRS-Ports information element). In some other aspects, one or more SRS resource identifiers may be configured as a multiple (e.g., two or more) port transmission. For example, for a first frequency band (e.g., an FR1 frequency band), each SRS resource identifier may be configured as a single port transmission. For a second frequency band or bands associated with a high frequency (e.g., an FR2 frequency band or another band associated with a frequency range that is higher than the FR2 frequency band), one or more SRS resource identifiers may be configured as a multiple (e.g., two or more) port transmission.
In the first configuration type, each SRS resource identifier may be configured as a single symbol transmission (e.g., using an nrofSymbols information element associated with a resourceMapping information element). In a second configuration type, the configuration information may indicate one or more (e.g., N) SRS resource identifiers associated with the SRS resource set. The configuration information may indicate, for each SRS resource identifier, an indication of an intra-slot start position for an SRS resource associated with the SRS resource identifier (e.g., using a startPosition information element associated with a resourceMapping information element). In some aspects, the configuration information may indicate, for each SRS resource identifier, an indication of a number of repetitions for an SRS resource associated with the SRS resource identifier (e.g., using a repetitionFactor information element associated with the resourceMapping information element). In some aspects, the configuration information may indicate, for each SRS resource identifier, an indication of a number of symbols for an SRS resource associated with the SRS resource identifier (e.g., using an nrofSymbols information element associated with a resourceMapping information element), which may be configured consistently with the repetitionFactor information element. For example, in the SRS resource configuration, repetitions may be configured on consecutive symbols (e.g., an SRS resource configured with 2 symbols and 2 repetitions may occupy 2 consecutive and repetitive OFDM symbols). In the second configuration type, the SRS resource identifiers may be associated with 1 or 2 symbols and a corresponding 1 or 2 repetitions (e.g., each SRS resource identifier may be configured to occupy 1 OFDM symbol or 2 OFDM symbols using the nrofSymbols information element and the corresponding repetitionFactor information element). In the second configuration type, each SRS resource identifier may be configured as a single port transmission (e.g., using the nrofSRS-Ports information element). In some other aspects, one or more SRS resource identifiers may be configured as a multiple (e.g., two or more) port transmission (e.g., for different frequency bands, in a similar manner as described above in connection with the first configuration type). In a third configuration type, the configuration information may indicate a single SRS resource identifier associated with the SRS resource set. The SRS resource may be configured using configuration parameters for SRS resource configuration (e.g., as defined, or otherwise fixed, by a wireless communication standard, such as the 3GPP) except for the intra-slot start positions for the SRS resource. For example, rather than indicating a single value for the intra-slot start positions for the SRS resource, the configuration information may indicate multiple values for the intra-slot start positions for the SRS resource (e.g., using a startPosition information element or another information element associated with the resourceMapping information element). For example, the configuration information may indicate multiple intra-slot start positions associated with the SRS resource identifier using a configuration field associated with the SRS resource identifier. The single configuration field may be capable of conveying multiple values (e.g., may be a multiple value indicator).
The number of the multiple intra-slot start positions may (e.g., implicitly) indicate a number of symbols associated with the SRS resource identifier. The multiple intra-slot start positions may indicate intra-slot start positions relative to a last symbol of a slot. For example, if the single intra-slot start position field indicates (2, 4, 7), it may indicate that the Doppler tracking SRS is configured with 3 symbols (e.g., at symbol indices 11, 9, and 6 of a slot) with time gaps of 2 symbols (e.g., between the symbols at symbol indices 9 and 11) and 3 symbols (e.g., between the symbols at symbol indices 6 and 9). For example, in the third configuration type, a number of symbols and/or a number of repetitions may not be indicated and the UE120and/or the base station110may assume that the number of SRS symbols is implicitly defined by a length of the startPosition information element and that each SRS symbol location defined by the startPosition information element has a single repetition (e.g., the nrofSymbols information element and/or the repetitionFactor information element may not be used for the third configuration type). For example, the configuration information may not indicate information associated with a repetition factor or a number of symbols for the SRS resource identifier (e.g., to conserve signaling overhead and an RRC configuration structure volume, as this information may be implicitly indicated by the multi-valued intra-slot start position field). In the third configuration type, the SRS resource identifier may be configured as a single port transmission (e.g., using the nrofSRS-Ports information element). In some other aspects, the SRS resource identifier may be configured as a multiple (e.g., two or more) port transmission (e.g., for different frequency bands, in a similar manner as described above in connection with the first configuration type). In a fourth configuration type, the configuration information may indicate a single SRS resource identifier associated with the SRS resource set. The SRS resource may be configured using configuration parameters for SRS resource configuration (e.g., as defined, or otherwise fixed, by a wireless communication standard, such as the 3GPP) except for the intra-slot start position for the SRS resource and an additional indication of occupied symbols for the SRS resource. For example, the additional indication of occupied symbols for the SRS resource may be a bitmap configuration indicating the occupied symbols for the SRS resource identifier (e.g., starting at the symbol location indicated by the intra-slot start position for the SRS resource). For example, non-zero elements included in the bitmap configuration may indicate occupied symbols within a slot starting based at least in part on the symbol indicated by the intra-slot start position. For example, the intra-slot start position may be defined relative to an end of a slot (e.g., relative to a last symbol in a slot). Therefore, an intra-slot start position of “4” may indicate that the starting symbol position is 4 symbols from the end of the slot (e.g., symbol 10 of a slot, assuming the slot has 14 symbols). In some aspects, the bitmap may be indicated using an information element associated with the resourceMapping information element. 
For example, if the intra-slot start position for the SRS resource indicates a symbol index of 6 and the bitmap indicates (1, 0, 0, 1, 0, 1), then the SRS resource may be configured with three SRS symbols (e.g., at symbol index 7, symbol index 10, and symbol index 12). Additionally, the SRS resource may be associated with a time gap of two symbols and one symbol (e.g., indicated by the elements with a value of zero in the bitmap). In the fourth configuration type, the SRS resource identifier may be configured as a single port transmission (e.g., using the nrofSRS-Ports information element). In some other aspects, the SRS resource identifier may be configured as a multiple (e.g., two or more) port transmission (e.g., for different frequency bands, in a similar manner as described above in connection with the first configuration type). In some aspects, the bitmap configuration indicating the occupied symbols for the SRS resource identifier may be a constant size regardless of the number of occupied SRS symbols for the SRS configuration. For example, the bitmap configuration may be defined assuming a starting position of a first symbol of a slot and may include a number of elements that is equivalent to a number of symbols in the slot (e.g., may include 14 elements assuming that each slot includes 14 symbols). Therefore, the bitmap configuration may indicate occupied SRS symbols (e.g., using a value of “1” in the bitmap configuration) and may indicate unoccupied SRS symbols (e.g., using a value of “0” in the bitmap configuration) for all symbols in a slot. In such examples, the intra-slot start position may not be included in the SRS configuration. For example, as the bitmap configuration is a constant size and assumes a starting position of a first symbol of a slot, the intra-slot start position may not be needed. For example, the configuration information may not indicate information associated with the intra-slot start position (e.g., to conserve signaling overhead and an RRC configuration structure volume, as this information may be implicitly indicated by the bitmap configuration). In some aspects, the configuration information may indicate that the SRS resource set is a periodic SRS resource set, an aperiodic SRS resource set, or a semi-persistent SRS resource set (e.g., in any of the first, second, third, or fourth configuration types described above). Similarly, one or more of the SRS resources (e.g., one or more of the SRS resource identifiers) may be configured to be periodic, aperiodic, or semi-persistent. For example, the base station110may configure the SRS resource set (or SRS resources) for Doppler tracking to be periodic, semi-persistent, or aperiodic using an information element included in the configuration information. At620, the UE120may determine the SRS configuration based at least in part on receiving the configuration information. At625, the UE120may transmit, and the base station110may receive, a Doppler tracking SRS using SRS resources indicated by the configuration information. For example, the UE120may transmit repetitions of the SRS to base station110in accordance with the configuration information. In one example, the UE120may transmit a set of SRS repetitions in the same subframe or a same slot. For instance, the UE120may transmit a first repetition of the SRS in a first symbol location of a slot, transmit a second repetition of the SRS in a second symbol location of the slot, and transmit a third repetition of the SRS in a third symbol location of the slot. 
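To make the intra-slot indications of the third and fourth configuration types concrete, the following sketch (illustrative only; the names are not RRC information elements) converts a multi-valued start position list, or a start position plus a bitmap, into occupied SRS symbol indices within a 14-symbol slot, reproducing the examples given above.

```python
# Minimal sketch of the third and fourth configuration types described above.
# Start positions are counted back from the last symbol of a 14-symbol slot (index 13).
# Names are illustrative and are not RRC information elements.
SYMBOLS_PER_SLOT = 14
LAST_SYMBOL = SYMBOLS_PER_SLOT - 1

def symbols_from_start_positions(start_positions):
    """Third configuration type: a single multi-valued start position field."""
    return sorted(LAST_SYMBOL - position for position in start_positions)

def symbols_from_bitmap(start_position, bitmap):
    """Fourth configuration type: a start position plus a bitmap of occupied symbols."""
    first_symbol = LAST_SYMBOL - start_position
    return [first_symbol + i for i, bit in enumerate(bitmap) if bit]

print(symbols_from_start_positions((2, 4, 7)))      # [6, 9, 11], as in the example above
print(symbols_from_bitmap(6, (1, 0, 0, 1, 0, 1)))   # [7, 10, 12], as in the example above
```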
The Doppler tracking SRS repetitions may be transmitted by the UE120using a single port (e.g., antenna port or SRS port) or multiple ports (e.g., multiple antenna ports or multiple SRS ports). For example, the UE120may be configured to use a single port or may be configured to use multiple ports for the Doppler tracking SRSs. When multiple ports are used, the ports may be quasi co-located to facilitate Doppler parameter estimation. In the case of beam-based transmission, the Doppler tracking SRS repetitions may be transmitted using the same transmission beam and/or the same antenna panel. In some aspects, the Doppler tracking SRS repetitions may be transmitted over the entire bandwidth part assigned to UE120. Alternatively, the Doppler tracking SRS repetitions may be transmitted over a portion of the bandwidth part assigned to UE120. At630, the base station110may measure the Doppler tracking SRS received from the UE120. For example, the base station110may measure a first set (e.g., pair) of SRS repetitions that are received in the same subframe or same slot. For instance, the base station110may measure a first repetition pair including the SRS received in a first SRS symbol of a slot and the SRS received in a second SRS symbol of the slot. Additionally, the base station110may measure a second set (e.g., pair) of SRS repetitions that are received in the same subframe or same slot. In some aspects, a time gap associated with the first set (e.g., pair) of SRS repetitions may be different than a time gap associated with the second set (e.g., pair) of SRS repetitions (e.g., to enable the base station110to estimate different Doppler parameters using the first set of SRS repetitions and the second set of SRS repetitions). In some aspects, the first set of SRS repetitions and the second set of SRS repetitions may include one or more common SRS repetitions or SRS symbols. For example, the configuration information may configure the UE120to transmit an SRS on a first symbol, a third symbol, and a sixth symbol of a slot. The first set of SRS repetitions may include the SRS transmitted on the first symbol and the third symbol (e.g., to enable the base station110to estimate an uplink Doppler shift). The second set of SRS repetitions may include the SRS transmitted on the first symbol and the sixth symbol (e.g., to enable the base station110to estimate an uplink Doppler spread). At635, the base station110may estimate one or more uplink Doppler parameters using the SRS transmitted by the UE120. For example, the base station110may measure the SRS messages to estimate a Doppler shift or a frequency offset for the uplink channel. Additionally, or alternatively, the base station110may measure the SRS messages to estimate a Doppler spread or a time correlation for the uplink channel. For example, the base station110may determine a correlation in time between the SRS repetitions based on the measurements performed by the base station110. In some aspects, the base station110may measure a differential phase of a set of SRS repetitions to determine a phase offset between the set (e.g., pair) of SRS repetitions. The base station110may estimate a Doppler shift for the uplink channel based at least in part on the measured phase offset or difference. For example, the base station110may use the Doppler tracking SRSs transmitted by the UE120for implicit Doppler shift signaling (e.g., in a high mobility scenario, such as in an HST SFN scenario). 
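The pair-wise processing at 630 and 635 may be illustrated by the following sketch, which is not a specified receiver algorithm and uses hypothetical names: a short-gap pair of SRS symbols is correlated to obtain a Doppler shift estimate, and a longer-gap pair is correlated, after compensating the estimated shift, to obtain a time correlation value related to the Doppler spread.

```python
# Minimal sketch: estimate a Doppler shift from a short-gap SRS symbol pair and a time
# correlation (related to Doppler spread) from a longer-gap pair after removing the shift.
# Assumes per-symbol channel estimates are already available; names are hypothetical.
import numpy as np

def doppler_shift_estimate(h_a: np.ndarray, h_b: np.ndarray, gap_s: float) -> float:
    """Phase of the correlation between two channel estimates divided by 2*pi*gap."""
    correlation = np.vdot(h_a, h_b)  # conjugate of h_a times h_b, summed over subcarriers
    return float(np.angle(correlation) / (2 * np.pi * gap_s))

def time_correlation_estimate(h_a, h_b, gap_s, doppler_shift_hz) -> float:
    """Normalized correlation magnitude after compensating the common Doppler shift."""
    h_b_compensated = h_b * np.exp(-2j * np.pi * doppler_shift_hz * gap_s)
    correlation = np.vdot(h_a, h_b_compensated)
    return float(abs(correlation) / (np.linalg.norm(h_a) * np.linalg.norm(h_b_compensated)))

# Example usage with placeholder channel estimates from the first, third, and sixth SRS
# symbols of a slot (short gap of 2 symbols for shift, longer gap of 5 symbols for spread).
t_symbol = (1e-3 * 15e3 / 30e3) / 14  # ~35.7 microseconds at a 30 kHz subcarrier spacing
h_first, h_third, h_sixth = (np.ones(64, dtype=complex) for _ in range(3))
shift_hz = doppler_shift_estimate(h_first, h_third, 2 * t_symbol)
rho = time_correlation_estimate(h_first, h_sixth, 5 * t_symbol, shift_hz)
```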
The base station110(e.g., the network) may apply Doppler shift pre-compensation for one or more TRPs in a downlink SFN scenario (e.g., such as when the one or more TRPs are transmitting downlink communications simultaneously using the same time/frequency resources). The Doppler shift pre-compensation may be based at least in part on implicit Doppler shift signaling or indication by the UE120(e.g., using Doppler tracking SRSs). Additionally, or alternatively, the base station110may use the Doppler tracking SRSs transmitted by the UE120for improved uplink DMRS configuration determinations or selections. In some aspects, the base station110may determine other characteristics, conditions, parameters, and/or metrics, such as delay spread for the uplink channel, the power level used to transmit the uplink reference signal relative to a power level used to transmit data, and/or a link quality characteristic (e.g., reception SNR) for the uplink channel. In some aspects, the base station110may estimate an uplink Doppler parameter using the Doppler tracking SRS transmitted by the UE120using different time gaps to estimate the Doppler parameter. For example, the base station110may perform a first estimation for the Doppler parameter using a first time gap associated with the Doppler tracking SRS resource(s) transmitted by the UE120. The base station110may perform a second estimation for the Doppler parameter using a second time gap associated with the Doppler tracking SRS resource(s) transmitted by the UE120. The first time gap may be smaller than the second time gap. For example, the first estimation may provide additional robustness with respect to phase ambiguity or Doppler aliasing (e.g., the smaller time gap may mitigate a risk of phase ambiguity or Doppler aliasing), but the first estimation may provide a lower accuracy due to the smaller time gap. The base station110may use the first estimation to apply a correction factor to compensate for the frequency offset or Doppler shift estimated from the first estimation (e.g., for the second estimation). In some aspects, the second time gap may be based at least in part on (e.g., may be defined by) an accuracy of the first estimation (e.g., may be deterministic or bounded for each SNR), rather than being defined by a speed range of the UE120. By using the larger second time gap, an overall accuracy of the uplink Doppler parameter estimation may be improved. Therefore, by estimating the uplink Doppler parameter using the Doppler tracking SRS using the two-step approach described above (e.g., a first coarse estimation and a second fine estimation), an accuracy of the estimation may be improved and the estimation may have improved robustness to phase ambiguity or Doppler aliasing. As a result, the base station110may be enabled to perform accurate and robust Doppler parameter estimation for an uplink channel using a configured Doppler tracking SRS. For example, the base station110may be enabled to optimize multiple, different, time gaps between symbols associated with an SRS transmission for Doppler parameter estimation. The base station110may be enabled to configure the symbols on which an SRS transmission occurs to optimize the temporal spacing between the SRS transmissions for Doppler parameter estimation. The base station110may be enabled to perform different Doppler parameter estimations using the same SRS resource set and the one or more transmission occurrences associated with the SRS resource set. 
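As a non-limiting sketch of the two-step (coarse then fine) estimation described above, the following Python fragment shows one possible receiver-side realization; the pairing of repetitions, the helper function, and the de-rotation step are assumptions about one possible implementation rather than a required procedure.

    import numpy as np

    def estimate_doppler_shift(rep_1, rep_2, gap_s):
        # Phase of the aggregate correlation, divided by 2*pi*gap, gives the shift estimate.
        return np.angle(np.vdot(rep_1, rep_2)) / (2 * np.pi * gap_s)

    def two_step_doppler_estimate(short_gap_pair, long_gap_pair, short_gap_s, long_gap_s):
        # Step 1: coarse estimate over the short time gap (robust to aliasing, less accurate).
        f_coarse = estimate_doppler_shift(short_gap_pair[0], short_gap_pair[1], short_gap_s)
        # Step 2: remove the coarse shift accumulated over the long gap, then estimate the
        # residual, which now falls within the unambiguous range of the longer gap.
        rep_1 = np.asarray(long_gap_pair[0])
        rep_2 = np.asarray(long_gap_pair[1]) * np.exp(-2j * np.pi * f_coarse * long_gap_s)
        f_residual = estimate_doppler_shift(rep_1, rep_2, long_gap_s)
        return f_coarse + f_residual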
For example, the base station110may be enabled to perform uplink Doppler shift estimation using a first time gap associated with the SRS resource set and/or may be enabled to perform uplink Doppler spread estimation using a second (different) time gap associated with the SRS resource set. Accurate uplink Doppler parameter estimation may improve uplink DMRS configuration selection, synchronization loop tracking by the base station110, pre-compensation of a frequency offset for the downlink channel (for example in the case of an HST SFN scenario where a transmission scheme 1 (e.g., as defined, or otherwise fixed, by a wireless communication standard) and Doppler shift pre-compensation is employed for downlink transmissions), and/or uplink channel estimation and uplink link adaptation, among other examples. However, in some cases, such as in the example shown inFIG.6, channel conditions, channel parameters, SNR conditions, UE120speeds, and/or deployment parameters may change dynamically over time. Therefore, in some cases, parameters of a Doppler tracking SRS resource set configuration, such as for an aperiodic Doppler tracking SRS resource set configuration, may become suboptimal for estimating uplink Doppler parameters. For example, a number of SRS resources and/or a time gap between SRS resources of an aperiodic Doppler tracking SRS resource set may become suboptimal due to changing channel conditions, channel parameters, SNR conditions, UE120speeds, and/or deployment parameters. Reconfiguring the aperiodic Doppler tracking SRS resource set based on the changing conditions may be difficult and time consuming and would require some interruption in Doppler tracking SRS triggering. For example, the aperiodic Doppler tracking SRS resource set may be reconfigured via RRC signaling. However, RRC procedures may be unable to adapt to changes in channel and reception conditions (e.g., because RRC reconfiguration procedures are non-synchronous and associated with high latency and as a result involve an ambiguity during some time period during which SRS transmission/triggering should be avoided). Some techniques and apparatuses described herein in connection withFIGS.7and8enable dynamic parameter adaptation for aperiodic Doppler tracking SRS resource sets. For example, the base station110may dynamically adapt one or more parameters for an aperiodic Doppler tracking SRS resource set to modify a time gap (e.g., between two resources or symbols associated with the aperiodic Doppler tracking SRS resource set) and/or a number of resources or symbols to be transmitted for an aperiodic Doppler tracking SRS resource set. The base station110may transmit a DCI message (e.g., an SRS triggering DCI message) that indicates the one or more parameters. In some aspects, the DCI may be a non-data-scheduling DCI type (e.g., a DCI that does not schedule a data transmission). In some other aspects, the DCI may be a data-scheduling DCI type (e.g., a DCI that schedules a data transmission). In some aspects, the DCI may indicate an SRS trigger state. The SRS trigger state may indicate (or be configured or associated with) an aperiodic Doppler tracking SRS resource set identifier. In some aspects, the SRS trigger state may additionally indicate one or more parameters associated with the aperiodic Doppler tracking SRS resource set (e.g., explicitly based at least in part on a configuration of the SRS trigger state). 
Alternatively, the DCI may indicate the SRS trigger state (e.g., that is linked or associated with the aperiodic Doppler tracking SRS resource set) and may indicate (e.g., directly or explicitly) the one or more parameters associated with the aperiodic SRS resource set (e.g., such as when a non-data-scheduling DCI is used by the base station110). In this way, the base station110may dynamically adapt one or more parameters of an aperiodic Doppler tracking SRS resource set. Therefore, the base station110may be enabled to modify a time gap between SRS resources associated with the aperiodic Doppler tracking SRS resource set and/or may be enabled to modify a number of SRS resources to be transmitted for the aperiodic Doppler tracking SRS resource set (e.g., the base station110may dynamically activate or deactivate SRS resource identifiers or SRS resource(s) of an SRS resource identifier). As a result, uplink Doppler parameter estimations performed using the aperiodic Doppler tracking SRS may be improved. For example, the base station110may be enabled to dynamically adapt a time gap between SRS resources (e.g., based at least in part on Doppler parameter(s) to be estimated, channel conditions, channel parameters, SNR conditions, UE120speeds, and/or deployment parameters) to optimize the time gap(s) for different Doppler parameter estimations. This may improve an accuracy of Doppler parameter estimations by enabling the base station110to configure different time gaps or pilot spacings between SRS resources (e.g., for different Doppler parameter estimations) within the same SRS resource set. Additionally, the base station110, to reduce an overhead associated with transmitting the Doppler tracking SRS, may be enabled to dynamically indicate different numbers of SRS resources or SRS symbols that may be required to support different Doppler parameter estimations at different time periods or SRS transmission sessions. As indicated above,FIG.6is provided as an example. Other examples may differ from what is described with respect toFIG.6. FIG.7is a diagram illustrating an example700associated with dynamic parameter adaptation for aperiodic Doppler tracking SRS resource sets. As shown inFIG.7, a base station110and a UE120may communicate with one another in a wireless network, such as the wireless network100. Example700may be associated with dynamic parameter adaptation for aperiodic Doppler tracking SRS resource sets that are to be used by the base station110to measure and/or estimate uplink Doppler parameters. At705, the base station110may transmit, and the UE120may receive, configuration information. In some aspects, the UE120may receive configuration information from another device (e.g., from another base station or another UE). In some aspects, the UE120may receive the configuration information via RRC signaling and/or MAC-CE signaling. In some aspects, the configuration information may include an indication of one or more configuration parameters (e.g., already known to the UE120) for selection by the UE120and/or explicit configuration information for the UE120to use to configure the UE120. In some aspects, the configuration information may indicate one or more configurations for SRS resource sets. For example, the configuration information may configure one or more SRS resource sets in a similar manner as described elsewhere herein. 
In some aspects, the configuration information may indicate configurations for SRS resource sets having different usage types, such as antenna switching, codebook, non-codebook, beam management, and/or positioning, among other examples. Additionally, or alternatively, the configuration information may indicate configurations for one or more Doppler tracking SRS resource sets (e.g., SRS resource sets having a usage type of Doppler tracking). For example, the configuration information may configure one or more Doppler tracking SRS resource sets in a similar (or the same) manner as described in connection withFIG.6. For example, the configuration information may indicate a configuration for an aperiodic SRS resource set associated with a Doppler tracking usage type for the SRS resource set and a configuration for one or more SRS resource identifiers associated with the SRS resource set, in a similar manner as described in more detail elsewhere herein. In some aspects, the configuration information may indicate that one or more parameters for an aperiodic Doppler tracking resource set may be changed over time via dynamic signaling from the base station110. For example, the configuration information may indicate that the base station110may transmit SRS scheduling DCI that indicates (e.g., explicitly or implicitly) one or more parameters for the aperiodic Doppler tracking resource set. The configuration information may indicate a DCI type and/or a DCI format to be used by the base station110for the dynamic signaling. For example, in some aspects, the configuration information may indicate that the base station110is to use a non-data-scheduling DCI type to trigger aperiodic SRS. “Non-data-scheduling DCI type” may refer to a DCI that does not schedule any data transmissions (e.g., PDSCH transmissions and/or PUSCH transmissions) and/or that is not associated with CSI. The non-data-scheduling DCI type may also be referred to as a dummy DCI type. The configuration information may indicate that the non-data-scheduling DCI type may use a format that is similar to the DCI format 0_1, 0_2, 1_1, and/or 1_2, among other examples (e.g., as defined, or otherwise fixed, by a wireless communication standard, such as the 3GPP). Alternatively, the configuration information may indicate that the base station110is to use a data-scheduling DCI type (e.g., a DCI type associated with scheduling data transmissions) to trigger aperiodic SRS. The configuration information may indicate that the data-scheduling DCI type may use a DCI format 0_1, 0_2, 1_1, and/or 1_2, among other examples. In some aspects, the configuration information may indicate information for one or more SRS trigger states. “SRS trigger state” may refer to a configuration for a list of one or more SRS resource sets to be triggered by DCI when the DCI indicates the specific SRS trigger state (e.g., via an SRS request field in the DCI). Different SRS trigger states can be indicated or selected dynamically by the base station110. For example, the configuration information may map or link each SRS trigger state to a code point for a DCI field (e.g., the SRS request field) or to another indicator. The base station110may include the code point or indicator in DCI (e.g., in an SRS request field) to dynamically trigger or select the SRS trigger state linked to or associated with an SRS resource set (or several SRS resource sets). 
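By way of illustration only, the following Python sketch shows one possible representation of SRS trigger states, each mapped to a DCI code point and linked to one or more SRS resource set identifiers; the field names and values are hypothetical and are not the RRC information element names.

    # Hypothetical (illustrative) representation of configured SRS trigger states;
    # the keys are DCI code points for the SRS request field.
    srs_trigger_states = {
        1: {"srs_resource_set_ids": [3]},              # e.g., an aperiodic Doppler tracking SRS resource set
        2: {"srs_resource_set_ids": [3],               # same resource set, different per-trigger parameters
            "start_position_overrides": {"srs_id_2": 4}},
        3: {"srs_resource_set_ids": [7]},              # e.g., an antenna-switching SRS resource set
    }

    def resource_sets_for(srs_request_code_point):
        # Returns the SRS resource set identifiers triggered by the indicated code point.
        state = srs_trigger_states.get(srs_request_code_point)
        return state["srs_resource_set_ids"] if state else []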
In some aspects, the configuration information may configure a linkage or association of different SRS trigger states to different SRS resource sets having different usage types, such as Doppler tracking, antenna switching, codebook, non-codebook, beam management, and/or positioning, among other examples. In some aspects, for an aperiodic Doppler tracking SRS resource set, the configuration information (e.g., for one or more SRS trigger states) may indicate multiple SRS trigger states associated with or linked to the same aperiodic Doppler tracking SRS resource set. For example, each SRS trigger state associated with the same aperiodic Doppler tracking SRS resource set may include different combinations of configuration parameters for the aperiodic Doppler tracking SRS resource set (e.g., may provide a configuration for different time gaps between SRS resources or SRS symbols and/or may configure one or more SRS resources or SRS resource identifiers to be disabled or deactivated). This may provide additional flexibility to the base station110for dynamically modifying time gaps or the number of SRS resources to be transmitted for a specific triggering of an aperiodic Doppler tracking SRS resource set, while also reducing a signaling or dynamic reconfiguration overhead (e.g., as the base station110may only need to indicate a code point or indicator associated with the SRS trigger state, rather than providing explicitly the full configuration or parameters associated with the configuration). In some aspects, the configuration information may indicate that one or more SRS trigger states (e.g., from the configured SRS trigger states) are to be activated (e.g., are to be available for selection or use by the base station110and/or UE120). The configuration information may indicate that the base station110may modify which SRS trigger states are activated over time (e.g., via MAC-CE signaling). In some aspects, an SRS trigger state may be linked or mapped to an identifier of an SRS resource set (e.g., of an aperiodic Doppler tracking SRS resource set). Additionally, the SRS trigger state configuration may include an additional indication of one or more parameters of the aperiodic Doppler tracking SRS resource set (e.g., an SRS trigger state may explicitly indicate some of the configuration parameters for the aperiodic Doppler tracking SRS resource set). The UE120may configure the UE120for communicating with the base station110. In some aspects, the UE120may configure the UE120based at least in part on the configuration information. In some aspects, the UE120may be configured to perform one or more operations described herein. At710, the UE120may determine one or more SRS resource set configurations (e.g., as indicated by the base station110). For example, the UE120may determine or identify the one or more SRS resource set configurations based at least in part on the configuration information. The UE120may determine or identify one or more parameters (e.g., start position parameter(s), and/or bitmap parameters or configurations) for SRS resource identifier(s) that indicate intra-slot locations of SRS resources or SRS symbols associated with the SRS resource sets. At715, the base station110may determine a modification to at least one parameter of an aperiodic Doppler tracking SRS resource set. For example, the base station110may determine that a time gap for an aperiodic Doppler tracking SRS resource set (e.g., as indicated or configured by the configuration information) should be modified. 
For example, the base station110may determine a modification to at least one start position parameter (e.g., intra-slot start position parameter) for an SRS resource identifier associated with the aperiodic Doppler tracking SRS resource set. As another example, the base station110may determine that one or more SRS resources or SRS resource identifiers should be activated or deactivated. For example, the aperiodic Doppler tracking SRS resource set may be associated with three SRS resources or SRS resource identifiers. The base station110may determine that only two SRS resources or SRS resource identifiers are needed for uplink Doppler parameter estimation. Therefore, the base station110may determine that one of the SRS resources or SRS resource identifiers associated with the aperiodic Doppler tracking SRS resource set should be deactivated or disabled (e.g., to conserve overhead associated with transmitting the aperiodic Doppler tracking SRS resource set). The base station110may determine the modification to at least one parameter of the aperiodic Doppler tracking SRS resource set based at least in part on channel conditions, channel parameters, SNR conditions, UE120speeds, uplink Doppler parameter estimations, a type and/or number of uplink Doppler parameter estimations to be performed in a specific estimation session (e.g., at a given time) based on a specific SRS scheduling (e.g., by the base station110using the aperiodic Doppler tracking SRS resource set), and/or deployment parameters, among other examples. At720, the base station110may transmit, and the UE120may receive, DCI triggering a transmission of an aperiodic Doppler tracking SRS resource set (e.g., of an SRS resource set identifier for an aperiodic Doppler tracking SRS resource set) associated with the one or more SRS resource identifiers. The DCI may indicate one or more parameters for the SRS resource set (e.g., one or more modified parameters from a configuration of the SRS resource set). For example, the one or more parameters may indicate a modified time gap or a modified number of resources associated with the SRS resource set. In some aspects, the DCI may use a non-data-scheduling DCI type. The non-data-scheduling DCI type may use a DCI format for scheduling data transmissions (e.g., DCI format 0_0, 0_1, 1_0, and/or 1_1, among other examples), but one or more fields of the DCI format that are associated with scheduling data transmission may not be used and/or may be available for indicating other information. For example, fields associated with indicating a resource allocation (e.g., a time domain resource allocation and/or a frequency domain resource allocation), an MCS, and/or a hybrid automatic repeat request (HARQ) process, among other examples, may be available and/or used for indicating other information (e.g., information other than the type of information associated with the field as indicated by the DCI format as defined or otherwise fixed by a wireless communication standard, such as the 3GPP). If a non-data-scheduling DCI type is used, the DCI may indicate an SRS trigger state that is linked or associated with an aperiodic Doppler tracking SRS resource set. Additionally, the DCI may indicate (e.g., explicitly) one or more parameters for the aperiodic Doppler tracking SRS resource set via one or more fields of the DCI. 
The one or more fields of the DCI may be repurposed fields of the DCI format of the DCI when the DCI format is used as the non-data-scheduling DCI type (e.g., one or more unused or available fields of the DCI format when the DCI is a non-data-scheduling DCI type). In some aspects, the DCI may indicate (e.g., explicitly) full information for one or more parameters associated with the aperiodic Doppler tracking SRS resource set. For example, if a non-data-scheduling DCI type is used, the DCI may indicate (e.g., explicitly) full information for one or more parameters. In some aspects, the DCI may indicate a start position parameter for at least one SRS resource identifier of one or more SRS resource identifiers associated with the aperiodic Doppler tracking SRS resource set (e.g., if the first configuration type described above in connection withFIG.6is used to configure the aperiodic Doppler tracking SRS resource set). For example, the DCI may indicate a modified start position parameter (e.g., a modified intra-slot start position parameter) that is to be used for the at least one SRS resource identifier (e.g., the start position parameter indicated by the DCI may replace or overwrite, for the transmission triggered by the DCI, a start position parameter for the SRS resource identifier indicated by the configuration information). In some aspects, the DCI may include an indication to activate or deactivate at least one SRS resource identifier of one or more SRS resource identifiers associated with the aperiodic Doppler tracking SRS resource set (e.g., if the first configuration type described above in connection withFIG.6is used to configure the aperiodic Doppler tracking SRS resource set). For example, the DCI may enable or disable one or more SRS resource identifiers associated with the aperiodic Doppler tracking SRS resource set. If an SRS resource identifier is activated or enabled, then the DCI may indicate that an SRS resource associated with the SRS resource identifier is to be transmitted by the UE120for the SRS transmission triggered by the DCI. If an SRS resource identifier is deactivated or disabled, then the DCI may indicate that an SRS resource associated with the SRS resource identifier is not to be transmitted by the UE120for the SRS transmission triggered by the DCI. For example, the DCI may include a flag or other indicator for a pre-defined SRS resource (e.g., associated with a first or last SRS resource in the time domain as configured by the configuration information). If the flag indicates that the corresponding SRS resource identifier is deactivated or disabled (e.g., if the flag has a value of zero), then the corresponding SRS resource or SRS symbol will not be transmitted for the transmission triggered by the DCI. For example, the at least one SRS resource identifier that is associated with the flag may be associated with an SRS resource that occurs first in the time domain, or an SRS resource that occurs last in the time domain, among SRS resources associated with the aperiodic Doppler tracking SRS resource set. In some aspects, the DCI may include an indication of one or more activated SRS resource identifiers of one or more SRS resource identifiers associated with the aperiodic Doppler tracking SRS resource set (e.g., if the second configuration type described above in connection withFIG.6is used to configure the aperiodic Doppler tracking SRS resource set). 
For example, the indication of one or more activated SRS resource identifiers may include a bitmap. The bitmap may indicate the one or more activated SRS resource identifiers for the triggered transmission of the SRS resource set. For example, the bitmap may have a size of L bits, where a value of L corresponds to the number of SRS resource identifiers associated with the aperiodic Doppler tracking SRS resource set. The SRS trigger state indicated by the DCI may indicate the aperiodic Doppler tracking SRS resource set and the bitmap may indicate a subset of (activated) SRS resource identifiers (from a set of SRS resource identifiers associated with the aperiodic Doppler tracking SRS resource set) that are activated for the SRS transmission triggered by the DCI. SRS resource identifiers that are indicated as activated by the bitmap (e.g., by a value of one in the bitmap) may jointly define the waveform (e.g., the time domain pattern) of the triggered Doppler tracking SRS. In this way, the base station110may dynamically adjust time gaps and/or a number of resources associated with the triggered Doppler tracking SRS. In some aspects, the DCI may include a set of values (e.g., a list of values) for a start position parameter (e.g., an intra-slot start position parameter) for a single SRS resource identifier associated with the aperiodic Doppler tracking SRS resource set (e.g., if the third configuration type described above in connection withFIG.6is used to configure the aperiodic Doppler tracking SRS resource set). For example, the configuration information may configure the single SRS resource identifier. The set of values may indicate intra-slot time domain starting locations of SRS symbols for the triggered transmission of the SRS resource set. The set of values indicated by the DCI may be different than a set of values for the start position parameter indicated by the configuration information. In other words, the base station110may be enabled to modify a time domain starting location for at least one SRS resource or SRS symbol associated with the SRS resource identifier via the set of values for the start position parameter. A size of the set of values (e.g., a number of elements included in the set of values) indicated by the DCI may be based at least in part on a size of the set of values for the start position parameter indicated by the configuration information. In some aspects, the size of the set of values (e.g., a number of elements included in the set of values) indicated by the DCI may be less than the size of the set of values for the start position parameter indicated by the configuration information. For example, the configuration information may indicate a set of values for the start position parameter, with each value corresponding to an element (e.g., the set of values may correspond to a set of elements). In some aspects, the configuration information may indicate that a subset of elements of the set of elements may be modified by DCI. Therefore, the size of the set of values (e.g., a number of elements included in the set of values) indicated by the DCI may be based at least in part on a number of elements included in the subset of elements (e.g., the subset of elements that can be modified by DCI as indicated by the configuration information). In some aspects, the set of values for the start position parameter may include one or more valid values and one or more invalid values. 
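By way of illustration, and not limitation, the following Python sketch shows how such an L-bit activation bitmap may be interpreted; the identifiers and function name are hypothetical.

    def activated_resource_ids(configured_ids, activation_bitmap):
        # One bit per configured SRS resource identifier (L bits total); a value of 1
        # activates the corresponding resource for the transmission triggered by the DCI.
        assert len(configured_ids) == len(activation_bitmap)
        return [rid for rid, bit in zip(configured_ids, activation_bitmap) if bit == 1]

    # With three configured identifiers, a bitmap of (1, 0, 1) keeps the first and third
    # resources, which changes both the number of SRS symbols and the resulting time gap.
    active = activated_resource_ids(["srs_id_1", "srs_id_2", "srs_id_3"], (1, 0, 1))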
“Valid value” may refer to an intra-slot start location that is possible or available (e.g., based at least in part on a slot format and/or a number of symbols in each slot). “Invalid value” may refer to an intra-slot start location that is unavailable or not possible (e.g., based at least in part on the slot format and/or the number of symbols in each slot). For example, if there are 14 symbols in each slot, then values between 0 and 13 may be valid values for the start position parameter. A value of 14 or higher may be an invalid value for the start position parameter when there are 14 symbols in each slot because a value of 14 would indicate a starting location in a different slot. In some aspects, one or more invalid values may be predefined (e.g., in the configuration information). The one or more valid values in the set of values for the start position parameter indicated by the DCI may indicate SRS symbols for the single SRS resource identifier that are to be associated with the triggered transmission of the SRS resource set. The one or more invalid values may indicate SRS symbols for the single SRS resource identifier that are not to be associated with the triggered transmission of the SRS resource set. In other words, if an element in the set of values for the start position parameter includes an invalid value, then a corresponding SRS resource or symbol may be deactivated or disabled for the triggered transmission of the SRS resource set. The number of valid values signaled by the DCI may indicate (e.g., implicitly) the number of SRS symbols to be transmitted for the triggered transmission of the SRS resource set. Similarly, the number of invalid values signaled by the DCI may indicate (e.g., implicitly) the number of deactivated or disabled SRS symbols for the triggered transmission of the SRS resource set. By using invalid values to indicate the deactivated or disabled SRS symbols for the triggered transmission of the SRS resource set, a size of a field used to indicate the set of values for the start position parameter may remain the same regardless of the number of SRS symbols that are activated or deactivated by a given SRS triggering DCI message. Enabling the size of a field used to indicate the set of values for the start position parameter to remain the same may reduce a complexity associated with transmitting (e.g., by the base station110) and/or decoding (e.g., by the UE120) the DCI. In some aspects, the DCI may include an indication of a bitmap, where the bitmap indicates one or more enabled SRS symbols associated with a single SRS resource identifier to be transmitted for a given triggering or scheduling of the aperiodic Doppler tracking SRS resource set (e.g., if the fourth configuration type described above in connection withFIG.6is used to configure the aperiodic Doppler tracking SRS resource set). For example, the bitmap may indicate intra-slot starting locations for different SRS resources or SRS symbols associated with the single SRS resource identifier. In some aspects, the DCI may include an indication of the bitmap and an indication of a start position parameter, where the start position parameter indicates an intra-slot time domain starting location for SRS symbols indicated (e.g., enabled or disabled) by the bitmap. In some other aspects, the start position parameter associated with the bitmap may be fixed (e.g., at a first symbol in a slot or at another symbol in the slot) and may not be indicated by the DCI. 
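By way of example only, the following Python sketch shows one possible interpretation of a fixed-size list of start position values containing valid and invalid entries, assuming 14 symbols per slot; the value used to represent an invalid entry is hypothetical.

    NUM_SYMBOLS_PER_SLOT = 14  # assumption: normal cyclic prefix

    def triggered_start_positions(signaled_values):
        # Values 0..13 are valid intra-slot starting symbols; any other value is treated
        # as invalid and marks the corresponding SRS symbol as deactivated for this trigger.
        return [v for v in signaled_values if 0 <= v < NUM_SYMBOLS_PER_SLOT]

    # A fixed-size field of four values in which one entry is invalid (here 15) implicitly
    # indicates that only three SRS symbols are transmitted for this triggered transmission.
    positions = triggered_start_positions([2, 5, 9, 15])  # [2, 5, 9]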
If a start position parameter is indicated by the DCI, then the value of the start position parameter may define or indicate (e.g., implicitly) a size of the bitmap (e.g., based at least in part on the value of the start position parameter and a number of symbols in each slot). The bitmap indicated by the DCI may be used for the triggered Doppler tracking SRS transmission (e.g., rather than a bitmap indicated by the configuration information for the aperiodic Doppler tracking SRS resource set). The value of the start position parameter and/or the bitmap indicated by the DCI may jointly define the waveform (e.g., the time domain pattern) of the triggered Doppler tracking SRS transmission. In some aspects, the one or more parameters for the triggered Doppler tracking SRS transmission may be indicated by an SRS trigger state, rather than being explicitly indicated by the DCI. For example, if a data-scheduling DCI type is used by the base station110to trigger the Doppler tracking SRS transmission, then the DCI may indicate an SRS trigger state associated with or linked to a Doppler tracking SRS resource set. The SRS trigger state may additionally indicate a configuration and/or the one or more parameters for the triggered Doppler tracking SRS transmission. For example, as described above, multiple SRS trigger states may be configured at the UE120(e.g., via the configuration information or another RRC configuration). The configuration of the SRS trigger states may indicate different configuration parameters and/or information similar to the information that may be indicated (e.g., explicitly) by the DCI as described above. For example, an SRS trigger state (e.g., for a Doppler tracking SRS resource set) may indicate a first value for a start position parameter for at least one SRS resource identifier of the one or more SRS resource identifiers and/or an indication of one or more activated SRS resource identifiers of the one or more SRS resource identifiers (e.g., if the first configuration type described above in connection withFIG.6is used to configure the aperiodic Doppler tracking SRS resource set). Additionally, or alternatively, an SRS trigger state (e.g., for a Doppler tracking SRS resource set) may indicate a first bitmap indicating one or more activated SRS resource identifiers from the one or more SRS resource identifiers (e.g., if the second configuration type described above in connection withFIG.6is used to configure the aperiodic Doppler tracking SRS resource set). Additionally, or alternatively, an SRS trigger state (e.g., for a Doppler tracking SRS resource set) may indicate a set of values for the start position parameter (e.g., if the third configuration type described above in connection withFIG.6is used to configure the aperiodic Doppler tracking SRS resource set). Additionally, or alternatively, an SRS trigger state (e.g., for a Doppler tracking SRS resource set) may indicate a second bitmap indicating one or more activated SRS symbols for an SRS resource identifier of the one or more SRS resource identifiers, and/or a second value for the start position parameter indicating an intra-slot start position for SRS symbols indicated by the second bitmap (e.g., if the fourth configuration type described above in connection withFIG.6is used to configure the aperiodic Doppler tracking SRS resource set). 
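By way of illustration, and not limitation, the following Python sketch shows one possible reading of the implicit sizing rule described above, assuming 14 symbols per slot and assuming that the bitmap covers the symbols from the signaled start position to the end of the slot; both assumptions are illustrative rather than normative.

    NUM_SYMBOLS_PER_SLOT = 14  # assumption: normal cyclic prefix

    def implied_bitmap_size(start_position):
        # One possible reading: the bitmap covers the symbols from the signaled start
        # position to the end of the slot, so its size shrinks as the start position grows.
        return NUM_SYMBOLS_PER_SLOT - start_position

    def occupied_symbols(start_position, bitmap):
        # Under the same assumption, bitmap element i corresponds to symbol index
        # start_position + i.
        return [start_position + i for i, bit in enumerate(bitmap) if bit == 1]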
In other words, the configuration of an SRS trigger state associated with an aperiodic Doppler tracking SRS resource set triggering may indicate information similar to the information that may be explicitly indicated by the DCI, as described elsewhere herein. An SRS trigger state may be mapped to a code point or other indicator. The code point or indicator may be included in a field of the DCI (e.g., in an SRS request field). Therefore, the DCI may indicate the SRS trigger state and the UE120may identify the configuration and/or the one or more parameters for the triggered Doppler tracking SRS transmission based at least in part on the configuration indicated by the triggered SRS trigger state. In some aspects, the base station110may configure SRS trigger states for SRS resource sets having other usage types (e.g., in addition to SRS trigger states for SRS resource sets having a Doppler tracking usage type). For example, the configuration information (or another RRC configuration) may indicate a set of SRS trigger states. The set of SRS trigger states may be associated with a set of SRS resource sets, including the aperiodic Doppler tracking SRS resource set, that can be triggered by the DCI. The set of SRS resource sets may include other Doppler tracking SRS resource sets and/or SRS resource sets having different usage types (e.g., antenna switching, codebook, non-codebook, beam management, and/or positioning). A subset of SRS trigger states, included in the set of the configured SRS trigger states, may be associated with the SRS resource set (e.g., the aperiodic Doppler tracking SRS resource set). Each SRS trigger state included in the subset of SRS trigger states may indicate a different set of parameters for the SRS resource set. In other words, for a single Doppler tracking SRS resource set, multiple SRS trigger states may be configured by the base station110(each SRS trigger state will trigger the Doppler tracking SRS resource set but with a different combination of the one or more parameters). For example, each SRS trigger state, from the subset of SRS trigger states, may be configured to be associated with a different time gap and/or a different number of SRS resources or SRS symbols for the Doppler tracking SRS resource set. An example of different SRS trigger states linked to or associated with a single Doppler tracking SRS resource set is depicted and described in more detail in connection withFIG.8. The DCI transmitted by the base station110may include an indication (e.g., a code point or another indication) associated with an SRS trigger state linked to the aperiodic Doppler tracking SRS resource set triggering. For example, the indication associated with the SRS trigger state may be signaled or included in an SRS request field of the DCI. In some aspects, the field (e.g., the SRS request field) may be a size of two bits (e.g., enabling four different values to be indicated via the field). In some other aspects, the size of the field (e.g., the SRS request field) may be increased (e.g., to be larger than two bits) to enable a larger number of values to be indicated via the field (e.g., and therefore to enable a larger number of SRS trigger states to be indicated via the field). This may provide additional scheduling flexibility for the base station110because the number of SRS trigger states that are available to be indicated by the base station110via DCI may be increased. 
If a data-scheduling DCI type is used for the DCI, then the UE120may identify the one or more parameters of the triggered Doppler tracking SRS transmission via the configuration of the SRS trigger state. This may conserve resources associated with transmitting the DCI as additional information indicating the one or more parameters may not need to be included in the DCI. In some aspects, the base station110may transmit, and the UE120may receive, an indication of a subset of SRS trigger states, from a set of configured SRS trigger states, that are activated. The base station110may transmit the indication of the subset of activated SRS trigger states via a MAC-CE message. The base station110may indicate the subset of SRS trigger states that may be indicated or selected by the base station110via the DCI to trigger an aperiodic SRS transmission. In some aspects, the subset of activated SRS trigger states may include SRS trigger states associated with Doppler tracking SRS resource sets and/or SRS trigger states associated with other usage types. The indication of the subset of activated SRS trigger states may include an indication of a mapping of each SRS trigger state to a code point or other indicator (e.g., that may be included in DCI to indicate the corresponding SRS trigger state). For example, where four values can be indicated via the field of the DCI (e.g., the SRS request field), a first value (e.g., “00”) may be mapped to an indication that no SRS resource sets are triggered, a second value (e.g., “01”) may be mapped to a first SRS trigger state, a third value (e.g., “10”) may be mapped to a second SRS trigger state, and a fourth value (e.g., “11”) may be mapped to a third SRS trigger state. Therefore, in such examples, the first SRS trigger state, the second SRS trigger state, and the third SRS trigger state may correspond to the subset of three activated SRS trigger states. The DCI transmitted by the base station110may include an indication of an SRS trigger state, from the subset of activated SRS trigger states. The SRS trigger state may indicate at least one of the one or more parameters for the triggered transmission of the Doppler tracking SRS resource set, as described in more detail elsewhere herein (e.g., in addition to a linkage with or an indication of one or more SRS resource set identifiers, where one of the SRS resource set identifiers is associated with an aperiodic Doppler tracking SRS resource set). Signaling the subset of activated SRS trigger states may enable the size of the field of the DCI (e.g., the SRS request field) to be reduced or maintained (e.g., at two bits or similar sizes) because the base station110is enabled to semi-statically select the subset of activated SRS trigger states (e.g., which includes fewer SRS trigger states than the set of configured SRS trigger states). Reducing or maintaining the size of the field of the DCI (e.g., the SRS request field) may conserve overhead associated with transmitting the DCI. At725, the UE120may determine one or more modified parameters for the triggered aperiodic Doppler tracking SRS resource set. For example, the UE120may determine or identify the one or more modified parameters based at least in part on information indicated by the DCI. For example, the DCI may indicate an SRS trigger state. The UE120may identify an SRS resource set (e.g., an aperiodic Doppler tracking SRS resource set) associated, or linked, with the SRS trigger state. 
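By way of example, and not limitation, the following Python sketch shows one possible decoding of a two-bit SRS request field according to the mapping in the example above; the state names are hypothetical, and the mapping itself would be established by the MAC-CE activation signaling described above.

    # Illustrative mapping of the 2-bit SRS request field to the activated SRS trigger states.
    activated_code_point_map = {
        0b00: None,                   # no SRS resource set triggered
        0b01: "srs_trigger_state_1",
        0b10: "srs_trigger_state_2",
        0b11: "srs_trigger_state_3",
    }

    def triggered_state(srs_request_field_value):
        # Returns the SRS trigger state selected by the DCI, or None if no set is triggered.
        return activated_code_point_map.get(srs_request_field_value)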
In some aspects, the UE120may determine or identify the one or more modified parameters for the triggered aperiodic Doppler tracking SRS resource set based at least in part on explicit information included in the DCI. In some other aspects, the UE120may determine or identify the one or more modified parameters for the triggered aperiodic Doppler tracking SRS resource set based at least in part on a configuration of the indicated SRS trigger state. At730, the UE120may transmit, and the base station110may receive, the SRS (e.g., a Doppler tracking SRS) based at least in part on the one or more parameters (e.g., the one or more parameters indicated by the DCI). For example, the UE120may transmit SRS resources or SRS symbols at time domain starting locations within a slot as defined by a start position parameter indicated by the DCI. Additionally, or alternatively, the UE120may transmit one or more activated or enabled SRS resources or SRS symbols indicated by the DCI. Additionally, or alternatively, the UE120may refrain from transmitting one or more deactivated or disabled SRS resources or SRS symbols indicated by the DCI. In this way, a time gap between SRS resources and/or a number of SRS resources or SRS symbols transmitted by the UE120may be dynamically adapted or changed by the base station110. This may improve Doppler parameter estimation (e.g., as explained in more detail elsewhere herein) and/or may reduce an overhead associated with transmitting the Doppler tracking SRS. At735, the base station110may measure the Doppler tracking SRS received from the UE120. For example, the base station110may measure a first set (e.g., pair) of SRS repetitions that are received in the same subframe or same slot. The base station110may measure the Doppler tracking SRS in a similar manner as described in connection withFIG.6. Additionally, the base station110may measure a second set (e.g., pair) of SRS repetitions that are received in the same subframe or same slot. In some aspects, a time gap associated with the first set (e.g., pair) of SRS repetitions may be different than a time gap associated with the second set (e.g., pair) of SRS repetitions (e.g., to enable the base station110to estimate different Doppler parameters using the first set of SRS repetitions and the second set of SRS repetitions). In some aspects, the first set of SRS repetitions and the second set of SRS repetitions may include one or more common SRS repetitions or SRS symbols. For example, the DCI may trigger the UE120to transmit an SRS on a first symbol, a third symbol, and a sixth symbol of a slot. The first set of SRS repetitions may include the SRS transmitted on the first symbol and the third symbol (e.g., to enable the base station110to estimate an uplink Doppler shift). The second set of SRS repetitions may include the SRS transmitted on the first symbol and the sixth symbol (e.g., to enable the base station110to estimate an uplink Doppler spread). At740, the base station110may estimate one or more uplink Doppler parameters using the SRS transmitted by the UE120. For example, the base station110may measure the SRS messages to estimate a Doppler shift or a frequency offset for the uplink channel. Additionally, or alternatively, the base station110may measure the SRS messages to estimate a Doppler spread or a time correlation for the uplink channel. The base station110may estimate the one or more uplink Doppler parameters using the SRS transmitted by the UE120in a similar manner as described in connection withFIG.6. 
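By way of illustration only, the following Python sketch pairs the SRS symbols from the example above (symbol indices 0, 2, and 5 for the first, third, and sixth symbols of the slot) into a short-gap pair and a long-gap pair that share the first SRS symbol; the symbol duration is an assumed value for a 30 kHz subcarrier spacing with a normal cyclic prefix.

    SYMBOL_DURATION_S = 35.7e-6   # approximate OFDM symbol duration at 30 kHz SCS (assumption)

    srs_symbols = [0, 2, 5]       # first, third, and sixth symbols of the slot

    # The short-gap pair supports Doppler shift estimation; the long-gap pair, which reuses
    # the first SRS symbol, supports Doppler spread (time correlation) estimation.
    shift_pair_gap_s = (srs_symbols[1] - srs_symbols[0]) * SYMBOL_DURATION_S
    spread_pair_gap_s = (srs_symbols[2] - srs_symbols[0]) * SYMBOL_DURATION_S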
In some aspects, the base station110may estimate an uplink Doppler parameter using the Doppler tracking SRS transmitted by the UE120using different time gaps to estimate the Doppler parameter. For example, the base station110may perform a first estimation for the Doppler parameter using a first time gap associated with the Doppler tracking SRS resource(s) transmitted by the UE120. The base station110may perform a second estimation for the Doppler parameter using a second time gap associated with the Doppler tracking SRS resource(s) transmitted by the UE120. The first time gap may be smaller than the second time gap. For example, the first estimation may provide additional robustness with respect to phase ambiguity or Doppler aliasing (e.g., the smaller time gap may mitigate a risk of phase ambiguity or Doppler aliasing), but the first estimation may provide a lower accuracy due to the smaller time gap. The base station110may use the first estimation to apply a correction factor to compensate for the frequency offset or Doppler shift estimated from the first estimation (e.g., for the second estimation). In some aspects, the second time gap may be based at least in part on (e.g., may be defined by) an accuracy of the first estimation (e.g., may be bounded for each SNR), rather than being defined by a speed range of the UE120. By using the larger second time gap, an accuracy of the uplink Doppler parameter estimation may be improved. Therefore, by estimating the uplink Doppler parameter using the Doppler tracking SRS using the two-step approach described above (e.g., a first coarse estimation and a second fine estimation), an accuracy of the estimation may be improved and the estimation may have improved robustness to phase ambiguity or Doppler aliasing. As a result, the base station110may be enabled to perform accurate and robust Doppler parameter estimation for an uplink channel using a configured Doppler tracking SRS. For example, the base station110may be enabled to optimize multiple, different, time gaps between symbols associated with an SRS transmission for Doppler parameter estimation dynamically via DCI. The base station110may be enabled to configure the symbols on which an SRS transmission occurs to optimize the temporal spacing between the SRS transmissions for Doppler parameter estimation dynamically via DCI. Accurate uplink Doppler parameter estimation may improve uplink DMRS configuration selection, synchronization loop tracking by the base station110, pre-compensation of a frequency offset for the downlink channel (for example, in a case of an HST SFN scenario where a transmission scheme 1 (e.g., as defined, or otherwise fixed by, a wireless communication standard) and Doppler shift pre-compensation is employed for downlink transmissions), and/or uplink channel estimation and/or uplink link adaptation among other examples. As described herein, the base station110may dynamically adapt one or more parameters of an aperiodic Doppler tracking SRS resource set. Therefore, the base station110may be enabled to modify a time gap between SRS resources associated with the aperiodic Doppler tracking SRS resource set and/or may be enabled to modify a number of SRS resources to be transmitted for the aperiodic Doppler tracking SRS resource set (e.g., the base station110may dynamically activate or deactivate SRS resource identifiers or SRS resource(s) of an SRS resource identifier). 
As a result, uplink Doppler parameter estimations performed using the aperiodic Doppler tracking SRS may be improved. For example, the base station110may be enabled to dynamically adapt a time gap between SRS resources (e.g., based at least in part on channel conditions, channel parameters, SNR conditions, UE120speeds, and/or deployment parameters) to optimize the time gap(s) for different Doppler parameter estimations. This may improve an accuracy of Doppler parameter estimations by enabling the base station110to configure different time gaps or pilot spacings between SRS resources (e.g., for different Doppler parameter estimations) within the same SRS resource set. Additionally, the base station110may, to reduce an overhead associated with transmitting the Doppler tracking SRS, be enabled to dynamically indicate different numbers of SRS resources or SRS symbols that may be required to support different estimations at different time periods. As indicated above,FIG.7is provided as an example. Other examples may differ from what is described with respect toFIG.7. FIG.8is a diagram illustrating an example800associated with SRS trigger states for an aperiodic Doppler tracking SRS resource set. As shown inFIG.8, a configuration805for an aperiodic (AP) Doppler tracking SRS resource set may include a first SRS resource identifier (SRS ID 1), a second SRS resource identifier (SRS ID 2), and a third SRS resource identifier (SRS ID 3). The first SRS resource identifier may be associated with a first value for the start position parameter (e.g., n1). The second SRS resource identifier may be associated with a second value for the start position parameter (e.g., n2). The third SRS resource identifier may be associated with a third value for the start position parameter (e.g., n3). The configuration805for the aperiodic Doppler tracking SRS resource set may be configured in a similar manner as described in connection withFIGS.6and7. The value of the start position parameter may indicate a starting symbol for an SRS resource relative to a last symbol in a slot. The example800depicts an example where the base station110dynamically indicates (e.g., implicitly) modified parameters of a triggered aperiodic Doppler tracking SRS resource set using configured SRS trigger states, as explained in more detail elsewhere herein. For example, the base station110may transmit a configuration810for one or more SRS trigger states associated with the aperiodic Doppler tracking SRS resource set. For example, the configuration810may include a first SRS trigger state (SRS trigger state 1), a second SRS trigger state (SRS trigger state 2), a third SRS trigger state (SRS trigger state 3), and a fourth SRS trigger state (SRS trigger state 4). As shown inFIG.8, the SRS trigger states may indicate whether an SRS resource identifier (e.g., the first SRS resource identifier) indicated by the configuration805is enabled (e.g., activated) or disabled (e.g., deactivated). Additionally, or alternatively, the SRS trigger states may indicate a modified value for the start position parameter for one or more of the SRS resource identifiers (e.g., for the second SRS resource identifier in the example800). 
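By way of illustration, and not limitation, the following Python sketch shows how a start position counted backward from the last symbol of the slot maps to a starting symbol index, and how a modified start position value changes the resulting time gaps, as elaborated below in connection with 815 through 830; the numeric values for n1, n2, n3, and the offset x are hypothetical.

    NUM_SYMBOLS_PER_SLOT = 14  # assumption: normal cyclic prefix

    def starting_symbol(start_position_from_end):
        # In example 800 the start position counts backward from the last symbol of the
        # slot, so a larger value corresponds to an earlier symbol.
        return (NUM_SYMBOLS_PER_SLOT - 1) - start_position_from_end

    def gap(pos_a_from_end, pos_b_from_end):
        # Time gap, in symbols, between the starting symbols of two SRS resources.
        return abs(starting_symbol(pos_a_from_end) - starting_symbol(pos_b_from_end))

    n1, n2, n3, x = 10, 6, 2, 2    # hypothetical values
    gap(n1, n2)      # 4 symbols between the first and second SRS resources (default)
    gap(n1, n2 - x)  # 6 symbols: moving the second resource later enlarges this gap
    gap(n2 - x, n3)  # 2 symbols: and shrinks the gap to the third resource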
For example, at815, if the first SRS trigger state is indicated by DCI, the UE120may transmit using a resource indicated by the first SRS resource identifier (e.g., because the first SRS trigger state indicates that the first SRS resource identifier is enabled), a resource indicated by the second SRS resource identifier (e.g., at the time domain starting location of n2), and a resource indicated by the third SRS resource identifier. At820, if the second SRS trigger state is indicated by DCI, the UE120may transmit using the resource indicated by the first SRS resource identifier (e.g., because the second SRS trigger state indicates that the first SRS resource identifier is enabled), a resource indicated by the second SRS resource identifier (e.g., at the time domain starting location of n2−x), and the resource indicated by the third SRS resource identifier. For example, the second SRS trigger state may modify the time domain starting location of the resource indicated by the second SRS resource identifier by x symbols. Therefore, the time gap between the resource indicated by the first SRS resource identifier and the resource indicated by the second SRS resource identifier may be increased (e.g., as compared to the SRS transmitted by the UE120at815). Additionally, the time gap between the resource indicated by the second SRS resource identifier and the resource indicated by the third SRS resource identifier may be decreased (e.g., as compared to the SRS transmitted by the UE120at815). At825, if the third SRS trigger state is indicated by DCI, the UE120may refrain from transmitting using the resource indicated by the first SRS resource identifier because the third SRS trigger state indicates that the first SRS resource identifier is disabled. The UE120may transmit using a resource indicated by the second SRS resource identifier (e.g., at the time domain starting location of n2+y), and the resource indicated by the third SRS resource identifier. Therefore, the UE120may transmit using fewer resources (e.g., as compared to the SRS transmitted by the UE120at815and at820) because the first SRS resource identifier is disabled. The third SRS trigger state may modify the time domain starting location of the resource indicated by the second SRS resource identifier by y symbols. Therefore, the time gap between the resource indicated by the second SRS resource identifier and the resource indicated by the third SRS resource identifier may be increased (e.g., as compared to the SRS transmitted by the UE120at815and/or at820). At830, if the fourth SRS trigger state is indicated by DCI, the UE120may refrain from transmitting using the resource indicated by the first SRS resource identifier because the fourth SRS trigger state indicates that the first SRS resource identifier is disabled. The UE120may transmit using a resource indicated by the second SRS resource identifier (e.g., at the time domain starting location of n2−z), and the resource indicated by the third SRS resource identifier. Therefore, the UE120may transmit using fewer resources (e.g., as compared to the SRS transmitted by the UE120at815and at820) because the first SRS resource identifier is disabled. For example, the fourth SRS trigger state may modify the time domain starting location of the resource indicated by the second SRS resource identifier by z symbols. 
Therefore, the time gap between the resource indicated by the second SRS resource identifier and the resource indicated by the third SRS resource identifier may be decreased (e.g., as compared to the SRS transmitted by the UE120at815, at820and/or at825). As a result, as shown inFIG.8, the base station110may be enabled to dynamically adjust one or more time gaps between SRS resources associated with an aperiodic Doppler tracking SRS resource set. Additionally, or alternatively, the base station110may be enabled to dynamically adjust a number of SRS resources transmitted for the aperiodic Doppler tracking SRS resource set. This may improve uplink Doppler parameter estimations performed by the base station110because the time gaps may be dynamically modified using DCI (e.g., which allows a synchronized signaling and is associated with less latency than an RRC reconfiguration that does not allow frequent reconfigurations “on the fly” without some interruption in SRS triggering/transmission). Additionally, this may conserve resources because the base station110may dynamically disable (or enable) SRS resources associated with the aperiodic Doppler tracking SRS resource set as needed by the base station110(e.g., a minimum required number of SRS symbols/repetitions is transmitted for every session of aperiodic Doppler tracking SRS transmission). Therefore, the UE120may not transmit an SRS resource if the base station110does not need the SRS resource for uplink Doppler parameter estimation(s). As indicated above,FIG.8is provided as an example. Other examples may differ from what is described with respect toFIG.8. FIG.9is a flowchart of an example method900of wireless communication. The method900may be performed by, for example, a UE (e.g., UE120). At910, the UE may receive configuration information for an SRS resource set, wherein the configuration information indicates a Doppler tracking usage type for the SRS resource set, that the SRS resource set is an aperiodic SRS resource set, and a configuration for one or more SRS resource identifiers associated with the SRS resource set. For example, the UE (e.g., using communication manager140and/or reception component1102, depicted inFIG.11) may receive configuration information for an SRS resource set, wherein the configuration information indicates a Doppler tracking usage type for the SRS resource set, that the SRS resource set is an aperiodic SRS resource set, and a configuration for one or more SRS resource identifiers associated with the SRS resource set, as described above in connection with, for example,FIG.7and at705. In some aspects, receiving the configuration information includes receiving an indication of one or more SRS trigger states associated with the SRS resource set, wherein an SRS trigger state, of the one or more SRS trigger states, indicates the one or more parameters for the SRS, and wherein each SRS trigger state, of the one or more SRS trigger states, is mapped to a DCI code point of an SRS request field. At920, the UE may receive DCI triggering a transmission of the SRS resource set associated with the one or more SRS resource identifiers, wherein the DCI indicates one or more parameters for the SRS resource set, and wherein the one or more parameters indicate a modified time gap or a modified number of resources associated with the SRS resource set. 
For example, the UE (e.g., using communication manager140and/or reception component1102, depicted inFIG.11) may receive DCI triggering a transmission of the SRS resource set associated with the one or more SRS resource identifiers, wherein the DCI indicates one or more parameters for the SRS resource set, and wherein the one or more parameters indicate a modified time gap or a modified number of resources associated with the SRS resource set, as described above in connection with, for example,FIG.7and at720. In some aspects, the DCI uses a non-data-scheduling DCI type. In some aspects, the DCI uses a data-scheduling DCI type. In some aspects, the DCI uses a non-data-scheduling DCI type, and wherein receiving the DCI comprises receiving an indication of the one or more parameters via one or more fields of the DCI, wherein the one or more fields are repurposed fields of a DCI format of the DCI when the DCI format is used as the non-data-scheduling DCI type. In some aspects, the DCI uses a non-data-scheduling DCI type, and receiving the DCI includes receiving an indication of a start position parameter for at least one SRS resource identifier of the one or more SRS resource identifiers, and the start position parameter is used for the at least one SRS resource identifier. In some aspects, the DCI uses a non-data-scheduling DCI type, and receiving the DCI includes receiving an indication to activate or deactivate at least one SRS resource identifier of the one or more SRS resource identifiers. In some aspects, the at least one SRS resource identifier is associated with an SRS resource that occurs first in a time domain, or an SRS resource that occurs last in the time domain, among SRS resources associated with the SRS resource set. In some aspects, the DCI uses a non-data-scheduling DCI type, and receiving the DCI includes receiving an indication of one or more activated SRS resource identifiers of the one or more SRS resource identifiers. In some aspects, the indication includes a bitmap, wherein the bitmap indicates the one or more activated SRS resource identifiers for the triggered transmission of the SRS resource set. In some aspects, the DCI uses a non-data-scheduling DCI type, the one or more SRS resource identifiers include a single SRS resource identifier, and receiving the DCI includes receiving a set of values for a start position parameter for the single SRS resource identifier, wherein the set of values indicate intra-slot time domain starting locations of SRS symbols for the triggered transmission of the SRS resource set. In some aspects, the DCI uses a non-data-scheduling DCI type, the one or more SRS resource identifiers include a single SRS resource identifier, and receiving the DCI includes receiving a set of values for a start position parameter for the single SRS resource identifier, the set of values includes one or more valid values and one or more invalid values, the one or more valid values indicate SRS symbols for the single SRS resource identifier that are to be associated with the triggered transmission of the SRS resource set, and the one or more invalid values indicate SRS symbols for the single SRS resource identifier that are not to be associated with the triggered transmission of the SRS resource set. 
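As a non-limiting sketch of the valid/invalid start-position-value mechanism described above (the use of a reserved value such as -1 to mark an invalid entry, and the function name, are assumptions introduced only for illustration):

def srs_symbol_start_positions(start_position_values, invalid_value=-1):
    """Each entry corresponds to one SRS symbol of the single SRS resource identifier.
    A valid entry gives that symbol's intra-slot starting location; an entry equal to
    the reserved invalid value removes the symbol from the triggered transmission."""
    return [value for value in start_position_values if value != invalid_value]

# Example: four configured SRS symbols; the second symbol is deselected by the DCI.
print(srs_symbol_start_positions([2, -1, 8, 11]))  # -> [2, 8, 11]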
In some aspects, the DCI uses a non-data-scheduling DCI type, the one or more SRS resource identifiers include a single SRS resource identifier, and receiving the DCI includes receiving an indication of a bitmap, wherein the bitmap indicates one or more activated SRS symbols associated with the single SRS resource identifier, and the bitmap is used for the triggered transmission of the SRS resource set. In some aspects, receiving the DCI includes receiving the indication of the bitmap and an indication of a start position parameter, for the triggered transmission of the SRS resource set, and the start position parameter indicates an intra-slot time domain starting location for SRS symbols indicated by the bitmap. In some aspects, receiving the DCI includes receiving an indication of an SRS trigger state, wherein the SRS trigger state indicates one or more SRS resource set identifiers including an identifier of the SRS resource set and the one or more parameters for the triggered transmission of the SRS resource set. In some aspects, the one or more parameters include at least one of a first value for a start position parameter for at least one SRS resource identifier of the one or more SRS resource identifiers, an indication of one or more activated SRS resource identifiers of the one or more SRS resource identifiers, a first bitmap indicating one or more activated SRS resource identifiers from the one or more SRS resource identifiers, a set of values for the start position parameter, a second bitmap indicating one or more activated SRS symbols for an SRS resource identifier of the one or more SRS resource identifiers, or a second value for the start position parameter indicating an intra-slot start position for SRS symbols indicated by the second bitmap. In some aspects, the SRS trigger state is included in a set of SRS trigger states indicated by the configuration information, wherein the set of SRS trigger states are associated with a set of SRS resource sets, including the SRS resource set, and wherein a subset of SRS trigger states, included in the set of SRS trigger states, are associated with the SRS resource set, wherein each SRS trigger state included in the subset of SRS trigger states indicates a different set of parameters for the SRS resource set. In some aspects, receiving the indication of the SRS trigger state includes receiving the indication of the SRS trigger state via an SRS request field of the DCI. At930, the UE may transmit the SRS resource set based at least in part on the one or more parameters. For example, the UE (e.g., using communication manager140and/or transmission component1104, depicted inFIG.11) may transmit the SRS resource set based at least in part on the one or more parameters, as described above in connection with, for example,FIG.7and at730. The UE may transmit the SRS resource set using a start position parameter or activated resources indicated by the DCI. In some aspects, the configuration information indicates a set of SRS trigger states, and the UE may receive, via a MAC-CE message, an indication of a subset of SRS trigger states, from the set of SRS trigger states, that are activated, and receiving the DCI includes receiving an indication of an SRS trigger state, from the subset of SRS trigger states, wherein the SRS trigger state indicates at least one of the one or more parameters for the triggered transmission of the SRS resource set.
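The two-stage selection described in the preceding paragraph, in which a MAC-CE message activates a subset of the configured SRS trigger states and the DCI then selects one activated state, can be sketched as follows (the dictionaries, the reservation of code point 0, and the example trigger state labels are assumptions used only for illustration):

configured_trigger_states = {
    0: "nominal time gaps",
    1: "gap before second resource widened",
    2: "gap before second resource narrowed",
    3: "first resource disabled",
}

# MAC-CE: activate a subset of the configured trigger states; only these are
# addressable by the DCI SRS request field until the next MAC-CE update.
activated_states = [1, 3]
codepoint_to_state = {cp + 1: s for cp, s in enumerate(activated_states)}  # code point 0 assumed reserved

def trigger_state_from_dci(srs_request_field):
    """Resolve the DCI SRS request field code point to an activated trigger state."""
    state_id = codepoint_to_state.get(srs_request_field)
    return None if state_id is None else configured_trigger_states[state_id]

print(trigger_state_from_dci(2))  # -> 'first resource disabled'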
AlthoughFIG.9shows example blocks of method900, in some aspects, method900may include additional blocks, fewer blocks, different blocks, or differently arranged blocks than those depicted inFIG.9. Additionally, or alternatively, two or more of the blocks of method900may be performed in parallel. FIG.10is a flowchart of an example method1000of wireless communication. The method1000may be performed by, for example, a base station (e.g., base station110). At1010, the base station may transmit, to a UE, configuration information for an SRS resource set, wherein the configuration information indicates a Doppler tracking usage type for the SRS resource set, that the SRS resource set is an aperiodic SRS resource set, and a configuration for one or more SRS resource identifiers associated with the SRS resource set. For example, the base station (e.g., using communication manager150and/or transmission component1304, depicted inFIG.13) may transmit, to a UE, configuration information for an SRS resource set, wherein the configuration information indicates a Doppler tracking usage type for the SRS resource set, that the SRS resource set is an aperiodic SRS resource set, and a configuration for one or more SRS resource identifiers associated with the SRS resource set, as described above in connection with, for example,FIG.7and at705. In some aspects, transmitting the configuration information includes transmitting an indication of one or more SRS trigger states associated with the aperiodic SRS resource set, wherein an SRS trigger state, of the one or more SRS trigger states, indicates the one or more parameters for the SRS resource set, and wherein each SRS trigger state, of the one or more SRS trigger states, is mapped to a DCI code point of an SRS request field. At1020, the base station may transmit, to the UE, DCI triggering a transmission of the SRS resource set associated with the one or more SRS resource identifiers, wherein the DCI indicates one or more parameters for the SRS resource set, and wherein the one or more parameters indicate a modified time gap or a modified number of resources associated with the SRS resource set. For example, the base station (e.g., using communication manager150and/or transmission component1304, depicted inFIG.13) may transmit, to the UE, DCI triggering a transmission of the SRS resource set associated with the one or more SRS resource identifiers, wherein the DCI indicates one or more parameters for the SRS resource set, and wherein the one or more parameters indicate a modified time gap or a modified number of resources associated with the SRS resource set, as described above in connection with, for example,FIG.7and at720. In some aspects, the DCI uses a non-data-scheduling DCI type. In some aspects, the DCI uses a data-scheduling DCI type. In some aspects, the DCI uses a non-data-scheduling DCI type, and transmitting the DCI includes transmitting an indication of the one or more parameters via one or more fields of the DCI, wherein the one or more fields are repurposed fields of a DCI format of the DCI when the DCI format is used as the non-data-scheduling DCI type. In some aspects, the DCI uses a non-data-scheduling DCI type, and transmitting the DCI includes transmitting an indication of a start position parameter for at least one SRS resource identifier of the one or more SRS resource identifiers, wherein the start position parameter is used for the at least one SRS resource identifier indicated by the configuration information. 
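To make the configuration transmitted at1010more concrete, the following non-limiting sketch shows one possible in-memory representation of an aperiodic Doppler tracking SRS resource set with DCI-addressable trigger states (the class and field names are assumptions for readability and do not correspond to actual RRC information elements):

from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class SrsTriggerStateConfig:
    enabled_resource_ids: List[int]              # SRS resource identifiers enabled by this trigger state
    start_position_values: Dict[int, int]        # per-identifier start position parameter values

@dataclass
class DopplerSrsResourceSetConfig:
    usage: str = "dopplerTracking"               # Doppler tracking usage type
    resource_type: str = "aperiodic"             # aperiodic SRS resource set
    resource_ids: List[int] = field(default_factory=lambda: [0, 1, 2])
    # Trigger states keyed by the DCI code point of the SRS request field.
    trigger_states: Dict[int, SrsTriggerStateConfig] = field(default_factory=dict)

config = DopplerSrsResourceSetConfig()
config.trigger_states[1] = SrsTriggerStateConfig([0, 1, 2], {1: 8})   # nominal time gaps
config.trigger_states[2] = SrsTriggerStateConfig([1, 2], {1: 11})     # first resource disabled, gap modified
print(config.trigger_states[2].enabled_resource_ids)  # -> [1, 2]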
In some aspects, the DCI uses a non-data-scheduling DCI type, and transmitting the DCI includes transmitting an indication to activate or deactivate at least one SRS resource identifier of the one or more SRS resource identifiers. In some aspects, the at least one SRS resource identifier is associated with an SRS resource that occurs first in a time domain, or an SRS resource that occurs last in the time domain, among SRS resources associated with the aperiodic SRS resource set. In some aspects, the DCI uses a non-data-scheduling DCI type, and transmitting the DCI includes transmitting an indication of one or more activated SRS resource identifiers of the one or more SRS resource identifiers. In some aspects, the indication includes a bitmap, wherein the bitmap indicates the one or more activated SRS resource identifiers for the triggered transmission of the SRS resource set. In some aspects, the DCI uses a non-data-scheduling DCI type, the one or more SRS resource identifiers include a single SRS resource identifier, and transmitting the DCI includes transmitting a set of values for a start position parameter for the single SRS resource identifier, wherein the set of values indicate intra-slot time domain starting locations of SRS symbols for the triggered transmission of the SRS resource set. In some aspects, the DCI uses a non-data-scheduling DCI type, the one or more SRS resource identifiers include a single SRS resource identifier, and transmitting the DCI includes transmitting a set of values for a start position parameter for the single SRS resource identifier, wherein the set of values include one or more valid values and one or more invalid values, wherein the one or more valid values indicate SRS symbols for the single SRS resource identifier that are to be associated with the triggered transmission of the SRS resource set, and wherein the one or more invalid values indicate SRS symbols for the single SRS resource identifier that are not to be associated with the triggered transmission of the SRS resource set. In some aspects, the DCI uses a non-data-scheduling DCI type, the one or more SRS resource identifiers include a single SRS resource identifier, and transmitting the DCI includes transmitting an indication of a bitmap, wherein the bitmap indicates one or more activated SRS symbols associated with the single SRS resource identifier, and wherein the bitmap is used for the triggered transmission of the SRS resource set. In some aspects, transmitting the DCI includes transmitting the indication of the bitmap and an indication of a start position parameter for the triggered transmission of the SRS resource set, and the start position parameter indicates an intra-slot time domain starting location for SRS symbols indicated by the bitmap. In some aspects, transmitting the DCI includes transmitting an indication of an SRS trigger state, wherein the SRS trigger state indicates one or more SRS resource set identifiers including an identifier of the SRS resource set and the one or more parameters for the triggered transmission of the SRS resource set.
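The combination of a symbol-level bitmap and a start position parameter described above for a single SRS resource identifier can be sketched, again purely as a hypothetical illustration (the bit ordering, the assumption of consecutive configured symbols, and the helper name are not taken from this disclosure):

def activated_srs_symbol_locations(symbol_bitmap, start_position, num_configured_symbols):
    """Bit i of the bitmap activates the i-th configured SRS symbol of the single SRS
    resource identifier; the start position parameter gives the intra-slot location of
    the first configured symbol, with subsequent symbols assumed to be consecutive."""
    return [start_position + i for i in range(num_configured_symbols)
            if (symbol_bitmap >> i) & 1]

# Example: 4 configured symbols, bitmap 0b1011 keeps the first, second and fourth,
# placed at intra-slot symbols 8, 9 and 11 for a start position of 8.
print(activated_srs_symbol_locations(0b1011, start_position=8, num_configured_symbols=4))  # -> [8, 9, 11]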
In some aspects, the one or more parameters include at least one of a first value for a start position parameter for at least one SRS resource identifier of the one or more SRS resource identifiers, an indication of one or more activated SRS resource identifiers of the one or more SRS resource identifiers, a first bitmap indicating one or more activated SRS resource identifiers from the one or more SRS resource identifiers, a set of values for the start position parameter, a second bitmap indicating one or more activated SRS symbols for an SRS resource identifier of the one or more SRS resource identifiers, or a second value for the start position parameter indicating an intra-slot start position for SRS symbols indicated by the second bitmap. In some aspects, the SRS trigger state is included in a set of SRS trigger states indicated by the configuration information, wherein the set of SRS trigger states are associated with a set of SRS resource sets, including the SRS resource set, and wherein a subset of SRS trigger states, included in the set of SRS trigger states, are associated with the SRS resource set, wherein each SRS trigger state included in the subset of SRS trigger states indicates a different set of parameters for the SRS resource set. In some aspects, transmitting the indication of the SRS trigger state includes transmitting the indication of the SRS trigger state via an SRS request field of the DCI. At1030, the base station may receive, from the UE, the SRS resource set based at least in part on the one or more parameters. For example, the base station (e.g., using communication manager150and/or reception component1302, depicted inFIG.13) may receive, from the UE, the SRS resource set based at least in part on the one or more parameters, as described above in connection with, for example,FIG.7and at730. For example, the base station may receive the SRS resource set using a start position parameter or activated resources indicated by the DCI. In some aspects, the configuration information indicates a set of SRS trigger states, and the base station may transmit, via a MAC-CE message, an indication of a subset of SRS trigger states, from the set of SRS trigger states, that are activated, and transmitting the DCI includes transmitting an indication of an SRS trigger state, from the subset of SRS trigger states, wherein the SRS trigger state indicates at least one of the one or more parameters for the triggered transmission of the SRS resource set. AlthoughFIG.10shows example blocks of method1000, in some aspects, method1000may include additional blocks, fewer blocks, different blocks, or differently arranged blocks than those depicted inFIG.10. Additionally, or alternatively, two or more of the blocks of method1000may be performed in parallel. FIG.11is a diagram of an example apparatus1100for wireless communication. The apparatus1100may be a UE, or a UE may include the apparatus1100. In some aspects, the apparatus1100includes a reception component1102and a transmission component1104, which may be in communication with one another (for example, via one or more buses and/or one or more other components). As shown, the apparatus1100may communicate with another apparatus1106(such as a UE, a base station, or another wireless communication device) using the reception component1102and the transmission component1104. As further shown, the apparatus1100may include the communication manager140. The communication manager140may include a determination component1108, among other examples.
In some aspects, the apparatus1100may be configured to perform one or more operations described herein in connection withFIGS.7and8. Additionally, or alternatively, the apparatus1100may be configured to perform one or more processes described herein, such as process900ofFIG.9, or a combination thereof. In some aspects, the apparatus1100and/or one or more components shown inFIG.11may include one or more components of the UE described in connection withFIG.2. Additionally, or alternatively, one or more components shown inFIG.11may be implemented within one or more components described in connection withFIG.2. Additionally, or alternatively, one or more components of the set of components may be implemented at least in part as software stored in a memory. For example, a component (or a portion of a component) may be implemented as instructions or code stored in a non-transitory computer-readable medium and executable by a controller or a processor to perform the functions or operations of the component. The reception component1102may receive communications, such as reference signals, control information, data communications, or a combination thereof, from the apparatus1106. The reception component1102may provide received communications to one or more other components of the apparatus1100. In some aspects, the reception component1102may perform signal processing on the received communications (such as filtering, amplification, demodulation, analog-to-digital conversion, demultiplexing, deinterleaving, de-mapping, equalization, interference cancellation, or decoding, among other examples), and may provide the processed signals to the one or more other components of the apparatus1106. In some aspects, the reception component1102may include one or more antennas, a modem, a demodulator, a MIMO detector, a receive processor, a controller/processor, a memory, or a combination thereof, of the UE described in connection withFIG.2. The transmission component1104may transmit communications, such as reference signals, control information, data communications, or a combination thereof, to the apparatus1106. In some aspects, one or more other components of the apparatus1106may generate communications and may provide the generated communications to the transmission component1104for transmission to the apparatus1106. In some aspects, the transmission component1104may perform signal processing on the generated communications (such as filtering, amplification, modulation, digital-to-analog conversion, multiplexing, interleaving, mapping, or encoding, among other examples), and may transmit the processed signals to the apparatus1106. In some aspects, the transmission component1104may include one or more antennas, a modem, a modulator, a transmit MIMO processor, a transmit processor, a controller/processor, a memory, or a combination thereof, of the UE described in connection withFIG.2. In some aspects, the transmission component1104may be co-located with the reception component1102in a transceiver. The reception component1102may receive configuration information for an SRS resource set, wherein the configuration information indicates a Doppler tracking usage type for the SRS resource set, that the SRS resource set is an aperiodic SRS resource set, and a configuration for one or more SRS resource identifiers associated with the SRS resource set. 
The reception component1102may receive DCI triggering a transmission of the SRS resource set associated with the one or more SRS resource identifiers, wherein the DCI indicates one or more parameters for the SRS resource set, and wherein the one or more parameters indicate a modified time gap or a modified number of resources associated with the SRS resource set. The transmission component1104may transmit the SRS resource set based at least in part on the one or more parameters. The determination component1108may determine a configuration for the SRS resource set based at least in part on the configuration information. The determination component1108may determine the modified time gap or the modified number of resources associated with the SRS resource set based at least in part on receiving the DCI. The reception component1102may receive an indication of the one or more parameters via one or more fields of the DCI, wherein the one or more fields are repurposed fields of a DCI format of the DCI when the DCI format is used as the non-data-scheduling DCI type. The reception component1102may receive an indication of a start position parameter for at least one SRS resource identifier of the one or more SRS resource identifiers, wherein the start position parameter is used for the at least one SRS resource identifier. The reception component1102may receive an indication to activate or deactivate at least one SRS resource identifier of the one or more SRS resource identifiers. The reception component1102may receive an indication of one or more activated SRS resource identifiers of the one or more SRS resource identifiers. The reception component1102may receive a set of values for a start position parameter for the single SRS resource identifier, wherein the set of values indicate intra-slot time domain starting locations of SRS symbols for the triggered transmission of the SRS resource set. The reception component1102may receive a set of values for a start position parameter for the single SRS resource identifier, wherein the set of values include one or more valid values and one or more invalid values, wherein the one or more valid values indicate SRS symbols for the single SRS resource identifier that are to be associated with the triggered transmission of the SRS resource set, and wherein the one or more invalid values indicate SRS symbols for the single SRS resource identifier that are not to be associated with the triggered transmission of the SRS resource set. The reception component1102may receive an indication of a bitmap, wherein the bitmap indicates one or more activated SRS symbols associated with the single SRS resource identifier, and wherein the bitmap is used for the triggered transmission of the SRS resource set. The reception component1102may receive the indication of the bitmap and an indication of a start position parameter, for the triggered transmission of the SRS resource set, wherein the start position parameter indicates an intra-slot time domain starting location for SRS symbols indicated by the bitmap. The reception component1102may receive an indication of one or more SRS trigger states associated with the aperiodic SRS resource set, wherein an SRS trigger state, of the one or more SRS trigger states, indicates the one or more parameters for the SRS, and wherein each SRS trigger state, of the one or more SRS trigger states, is mapped to a DCI codepoint of an SRS request field. 
The reception component1102may receive, via the DCI, an indication of an SRS trigger state, wherein the SRS trigger state indicates one or more SRS resource set identifiers including an identifier of the SRS resource set and the one or more parameters for the triggered transmission of the SRS resource set. The number and arrangement of components shown inFIG.11are provided as an example. In practice, there may be additional components, fewer components, different components, or differently arranged components than those shown inFIG.11. Furthermore, two or more components shown inFIG.11may be implemented within a single component, or a single component shown inFIG.11may be implemented as multiple, distributed components. Additionally, or alternatively, a set of (one or more) components shown inFIG.11may perform one or more functions described as being performed by another set of components shown inFIG.11. FIG.12is a diagram illustrating an example1200of a hardware implementation for an apparatus1205employing a processing system1210. The apparatus1205may be a UE. The processing system1210may be implemented with a bus architecture, represented generally by the bus1215. The bus1215may include any number of interconnecting buses and bridges depending on the specific application of the processing system1210and the overall design constraints. The bus1215links together various circuits including one or more processors and/or hardware components, represented by the processor1220, the illustrated components, and the computer-readable medium/memory1225. The bus1215may also link various other circuits, such as timing sources, peripherals, voltage regulators, and/or power management circuits. The processing system1210may be coupled to a transceiver1230. The transceiver1230is coupled to one or more antennas1235. The transceiver1230provides a means for communicating with various other apparatuses over a transmission medium. The transceiver1230receives a signal from the one or more antennas1235, extracts information from the received signal, and provides the extracted information to the processing system1210, specifically the reception component1102. In addition, the transceiver1230receives information from the processing system1210, specifically the transmission component1104, and generates a signal to be applied to the one or more antennas1235based at least in part on the received information. The processing system1210includes a processor1220coupled to a computer-readable medium/memory1225. The processor1220is responsible for general processing, including the execution of software stored on the computer-readable medium/memory1225. The software, when executed by the processor1220, causes the processing system1210to perform the various functions described herein for any particular apparatus. The computer-readable medium/memory1225may also be used for storing data that is manipulated by the processor1220when executing software. The processing system further includes at least one of the illustrated components. The components may be software modules running in the processor1220, resident/stored in the computer-readable medium/memory1225, one or more hardware modules coupled to the processor1220, or some combination thereof. In some aspects, the processing system1210may be a component of the UE120and may include the memory282and/or at least one of the TX MIMO processor266, the RX processor258, and/or the controller/processor280.
In some aspects, the apparatus1205for wireless communication includes means for receiving configuration information for an aperiodic SRS resource set, wherein the configuration information indicates a Doppler tracking usage type for the SRS resource set and a configuration for one or more SRS resource identifiers; means for receiving DCI triggering a transmission of the SRS resource set associated with the one or more SRS resource identifiers, wherein the DCI indicates one or more parameters for the SRS resource set, and wherein the one or more parameters indicate a modified time gap or a modified number of resources associated with the SRS resource set; and/or means for transmitting the SRS resource set based at least in part on the one or more parameters. The aforementioned means may be one or more of the aforementioned components of the apparatus1100and/or the processing system1210of the apparatus1205configured to perform the functions recited by the aforementioned means. As described elsewhere herein, the processing system1210may include the TX MIMO processor266, the RX processor258, and/or the controller/processor280. In one configuration, the aforementioned means may be the TX MIMO processor266, the RX processor258, and/or the controller/processor280configured to perform the functions and/or operations recited herein. FIG.12is provided as an example. Other examples may differ from what is described in connection withFIG.12. FIG.13is a diagram of an example apparatus1300for wireless communication. The apparatus1300may be a base station, or a base station may include the apparatus1300. In some aspects, the apparatus1300includes a reception component1302and a transmission component1304, which may be in communication with one another (for example, via one or more buses and/or one or more other components). As shown, the apparatus1300may communicate with another apparatus1306(such as a UE, a base station, or another wireless communication device) using the reception component1302and the transmission component1304. As further shown, the apparatus1300may include the communication manager150. The communication manager150may include a determination component1308, among other examples. In some aspects, the apparatus1300may be configured to perform one or more operations described herein in connection withFIGS.7and8. Additionally, or alternatively, the apparatus1300may be configured to perform one or more processes described herein, such as process1000ofFIG.10, or a combination thereof. In some aspects, the apparatus1300and/or one or more components shown inFIG.13may include one or more components of the base station described in connection withFIG.2. Additionally, or alternatively, one or more components shown inFIG.13may be implemented within one or more components described in connection withFIG.2. Additionally, or alternatively, one or more components of the set of components may be implemented at least in part as software stored in a memory. For example, a component (or a portion of a component) may be implemented as instructions or code stored in a non-transitory computer-readable medium and executable by a controller or a processor to perform the functions or operations of the component. The reception component1302may receive communications, such as reference signals, control information, data communications, or a combination thereof, from the apparatus1306. The reception component1302may provide received communications to one or more other components of the apparatus1300. 
In some aspects, the reception component1302may perform signal processing on the received communications (such as filtering, amplification, demodulation, analog-to-digital conversion, demultiplexing, deinterleaving, de-mapping, equalization, interference cancellation, or decoding, among other examples), and may provide the processed signals to the one or more other components of the apparatus1306. In some aspects, the reception component1302may include one or more antennas, a modem, a demodulator, a MIMO detector, a receive processor, a controller/processor, a memory, or a combination thereof, of the base station described in connection withFIG.2. The transmission component1304may transmit communications, such as reference signals, control information, data communications, or a combination thereof, to the apparatus1306. In some aspects, one or more other components of the apparatus1306may generate communications and may provide the generated communications to the transmission component1304for transmission to the apparatus1306. In some aspects, the transmission component1304may perform signal processing on the generated communications (such as filtering, amplification, modulation, digital-to-analog conversion, multiplexing, interleaving, mapping, or encoding, among other examples), and may transmit the processed signals to the apparatus1306. In some aspects, the transmission component1304may include one or more antennas, a modem, a modulator, a transmit MIMO processor, a transmit processor, a controller/processor, a memory, or a combination thereof, of the base station described in connection withFIG.2. In some aspects, the transmission component1304may be co-located with the reception component1302in a transceiver. The transmission component1304may transmit, to a UE, configuration information for an SRS resource set, wherein the configuration information indicates a Doppler tracking usage type for the SRS resource set, that the SRS resource set is an aperiodic SRS resource set, and a configuration for one or more SRS resource identifiers associated with the SRS resource set. The transmission component1304may transmit, to the UE, DCI triggering a transmission of the SRS resource set associated with the one or more SRS resource identifiers, wherein the DCI indicates one or more parameters for the SRS resource set, and wherein the one or more parameters indicate a modified time gap or a modified number of resources associated with the SRS resource set. The reception component1302may receive, from the UE, the SRS resource set based at least in part on the one or more parameters. The determination component1308may determine the configuration information. The determination component1308may determine values for the one or more parameters (e.g., that are different than values for the one or more parameters indicated by the configuration information). The determination component1308may determine the modified time gap or the modified number of resources associated with the SRS resource set. The transmission component1304may transmit an indication of the one or more parameters via one or more fields of the DCI, wherein the one or more fields are repurposed fields of a DCI format of the DCI when the DCI format is used as the non-data-scheduling DCI type. 
The transmission component1304may transmit an indication of a start position parameter for at least one SRS resource identifier of the one or more SRS resource identifiers, wherein the start position parameter is used for the at least one SRS resource identifier indicated by the configuration information. The transmission component1304may transmit an indication to activate or deactivate at least one SRS resource identifier of the one or more SRS resource identifiers. The transmission component1304may transmit an indication of one or more activated SRS resource identifiers of the one or more SRS resource identifiers. The transmission component1304may transmit a set of values for a start position parameter for the single SRS resource identifier, wherein the set of values indicate intra-slot time domain starting locations of SRS symbols for the triggered transmission of the SRS resource set. The transmission component1304may transmit a set of values for a start position parameter for the single SRS resource identifier, wherein the set of values include one or more valid values and one or more invalid values, wherein the one or more valid values indicate SRS symbols for the single SRS resource identifier that are to be associated with the triggered transmission of the SRS resource set, and wherein the one or more invalid values indicate SRS symbols for the single SRS resource identifier that are not to be associated with the triggered transmission of the SRS resource set. The transmission component1304may transmit an indication of a bitmap, wherein the bitmap indicates one or more activated SRS symbols associated with the single SRS resource identifier, and wherein the bitmap is used for the triggered transmission of the SRS resource set. The transmission component1304may transmit the indication of the bitmap and an indication of a start position parameter for the triggered transmission of the SRS resource set, wherein the start position parameter indicates an intra-slot time domain starting location for SRS symbols indicated by the bitmap. The transmission component1304may transmit an indication of one or more SRS trigger states associated with the aperiodic SRS resource set, wherein an SRS trigger state, of the one or more SRS trigger states, indicates the one or more parameters for the SRS resource set, and wherein each SRS trigger state, of the one or more SRS trigger states, is mapped to a DCI codepoint of an SRS request field. The transmission component1304may transmit an indication of an SRS trigger state, wherein the SRS trigger state indicates one or more SRS resource set identifiers including an identifier of the SRS resource set and the one or more parameters for the triggered transmission of the SRS resource set. The number and arrangement of components shown inFIG.13are provided as an example. In practice, there may be additional components, fewer components, different components, or differently arranged components than those shown inFIG.13. Furthermore, two or more components shown inFIG.13may be implemented within a single component, or a single component shown inFIG.13may be implemented as multiple, distributed components. Additionally, or alternatively, a set of (one or more) components shown inFIG.13may perform one or more functions described as being performed by another set of components shown inFIG.13. FIG.14is a diagram illustrating an example1400of a hardware implementation for an apparatus1405employing a processing system1410. The apparatus1405may be a base station.
The processing system1410may be implemented with a bus architecture, represented generally by the bus1415. The bus1415may include any number of interconnecting buses and bridges depending on the specific application of the processing system1410and the overall design constraints. The bus1415links together various circuits including one or more processors and/or hardware components, represented by the processor1420, the illustrated components, and the computer-readable medium/memory1425. The bus1415may also link various other circuits, such as timing sources, peripherals, voltage regulators, and/or power management circuits. The processing system1410may be coupled to a transceiver1430. The transceiver1430is coupled to one or more antennas1435. The transceiver1430provides a means for communicating with various other apparatuses over a transmission medium. The transceiver1430receives a signal from the one or more antennas1435, extracts information from the received signal, and provides the extracted information to the processing system1410, specifically the reception component1302. In addition, the transceiver1430receives information from the processing system1410, specifically the transmission component1304, and generates a signal to be applied to the one or more antennas1435based at least in part on the received information. The processing system1410includes a processor1420coupled to a computer-readable medium/memory1425. The processor1420is responsible for general processing, including the execution of software stored on the computer-readable medium/memory1425. The software, when executed by the processor1420, causes the processing system1410to perform the various functions described herein for any particular apparatus. The computer-readable medium/memory1425may also be used for storing data that is manipulated by the processor1420when executing software. The processing system further includes at least one of the illustrated components. The components may be software modules running in the processor1420, resident/stored in the computer readable medium/memory1425, one or more hardware modules coupled to the processor1420, or some combination thereof. In some aspects, the processing system1410may be a component of the base station110and may include the memory242and/or at least one of the TX MIMO processor230, the RX processor238, and/or the controller/processor240. In some aspects, the apparatus1405for wireless communication includes means for transmitting, to a UE, configuration information for an SRS resource set, wherein the configuration information indicates a Doppler tracking usage type for the SRS resource set, that the SRS resource set is an aperiodic SRS resource set, and a configuration for one or more SRS resource identifiers; means for transmitting, to the UE, DCI triggering a transmission of the SRS resource set associated with the one or more SRS resource identifiers, wherein the DCI indicates one or more parameters for the SRS resource set, and wherein the one or more parameters indicate a modified time gap or a modified number of resources associated with the SRS resource set; and/or means for receiving, from the UE, the SRS resource set based at least in part on the one or more parameters. The aforementioned means may be one or more of the aforementioned components of the apparatus1300and/or the processing system1410of the apparatus1405configured to perform the functions recited by the aforementioned means. 
As described elsewhere herein, the processing system1410may include the TX MIMO processor230, the receive processor238, and/or the controller/processor240. In one configuration, the aforementioned means may be the TX MIMO processor230, the receive processor238, and/or the controller/processor240configured to perform the functions and/or operations recited herein. FIG.14is provided as an example. Other examples may differ from what is described in connection withFIG.14. The following provides an overview of some Aspects of the present disclosure: Aspect 1: A method of wireless communication performed by a user equipment (UE), comprising: receiving configuration information for a sounding reference signal (SRS) resource set, wherein the configuration information indicates a Doppler tracking usage type for the SRS resource set, that the SRS resource set is an aperiodic SRS resource set, and a configuration for one or more SRS resource identifiers associated with the SRS resource set; receiving downlink control information (DCI) triggering a transmission of the SRS resource set associated with the one or more SRS resource identifiers, wherein the DCI indicates one or more parameters for the SRS resource set, and wherein the one or more parameters indicate a modified time gap or a modified number of resources associated with the SRS resource set; and transmitting the SRS resource set based at least in part on the one or more parameters. Aspect 2: The method of Aspect 1, wherein the DCI uses a non-data-scheduling DCI type. Aspect 3: The method of Aspect 1, wherein the DCI uses a data-scheduling DCI type. Aspect 4: The method of any of Aspects 1-2, wherein the DCI uses a non-data-scheduling DCI type, and wherein receiving the DCI comprises: receiving an indication of the one or more parameters via one or more fields of the DCI, wherein the one or more fields are repurposed fields of a DCI format of the DCI when the DCI format is used as the non-data-scheduling DCI type. Aspect 5: The method of any of Aspects 1-2 and 4, wherein the DCI uses a non-data-scheduling DCI type, and wherein receiving the DCI comprises: receiving an indication of a start position parameter for at least one SRS resource identifier of the one or more SRS resource identifiers, and wherein the start position parameter is used for the at least one SRS resource identifier. Aspect 6: The method of any of Aspects 1-2 and 4-5, wherein the DCI uses a non-data-scheduling DCI type, and wherein receiving the DCI comprises: receiving an indication to activate or deactivate at least one SRS resource identifier of the one or more SRS resource identifiers. Aspect 7: The method of Aspect 6, wherein the at least one SRS resource identifier is associated with an SRS resource that occurs first in a time domain, or an SRS resource that occurs last in the time domain, among SRS resources associated with the SRS resource set. Aspect 8: The method of any of Aspects 1-2 and 4-7, wherein the DCI uses a non-data-scheduling DCI type, and wherein receiving the DCI comprises: receiving an indication of one or more activated SRS resource identifiers of the one or more SRS resource identifiers. Aspect 9: The method of Aspect 8, wherein the indication includes a bitmap, wherein the bitmap indicates the one or more activated SRS resource identifiers for the triggered transmission of the SRS resource set. 
Aspect 10: The method of any of Aspects 1-2 and 4-9, wherein the DCI uses a non-data-scheduling DCI type, wherein the one or more SRS resource identifiers include a single SRS resource identifier, and wherein receiving the DCI comprises: receiving a set of values for a start position parameter for the single SRS resource identifier, wherein the set of values indicate intra-slot time domain starting locations of SRS symbols for the triggered transmission of the SRS resource set. Aspect 11: The method of any of Aspects 1-2 and 4-10, wherein the DCI uses a non-data-scheduling DCI type, wherein the one or more SRS resource identifiers include a single SRS resource identifier, and wherein receiving the DCI comprises: receiving a set of values for a start position parameter for the single SRS resource identifier, wherein the set of values include one or more valid values and one or more invalid values, wherein the one or more valid values indicate SRS symbols for the single SRS resource identifier that are to be associated with the triggered transmission of the SRS resource set, and wherein the one or more invalid values indicate SRS symbols for the single SRS resource identifier that are not to be associated with the triggered transmission of the SRS resource set. Aspect 12: The method of any of Aspects 1-2 and 4-11, wherein the DCI uses a non-data-scheduling DCI type, wherein the one or more SRS resource identifiers include a single SRS resource identifier, and wherein receiving the DCI comprises: receiving an indication of a bitmap, wherein the bitmap indicates one or more activated SRS symbols associated with the single SRS resource identifier, and wherein the bitmap is used for the triggered transmission of the SRS resource set. Aspect 13: The method of Aspect 12, wherein receiving the DCI comprises: receiving the indication of the bitmap and an indication of a start position parameter, for the triggered transmission of the SRS resource set, and wherein the start position parameter indicates an intra-slot time domain starting location for SRS symbols indicated by the bitmap. Aspect 14: The method of any of Aspects 1-13, wherein receiving the configuration information comprises: receiving an indication of one or more SRS trigger states associated with the aperiodic SRS resource set, wherein an SRS trigger state, of the one or more SRS trigger states, indicates the one or more parameters for the SRS, and wherein each SRS trigger state, of the one or more SRS trigger states, is mapped to a DCI code point of an SRS request field. Aspect 15: The method of any of Aspects 1-14, wherein receiving the DCI comprises: receiving an indication of an SRS trigger state, wherein the SRS trigger state indicates one or more SRS resource set identifiers including an identifier of the SRS resource set and the one or more parameters for the triggered transmission of the SRS resource set.
Aspect 16: The method of Aspect 15, wherein the one or more parameters include at least one of: a first value for a start position parameter for at least one SRS resource identifier of the one or more SRS resource identifiers, an indication of one or more activated SRS resource identifiers of the one or more SRS resource identifiers, a first bitmap indicating one or more activated SRS resource identifiers from the one or more SRS resource identifiers, a set of values for the start position parameter, a second bitmap indicating one or more activated SRS symbols for an SRS resource identifier of the one or more SRS resource identifiers, or a second value for the start position parameter indicating an intra-slot start position for SRS symbols indicated by the second bitmap. Aspect 17: The method of any of Aspects 15-16, wherein the SRS trigger state is included in a set of SRS trigger states indicated by the configuration information, wherein the set of SRS trigger states are associated with a set of SRS resource sets, including the SRS resource set, and wherein a subset of SRS trigger states, included in the set of SRS trigger states, are associated with the SRS resource set, wherein each SRS trigger state included in the subset of SRS trigger states indicates a different set of parameters for the SRS resource set. Aspect 18: The method of any of Aspects 15-17, wherein receiving the indication of the SRS trigger state comprises receiving the indication of the SRS trigger state via an SRS request field of the DCI. Aspect 19: The method of any of Aspects 1-18, wherein the configuration information indicates a set of SRS trigger states, the method further comprising: receiving, via a medium access control (MAC) control element (MAC-CE) message, an indication of a subset of SRS trigger states, from the set of SRS trigger states, that are activated; and wherein receiving the DCI comprises: receiving an indication of an SRS trigger state, from the subset of SRS trigger states, wherein the SRS trigger state indicates at least one of the one or more parameters for the triggered transmission of the SRS resource set. Aspect 20: A method of wireless communication performed by a base station, comprising: transmitting, to a user equipment (UE), configuration information for a sounding reference signal (SRS) resource set, wherein the configuration information indicates a Doppler tracking usage type for the SRS resource set, that the SRS resource set is an aperiodic SRS resource set, and a configuration for one or more SRS resource identifiers associated with the SRS resource set; transmitting, to the UE, downlink control information (DCI) triggering a transmission of the SRS resource set associated with the one or more SRS resource identifiers, wherein the DCI indicates one or more parameters for the SRS resource set, and wherein the one or more parameters indicate a modified time gap or a modified number of resources associated with the SRS resource set; and receiving, from the UE, the SRS resource set based at least in part on the one or more parameters. Aspect 21: The method of Aspect 20, wherein the DCI uses a non-data-scheduling DCI type. Aspect 22: The method of Aspect 20, wherein the DCI uses a data-scheduling DCI type.
Aspect 23: The method of any of Aspects 20-21, wherein the DCI uses a non-data-scheduling DCI type, and wherein transmitting the DCI comprises: transmitting an indication of the one or more parameters via one or more fields of the DCI, wherein the one or more fields are repurposed fields of a DCI format of the DCI when the DCI format is used as the non-data-scheduling DCI type. Aspect 24: The method of any of Aspects 20-21 and 23, wherein the DCI uses a non-data-scheduling DCI type, and wherein transmitting the DCI comprises: transmitting an indication of a start position parameter for at least one SRS resource identifier of the one or more SRS resource identifiers, and wherein the start position parameter is used for the at least one SRS resource identifier indicated by the configuration information. Aspect 25: The method of any of Aspects 20-21 and 23-24, wherein the DCI uses a non-data-scheduling DCI type, and wherein transmitting the DCI comprises: transmitting an indication to activate or deactivate at least one SRS resource identifier of the one or more SRS resource identifiers. Aspect 26: The method of Aspect 25, wherein the at least one SRS resource identifier is associated with an SRS resource that occurs first in a time domain, or an SRS resource that occurs last in the time domain, among SRS resources associated with the aperiodic SRS resource set. Aspect 27: The method of any of Aspects 20-21 and 23-26, wherein the DCI uses a non-data-scheduling DCI type, and wherein transmitting the DCI comprises: transmitting an indication of one or more activated SRS resource identifiers of the one or more SRS resource identifiers. Aspect 28: The method of Aspect 27, wherein the indication includes a bitmap, wherein the bitmap indicates the one or more activated SRS resource identifiers for the triggered transmission of the SRS resource set. Aspect 29: The method of any of Aspects 20-21 and 23-28, wherein the DCI uses a non-data-scheduling DCI type, wherein the one or more SRS resource identifiers include a single SRS resource identifier, and wherein transmitting the DCI comprises: transmitting a set of values for a start position parameter for the single SRS resource identifier, wherein the set of values indicate intra-slot time domain starting locations of SRS symbols for the triggered transmission of the SRS resource set. Aspect 30: The method of any of Aspects 20-21 and 23-29, wherein the DCI uses a non-data-scheduling DCI type, wherein the one or more SRS resource identifiers include a single SRS resource identifier, and wherein transmitting the DCI comprises: transmitting a set of values for a start position parameter for the single SRS resource identifier, wherein the set of values include one or more valid values and one or more invalid values, wherein the one or more valid values indicate SRS symbols for the single SRS resource identifier that are to be associated with the triggered transmission of the SRS resource set, and wherein the one or more invalid values indicate SRS symbols for the single SRS resource identifier that are not to be associated with the triggered transmission of the SRS resource set. 
Aspect 31: The method of any of Aspects 20-21 and 23-30, wherein the DCI uses a non-data-scheduling DCI type, wherein the one or more SRS resource identifiers include a single SRS resource identifier, and wherein transmitting the DCI comprises: transmitting an indication of a bitmap, wherein the bitmap indicates one or more activated SRS symbols associated with the single SRS resource identifier, and wherein the bitmap is used for the triggered transmission of the SRS resource set. Aspect 32: The method of Aspect 31, wherein transmitting the DCI comprises: transmitting the indication of the bitmap and an indication of a start position parameter for the triggered transmission of the SRS resource set, and wherein the start position parameter indicates an intra-slot time domain starting location for SRS symbols indicated by the bitmap. Aspect 33: The method of any of Aspects 20-32, wherein transmitting the configuration information comprises: transmitting an indication of one or more SRS trigger states associated with the aperiodic SRS resource set, wherein an SRS trigger state, of the one or more SRS trigger states, indicates the one or more parameters for the SRS resource set, and wherein each SRS trigger state, of the one or more SRS trigger states, is mapped to a DCI code point of an SRS request field. Aspect 34: The method of any of Aspects 20-33, wherein transmitting the DCI comprises: transmitting an indication of an SRS trigger state, wherein the SRS trigger state indicates one or more SRS resource set identifiers including an identifier of the SRS resource set and the one or more parameters for the triggered transmission of the SRS resource set. Aspect 35: The method of Aspect 34, wherein the one or more parameters include at least one of: a first value for a start position parameter for at least one SRS resource identifier of the one or more SRS resource identifiers, an indication of one or more activated SRS resource identifiers of the one or more SRS resource identifiers, a first bitmap indicating one or more activated SRS resource identifiers from the one or more SRS resource identifiers, a set of values for the start position parameter, a second bitmap indicating one or more activated SRS symbols for an SRS resource identifier of the one or more SRS resource identifiers, or a second value for the start position parameter indicating an intra-slot start position for SRS symbols indicated by the second bitmap. Aspect 36: The method of any of Aspects 34-35, wherein the SRS trigger state is included in a set of SRS trigger states indicated by the configuration information, wherein the set of SRS trigger states are associated with a set of SRS resource sets, including the SRS resource set, and wherein a subset of SRS trigger states, included in the set of SRS trigger states, are associated with the SRS resource set, wherein each SRS trigger state included in the subset of SRS trigger states indicates a different set of parameters for the SRS resource set. Aspect 37: The method of any of Aspects 34-36, wherein transmitting the indication of the SRS trigger state comprises: transmitting the indication of the SRS trigger state via an SRS request field of the DCI.
Aspect 38: The method of any of Aspects 20-37, wherein the configuration information indicates a set of SRS trigger states, the method further comprising: transmitting, via a medium access control (MAC) control element (MAC-CE) message, an indication of a subset of SRS trigger states, from the set of SRS trigger states, that are activated; and wherein transmitting the DCI comprises: transmitting an indication of an SRS trigger state, from the subset of SRS trigger states, wherein the SRS trigger state indicates at least one of the one or more parameters for the triggered transmission of the SRS resource set. Aspect 39: An apparatus for wireless communication at a device, comprising a processor; memory coupled with the processor; and instructions stored in the memory and executable by the processor to cause the apparatus to perform the method of one or more of Aspects 1-19. Aspect 40: A device for wireless communication, comprising a memory and one or more processors coupled to the memory, the one or more processors configured to perform the method of one or more of Aspects 1-19. Aspect 41: An apparatus for wireless communication, comprising at least one means for performing the method of one or more of Aspects 1-19. Aspect 42: A non-transitory computer-readable medium storing code for wireless communication, the code comprising instructions executable by a processor to perform the method of one or more of Aspects 1-19. Aspect 43: A non-transitory computer-readable medium storing a set of instructions for wireless communication, the set of instructions comprising one or more instructions that, when executed by one or more processors of a device, cause the device to perform the method of one or more of Aspects 1-19. Aspect 44: An apparatus for wireless communication at a device, comprising a processor; memory coupled with the processor; and instructions stored in the memory and executable by the processor to cause the apparatus to perform the method of one or more of Aspects 20-38. Aspect 45: A device for wireless communication, comprising a memory and one or more processors coupled to the memory, the one or more processors configured to perform the method of one or more of Aspects 20-38. Aspect 46: An apparatus for wireless communication, comprising at least one means for performing the method of one or more of Aspects 20-38. Aspect 47: A non-transitory computer-readable medium storing code for wireless communication, the code comprising instructions executable by a processor to perform the method of one or more of Aspects 20-38. Aspect 48: A non-transitory computer-readable medium storing a set of instructions for wireless communication, the set of instructions comprising one or more instructions that, when executed by one or more processors of a device, cause the device to perform the method of one or more of Aspects 20-38. The foregoing disclosure provides illustration and description but is not intended to be exhaustive or to limit the aspects to the precise forms disclosed. Modifications and variations may be made in light of the above disclosure or may be acquired from practice of the aspects. As used herein, the term "component" is intended to be broadly construed as hardware and/or a combination of hardware and software.
“Software” shall be construed broadly to mean instructions, instruction sets, code, code segments, program code, programs, subprograms, software modules, applications, software applications, software packages, routines, subroutines, objects, executables, threads of execution, procedures, and/or functions, among other examples, whether referred to as software, firmware, middleware, microcode, hardware description language, or otherwise. As used herein, a “processor” is implemented in hardware and/or a combination of hardware and software. It will be apparent that systems and/or methods described herein may be implemented in different forms of hardware and/or a combination of hardware and software. The actual specialized control hardware or software code used to implement these systems and/or methods is not limiting of the aspects. Thus, the operation and behavior of the systems and/or methods are described herein without reference to specific software code, since those skilled in the art will understand that software and hardware can be designed to implement the systems and/or methods based, at least in part, on the description herein. As used herein, “satisfying a threshold” may, depending on the context, refer to a value being greater than the threshold, greater than or equal to the threshold, less than the threshold, less than or equal to the threshold, equal to the threshold, not equal to the threshold, or the like. Even though particular combinations of features are recited in the claims and/or disclosed in the specification, these combinations are not intended to limit the disclosure of various aspects. Many of these features may be combined in ways not specifically recited in the claims and/or disclosed in the specification. The disclosure of various aspects includes each dependent claim in combination with every other claim in the claim set. As used herein, a phrase referring to “at least one of” a list of items refers to any combination of those items, including single members. As an example, “at least one of: a, b, or c” is intended to cover a, b, c, a+b, a+c, b+c, and a+b+c, as well as any combination with multiples of the same element (e.g., a+a, a+a+a, a+a+b, a+a+c, a+b+b, a+c+c, b+b, b+b+b, b+b+c, c+c, and c+c+c, or any other ordering of a, b, and c). No element, act, or instruction used herein should be construed as critical or essential unless explicitly described as such. Also, as used herein, the articles “a” and “an” are intended to include one or more items and may be used interchangeably with “one or more.” Further, as used herein, the article “the” is intended to include one or more items referenced in connection with the article “the” and may be used interchangeably with “the one or more.” Furthermore, as used herein, the terms “set” and “group” are intended to include one or more items and may be used interchangeably with “one or more.” Where only one item is intended, the phrase “only one” or similar language is used. Also, as used herein, the terms “has,” “have,” “having,” or the like are intended to be open-ended terms that do not limit an element that they modify (e.g., an element “having” A may also have B). Further, the phrase “based on” is intended to mean “based, at least in part, on” unless explicitly stated otherwise. Also, as used herein, the term “or” is intended to be inclusive when used in a series and may be used interchangeably with “and/or,” unless explicitly stated otherwise (e.g., if used in combination with “either” or “only one of”). | 212,404 |
11943175 | DETAILED DESCRIPTION Technical solutions in embodiments of the present disclosure will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present disclosure. Obviously, the described embodiments are part of the embodiments of the present disclosure, but not all of the embodiments. Based on the embodiments in the present disclosure, many other embodiments obtained by a person of ordinary skill in the art without creative efforts shall fall within the protection scope of the present disclosure. Reference is made to FIG. 1, which is a flowchart of a method for controlling activation of a bandwidth part (BWP) according to an embodiment of the present disclosure. As shown in FIG. 1, the method includes the following steps 101 to 103. Step 101 includes: receiving and saving BWP configuration information transmitted by a base station, where the BWP configuration information includes BWP identification information. The method for controlling activation of a BWP provided by embodiments of the present disclosure is applied in a user equipment (UE) to manage an activated state of BWP(s) corresponding to each component carrier. In this step, the base station first configures a BWP for a UE to access the base station. When configuring the BWP, the base station transmits the BWP configuration information to the UE, and the UE may receive and save the BWP configuration information, i.e., saving a correspondence between each component carrier and a default BWP. Specifically, in a practical application, each component carrier is configured with a BWP set, and information about one or more BWPs is stored in the BWP set, that is, one component carrier may correspond to one or more BWPs. In an embodiment, every BWP may be numbered. The BWP identification information may be configured to a UE in an explicit or implicit manner. The explicit manner includes that a dedicated information bit string is configured in the configuration information for each BWP to indicate index information of the BWP. The implicit manner includes that a serial number of each BWP in a BWP list of the configuration information is an index of the BWP. As an example, the first BWP in the list is numbered as 0, and BWPs subsequent to the first BWP are respectively numbered as 1, 2, 3, and so on. Step 102 includes: receiving a BWP activation command transmitted by the base station. The base station may transmit a BWP activation command to the UE through Layer 1 (L1) signaling or Layer 2 (L2) signaling, and the BWP activation command may indicate index information of to-be-activated BWP(s) in an explicit or implicit manner. The explicit manner includes: carrying index information of a target, to-be-activated BWP in an activation signaling. The implicit manner includes: carrying a bitmap in an activation signaling, each bit corresponding to one BWP, indicating to activate the BWP when the corresponding bit takes a first value; and indicating to deactivate the BWP when the corresponding bit takes a second value. A position where each indication bit is located in the bitmap corresponds to an index of a corresponding BWP. As an example, the first bit in the bitmap corresponds to a BWP numbered as 0, the second bit in the bitmap corresponds to a BWP numbered as 1, and so on. In a case that a BWP is in an activated state, no operation is performed when a bit corresponding to the BWP in the received activation signaling takes a first value.
Similarly, in a case that a BWP is in a deactivated state, no operation is performed when a bit corresponding to the BWP in the received activation signaling takes a second value. Step 103 includes: performing BWP activation with a BWP identifier indicated by the BWP activation command. Upon receipt of the BWP activation command, a UE may perform BWP activation with the BWP identifier indicated by the BWP activation command, thereby implementing a control of BWP activation. Thus, in the embodiment of the present disclosure, BWP configuration information transmitted by a base station is received and saved, and the BWP configuration information includes BWP identification information; a BWP activation command transmitted by the base station is received; and a BWP is activated based on a BWP identifier indicated by the BWP activation command. BWPs are numbered, the activation command indicates a BWP that needs to be activated, and the BWP is activated by the UE, thereby improving the flexibility of BWP activation control. Reference is further made to FIG. 2, and the above-mentioned BWP configuration information is used to indicate a default BWP corresponding to each component carrier. Subsequent to step 101, the method further includes: step 104, receiving a component carrier activation signaling transmitted by the base station; and step 105, activating, based on the component carrier activation signaling, a first target component carrier and a default BWP corresponding to the first target component carrier. In a case that a component carrier is activated, data can be normally transmitted and received on the component carrier only when there is an activated BWP on the component carrier. The above-mentioned default BWP refers to a BWP that needs to be activated by default when a command indicates that a component carrier is to be activated but does not specify which BWP on the component carrier is to be activated. The number of default BWPs corresponding to each component carrier may be set according to actual needs, and may be one or more, which is not specifically limited herein. It should be appreciated that when a target component carrier is in a deactivated state, all BWPs on this component carrier are in a deactivated state. In this step, the base station may transmit a component carrier activation signaling to the UE by a control element (CE) of a medium access control (MAC) layer. The component carrier activation signaling carries a first target component carrier that needs to be activated, and all BWPs on the first target component carrier are in a deactivated state. After receiving the component carrier activation signaling, the UE obtains the first target component carrier that needs to be activated, which is indicated by the component carrier activation signaling, then obtains a default BWP corresponding to the first target component carrier according to the previously stored BWP configuration information, and finally activates the first target component carrier and the default BWP(s) corresponding to the first target component carrier. In this embodiment, a default BWP corresponding to each component carrier is configured in BWP configuration information, and then a first target component carrier and a default BWP corresponding to the first target component carrier are directly activated based on the component carrier activation signaling.
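The following is a minimal, non-authoritative Python sketch of the UE-side behavior described above: the implicit, bitmap-based activation indication (bit position corresponds to BWP index, a first value activates, a second value deactivates, and no operation is performed when the BWP is already in the indicated state), and carrier activation that brings up the default BWP. Class and method names are illustrative assumptions, not part of the disclosed signaling.

```python
# A minimal sketch, assuming illustrative names, of the UE-side handling of the
# implicit bitmap activation command (step 103) and of carrier activation with
# the default BWP (steps 104-105). Not a specification of any message format.

FIRST_VALUE, SECOND_VALUE = 1, 0


class ComponentCarrier:
    def __init__(self, bwp_ids, default_bwp_id):
        self.bwp_ids = list(bwp_ids)          # position in this list = BWP index
        self.default_bwp_id = default_bwp_id
        self.active = False
        self.active_bwps = set()

    def apply_bwp_bitmap(self, bitmap):
        """Step 103: activate/deactivate BWPs according to the received bitmap."""
        for index, bit in enumerate(bitmap):
            bwp = self.bwp_ids[index]
            if bit == FIRST_VALUE and bwp not in self.active_bwps:
                self.active_bwps.add(bwp)
            elif bit == SECOND_VALUE and bwp in self.active_bwps:
                self.active_bwps.discard(bwp)
            # otherwise the BWP is already in the indicated state: no operation

    def activate_carrier(self):
        """Steps 104-105: activate the carrier together with its default BWP."""
        self.active = True
        self.active_bwps.add(self.default_bwp_id)


if __name__ == "__main__":
    cc = ComponentCarrier(bwp_ids=[0, 1, 2, 3], default_bwp_id=0)
    cc.activate_carrier()               # default BWP 0 becomes active
    cc.apply_bwp_bitmap([0, 1, 1, 0])   # deactivate BWP 0, activate BWPs 1 and 2
    print(cc.active, sorted(cc.active_bwps))  # True [1, 2]
```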
Therefore, an activated state of the component carrier and an activated state of a BWP on the component carrier can be controlled through a single signaling, thereby reducing signaling overhead. In addition, since a single signaling is used to simultaneously activate a component carrier and a BWP on the component carrier, the transmission delay that would be caused by controlling the activated state of the component carrier and the activated state of the BWP on the component carrier through separate signalings is avoided. It should be appreciated that a manner where a base station configures BWP configuration information for indicating a default BWP corresponding to each component carrier can be set according to actual needs. For example, in an embodiment, the configuring manner can be implemented in any of the following manners:
a first manner including that, for a component carrier configured with a BWP set, one bit is used to indicate whether each BWP in the BWP set is the default BWP;
a second manner including that, for a component carrier configured with a BWP set, a BWP ranked first or last in the BWP set is the default BWP;
a third manner including that, for a component carrier configured with a BWP set, each BWP has an index value, and a BWP with an initial index value in the BWP set is the default BWP; for example, if the BWP identifier value in a BWP set starts from 0, a BWP with the index value 0 is determined as the default BWP;
a fourth manner including that, for a component carrier configured with a BWP set, a BWP with the widest or narrowest bandwidth among BWPs in the BWP set is the default BWP; or
a fifth manner including that, for a component carrier configured with a BWP set, a BWP with the lowest or highest starting frequency among BWPs in the BWP set is the default BWP.
Further, after the base station transmits the BWP configuration information, the activated BWP may be deactivated based on signaling. Specifically, referring to FIG. 3, subsequent to the above step 101, the method further includes steps 106 and 107. Step 106 includes: receiving a BWP deactivation command transmitted by the base station, where the BWP deactivation command is configured to adjust an activated BWP on a second target component carrier to a deactivated state. In this step, the base station may transmit a BWP deactivation command to a UE through L1 or L2 signaling. The signaling may include an identification indication of a BWP that needs to be deactivated. The UE may obtain the second target component carrier where the BWP that needs to be deactivated is located, by inquiring the previously saved BWP configuration information. Step 107 includes: deactivating the second target component carrier, or deactivating the second target component carrier and a corresponding BWP on the second target component carrier, in a case that an activated BWP to be adjusted on the second target component carrier is a last activated BWP. In this step, when a deactivation operation is performed on a BWP on the second target component carrier, in a case that there are multiple activated BWPs on the second target component carrier, a BWP specified in the BWP deactivation command may be directly deactivated; and in a case that only the last activated BWP exists on the second target component carrier, the second target component carrier may be deactivated, or both the second target component carrier and a corresponding BWP (that is, a BWP specified in the deactivation command) on the second target component carrier may be deactivated.
In addition, in a case that the BWPs specified in the BWP deactivation command include all activated BWPs on the second target component carrier, which also amounts to deactivating the last activated BWP on the second target component carrier, the second target component carrier may be deactivated in this case, or both the second target component carrier and a corresponding BWP on the second target component carrier (that is, a BWP specified in the deactivation command) may be deactivated in this case. Since the deactivation of a component carrier can be achieved by only indicating a BWP deactivation during a deactivation process, the signaling overhead is further reduced. Further, after the base station transmits the BWP configuration information, the activated BWP may be deactivated in accordance with a signaling. Specifically, referring to FIG. 4, subsequent to the above step 101, the method further includes steps 108 and 109. Step 108 includes: receiving a component carrier deactivation signaling transmitted by the base station. In this step, the base station may transmit a component carrier deactivation signaling to a UE through a control element of a medium access control layer (MAC CE). The component carrier deactivation signaling includes a third target component carrier that needs to be deactivated. The UE may obtain all BWPs on the third target component carrier based on the previously saved BWP configuration information. The third target component carrier may include one or more BWPs in an activated state, and may also include one or more BWPs in a deactivated state. Step 109 includes: deactivating, based on the component carrier deactivation signaling, the third target component carrier, or the third target component carrier and all activated BWPs on the third target component carrier. In this step, upon receiving the component carrier deactivation signaling, a UE may deactivate the third target component carrier, or may deactivate the third target component carrier and all the activated BWPs on the third target component carrier. Since the deactivation of a BWP can be achieved only based on a component carrier deactivation signaling, the signaling overhead is further reduced. Further, the BWP activation command is configured to adjust a deactivated BWP on a fourth target component carrier to an activated state. The performing BWP activation with the BWP identifier indicated by the BWP activation command includes: activating the fourth target component carrier and a BWP designated to be activated through the BWP activation command, in a case that the fourth target component carrier is in a deactivated state. In an embodiment, the base station may transmit a BWP activation command to a UE through L1 signaling or L2 signaling, and the BWP activation command includes a BWP that needs to be activated. The UE may obtain the fourth target component carrier corresponding to the BWP(s) that needs to be activated by inquiring the previously saved BWP configuration information. The UE determines whether the fourth target component carrier is in an activated state; the UE may directly activate a BWP designated to be activated through the BWP activation command if the fourth target component carrier is currently in an activated state, and may activate the fourth target component carrier and a BWP designated to be activated through the BWP activation command if the fourth target component carrier is currently in a deactivated state.
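The following is a hedged Python sketch, under illustrative names and a simplified state model, of the activation and deactivation rules just described: deactivating the last activated BWP may deactivate the carrier (steps 106-107), deactivating a carrier removes all of its activated BWPs (steps 108-109), and activating a BWP on a deactivated carrier also activates the carrier (the fourth-target-carrier case).

```python
# A hedged sketch of the (de)activation rules described above. The data model
# and function names are illustrative assumptions, not a defined procedure.

from dataclasses import dataclass, field


@dataclass
class CarrierState:
    active: bool = False
    active_bwps: set = field(default_factory=set)


def deactivate_bwp(cc: CarrierState, bwp_id: int) -> None:
    """Steps 106-107: deactivate one BWP; deactivate the carrier if it was the last activated BWP."""
    cc.active_bwps.discard(bwp_id)
    if not cc.active_bwps:
        cc.active = False


def deactivate_carrier(cc: CarrierState) -> None:
    """Steps 108-109: deactivate the carrier together with all activated BWPs on it."""
    cc.active = False
    cc.active_bwps.clear()


def activate_bwp(cc: CarrierState, bwp_id: int) -> None:
    """Fourth-target-carrier case: activating a BWP on a deactivated carrier also activates the carrier."""
    cc.active = True
    cc.active_bwps.add(bwp_id)


if __name__ == "__main__":
    cc = CarrierState()
    activate_bwp(cc, 1)      # carrier and BWP 1 become active
    deactivate_bwp(cc, 1)    # last activated BWP removed -> carrier deactivated as well
    print(cc)                # CarrierState(active=False, active_bwps=set())
```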
In this embodiment, a component carrier can be controlled to be activated only through a BWP activation command, thereby further reducing the signaling overhead. It should be noted that in the related art, a UE usually accesses only one base station, and of course, a UE can also access two base stations, where one of the base stations is a primary base station and the other one is a secondary base station. In an embodiment, the foregoing BWP configuration information may include BWP configuration information of the primary base station and BWP configuration information of the secondary base station. In a case that a secondary base station needs to be added to the UE, the primary base station configures BWP configuration information of the secondary base station for the UE. In this case, when the UE receives the BWP configuration information of the secondary base station, the BWP configuration information of the secondary base station is used to indicate a component carrier where a primary cell of the secondary base station is located, and default BWP information corresponding to each component carrier. Subsequent to the above step 101, the above method further includes: actively activating, by the UE, a default BWP corresponding to the component carrier where the primary cell in the secondary base station is located. In this embodiment, after receiving the BWP configuration information of the secondary base station, the UE may activate the default BWP corresponding to the component carrier where the primary cell in the secondary base station is located in accordance with the indications, thereby implementing data transmission to the secondary base station. It should be understood that for processes of activating and deactivating a BWP corresponding to a component carrier where a secondary cell in the secondary base station is located and a non-default BWP corresponding to a component carrier where a primary cell in the secondary base station is located, reference can be made to the foregoing embodiments, and details are not described herein again. Further, the base station may further perform handover on the primary cell. Specifically, subsequent to the above step 101, the method further includes:
receiving a primary cell handover command transmitted by the base station, where the primary cell handover command is configured to indicate a target primary cell as a handover cell and a default BWP on the target primary cell; and
performing a cell handover based on the default BWP on the target primary cell and the target primary cell, for example, performing a random access procedure on the default BWP of the target primary cell.
In this embodiment, a base station can control a UE to be switched to a target primary cell through a primary cell handover command, so as to provide a better service to the UE. In order to enable normal data transmission on the target primary cell, a default BWP on the target primary cell is indicated in the primary cell handover command. Therefore, the UE can complete the subsequent target cell handover procedure on the default BWP. Reference is made to FIG. 5, and the present disclosure further provides a method for controlling activation of a bandwidth part (BWP), which includes steps 501 and 502. Step 501 includes: transmitting BWP configuration information to a UE, where the BWP configuration information includes BWP identification information.
The method for controlling activation of a BWP provided by embodiments of the present disclosure is applied in a base station to control an activated state of BWP(s) corresponding to each component carrier. In this step, the base station first configures a BWP for a UE to access the base station. When configuring the BWP, the base station transmits the BWP configuration information to the UE, and the UE may receive and save the BWP configuration information, thereby saving a correspondence between each component carrier and a default BWP. Specifically, in a practical application, each component carrier is configured with a BWP set, and information about one or more BWPs is stored in the BWP set, that is, one component carrier may correspond to one or more BWPs. In an embodiment, all the BWPs may be numbered. The BWP identification information may be configured to a UE in an explicit or implicit manner. The explicit manner includes that a dedicated information bit string is configured in the configuration information for each BWP to indicate index information of the BWP. The implicit manner includes that a serial number of each BWP in a BWP list of the configuration information is an index of the BWP. As an example, the first BWP in the list is numbered as 0, and BWPs subsequent to the first BWP are respectively numbered as 1, 2, 3, and so on. Step 502 includes: transmitting a BWP activation command to the UE, where the activation command is configured for the UE to perform BWP activation with a BWP identifier indicated by the BWP activation command. The base station may transmit a BWP activation command to the UE through L1 signaling or L2 signaling, and the BWP activation command may indicate index information of to-be-activated BWP(s) in an explicit or implicit manner. The explicit manner includes: carrying index information of a target, to-be-activated BWP in an activation signaling. The implicit manner includes: carrying a bitmap in an activation signaling, each bit corresponding to one BWP, indicating to activate the BWP when the corresponding bit takes a first value; and indicating to deactivate the BWP when the corresponding bit takes a second value. A position where each indication bit is located in the bitmap corresponds to an index of a corresponding BWP. As an example, the first bit in the bitmap corresponds to a BWP numbered as 0, the second bit in the bitmap corresponds to a BWP numbered as 1, and so on. In a case that a BWP is in an activated state, no operation is performed when a bit corresponding to the BWP in the received activation signaling takes a first value. Similarly, in a case that a BWP is in a deactivated state, no operation is performed when a bit corresponding to the BWP in the received activation signaling takes a second value. When receiving the BWP activation command, a UE may perform BWP activation with the BWP identifier indicated by the BWP activation command, thereby implementing a control of BWP activation.
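As an illustrative, non-authoritative complement to step 502, the sketch below shows how a base station could encode the activation command in either the explicit form (the index of the to-be-activated BWP) or the implicit form (a bitmap whose bit position corresponds to the BWP index). The dictionary layout and function names are assumptions made for illustration only.

```python
# An illustrative sketch, not a specification of any signaling format, of the
# explicit and implicit forms of the BWP activation command of step 502.

FIRST_VALUE, SECOND_VALUE = 1, 0


def build_explicit_command(target_bwp_index: int) -> dict:
    """Explicit manner: carry the index of the target, to-be-activated BWP."""
    return {"type": "explicit", "bwp_index": target_bwp_index}


def build_implicit_command(num_bwps: int, to_activate: set) -> dict:
    """Implicit manner: one bit per configured BWP, ordered by BWP index."""
    bitmap = [FIRST_VALUE if i in to_activate else SECOND_VALUE for i in range(num_bwps)]
    return {"type": "implicit", "bitmap": bitmap}


if __name__ == "__main__":
    print(build_explicit_command(2))          # {'type': 'explicit', 'bwp_index': 2}
    print(build_implicit_command(4, {1, 2}))  # bitmap [0, 1, 1, 0]
```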
Thus, in the embodiment of the present disclosure, BWP configuration information transmitted by a base station is received and saved, where the BWP configuration information includes BWP identification information; a BWP activation command transmitted by the base station is received; and BWP activation is performed based on a BWP identifier indicated by the BWP activation command. BWPs are numbered, the activation command indicates a BWP that needs to be activated, and the BWP is activated by the UE, thereby improving the flexibility of BWP activation control. Further, the BWP configuration information is used to indicate a default BWP corresponding to each component carrier. After transmitting the BWP configuration information to the user equipment, the method further includes: transmitting a component carrier activation signaling to the user equipment, where the component carrier activation signaling is used for the user equipment to activate a first target component carrier and a default BWP corresponding to the first target component carrier. When a component carrier is activated, data can be normally transmitted and received on the component carrier only when there is an activated BWP on the component carrier. The above-mentioned default BWP refers to a BWP that needs to be activated by default when a command indicates to activate a component carrier, but the command does not indicate which BWP on the component carrier is to be activated. The number of the default BWPs corresponding to each component carrier may be set according to actual demands, and may be one or more, which is not specifically limited herein. It should be appreciated that when a target component carrier is in a deactivated state, all BWPs on the target component carrier are in a deactivated state. In this step, the base station may transmit a component carrier activation signaling to the UE by a MAC CE. The component carrier activation signaling carries a first target component carrier that needs to be activated, and all BWPs on the first target component carrier are in a deactivated state. After receiving the component carrier activation signaling, the UE obtains the first target component carrier that needs to be activated, which is indicated by the component carrier activation signaling, then obtains a default BWP corresponding to the first target component carrier according to the previously stored BWP configuration information, and finally activates the first target component carrier and the default BWP(s) corresponding to the first target component carrier. In this embodiment, a default BWP corresponding to each component carrier is configured in BWP configuration information, and then a first target component carrier and a default BWP corresponding to the first target component carrier are directly activated in accordance with the component carrier activation signaling. Therefore, an activated state of the component carrier and an activated state of a BWP on the component carrier can be controlled through a single signaling, thereby reducing signaling overhead. In addition, since a single signaling is used to activate a component carrier and a BWP on the component carrier at the same time, the transmission delay that would be caused by controlling the activated state of the component carrier and the activated state of the BWP on the component carrier through separate signalings is avoided.
It should be appreciated that a manner where a base station configures BWP configuration information for indicating a default BWP corresponding to each component carrier can be set according to actual needs. For example, in an embodiment, the configuring manner can be implemented in any of the following manners:
a first manner including that, for a component carrier configured with a BWP set, one bit is used to indicate whether each BWP in the BWP set is the default BWP;
a second manner including that, for a component carrier configured with a BWP set, a BWP ranked first or last in the BWP set is the default BWP;
a third manner including that, for a component carrier configured with a BWP set, each BWP has an index value, and a BWP with an initial index value in the BWP set is the default BWP; for example, if the BWP identifier value in a BWP set starts from 0, a BWP with the index value 0 is determined as a default BWP;
a fourth manner including that, for a component carrier configured with a BWP set, a BWP with a maximum bandwidth or a minimum bandwidth in the BWP set is the default BWP; or
a fifth manner including that, for a component carrier configured with a BWP set, a BWP with a maximum starting frequency or a minimum starting frequency in the BWP set is the default BWP.
Further, after the base station transmits the BWP configuration information, the activated BWP may be deactivated through signaling. Specifically, subsequent to the above step 501, the method further includes: transmitting a BWP deactivation command to the user equipment. The BWP deactivation command is configured to instruct the user equipment to adjust an activated BWP on a second target component carrier to a deactivated state; and the BWP deactivation command is configured to instruct the user equipment to deactivate the second target component carrier, or deactivate the second target component carrier and a corresponding BWP on the second target component carrier, in a case that the activated BWP to be adjusted is a last activated BWP on the second target component carrier. In this step, the base station may transmit a BWP deactivation command to a UE through L1 or L2 signaling, which may include a BWP that needs to be deactivated. The UE may obtain the second target component carrier where the BWP that needs to be deactivated is located, by inquiring the previously saved BWP configuration information. In an embodiment, when a UE performs deactivation on a BWP on the second target component carrier, in a case that there are multiple activated BWPs on the second target component carrier, the UE may directly deactivate a BWP specified in the BWP deactivation command; and in a case that only the last activated BWP exists on the second target component carrier, the UE may deactivate the second target component carrier, or both the second target component carrier and a corresponding BWP (that is, a BWP specified in the deactivation signaling) on the second target component carrier. Since the deactivation of a component carrier can be achieved by only indicating a BWP deactivation during a deactivation process, the signaling overhead is further reduced. Further, after the base station transmits the BWP configuration information, the activated BWP may be deactivated through signaling. Specifically, subsequent to the above step 501, the method further includes: transmitting a component carrier deactivation signaling to the user equipment.
The component carrier deactivation signaling is configured to instruct the user equipment to deactivate a third target component carrier, or to instruct the user equipment to deactivate a third target component carrier and all activated BWPs on the third target component carrier. In this step, the base station may transmit a component carrier deactivation signaling to a UE through a control element of a medium access control layer (MAC CE). The component carrier deactivation signaling includes a third target component carrier that needs to be deactivated. The UE may obtain all BWPs on the third target component carrier based on the previously saved BWP configuration information. The third target component carrier may include one or more BWPs in an activated state, and may also include one or more BWPs in a deactivated state. In this embodiment, upon receiving the component carrier deactivation signaling, a UE may deactivate the third target component carrier, or may deactivate the third target component carrier and all the activated BWPs on the third target component carrier. Since the deactivation of a BWP can be achieved only based on a component carrier deactivation signaling, the signaling overhead is further reduced. Further, the BWP activation command is configured to adjust a deactivated BWP on a fourth target component carrier to an activated state; and the user equipment activates the fourth target component carrier and a BWP designated to be activated through the BWP activation command, in a case that the fourth target component carrier is in an inactive state. In this step, the base station may transmit a BWP activation command to a UE through L1 signaling or L2 signaling, and the BWP activation command includes a BWP that needs to be activated. The UE may obtain the fourth target component carrier corresponding to the BWP(s) that needs to be activated by inquiring the previously saved BWP configuration information. The UE determines whether the fourth target component carrier is in an activated state; the UE may directly activate a BWP designated to be activated through the BWP activation command if the fourth target component carrier is currently in an activated state, and may activate the fourth target component carrier and a BWP designated to be activated through the BWP activation command if the fourth target component carrier is currently in a deactivated state. In this embodiment, a component carrier can be controlled to be activated only through a BWP activation command, thereby further reducing the signaling overhead. It should be noted that in the related art, a UE usually accesses only one base station, and of course, a UE can also access two base stations, where one of the base stations is a primary base station and the other one is a secondary base station. In an embodiment, the foregoing BWP configuration information may include BWP configuration information of the primary base station and BWP configuration information of the secondary base station. In a case that a secondary base station needs to be added to the UE, the primary base station configures BWP configuration information of the secondary base station for the UE. The BWP configuration information of the secondary base station is used to indicate a component carrier where a primary cell of the secondary base station is located, and default BWP information corresponding to each component carrier.
In an embodiment, after receiving the BWP configuration information of the secondary base station, the UE may activate the default BWP corresponding to the component carrier where the primary cell of the secondary base station is located based on the indication, thereby implementing data transmission to the secondary base station. It should be understood that, for processes of activating and deactivating a BWP corresponding to a component carrier where a secondary cell in the secondary base station is located, and a non-default BWP corresponding to a component carrier where a primary cell in the secondary base station is located, reference can be made to the foregoing embodiments, and details are not described herein again. Further, the base station may also perform handover on the primary cell. Specifically, subsequent to the above step 501, the method further includes: transmitting a primary cell handover command to the user equipment, where the primary cell handover command is configured to indicate a target primary cell as a handover cell and a default BWP on the target primary cell. In this embodiment, a base station can control a UE to be switched to a target primary cell through a primary cell handover command, so as to provide a better service to the UE. In order to enable normal data transmission on the target primary cell, a default BWP on the target primary cell is indicated in the primary cell handover command. Therefore, the UE can complete the subsequent target cell handover procedure on the default BWP. Reference is made to FIG. 6, which is a schematic structural diagram of a UE according to an embodiment of the present disclosure, which can implement details of the method for controlling activation of a bandwidth part (BWP) in the foregoing embodiments, and can achieve the same effects. As shown in FIG. 6, the UE includes:
a configuration reception module 601, configured to receive and save BWP configuration information transmitted by a base station, where the BWP configuration information includes BWP identification information;
a command reception module 602, configured to receive a BWP activation command transmitted by the base station; and
a processing module 603, configured to perform BWP activation with a BWP identifier indicated by the BWP activation command.
Optionally, the BWP configuration information is used to indicate a default BWP corresponding to each component carrier. The command reception module 602 is further configured to receive a component carrier activation signaling transmitted by the base station. The processing module 603 is further configured to activate, based on the component carrier activation signaling, a first target component carrier and a default BWP corresponding to the first target component carrier.
Optionally, a manner where the BWP configuration information indicates the default BWP corresponding to each component carrier includes any of the following manners:
a manner in which, for a component carrier configured with a BWP set, one bit is used to indicate whether each BWP in the BWP set is the default BWP;
a manner in which, for a component carrier configured with a BWP set, a BWP ranked first or last in the BWP set is the default BWP;
a manner in which, for a component carrier configured with a BWP set, each BWP has an index value, and a BWP with an initial index value in the BWP set is the default BWP;
a manner in which, for a component carrier configured with a BWP set, a BWP with a maximum bandwidth or a minimum bandwidth in the BWP set is the default BWP; or
a manner in which, for a component carrier configured with a BWP set, a BWP with a maximum starting frequency or a minimum starting frequency in the BWP set is the default BWP.
Optionally, the command reception module 602 is further configured to receive a BWP deactivation command transmitted by the base station, and the BWP deactivation command is configured to adjust an activated BWP on a second target component carrier to a deactivated state. The processing module 603 is further configured to deactivate the second target component carrier, or deactivate the second target component carrier and a corresponding BWP on the second target component carrier, in a case that the activated BWP to be adjusted is a last activated BWP on the second target component carrier. Optionally, the command reception module 602 is further configured to receive a component carrier deactivation signaling transmitted by the base station. The processing module 603 is further configured to deactivate, based on the component carrier deactivation signaling, a third target component carrier, or a third target component carrier and all activated BWPs on the third target component carrier. Optionally, the BWP activation command is configured to adjust a deactivated BWP on a fourth target component carrier to an activated state. The processing module 603 is further configured to activate the fourth target component carrier and a BWP designated to be activated through the BWP activation command, in a case that the fourth target component carrier is in an inactive state. Optionally, the command reception module 602 is further configured to receive BWP configuration information of a secondary base station, and the BWP configuration information of the secondary base station is used to indicate a component carrier where a primary cell of the secondary base station is located, and default BWP information corresponding to each component carrier. The processing module 603 is further configured to activate a default BWP corresponding to the component carrier where the primary cell of the secondary base station is located. Optionally, the command reception module 602 is further configured to receive a primary cell handover command transmitted by the base station, and the primary cell handover command is configured to indicate a target primary cell as a handover cell and a default BWP on the target primary cell. The processing module 603 is further configured to perform a cell handover based on the default BWP on the target primary cell and the target primary cell.
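As a rough structural sketch only, the following Python outline mirrors the module split of FIG. 6 (configuration reception module 601, command reception module 602, processing module 603); the class and method names are illustrative assumptions about one possible software arrangement, not the disclosed apparatus.

```python
# A rough sketch, under assumed names, of how the FIG. 6 module split could be
# arranged in software. Purely illustrative.

class ConfigurationReceptionModule:
    """Module 601: receives and saves BWP configuration information."""
    def __init__(self):
        self.bwp_config = {}

    def receive(self, config: dict) -> None:
        self.bwp_config.update(config)


class CommandReceptionModule:
    """Module 602: receives BWP activation/deactivation commands."""
    def receive(self, command: dict) -> dict:
        return command


class ProcessingModule:
    """Module 603: performs BWP (de)activation with the indicated identifier."""
    def __init__(self, config_module: ConfigurationReceptionModule):
        self.config_module = config_module
        self.activated = set()

    def process(self, command: dict) -> None:
        if command.get("action") == "activate":
            self.activated.add(command["bwp_id"])
        elif command.get("action") == "deactivate":
            self.activated.discard(command["bwp_id"])


if __name__ == "__main__":
    cfg, cmd = ConfigurationReceptionModule(), CommandReceptionModule()
    proc = ProcessingModule(cfg)
    cfg.receive({"carrier_0": {"bwps": [0, 1], "default": 0}})
    proc.process(cmd.receive({"action": "activate", "bwp_id": 1}))
    print(proc.activated)  # {1}
```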
Thus, in the embodiments of the present disclosure, BWP configuration information transmitted by a base station is received and saved, where the BWP configuration information includes BWP identification information; a BWP activation command transmitted by the base station is received; and a BWP is activated based on a BWP identifier indicated by the BWP activation command. BWPs are numbered, the activation command indicates a BWP that needs to be activated, and the BWP is activated by the UE, thereby improving the flexibility of BWP activation control. Referring to FIG. 7, FIG. 7 is a schematic structural diagram of a base station according to an embodiment of the present disclosure, which can implement details of the method for controlling activation of a bandwidth part (BWP) in the foregoing embodiments, and can achieve the same effects. As shown in FIG. 7, the base station includes:
a configuration transmission module 701, configured to transmit BWP configuration information to a user equipment, where the BWP configuration information includes BWP identification information; and
a command transmission module 702, configured to transmit a BWP activation command to the user equipment, where the BWP activation command is configured for the user equipment to perform BWP activation with a BWP identifier indicated by the BWP activation command.
Optionally, the BWP configuration information is used to indicate a default BWP corresponding to each component carrier. The command transmission module is further configured to transmit a component carrier activation signaling to the user equipment, where the component carrier activation signaling is used for the user equipment to activate a first target component carrier and a default BWP corresponding to the first target component carrier. Optionally, the BWP configuration information indicates a default BWP corresponding to each component carrier as follows:
for a component carrier configured with a BWP set, one bit is used to indicate whether each BWP in the BWP set is the default BWP;
for a component carrier configured with a BWP set, a BWP ranked first or last in the BWP set is the default BWP;
for a component carrier configured with a BWP set, each BWP has an index value, and a BWP with an initial index value in the BWP set is the default BWP;
for a component carrier configured with a BWP set, a BWP with a maximum or minimum bandwidth in the BWP set is the default BWP; or
for a component carrier configured with a BWP set, a BWP with a maximum or minimum starting frequency in the BWP set is the default BWP.
Optionally, the command transmission module 702 is further configured to transmit a BWP deactivation command to the user equipment. The BWP deactivation command is configured to instruct the user equipment to adjust an activated BWP on a second target component carrier to a deactivated state; and the BWP deactivation command is configured to instruct the user equipment to deactivate the second target component carrier, or to deactivate the second target component carrier and a corresponding BWP on the second target component carrier, in a case that the activated BWP to be adjusted is a last activated BWP on the second target component carrier.
Optionally, the command transmission module 702 is further configured to transmit a component carrier deactivation signaling to the user equipment, and the component carrier deactivation signaling is configured to instruct the user equipment to deactivate a third target component carrier, or to deactivate a third target component carrier and all activated BWPs on the third target component carrier. Optionally, the BWP activation command is configured to adjust a deactivated BWP on a fourth target component carrier to an activated state; and the BWP activation command is configured to instruct the user equipment to activate the fourth target component carrier and a BWP designated to be activated through the BWP activation command, in a case that the fourth target component carrier is in an inactive state. Optionally, the BWP configuration information includes BWP configuration information of a primary base station and BWP configuration information of a secondary base station; and the BWP configuration information of the secondary base station is used to indicate a component carrier where a primary cell of the secondary base station is located, and default BWP information corresponding to each component carrier. Optionally, the command transmission module 702 is further configured to transmit a primary cell handover command to the user equipment, and the primary cell handover command is configured to indicate a target primary cell as a handover cell and a default BWP on the target primary cell. In view of the above, in the embodiments of the present disclosure, BWP configuration information transmitted by a base station is received and saved, where the BWP configuration information includes BWP identification information; a BWP activation command transmitted by the base station is received; and a BWP is activated based on a BWP identifier indicated by the BWP activation command. BWPs are numbered, the activation command indicates a BWP that needs to be activated, and the BWP is activated by the UE, thereby improving the flexibility of BWP activation control. Reference is made to FIG. 8, which is a schematic structural diagram of a UE according to an embodiment of the present disclosure, which can implement details of the method for controlling activation of a bandwidth part (BWP) in the foregoing embodiments, and can achieve the same effects. As shown in FIG. 8, the UE 800 includes: at least one processor 801, a memory 802, at least one network interface 804, and a user interface 803. Various modules in the UE 800 are coupled together through a bus system 805. It is understandable that the bus system 805 is configured to implement connections and communications between these components. The bus system 805 includes a power bus, a control bus, and a signal status bus in addition to a data bus. However, for the sake of clarity, various buses are denoted by the bus system 805 in FIG. 8. The user interface 803 may include a display, a keyboard, or a click device (for example, a mouse, a track ball, a touch pad, or a touch screen). It can be understood that the memory 802 in the embodiments of the present disclosure may be a volatile memory or a non-volatile memory, or may include both the volatile memory and the non-volatile memory. The non-volatile memory may be a read-only memory (ROM), a programmable ROM (PROM), an erasable PROM (EPROM), an electrically EPROM (EEPROM) or a flash memory. The volatile memory may be a random access memory (RAM), which is used as an external cache.
By way of example and without any limitation, many forms of RAMs may be used, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDRSDRAM), enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM) and direct Rambus RAM (DRRAM). The memory 802 of the system and method described in this specification is meant to include, without limitation, these and any other suitable types of memories. In some implementations, the memory 802 stores the following elements, an executable module or a data structure, or a subset or extension set thereof, such as an operating system 8021 and an application 8022. The operating system 8021 includes various system programs, such as a framework layer program, a core library layer program and a driver layer program, to implement various fundamental services and process hardware-based tasks. The application 8022 includes various applications, such as a media player and a browser, to implement a variety of application services. The program implementing the method according to embodiments of the present disclosure may be included in the application 8022. In an embodiment of the present disclosure, the UE further includes a computer program stored in the memory 802 and executable on the processor 801, which may be specifically the computer program in the application 8022. The computer program is executed by the processor 801 to implement the following steps:
receiving and saving BWP configuration information transmitted by a base station, where the BWP configuration information includes BWP identification information;
receiving a BWP activation command transmitted by the base station; and
performing BWP activation with a BWP identifier indicated by the BWP activation command.
The methods disclosed in the foregoing embodiments of the present disclosure may be applied in the processor 801 or implemented by the processor 801. The processor 801 may be an integrated circuit chip with signal processing capabilities. During an implementation process, steps of the methods may be realized in the form of hardware by integrated logic circuits in the processor 801, or in the form of software by instructions. The processor 801 may be a general purpose processor, a digital signal processor (DSP), an application specific integrated circuit (ASIC), a field programmable gate array (FPGA) or another programmable logic device, a discrete gate or transistor logic component, or a discrete hardware assembly, capable of implementing or executing the various methods, steps and logic block diagrams disclosed in the embodiments of the present disclosure. The general purpose processor may be a microprocessor, or any conventional processor, etc. The steps of the methods disclosed with reference to the embodiments of the present disclosure may be embodied in hardware in the form of a coding processor, or performed by the hardware in the coding processor and the software modules in combination. The software modules may reside in a well-established storage medium in the art, such as a RAM, a flash memory, a ROM, a PROM or an EEPROM, or a register. The storage medium resides in the memory 802. The processor 801 reads information from the memory 802 and performs the steps of the methods in combination with its hardware. It is understood that the embodiments described in the present disclosure may be implemented by hardware, software, firmware, middleware, microcode or a combination thereof.
For hardware implementation, processing units may be implemented in one or more application specific integrated circuits (ASICs), digital signal processors (DSPs), DSP devices (DSPDs), programmable logic devices (PLDs), field-programmable gate arrays (FPGAs), general purpose processors, controllers, microcontrollers, microprocessors, other electronic units configured to perform the functions described in this specification, or a combination thereof. For software implementation, the technical solutions described in the embodiments of the present disclosure may be implemented by a module (e.g., a process or a function) configured to perform the functions described in the embodiments of the present disclosure. Software code may be stored in a memory and executed by the processor. The memory may be implemented inside or outside the processor. Optionally, the BWP configuration information is used to indicate a default BWP corresponding to each component carrier, and the computer program is executed by the processor 801 to further implement the following steps:
receiving a component carrier activation signaling transmitted by the base station; and
activating, based on the component carrier activation signaling, a first target component carrier and a default BWP corresponding to the first target component carrier.
Optionally, a manner where the BWP configuration information indicates the default BWP corresponding to each component carrier includes any of the following manners:
for a component carrier configured with a BWP set, one bit is used to indicate whether each BWP in the BWP set is the default BWP;
for a component carrier configured with a BWP set, a BWP ranked first or last in the BWP set is the default BWP;
for a component carrier configured with a BWP set, each BWP has an index value, and a BWP with an initial index value in the BWP set is the default BWP;
for a component carrier configured with a BWP set, a BWP with a maximum bandwidth or a minimum bandwidth in the BWP set is the default BWP; or
for a component carrier configured with a BWP set, a BWP with a maximum starting frequency or a minimum starting frequency in the BWP set is the default BWP.
Optionally, the computer program is executed by the processor 801 to further implement the following steps:
receiving a BWP deactivation command transmitted by the base station, where the BWP deactivation command is configured to adjust an activated BWP on a second target component carrier to a deactivated state; and
deactivating the second target component carrier, or deactivating the second target component carrier and a corresponding BWP on the second target component carrier, in a case that the activated BWP to be adjusted on the second target component carrier is a last activated BWP.
Optionally, the computer program is executed by the processor 801 to further implement the following steps:
receiving a component carrier deactivation signaling transmitted by the base station; and
deactivating, based on the component carrier deactivation signaling, a third target component carrier, or a third target component carrier and all activated BWPs on the third target component carrier.
Optionally, the BWP activation command is configured to adjust a deactivated BWP on a fourth target component carrier to an activated state; and the fourth target component carrier and a BWP designated to be activated through the BWP activation command are activated, in a case that the fourth target component carrier is in an inactive state.
Optionally, the BWP configuration information includes BWP configuration information of a primary base station and BWP configuration information of a secondary base station; and when the BWP configuration information of the secondary base station is received, the BWP configuration information of the secondary base station is used to indicate a component carrier where a primary cell of the secondary base station is located, and default BWP information corresponding to each component carrier. The computer program is executed by the processor801to further implement the following steps: activating a default BWP corresponding to the component carrier where the primary cell of the secondary base station is located. Optionally, the computer program is executed by the processor801to further implement the following steps:receiving a primary cell handover command transmitted by the base station, where the primary cell handover command is configured to indicate a target primary cell as a handover cell and a default BWP on the target primary cell; andperforming a cell handover based on the default BWP on the target primary cell and the target primary cell. In view of the above, in the embodiments of the present disclosure, BWP configuration information transmitted by a base station is received and saved, where the BWP configuration information includes BWP identification information; a BWP activation command transmitted by the base station is received; and a BWP is activated based on a BWP identifier indicated by the BWP activation command. BWPs are numbered, the activation command indicates a BWP that needs to be activated, and the BWP is activated by the UE, thereby improving the flexibility of BWP activation control. Referring toFIG.9,FIG.9is a structural diagram of a UE according to an embodiment of the present disclosure, which can implement details of the method for controlling activation of a bandwidth part (BWP) in the foregoing embodiments, and can achieve the same effects. As shown inFIG.9, the UE900includes a radio frequency (RF) circuit910, a memory920, an input unit930, a display unit940, a processor950, an audio circuit960, a communication module970, and a power supply980, and further includes a camera (not shown). The input unit930may be configured to receive numeric or character information inputted by a user, and to generate signal inputs related to user settings and function control of the UE900. Specifically, in an embodiment of the present disclosure, the input unit930may include a touch panel931. The touch panel931, also referred to as a touch screen, may collect touch operations by the user on or near the touch panel (such as an operation performed by the user using any suitable object or accessory such as a finger or a stylus on the touch panel931), and drive a corresponding connection apparatus according to a predetermined program. Optionally, the touch panel931may include two parts: a touch detection apparatus and a touch controller. The touch detection apparatus is configured to detect a touch position of the user, detect a signal generated due to the touch operation, and transmit the signal to the touch controller; and the touch controller is configured to receive the touch information from the touch detection apparatus, convert the touch information into contact coordinates, send the contact coordinates to the processor950, and receive and execute commands from the processor950.
In addition, the touch panel931may be implemented in various types such as resistive, capacitive, infrared, and surface acoustic waves. In addition to the touch panel931, the input unit930may further include other input devices932. The input devices932may include, but not limited to, one or more of a physical keyboard, a function button (such as a volume control button and a switch buttons), a trackball, a mouse, or a joystick. The display unit940may be configured to display information inputted by the user or information provided to the user and various menu interfaces of the UE900. The display unit940may include a display panel941. Optionally, the display panel941may be configured in the form of a liquid crystal display (LCD) panel or an organic light-emitting diode (OLED). It should be noted that the touch panel931may cover the display panel941to form a touch display screen, and when the touch display screen detects a touch operation on or near it, the touch operation is transmitted to the processor950to determine the type of the touch event, and then the processor950provides a corresponding visual output on the touch display screen based on the type of touch event. The processor950is the control center of the UE900, which connects various parts of the entire mobile phone by using various interfaces and wirings, performs functions of the UE900and process data by running or executing software programs and/or modules stored in a first memory921and invoking data stored in a second memory922, thereby performing overall monitoring on the UE900. Optionally, the processor950may include one or more processing units. In an embodiment of the present disclosure, by calling a software program and/or a module stored in the first memory921, and/or data stored in the second memory922, the computer program is executed by the processor950to perform the following steps:receiving and saving BWP configuration information transmitted by a base station, where the BWP configuration information includes BWP identification information;receiving a BWP activation command transmitted by the base station; andperforming BWP activation with a BWP identifier indicated by the BWP activation command. Optionally, the BWP configuration information is used to indicate a default BWP corresponding to each component carrier; and the computer program is executed by the processor950to further perform the following steps:receiving a component carrier activation signaling transmitted by the base station; andactivating, based on the component carrier activation signaling, a first target component carrier and a default BWP corresponding to the first target component carrier. Optionally, a manner where the BWP configuration information indicates the default BWP corresponding to each component carrier includes any of the following manners:for a component carrier configured with a BWP set, one bit is used to indicate whether each BWP in the BWP set is the default BWP;for a component carrier configured with a BWP set, a BWP ranked first or last in the BWP set is the default BWP;for a component carrier configured with a BWP set, each BWP has an index value, and a BWP with an initial index value in the BWP set is the default BWP;for a component carrier configured with a BWP set, a BWP with a maximum bandwidth or a minimum bandwidth in the BWP set is the default BWP; orfor a component carrier configured with a BWP set, a BWP with a maximum starting frequency or a minimum starting frequency in the BWP set is the default BWP. 
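The alternative manners just listed for indicating the default BWP of a component carrier can be summarized as selection rules over the configured BWP set, as in the short Python sketch below. The rule names, dictionary fields, and the assumption that exactly one rule applies per carrier are illustrative choices, not the signaling defined by the embodiments.

def select_default_bwp(bwp_set, manner, default_bits=None):
    """Return the default BWP of a component carrier according to one of the
    indication manners described above (names are illustrative)."""
    if manner == "explicit_bit":
        # One bit per BWP indicates whether that BWP is the default BWP.
        return bwp_set[default_bits.index(1)]
    if manner == "first_in_set":
        return bwp_set[0]
    if manner == "last_in_set":
        return bwp_set[-1]
    if manner == "initial_index":
        return min(bwp_set, key=lambda b: b["index"])
    if manner in ("max_bandwidth", "min_bandwidth"):
        pick = max if manner == "max_bandwidth" else min
        return pick(bwp_set, key=lambda b: b["num_rbs"])
    if manner in ("max_start_freq", "min_start_freq"):
        pick = max if manner == "max_start_freq" else min
        return pick(bwp_set, key=lambda b: b["start_rb"])
    raise ValueError(f"unknown manner: {manner}")

# A component carrier configured with a BWP set (made-up values):
cc_bwps = [
    {"index": 0, "start_rb": 0, "num_rbs": 24},
    {"index": 1, "start_rb": 24, "num_rbs": 48},
    {"index": 2, "start_rb": 72, "num_rbs": 12},
]
# On component carrier activation signaling, the carrier and its default BWP are
# activated together, e.g. with the maximum-bandwidth rule:
print(select_default_bwp(cc_bwps, "max_bandwidth"))  # {'index': 1, 'start_rb': 24, 'num_rbs': 48}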
Optionally, the computer program is executed by the processor950to further perform the following steps:receiving a BWP deactivation command transmitted by the base station, where the BWP deactivation command is configured to adjust an activated BWP on a second target component carrier to a deactivated state; anddeactivating the second target component carrier, or deactivating the second target component carrier and a corresponding BWP on the second target component carrier, in a case that the activated BWP to be adjusted on the second target component carrier is a last activated BWP. Optionally, the computer program is executed by the processor950to further perform the following steps:receiving a component carrier deactivation signaling transmitted by the base station; anddeactivating, based on the component carrier deactivation signaling, a third target component carrier, or a third target component carrier and all activated BWPs on the third target component carrier. Optionally, the BWP activation command is configured to adjust a deactivated BWP on a fourth target component carrier to an activated state; and the fourth target component carrier and a BWP designated to be activated through the BWP activation command are activated, in a case that the fourth target component carrier is in an inactive state. Optionally, the BWP configuration information includes BWP configuration information of a primary base station and BWP configuration information of a secondary base station; and when the BWP configuration information of the secondary base station is received, the BWP configuration information of the secondary base station is used to indicate a component carrier where a primary cell of the secondary base station is located, and default BWP information corresponding to each component carrier. The computer program is executed by the processor950to further perform the following steps:activating a default BWP corresponding to the component carrier where the primary cell of the secondary base station is located. Optionally, the computer program is executed by the processor950to further perform the following steps:receiving a primary cell handover command transmitted by the base station, where the primary cell handover command is configured to indicate a target primary cell as a handover cell and a default BWP on the target primary cell; andperforming a cell handover based on the default BWP on the target primary cell and the target primary cell. In view of the above, in the embodiments of the present disclosure, BWP configuration information transmitted by a base station is received and saved, where the BWP configuration information includes BWP identification information; a BWP activation command transmitted by the base station is received; and a BWP is activated based on a BWP identifier indicated by the BWP activation command. BWPs are numbered, the activation command indicates a BWP that needs to be activated, and the BWP is activated by the UE, thereby improving the flexibility of BWP activation control. Referring toFIG.10,FIG.10is a schematic structural diagram of a base station according to an embodiment of the present disclosure, which can implement details of the method for controlling activation of a bandwidth part (BWP) in the foregoing embodiments, and can achieve the same effects. As shown inFIG.10, the base station1000includes: a processor1001, a transceiver1002, a memory1003, a user interface1004, and a bus interface.
The processor1001is configured to read a program in the memory1003and execute the following processes:transmitting BWP configuration information to a user equipment, where the BWP configuration information includes BWP index information; andtransmitting a BWP activation command to the user equipment, where the BWP activation-related command is configured for the user equipment to perform BWP activation with a BWP index indicated by the BWP activation-related command. InFIG.10, a bus architecture may include any number of interconnected buses and bridges, and may be specifically configured to couple various circuits including one or more processors represented by the processor1001and storages represented by the memory1003. The bus architecture may also couple various other circuits such as peripherals, voltage regulators and power management circuits, which are well known in the art. Therefore, a detailed description thereof is omitted herein. A bus interface provides an interface. The transceiver1002may be multiple elements, i.e., including a transmitter and a receiver, to allow for communication with various other apparatuses on the transmission medium. For different user equipment, the user interface1004may also be an interface capable of externally or internally connecting the required devices, which include, but are not limited to, a keypad, a display, a speaker, a microphone, a joystick, and the like. The processor1001is responsible for the control of the bus architecture and general processing, and the memory1003may store data used by the processor1001in performing operations. Optionally, the BWP configuration information is used to indicate a default BWP corresponding to each component carrier; and the program is executed by the processor1001to further perform the following steps:transmitting a component carrier activation-related signaling to the user equipment, where the component carrier activation-related signaling is used for the user equipment to activate a first target component carrier and a default BWP corresponding to the first target component carrier. Optionally, the BWP configuration information indicates a default BWP corresponding to each component carrier as follows:for a component carrier configured with a BWP set, one bit is used to indicate whether each BWP in the BWP set is the default BWP;for a component carrier configured with a BWP set, a BWP ranked first or last in the BWP set is the default BWP;for a component carrier configured with a BWP set, each BWP has an identification value, and a BWP with an initial identification value in the BWP set is the default BWP;for a component carrier configured with a BWP set, a BWP with a maximum or minimum bandwidth in the BWP set is the default BWP; or for a component carrier configured with a BWP set, a BWP with a maximum or minimum starting frequency in the BWP set is the default BWP.
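On the network side, the processing just described amounts to assembling BWP configuration information that carries BWP index information (and, optionally, a per-carrier default-BWP indication) and later issuing an activation command that carries the index of the BWP to be activated. The Python sketch below uses a hypothetical dictionary layout purely for illustration; it is not the encoding used by the embodiments or by any specification.

def build_bwp_configuration(carriers, default_rule="explicit_bit"):
    """Assemble BWP configuration information for a user equipment
    (illustrative message layout)."""
    message = {"bwp_configuration": []}
    for cc_id, bwp_set in carriers.items():
        entry = {
            "cc_id": cc_id,
            "bwps": [{"index": i, **bwp} for i, bwp in enumerate(bwp_set)],
            "default_rule": default_rule,
        }
        if default_rule == "explicit_bit":
            # One bit per BWP; here the first configured BWP is marked as the default.
            entry["default_bits"] = [1] + [0] * (len(bwp_set) - 1)
        message["bwp_configuration"].append(entry)
    return message

def build_bwp_activation_command(cc_id, bwp_index):
    """Activation command carrying the index of the BWP the UE should activate."""
    return {"bwp_activation": {"cc_id": cc_id, "bwp_index": bwp_index}}

carriers = {
    0: [{"start_rb": 0, "num_rbs": 52}, {"start_rb": 52, "num_rbs": 106}],
    1: [{"start_rb": 0, "num_rbs": 24}],
}
config_message = build_bwp_configuration(carriers)
activation_command = build_bwp_activation_command(cc_id=0, bwp_index=1)
print(activation_command)  # {'bwp_activation': {'cc_id': 0, 'bwp_index': 1}}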
Optionally, the program is executed by the processor1001to further perform the following steps:transmitting a BWP deactivation-related command to the user equipment, where the BWP deactivation-related command is configured to instruct the user equipment to adjust an activated BWP on a second target component carrier to a deactivated state; andthe BWP deactivation-related command is configured to instruct the user equipment to deactivate the second target component carrier, or deactivate the second target component carrier and a corresponding BWP on the second target component carrier, in a case that the activated BWP to be adjusted is a last activated BWP on the second target component carrier. Optionally, the program is executed by the processor1001to further perform the following steps:transmitting a component carrier deactivation-related signaling to the user equipment, where the component carrier deactivation-related signaling is configured to instruct the user equipment to deactivate a third target component carrier, or to deactivate a third target component carrier and all activated BWPs on the third target component carrier. Optionally, the BWP activation-related command is configured to adjust a deactivated BWP on a fourth target component carrier to an activated state; and the BWP activation-related command is configured to instruct the user equipment to activate the fourth target component carrier and a BWP designated to be activated through the BWP activation-related command, in a case that the fourth target component carrier is in an inactive state. Optionally, the BWP configuration information includes BWP configuration information of a primary base station and BWP configuration information of a secondary base station; and the BWP configuration information of the secondary base station is used to indicate a component carrier where a primary cell of the secondary base station is located, and default BWP information corresponding to each component carrier. Optionally, the program is executed by the processor1001to further perform the following steps:transmitting a primary cell handover command to the user equipment, where the primary cell handover command is configured to indicate a target primary cell as a handover cell and a default BWP on the target primary cell. In view of the above, in the embodiments of the present disclosure, BWP configuration information transmitted by a base station is received and saved, where the BWP configuration information includes BWP index information; a BWP activation-related command transmitted by the base station is received; and a BWP is activated based on a BWP index indicated by the BWP activation-related command. BWPs are numbered, the activation-related command indicates a BWP that needs to be activated, and the BWP is activated by the UE, thereby improving the flexibility of BWP activation control. An embodiment of the present disclosure further provides a computer-readable storage medium, having a computer program stored thereon. The computer program is executed by a processor to implement steps in a method for controlling activation of a bandwidth part (BWP) in any one of the foregoing method embodiments. A person skilled in the art may be aware that the exemplary units and algorithm steps described in connection with the embodiments disclosed in this specification may be implemented by electronic hardware or a combination of computer software and electronic hardware.
Whether the functions are performed by hardware or software depends on particular applications and design constraint conditions of the technical solutions. A person skilled in the art may use different methods to implement the described functions for each particular application, but it should not be considered that the implementation goes beyond the scope of the disclosure. It may be clearly understood by a person skilled in the art that, for ease of description and conciseness, for a detailed working process of the foregoing system, apparatus, and unit, reference may be made to a corresponding process in the foregoing method embodiments, and details are not described herein again. In the several embodiments provided in the present application, it should be understood that the disclosed device and method may be implemented in other manners. For example, the described device embodiment is merely exemplary. For example, the unit division is merely logical function division and may be another division in an actual implementation. For example, a plurality of units or components may be combined or integrated into another system, or some features may be neglected or not performed. In addition, the displayed or discussed mutual couplings or direct couplings or communication connections may be implemented through some interfaces. The indirect couplings or communication connections between the devices or units may be implemented in electric, mechanical, or other forms. The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, that is, they may be located in one position, or may be distributed on a plurality of network units. A part or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments of the present disclosure. In addition, functional units in the embodiments of the present disclosure may be integrated into one processing unit, or each of the units may exist alone physically, or two or more units may be integrated into one unit. If the functions are implemented in a form of a software functional unit and sold or used as an independent product, the functions may be stored in a computer-readable storage medium. Based on such an understanding, the essential part of the technical solutions of the present disclosure, the part contributing to the prior art, or a part of the technical solutions may be implemented in a form of a software product. The software product is stored in a storage medium, and includes several instructions for instructing a computer device (which may be a personal computer, a server, or a network device) to perform all or a part of the steps of the methods described in the embodiments of the disclosure. The foregoing storage medium includes any medium that may store program code, such as a universal serial bus (USB) flash drive, a mobile hard disk, a ROM, a RAM, a magnetic disk, or an optical disc. The above descriptions are merely specific implementations of the present disclosure, but the scope of the present disclosure is not limited thereto. Any modifications and substitutions easily made by a person of ordinary skill in the art without departing from the technical principle of the present disclosure shall fall within the scope of the present disclosure. Therefore, the scope of the present disclosure shall be determined by the claims.
Patent 11943176
DETAILED DESCRIPTION
Some wireless communication systems, such as fifth generation (5G) New Radio (NR) systems, may support using signal repeating devices (e.g., repeaters, relay nodes) to extend coverage of wireless communications services. The terms repeating device and repeater may be used interchangeably. For example, a base station may transmit or receive signaling via a repeater, which may enable the base station to share information with a user equipment (UE) operating outside of a coverage area for the base station or avoid sources of interference. In some implementations, the repeater may be an example of a repeater that detects or receives signals, amplifies the signals, and retransmits the amplified signals without control signaling from the base station. In some implementations, the base station may transmit control signaling to a repeater indicating various aspects that may be relevant for forwarding messages. In some cases, delays or timing offsets between transmitting control signaling and transmitting messages may be based on a type of the repeater. A repeater may be limited by a maximum amplification gain (e.g., Gmax) that may be applied to signals received by the repeater, where the value of the maximum amplification gain may depend on the time division duplex (TDD) pattern of downlink information and uplink information of a time resource. A wireless communications system may use TDD over a channel to communicate both downlink information and uplink information between two wireless nodes (e.g., a UE and a base station). In some cases, a repeater may have information of whether downlink information or uplink information is being communicated (e.g., the repeater may know the TDD pattern of the channel), and may thus improve the performance of the repeater based on the information. For example, if the repeater is continuously scanning to receive both downlink information and uplink information, the repeater may limit the maximum amplification gain that may be applied to retransmitted (e.g., forwarded) signals (e.g., by using unnecessary resources and increasing the chance of coupling between the signal transmitted by the repeater and the signal received by the repeater). In contrast, if the repeater has knowledge of a TDD pattern of the channel, the repeater may apply a higher amplification gain to the retransmitted signals. In some cases, the repeater may be informed of the TDD pattern (e.g., downlink or uplink) for a first subset of time resources, but may lack information of the TDD pattern for a second subset of time resources (e.g., time resources that may be configurable between uplink and downlink). As such, the repeater may operate differently when using the first subset of time resources than when using the second subset of time resources, which may introduce different end-to-end channel states (e.g., signal-to-noise ratio (SNR) values) between the two subsets of time resources. If the value of the amplification gain decreases, the overall quality of communications between wireless nodes may be reduced. Techniques described herein enable a repeater (e.g., a network-controlled repeater) to use TDD pattern detection in wireless communications. In some examples, a repeater may detect a TDD pattern using signaling from the base station. In some examples, a repeater may detect a TDD pattern by detecting channel conditions of the signals being received by the repeater.
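One way to picture the second detection approach mentioned above, in which the repeater infers the TDD pattern from the signals it observes rather than from explicit signaling, is a per-slot power comparison between the base-station-facing side and the UE-facing side of the repeater. The Python sketch below is purely illustrative; the threshold values, the slot granularity, and the classification rule are assumptions and do not represent a detection algorithm defined by this description.

def detect_tdd_pattern(dl_side_power_dbm, ul_side_power_dbm,
                       noise_floor_dbm=-95.0, margin_db=6.0):
    """Classify each slot as 'D' (downlink), 'U' (uplink), or 'F' (flexible/unknown)
    by comparing received power on the two sides of the repeater (illustrative only)."""
    pattern = []
    for dl_power, ul_power in zip(dl_side_power_dbm, ul_side_power_dbm):
        dl_active = dl_power > noise_floor_dbm + margin_db
        ul_active = ul_power > noise_floor_dbm + margin_db
        if dl_active and not ul_active:
            pattern.append("D")
        elif ul_active and not dl_active:
            pattern.append("U")
        else:
            pattern.append("F")  # neither side, or both sides, above the threshold
    return "".join(pattern)

# Simulated per-slot received power over ten slots (made-up numbers):
dl_power = [-60, -61, -60, -59, -94, -95, -96, -60, -61, -95]
ul_power = [-96, -95, -94, -95, -70, -69, -70, -96, -95, -71]
print(detect_tdd_pattern(dl_power, ul_power))  # "DDDDUUUDDU"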
In some cases, a network control node (e.g., a base station) may receive an indication of the capability of the repeater to detect a TDD pattern of a channel between two network nodes (e.g., the base station and a UE). In some cases, the base station may also receive an indication of a configuration of the repeater. Based on the capability of the repeater, the base station may transmit an indication to the repeater including the TDD pattern of the channel, and the base station may transmit control signaling to the UE, via the repeater, based on the capability and the configuration of the repeater. For example, the base station may transmit parameters and reference signals associated with the UE performing channel measurements that may be based on the configuration of the repeater, or the capability of the repeater, or both. The repeater may determine whether the information transmitted between the base station and the UE includes downlink information, uplink information, or both, and the repeater may adjust its radio frequency components accordingly. The base station may configure different sets of channel measurements for a UE, via the repeater, based on the capability of the repeater and the configuration of the repeater. In some examples, the base station may transmit an indication of a TDD pattern of the channel to the repeater, and may transmit one or more parameters to the UE via the repeater for the UE to use when performing channel measurements. In some cases, the base station may transmit reference signals to the UE via the repeater for the channel measurements based on transmitting the parameters. In some examples, depending on the type of repeater, the indicator of the TDD pattern of the channel may include an indication of a common TDD configuration associated with a cell, a dedicated TDD configuration specific to the UE, or a slot format indicator (SFI). Particular aspects of the subject matter described herein may be implemented to realize one or more advantages. The described techniques may support improvements in TDD pattern detection for repeaters by increasing coverage and reducing signaling overhead. Further, in some examples, the repeater capability to detect TDD patterns as described herein may support a higher amplification gain at the repeater, which may improve the overall quality of communications between wireless nodes, thereby improving latency and reliability for an improved user experience. As such, supported techniques may include improved network operations, and, in some examples, may promote network efficiencies, among other benefits. Aspects of the disclosure are initially described in the context of wireless communications systems. Aspects of the disclosure are then described in the context of process flows. Aspects of the disclosure are further illustrated by and described with reference to apparatus diagrams, system diagrams, and flowcharts that relate to TDD pattern detection for repeaters. FIG.1illustrates an example of a wireless communications system100that supports TDD pattern detection for repeaters in accordance with aspects of the present disclosure. The wireless communications system100may include one or more base stations105, one or more UEs115, and a core network130. In some examples, the wireless communications system100may be a Long Term Evolution (LTE) network, an LTE-Advanced (LTE-A) network, an LTE-A Pro network, or a New Radio (NR) network. 
In some examples, the wireless communications system100may support enhanced broadband communications, ultra-reliable (e.g., mission critical) communications, low latency communications, communications with low-cost and low-complexity devices, or any combination thereof. The base stations105may be dispersed throughout a geographic area to form the wireless communications system100and may be devices in different forms or having different capabilities. The base stations105and the UEs115may wirelessly communicate via one or more communication links125. Each base station105may provide a coverage area110over which the UEs115and the base station105may establish one or more communication links125. The coverage area110may be an example of a geographic area over which a base station105and a UE115may support the communication of signals according to one or more radio access technologies. The UEs115may be dispersed throughout a coverage area110of the wireless communications system100, and each UE115may be stationary, or mobile, or both at different times. The UEs115may be devices in different forms or having different capabilities. Some example UEs115are illustrated inFIG.1. The UEs115described herein may be able to communicate with various types of devices, such as other UEs115, the base stations105, or network equipment (e.g., core network nodes, relay devices, integrated access and backhaul (IAB) nodes, or other network equipment), as shown inFIG.1. The base stations105may communicate with the core network130, or with one another, or both. For example, the base stations105may interface with the core network130through one or more backhaul links120(e.g., via an S1, N2, N3, or other interface). The base stations105may communicate with one another over the backhaul links120(e.g., via an X2, Xn, or other interface) either directly (e.g., directly between base stations105), or indirectly (e.g., via core network130), or both. In some examples, the backhaul links120may be or include one or more wireless links. One or more of the base stations105described herein may include or may be referred to by a person having ordinary skill in the art as a base transceiver station, a radio base station, an access point, a radio transceiver, a NodeB, an eNodeB (eNB), a next-generation NodeB or a giga-NodeB (either of which may be referred to as a gNB), a Home NodeB, a Home eNodeB, or other suitable terminology. A UE115may include or may be referred to as a mobile device, a wireless device, a remote device, a handheld device, or a subscriber device, or some other suitable terminology, where the “device” may also be referred to as a unit, a station, a terminal, or a client, among other examples. A UE115may also include or may be referred to as a personal electronic device such as a cellular phone, a personal digital assistant (PDA), a tablet computer, a laptop computer, or a personal computer. In some examples, a UE115may include or be referred to as a wireless local loop (WLL) station, an Internet of Things (IoT) device, an Internet of Everything (IoE) device, or a machine type communications (MTC) device, among other examples, which may be implemented in various objects such as appliances, or vehicles, meters, among other examples. 
The UEs115described herein may be able to communicate with various types of devices, such as other UEs115that may sometimes act as relays as well as the base stations105and the network equipment including macro eNBs or gNBs, small cell eNBs or gNBs, or relay base stations, among other examples, as shown inFIG.1. The UEs115and the base stations105may wirelessly communicate with one another via one or more communication links125over one or more carriers. The term “carrier” may refer to a set of radio frequency spectrum resources having a defined physical layer structure for supporting the communication links125. For example, a carrier used for a communication link125may include a portion of a radio frequency spectrum band (e.g., a bandwidth part (BWP)) that is operated according to one or more physical layer channels for a given radio access technology (e.g., LTE, LTE-A, LTE-A Pro, NR). Each physical layer channel may carry acquisition signaling (e.g., synchronization signals, system information), control signaling that coordinates operation for the carrier, user data, or other signaling. The wireless communications system100may support communication with a UE115using carrier aggregation or multi-carrier operation. A UE115may be configured with multiple downlink component carriers and one or more uplink component carriers according to a carrier aggregation configuration. Carrier aggregation may be used with both frequency division duplexing (FDD) and TDD component carriers. In some examples (e.g., in a carrier aggregation configuration), a carrier may also have acquisition signaling or control signaling that coordinates operations for other carriers. A carrier may be associated with a frequency channel (e.g., an evolved universal mobile telecommunication system terrestrial radio access (E-UTRA) absolute radio frequency channel number (EARFCN)) and may be positioned according to a channel raster for discovery by the UEs115. A carrier may be operated in a standalone mode where initial acquisition and connection may be conducted by the UEs115via the carrier, or the carrier may be operated in a non-standalone mode where a connection is anchored using a different carrier (e.g., of the same or a different radio access technology). The communication links125shown in the wireless communications system100may include uplink transmissions from a UE115to a base station105, or downlink transmissions from a base station105to a UE115. Carriers may carry downlink or uplink communications (e.g., in an FDD mode) or may be configured to carry downlink and uplink communications (e.g., in a TDD mode). A carrier may be associated with a particular bandwidth of the radio frequency spectrum, and in some examples the carrier bandwidth may be referred to as a “system bandwidth” of the carrier or the wireless communications system100. For example, the carrier bandwidth may be one of a number of determined bandwidths for carriers of a particular radio access technology (e.g., 1.4, 3, 5, 10, 15, 20, 40, or 80 megahertz (MHz)). Devices of the wireless communications system100(e.g., the base stations105, the UEs115, or both) may have hardware configurations that support communications over a particular carrier bandwidth or may be configurable to support communications over one of a set of carrier bandwidths. In some examples, the wireless communications system100may include base stations105or UEs115that support simultaneous communications via carriers associated with multiple carrier bandwidths. 
In some examples, each served UE115may be configured for operating over portions (e.g., a sub-band, a BWP) or all of a carrier bandwidth. Signal waveforms transmitted over a carrier may be made up of multiple subcarriers (e.g., using multi-carrier modulation (MCM) techniques such as orthogonal frequency division multiplexing (OFDM) or discrete Fourier transform spread OFDM (DFT-S-OFDM)). In a system employing MCM techniques, a resource element may consist of one symbol period (e.g., a duration of one modulation symbol) and one subcarrier, where the symbol period and subcarrier spacing are inversely related. The number of bits carried by each resource element may depend on the modulation scheme (e.g., the order of the modulation scheme, the coding rate of the modulation scheme, or both). Thus, the more resource elements that a UE115receives and the higher the order of the modulation scheme, the higher the data rate may be for the UE115. A wireless communications resource may refer to a combination of a radio frequency spectrum resource, a time resource, and a spatial resource (e.g., spatial layers or beams), and the use of multiple spatial layers may further increase the data rate or data integrity for communications with a UE115. The time intervals for the base stations105or the UEs115may be expressed in multiples of a basic time unit which may, for example, refer to a sampling period of Ts=1/(Δfmax·Nf) seconds, where Δfmaxmay represent the maximum supported subcarrier spacing, and Nfmay represent the maximum supported discrete Fourier transform (DFT) size. Time intervals of a communications resource may be organized according to radio frames each having a specified duration (e.g., 10 milliseconds (ms)). Each radio frame may be identified by a system frame number (SFN) (e.g., ranging from 0 to 1023). Each frame may include multiple consecutively numbered subframes or slots, and each subframe or slot may have the same duration. In some examples, a frame may be divided (e.g., in the time domain) into subframes, and each subframe may be further divided into a number of slots. Alternatively, each frame may include a variable number of slots, and the number of slots may depend on subcarrier spacing. Each slot may include a number of symbol periods (e.g., depending on the length of the cyclic prefix prepended to each symbol period). In some wireless communications systems100, a slot may further be divided into multiple mini-slots containing one or more symbols. Excluding the cyclic prefix, each symbol period may contain one or more (e.g., Nf) sampling periods. The duration of a symbol period may depend on the subcarrier spacing or frequency band of operation. A subframe, a slot, a mini-slot, or a symbol may be the smallest scheduling unit (e.g., in the time domain) of the wireless communications system100and may be referred to as a transmission time interval (TTI). In some examples, the TTI duration (e.g., the number of symbol periods in a TTI) may be variable. Additionally or alternatively, the smallest scheduling unit of the wireless communications system100may be dynamically selected (e.g., in bursts of shortened TTIs (sTTIs)). Physical channels may be multiplexed on a carrier according to various techniques. A physical control channel and a physical data channel may be multiplexed on a downlink carrier, for example, using one or more of time division multiplexing (TDM) techniques, frequency division multiplexing (FDM) techniques, or hybrid TDM-FDM techniques. 
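As a worked illustration of the timing relationships described above, the basic time unit Ts=1/(Δfmax·Nf) and some slot-level quantities can be computed directly. The numerical values below (a 480 kHz maximum subcarrier spacing, a 4096-point maximum DFT size, and a 30 kHz subcarrier spacing with 14 symbols per 0.5 ms slot) are assumptions chosen for the example rather than values stated in this description.

# Basic time unit T_s = 1 / (delta_f_max * N_f), using assumed example values.
delta_f_max = 480e3  # assumed maximum supported subcarrier spacing, in Hz
n_f = 4096           # assumed maximum supported DFT size

t_s = 1.0 / (delta_f_max * n_f)
print(f"basic time unit: {t_s * 1e9:.3f} ns")  # about 0.509 ns

# Slot bookkeeping for an assumed 30 kHz subcarrier spacing:
scs_hz = 30e3
symbol_duration_s = 1.0 / scs_hz   # OFDM symbol duration, excluding the cyclic prefix
slot_duration_s = 0.5e-3           # 14-symbol slot at 30 kHz subcarrier spacing
frame_duration_s = 10e-3           # 10 ms radio frame

print(f"symbol (no cyclic prefix): {symbol_duration_s * 1e6:.2f} us")          # ~33.33 us
print(f"slots per radio frame: {int(round(frame_duration_s / slot_duration_s))}")  # 20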
A control region (e.g., a control resource set (CORESET)) for a physical control channel may be defined by a number of symbol periods and may extend across the system bandwidth or a subset of the system bandwidth of the carrier. One or more control regions (e.g., CORESETs) may be configured for a set of the UEs115. For example, one or more of the UEs115may monitor or search control regions for control information according to one or more search space sets, and each search space set may include one or multiple control channel candidates in one or more aggregation levels arranged in a cascaded manner. An aggregation level for a control channel candidate may refer to a number of control channel resources (e.g., control channel elements (CCEs)) associated with encoded information for a control information format having a given payload size. Search space sets may include common search space sets configured for sending control information to multiple UEs115and UE-specific search space sets for sending control information to a specific UE115. Each base station105may provide communication coverage via one or more cells, for example a macro cell, a small cell, a hot spot, or other types of cells, or any combination thereof. The term “cell” may refer to a logical communication entity used for communication with a base station105(e.g., over a carrier) and may be associated with an identifier for distinguishing neighboring cells (e.g., a physical cell identifier (PCID), a virtual cell identifier (VCID), or others). In some examples, a cell may also refer to a geographic coverage area110or a portion of a geographic coverage area110(e.g., a sector) over which the logical communication entity operates. Such cells may range from smaller areas (e.g., a structure, a subset of structure) to larger areas depending on various factors such as the capabilities of the base station105. For example, a cell may be or include a building, a subset of a building, or exterior spaces between or overlapping with geographic coverage areas110, among other examples. A macro cell generally covers a relatively large geographic area (e.g., several kilometers in radius) and may allow unrestricted access by the UEs115with service subscriptions with the network provider supporting the macro cell. A small cell may be associated with a lower-powered base station105, as compared with a macro cell, and a small cell may operate in the same or different (e.g., licensed, unlicensed) frequency bands as macro cells. Small cells may provide unrestricted access to the UEs115with service subscriptions with the network provider or may provide restricted access to the UEs115having an association with the small cell (e.g., the UEs115in a closed subscriber group (CSG), the UEs115associated with users in a home or office). A base station105may support one or multiple cells and may also support communications over the one or more cells using one or multiple component carriers. In some examples, a carrier may support multiple cells, and different cells may be configured according to different protocol types (e.g., MTC, narrowband IoT (NB-IoT), enhanced mobile broadband (eMBB)) that may provide access for different types of devices. In some examples, a base station105may be movable and therefore provide communication coverage for a moving geographic coverage area110. 
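To make the notion of control channel candidates and aggregation levels more concrete, the short sketch below enumerates candidate CCE blocks of a search space set within a control region of a given size. The enumeration used here (consecutive, non-overlapping blocks starting from CCE 0) is a deliberate simplification for illustration; it is not the candidate-mapping procedure of any specification.

def enumerate_candidates(num_cces_in_coreset, candidates_per_level):
    """List candidate CCE index blocks per aggregation level (simplified illustration).
    candidates_per_level maps an aggregation level (CCEs per candidate) to how many
    candidates are monitored at that level."""
    search_space = {}
    for level, num_candidates in candidates_per_level.items():
        candidates = []
        for m in range(num_candidates):
            start = m * level
            if start + level <= num_cces_in_coreset:
                candidates.append(list(range(start, start + level)))
        search_space[level] = candidates
    return search_space

# A control region spanning 16 CCEs, monitoring a few candidates per aggregation level:
print(enumerate_candidates(16, {1: 4, 2: 2, 4: 2, 8: 1}))
# {1: [[0], [1], [2], [3]], 2: [[0, 1], [2, 3]], 4: [[0, 1, 2, 3], [4, 5, 6, 7]], 8: [[0, ..., 7]]}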
In some examples, different geographic coverage areas110associated with different technologies may overlap, but the different geographic coverage areas110may be supported by the same base station105. In other examples, the overlapping geographic coverage areas110associated with different technologies may be supported by different base stations105. The wireless communications system100may include, for example, a heterogeneous network in which different types of the base stations105provide coverage for various geographic coverage areas110using the same or different radio access technologies. Some UEs115, such as MTC or IoT devices, may be low cost or low complexity devices and may provide for automated communication between machines (e.g., via Machine-to-Machine (M2M) communication). M2M communication or MTC may refer to data communication technologies that allow devices to communicate with one another or a base station105without human intervention. In some examples, M2M communication or MTC may include communications from devices that integrate sensors or meters to measure or capture information and relay such information to a central server or application program that makes use of the information or presents the information to humans interacting with the application program. Some UEs115may be designed to collect information or enable automated behavior of machines or other devices. Examples of applications for MTC devices include smart metering, inventory monitoring, water level monitoring, equipment monitoring, healthcare monitoring, wildlife monitoring, weather and geological event monitoring, fleet management and tracking, remote security sensing, physical access control, and transaction-based business charging. Some UEs115may be configured to employ operating modes that reduce power consumption, such as half-duplex communications (e.g., a mode that supports one-way communication via transmission or reception, but not transmission and reception simultaneously). In some examples, half-duplex communications may be performed at a reduced peak rate. Other power conservation techniques for the UEs115include entering a power saving deep sleep mode when not engaging in active communications, operating over a limited bandwidth (e.g., according to narrowband communications), or a combination of these techniques. For example, some UEs115may be configured for operation using a narrowband protocol type that is associated with a defined portion or range (e.g., set of subcarriers or resource blocks (RBs)) within a carrier, within a guard-band of a carrier, or outside of a carrier. The wireless communications system100may be configured to support ultra-reliable communications or low-latency communications, or various combinations thereof. For example, the wireless communications system100may be configured to support ultra-reliable low-latency communications (URLLC) or mission critical communications. The UEs115may be designed to support ultra-reliable, low-latency, or critical functions (e.g., mission critical functions). Ultra-reliable communications may include private communication or group communication and may be supported by one or more mission critical services such as mission critical push-to-talk (MCPTT), mission critical video (MCVideo), or mission critical data (MCData). Support for mission critical functions may include prioritization of services, and mission critical services may be used for public safety or general commercial applications. 
The terms ultra-reliable, low-latency, mission critical, and ultra-reliable low-latency may be used interchangeably herein. In some examples, a UE115may also be able to communicate directly with other UEs115over a device-to-device (D2D) communication link135(e.g., using a peer-to-peer (P2P) or D2D protocol). One or more UEs115utilizing D2D communications may be within the geographic coverage area110of a base station105. Other UEs115in such a group may be outside the geographic coverage area110of a base station105or be otherwise unable to receive transmissions from a base station105. In some examples, groups of the UEs115communicating via D2D communications may utilize a one-to-many (1:M) system in which each UE115transmits to every other UE115in the group. In some examples, a base station105facilitates the scheduling of resources for D2D communications. In other cases, D2D communications are carried out between the UEs115without the involvement of a base station105. In some systems, the D2D communication link135may be an example of a communication channel, such as a sidelink communication channel, between vehicles (e.g., UEs115). In some examples, vehicles may communicate using vehicle-to-everything (V2X) communications, vehicle-to-vehicle (V2V) communications, or some combination of these. A vehicle may signal information related to traffic conditions, signal scheduling, weather, safety, emergencies, or any other information relevant to a V2X system. In some examples, vehicles in a V2X system may communicate with roadside infrastructure, such as roadside units, or with the network via one or more network nodes (e.g., base stations105) using vehicle-to-network (V2N) communications, or with both. The core network130may provide user authentication, access authorization, tracking, Internet Protocol (IP) connectivity, and other access, routing, or mobility functions. The core network130may be an evolved packet core (EPC) or 5G core (5GC), which may include at least one control plane entity that manages access and mobility (e.g., a mobility management entity (MME), an access and mobility management function (AMF)) and at least one user plane entity that routes packets or interconnects to external networks (e.g., a serving gateway (S-GW), a Packet Data Network (PDN) gateway (P-GW), or a user plane function (UPF)). The control plane entity may manage non-access stratum (NAS) functions such as mobility, authentication, and bearer management for the UEs115served by the base stations105associated with the core network130. User IP packets may be transferred through the user plane entity, which may provide IP address allocation as well as other functions. The user plane entity may be connected to IP services150for one or more network operators. The IP services150may include access to the Internet, Intranet(s), an IP Multimedia Subsystem (IMS), or a Packet-Switched Streaming Service. Some of the network devices, such as a base station105, may include subcomponents such as an access network entity140, which may be an example of an access node controller (ANC). Each access network entity140may communicate with the UEs115through one or more other access network transmission entities145, which may be referred to as radio heads, smart radio heads, or transmission/reception points (TRPs). Each access network transmission entity145may include one or more antenna panels. 
In some configurations, various functions of each access network entity140or base station105may be distributed across various network devices (e.g., radio heads and ANCs) or consolidated into a single network device (e.g., a base station105). The wireless communications system100may operate using one or more frequency bands, typically in the range of 300 megahertz (MHz) to 300 gigahertz (GHz). Generally, the region from 300 MHz to 3 GHz is known as the ultra-high frequency (UHF) region or decimeter band because the wavelengths range from approximately one decimeter to one meter in length. The UHF waves may be blocked or redirected by buildings and environmental features, but the waves may penetrate structures sufficiently for a macro cell to provide service to the UEs115located indoors. The transmission of UHF waves may be associated with smaller antennas and shorter ranges (e.g., less than 100 kilometers) compared to transmission using the smaller frequencies and longer waves of the high frequency (HF) or very high frequency (VHF) portion of the spectrum below 300 MHz. The wireless communications system100may also operate in a super high frequency (SHF) region using frequency bands from 3 GHz to 30 GHz, also known as the centimeter band, or in an extremely high frequency (EHF) region of the spectrum (e.g., from 30 GHz to 300 GHz), also known as the millimeter band. In some examples, the wireless communications system100may support millimeter wave (mmW) communications between the UEs115and the base stations105, and EHF antennas of the respective devices may be smaller and more closely spaced than UHF antennas. In some examples, this may facilitate use of antenna arrays within a device. The propagation of EHF transmissions, however, may be subject to even greater atmospheric attenuation and shorter range than SHF or UHF transmissions. The techniques disclosed herein may be employed across transmissions that use one or more different frequency regions, and designated use of bands across these frequency regions may differ by country or regulating body. The wireless communications system100may utilize both licensed and unlicensed radio frequency spectrum bands. For example, the wireless communications system100may employ License Assisted Access (LAA), LTE-Unlicensed (LTE-U) radio access technology, or NR technology in an unlicensed band such as the 5 GHz industrial, scientific, and medical (ISM) band. When operating in unlicensed radio frequency spectrum bands, devices such as the base stations105and the UEs115may employ carrier sensing for collision detection and avoidance. In some examples, operations in unlicensed bands may be based on a carrier aggregation configuration in conjunction with component carriers operating in a licensed band (e.g., LAA). Operations in unlicensed spectrum may include downlink transmissions, uplink transmissions, P2P transmissions, or D2D transmissions, among other examples. A base station105or a UE115may be equipped with multiple antennas, which may be used to employ techniques such as transmit diversity, receive diversity, multiple-input multiple-output (MIMO) communications, or beamforming. The antennas of a base station105or a UE115may be located within one or more antenna arrays or antenna panels, which may support MIMO operations or transmit or receive beamforming. For example, one or more base station antennas or antenna arrays may be co-located at an antenna assembly, such as an antenna tower. 
In some examples, antennas or antenna arrays associated with a base station105may be located in diverse geographic locations. A base station105may have an antenna array with a number of rows and columns of antenna ports that the base station105may use to support beamforming of communications with a UE115. Likewise, a UE115may have one or more antenna arrays that may support various MIMO or beamforming operations. Additionally or alternatively, an antenna panel may support radio frequency beamforming for a signal transmitted via an antenna port. The base stations105or the UEs115may use MIMO communications to exploit multipath signal propagation and increase the spectral efficiency by transmitting or receiving multiple signals via different spatial layers. Such techniques may be referred to as spatial multiplexing. The multiple signals may, for example, be transmitted by the transmitting device via different antennas or different combinations of antennas. Likewise, the multiple signals may be received by the receiving device via different antennas or different combinations of antennas. Each of the multiple signals may be referred to as a separate spatial stream and may carry bits associated with the same data stream (e.g., the same codeword) or different data streams (e.g., different codewords). Different spatial layers may be associated with different antenna ports used for channel measurement and reporting. MIMO techniques include single-user MIMO (SU-MIMO), where multiple spatial layers are transmitted to the same receiving device, and multiple-user MIMO (MU-MIMO), where multiple spatial layers are transmitted to multiple devices. Beamforming, which may also be referred to as spatial filtering, directional transmission, or directional reception, is a signal processing technique that may be used at a transmitting device or a receiving device (e.g., a base station105, a UE115) to shape or steer an antenna beam (e.g., a transmit beam, a receive beam) along a spatial path between the transmitting device and the receiving device. Beamforming may be achieved by combining the signals communicated via antenna elements of an antenna array such that some signals propagating at particular orientations with respect to an antenna array experience constructive interference while others experience destructive interference. The adjustment of signals communicated via the antenna elements may include a transmitting device or a receiving device applying amplitude offsets, phase offsets, or both to signals carried via the antenna elements associated with the device. The adjustments associated with each of the antenna elements may be defined by a beamforming weight set associated with a particular orientation (e.g., with respect to the antenna array of the transmitting device or receiving device, or with respect to some other orientation). A base station105or a UE115may use beam sweeping techniques as part of beam forming operations. For example, a base station105may use multiple antennas or antenna arrays (e.g., antenna panels) to conduct beamforming operations for directional communications with a UE115. Some signals (e.g., synchronization signals, reference signals, beam selection signals, or other control signals) may be transmitted by a base station105multiple times in different directions. For example, the base station105may transmit a signal according to different beamforming weight sets associated with different directions of transmission. 
Transmissions in different beam directions may be used to identify (e.g., by a transmitting device, such as a base station105, or by a receiving device, such as a UE115) a beam direction for later transmission or reception by the base station105. Some signals, such as data signals associated with a particular receiving device, may be transmitted by a base station105in a single beam direction (e.g., a direction associated with the receiving device, such as a UE115). In some examples, the beam direction associated with transmissions along a single beam direction may be determined based on a signal that was transmitted in one or more beam directions. For example, a UE115may receive one or more of the signals transmitted by the base station105in different directions and may report to the base station105an indication of the signal that the UE115received with a highest signal quality or an otherwise acceptable signal quality. In some examples, transmissions by a device (e.g., by a base station105or a UE115) may be performed using multiple beam directions, and the device may use a combination of digital precoding or radio frequency beamforming to generate a combined beam for transmission (e.g., from a base station105to a UE115). The UE115may report feedback that indicates precoding weights for one or more beam directions, and the feedback may correspond to a configured number of beams across a system bandwidth or one or more sub-bands. The base station105may transmit a reference signal (e.g., a cell-specific reference signal (CRS), a channel state information reference signal (CSI-RS)), which may be precoded or unprecoded. The UE115may provide feedback for beam selection, which may be a precoding matrix indicator (PMI) or codebook-based feedback (e.g., a multi-panel type codebook, a linear combination type codebook, a port selection type codebook). Although these techniques are described with reference to signals transmitted in one or more directions by a base station105, a UE115may employ similar techniques for transmitting signals multiple times in different directions (e.g., for identifying a beam direction for subsequent transmission or reception by the UE115) or for transmitting a signal in a single direction (e.g., for transmitting data to a receiving device). A receiving device (e.g., a UE115) may try multiple receive configurations (e.g., directional listening) when receiving various signals from the base station105, such as synchronization signals, reference signals, beam selection signals, or other control signals. For example, a receiving device may try multiple receive directions by receiving via different antenna subarrays, by processing received signals according to different antenna subarrays, by receiving according to different receive beamforming weight sets (e.g., different directional listening weight sets) applied to signals received at multiple antenna elements of an antenna array, or by processing received signals according to different receive beamforming weight sets applied to signals received at multiple antenna elements of an antenna array, any of which may be referred to as “listening” according to different receive configurations or receive directions. In some examples, a receiving device may use a single receive configuration to receive along a single beam direction (e.g., when receiving a data signal). 
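The beamforming weight sets discussed in the preceding paragraphs can be illustrated with a textbook uniform-linear-array computation, in which a per-element phase offset steers the main lobe toward a chosen direction. The sketch below (half-wavelength element spacing, phase-only weights, an eight-element array) is a generic illustration and is not intended to describe the beamforming actually used by any base station or UE.

import numpy as np

def steering_weights(num_elements, steer_angle_deg, spacing_wavelengths=0.5):
    """Phase-only weight set for a uniform linear array. Each element's phase is the
    conjugate of the expected arrival phase, so a signal from steer_angle_deg adds coherently."""
    n = np.arange(num_elements)
    phase = 2j * np.pi * spacing_wavelengths * n * np.sin(np.radians(steer_angle_deg))
    return np.exp(phase) / np.sqrt(num_elements)

def array_gain_db(weights, arrival_angle_deg, spacing_wavelengths=0.5):
    """Array response, in dB, for a plane wave arriving from the given angle."""
    n = np.arange(len(weights))
    arrival = np.exp(2j * np.pi * spacing_wavelengths * n * np.sin(np.radians(arrival_angle_deg)))
    return 20 * np.log10(np.abs(np.vdot(weights, arrival)) + 1e-12)

w = steering_weights(num_elements=8, steer_angle_deg=30.0)
print(round(array_gain_db(w, 30.0), 1))   # main lobe: about 9.0 dB (10*log10(8))
print(round(array_gain_db(w, -20.0), 1))  # away from the steered direction: much lower gain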
The single receive configuration may be aligned in a beam direction determined based on listening according to different receive configuration directions (e.g., a beam direction determined to have a highest signal strength, highest SNR, or otherwise acceptable signal quality based on listening according to multiple beam directions). The wireless communications system100may be a packet-based network that operates according to a layered protocol stack. In the user plane, communications at the bearer or Packet Data Convergence Protocol (PDCP) layer may be IP-based. A Radio Link Control (RLC) layer may perform packet segmentation and reassembly to communicate over logical channels. A Medium Access Control (MAC) layer may perform priority handling and multiplexing of logical channels into transport channels. The MAC layer may also use error detection techniques, error correction techniques, or both to support retransmissions at the MAC layer to improve link efficiency. In the control plane, the Radio Resource Control (RRC) protocol layer may provide establishment, configuration, and maintenance of an RRC connection between a UE115and a base station105or a core network130supporting radio bearers for user plane data. At the physical layer, transport channels may be mapped to physical channels. The UEs115and the base stations105may support retransmissions of data to increase the likelihood that data is received successfully. Hybrid automatic repeat request (HARQ) feedback is one technique for increasing the likelihood that data is received correctly over a communication link125. HARQ may include a combination of error detection (e.g., using a cyclic redundancy check (CRC)), forward error correction (FEC), and retransmission (e.g., automatic repeat request (ARQ)). HARQ may improve throughput at the MAC layer in poor radio conditions (e.g., low signal-to-noise conditions). In some examples, a device may support same-slot HARQ feedback, where the device may provide HARQ feedback in a specific slot for data received in a previous symbol in the slot. In other cases, the device may provide HARQ feedback in a subsequent slot, or according to some other time interval. Some wireless communication systems100, such as 5G NR systems, may support using signal repeating devices (e.g., repeaters) to extend coverage of wireless communications services. For example, a base station105may transmit or receive signaling via a repeater, which may enable the base station105to share information with a UE115operating outside of a coverage area supported by the base station105or may enable a signal strength of a channel to improve by avoiding sources of interference. The repeater (e.g., a relay node) may use an amplify-and-forward operation between two wireless nodes (e.g., the base station105and the UE115), which may be a simple and cost-effective way to improve network coverage in the wireless communications system100. In some cases, the relay node may be a decode-and-forward relay node such as an IAB node. The performance of a repeater may be improved with the addition of side information, which may include timing information (e.g., a slot, symbol, subframe, or frame boundary), a TDD pattern of the channel (e.g., whether a resource is a downlink resource, an uplink resource, or a flexible resource), ON-OFF scheduling, spatial information for beam management, or a combination thereof. In some cases, the wireless communications system100may include various types of repeaters. 
For example, a traditional repeater may be used without side information for amplify-and-forward operations. An autonomous smart repeater may, by itself, acquire or infer at least part of information (e.g., the side information) about the channel it may use. The autonomous smart repeater may acquire the information by receiving and decoding broadcast channels. A network-controlled repeater may be configured (e.g., controlled) with side information by a network node (e.g., the base station105) via an established control interface. In some cases, for a network-controlled repeater, the side information may be provided (e.g., controlled) by the base station105. In some cases, part of the side information may be configured (e.g., controlled) by the base station, while the remaining side information may be acquired or inferred by the network-controlled repeater, which may reduce control overhead, latency, or both. FIG.2illustrates an example of a wireless communications system200that supports TDD pattern detection for repeaters in accordance with aspects of the present disclosure. In some examples, the wireless communications system200may implement aspects of the wireless communications system100or may be implemented by aspects of the wireless communications system100. For example, the wireless communications system200may include a UE115-aand a base station105-a, which may be examples of corresponding devices described herein with reference toFIG.1. The wireless communications system200may include features for improved communications between the UEs115, among other benefits. In some cases, a repeater205may be used in the wireless communications system200to extend coverage and enable communications between a first wireless node and a second wireless node (e.g., the base station105-aand the UE115-a) for an access network, or between a first UE and a second UE for sidelink communications. In some cases, a network control node may be one of the first wireless node and the second wireless node (e.g., the base station105-afor an access network) or another node different from the first wireless node and the second wireless node (e.g., the base station105-a) for sidelink communications. In some cases, the base station105-amay use a repeater205(e.g., a network-controlled repeater) to extend coverage and enable communications with the UE115-a. For example, the UE115-amay operate outside of a coverage area of the base station105-a, or there may be an interference215which may block transmissions between the UE115-aand the base station105-a. The interference215may be a building or other physical blockage, or may be a distance (e.g., if the distance between the UE115-aand the base station105-ais too far, then the repeater205may be used). The base station105-amay transmit signaling to the repeater205via a communications link210-a, and the repeater205may retransmit the signaling to the UE115-avia a communications link210-b. In some cases, the performance of the repeater205may be impacted by TDD patterns of the signals being repeated by the repeater205. For example, the repeater205may be limited by a maximum amplification gain (e.g., Gmax) that may be applied to signals received by the repeater due to a stability concern. 
For example, if the gain of the repeater is too high, the signal transmitted by the repeater may be detected by the receivers of the repeater, thereby creating a feedback loop that distorts and interferes with the signal transmitted by the repeater (e.g., similar to feedback interference that may occur when a microphone comes too close to a speaker associated with the microphone). For instance, the value of Gmax that may be applied to the signals may depend on whether the repeater205is aware of the TDD configuration or pattern of a time resource. In some examples, without information regarding the TDD pattern of a channel, the repeater205may keep two radio frequency chains on (e.g., a downlink radio frequency chain and an uplink radio frequency chain). In some cases, the coupling between the two active radio frequency chains (e.g., coupling between beams pointing in a similar direction) may impact the stability of the repeater205and lead to a relatively low value for Gmax (e.g., 50 dB) to ensure stability. If the repeater205has information regarding whether the communications are uplink communications or downlink communications at a time resource, the repeater205may adjust parameters of one radio frequency chain for the active direction and adjust parameters of the other radio frequency chain (e.g., set a low gain value for the other radio frequency chain) or deactivate the other radio frequency chain. As such, the coupling between the two radio frequency chains may be reduced and the active radio frequency chain may work at a higher Gmax (e.g., 100 dB). In 5G NR, a time resource may be indicated as a downlink resource, an uplink resource, or a flexible resource. A flexible resource (e.g., a resource that could be assigned for downlink communications or uplink communications) may be overridden (e.g., converted) into a downlink resource or an uplink resource at a later time by another signaling message. In some cases, a TDD pattern (or TDD configuration, as it may be called in some instances) may be conveyed in different signaling and with different levels of specificity. For example, the TDD pattern may be indicated by a common TDD configuration associated with a cell (e.g., a cell-specific TDDConfigCommon), which may be broadcast by the base station105-ain a system information block (SIB) (e.g., SIB1). In some cases, the TDDConfigCommon may indicate a slot or symbol as a downlink resource, an uplink resource, or a flexible resource. Additionally or alternatively, the TDD pattern may be indicated by a dedicated TDD configuration specific to the UE115-a(e.g., a UE-specific TDDConfigDedicated), which may be sent via an RRC reconfiguration message. In some examples, the TDD pattern may be indicated by an SFI, which may be sent via physical downlink control channel (PDCCH) control signaling such as downlink control information (DCI) (e.g., DCI format DCI2_0). In some cases, rules may be defined for overriding flexible resources. For example, the TDDConfigDedicated may override flexible resources indicated by TDDConfigCommon into downlink resources and uplink resources, and the SFI may override flexible resources indicated by TDDConfigCommon, TDDConfigDedicated, or both into downlink resources and uplink resources.
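The override rules and gain behavior described above can be summarized in a short, non-authoritative sketch. The data structures, function names, and gain values below are assumptions introduced only for illustration (the 100 dB and 50 dB figures loosely follow the example above): the sketch resolves the effective direction of a slot from a common configuration, an optional dedicated configuration, and an optional SFI, and then picks per-chain gain limits depending on whether the direction is known.

```python
from enum import Enum

class Direction(Enum):
    DL = "downlink"
    UL = "uplink"
    FLEX = "flexible"

def resolve_slot_direction(common, dedicated=None, sfi=None):
    """Resolve the effective direction of one slot.
    Later, more specific signaling may override a flexible indication:
    TDDConfigDedicated may override flexible slots from TDDConfigCommon, and
    an SFI may override flexible slots from either (sketch of the rules above)."""
    direction = common
    if direction is Direction.FLEX and dedicated is not None:
        direction = dedicated
    if direction is Direction.FLEX and sfi is not None:
        direction = sfi
    return direction

def gain_limits_db(direction):
    """Pick illustrative per-chain maximum gains. When the direction is known,
    the active chain may run at a higher Gmax (e.g., 100 dB) while the other
    chain is backed off or deactivated; when it is unknown, both chains stay
    on at a lower, stability-driven Gmax (e.g., 50 dB). Values are examples only."""
    if direction is Direction.DL:
        return {"dl_chain": 100.0, "ul_chain": 0.0}
    if direction is Direction.UL:
        return {"dl_chain": 0.0, "ul_chain": 100.0}
    return {"dl_chain": 50.0, "ul_chain": 50.0}

# Example: a flexible slot from TDDConfigCommon later scheduled as downlink by an SFI.
slot = resolve_slot_direction(Direction.FLEX, dedicated=None, sfi=Direction.DL)
limits = gain_limits_db(slot)  # {'dl_chain': 100.0, 'ul_chain': 0.0}
```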
Depending on the implementation, the repeater205may have information regarding whether the communications are downlink communications or are uplink communications for a first subset of time resources (e.g., a resource subset235-a), but may lack information regarding whether the communications are downlink or uplink for a second subset of time resources (e.g., a resource subset235-b). For example, the side information for the repeater205may indicate that some communication resources are to be used for downlink or uplink for a subset of resources (e.g., downlink resources240and uplink resources245in the resource subset235-a) and may indicate the remaining resources as flexible resources (e.g., flexible resources250in the resource subset235-b). In some cases, the downlink resources240and the uplink resources245in the resource subset235-amay be TDMed as shown, or FDMed. Flexible resources may be resources that may be used for uplink or downlink and may be scheduled at a later time. The repeater205may lack information on whether the flexible resources250may be overridden to the downlink resources240or the uplink resources245. In some cases, an autonomous smart repeater may be able to decode the broadcast SIB1 message to receive the common TDD configuration associated with a cell (e.g., a cell-specific TDDConfigCommon), and may be unaware of the dedicated TDD configuration specific to the UE115-a(e.g., a UE-specific TDDConfigDedicated) or the SFI or both. As such, for the flexible resources indicated by TDDConfigCommon, the autonomous smart repeater may lack the techniques to determine whether the resources may be downlink resources or uplink resources because those flexible resources may be scheduled by the dedicated TDD configuration or by the SFI or both. In some cases, a network-controlled repeater (e.g., the repeater205) may be provided (e.g., by the base station105-a) with semi-static TDD information (e.g., TDDConfigCommon and TDDConfigDedicated), but may lack dynamic SFI. As such, for the flexible resources250indicated by TDDConfigDedicated, the repeater205may lack the techniques to determine whether the flexible resources250may be downlink resources240or uplink resources245. In some cases, using a power detection algorithm adopted by the repeater205, the repeater205may be able to detect a subset of resources as downlink or uplink with high confidence, but may treat remaining resources as flexible resources250that may be configured for uplink communications or downlink communications. The repeater205may operate differently between the first subset of time resources (e.g., the resource subset235-athat may be configured for uplink communications or downlink communications by the repeater205) and the second subset of time resources (e.g., the repeater205may not know whether the resource subset235-bis configured for uplink communications or downlink communications), which may lead to different end-to-end channel states (e.g., signal-to-interference and noise ratio (SINR) values) between the two subsets of time resources. In some examples, the repeater205may apply different Gmax constraints on its various radio frequency chains at a time resource depending on whether or not the repeater205knows that the resources are configured for uplink communications or downlink communications. In some cases, the repeater205may have one or more radio frequency chains for receiving, transmission, uplink, downlink, or any combination thereof. 
For example, the repeater205may adopt three options for its amplification constraints (e.g., Gmax1=70 dB, Gmax2=50 dB, Gmax3=30 dB). At the first subset of resources configured for uplink communications or downlink communications, the repeater205may apply Gmax1=70 dB on the radio frequency chain for the active uplink communications or downlink communications and Gmax3=30 dB on the radio frequency chain for the inactive uplink communications or downlink communications. At the second subset of resources for which the repeater205may not know whether the second set of resources is configured for uplink communications or downlink communications, the repeater205may apply Gmax2=50 dB on one or more radio frequency chains. In some cases, a network control node (e.g., the base station105-a) may be aware of the presence of the repeater205and the capability of the repeater205on TDD pattern detection of a channel. For example, the repeater205may transmit its capability220to detect a TDD pattern via the communications link210-a. The base station105-amay configure different sets of channel measurements for the UE115-abased on the capability, which may include transmitting one or more parameters associated with the UE115-aperforming channel measurements and reference signals for the channel measurements. In some cases, the base station105-amay also receive an indication of a configuration of the repeater205, which may be used to transmit the parameters to the UE115-a. The base station105-amay be aware of the configuration of the repeater205and the capability220of the repeater205via an operation administration and maintenance (OAM) configuration or by a signaling report from the repeater205. In some cases, the base station105-amay schedule communication resources230for communications with the UE115-avia the repeater205. Upon retransmitting information (e.g., the parameters and reference signals) from the base station105-ato the UE115-ausing the communication resources230, the repeater205may determine whether the information includes downlink information or uplink information, and may use that determination to detect TDD patterns. The base station105-amay transmit a TDD configuration225to the repeater205via the communications link210-abased on the capability220of the repeater205. For example, the base station105-amay configure a first set of channel measurements on resources in the resource subset235-aconfigured for uplink communications or downlink communications at the repeater205, and may configure a second set of channel measurements on resources in the resource subset235-bwhich the repeater205may not know are configured for uplink communications or downlink communications at the repeater205. As such, the base station105-amay transmit the TDD configuration225based on its knowledge of the capability220of the repeater205on TDD pattern detection, or based on a feedback message from the repeater205(e.g., if such a control interface exists). In some examples, if the repeater205may detect TDD patterns via acquiring TDDConfigComm, then the base station105-amay configure the first set of channel measurements on the resource subset235-a, which may include downlink resources240and uplink resources245indicated by TDDConfigComm, and may configure the second set of channel measurements on the resource subset235-b, which may include flexible resources250indicated by TDDConfigComm. 
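As a loose sketch of the measurement-set split just described (the names and structures are assumptions introduced for illustration, not the disclosed signaling), the following partitions whatever portion of the TDD pattern is available to the repeater into a first resource subset, on which a first set of channel measurements would be configured, and a second subset of flexible resources, on which a second set would be configured. The same partitioning applies whether the repeater's knowledge comes from TDDConfigCommon alone or from additional signaling.

```python
def partition_measurement_resources(tdd_pattern):
    """Split slot indices into two subsets based on what the repeater knows.

    tdd_pattern: mapping of slot index -> 'DL', 'UL', or 'FLEX', representing
    the portion of the TDD configuration available to the repeater (e.g.,
    TDDConfigCommon for an autonomous smart repeater).
    Returns (first_subset, second_subset): the first subset holds slots whose
    direction the repeater knows (candidate resources for the first set of
    channel measurements); the second holds flexible slots (candidates for the
    second set of channel measurements)."""
    first_subset = [slot for slot, d in tdd_pattern.items() if d in ("DL", "UL")]
    second_subset = [slot for slot, d in tdd_pattern.items() if d == "FLEX"]
    return first_subset, second_subset

# Example: a ten-slot pattern where slots 4 through 6 remain flexible.
pattern = {0: "DL", 1: "DL", 2: "DL", 3: "UL", 4: "FLEX",
           5: "FLEX", 6: "FLEX", 7: "UL", 8: "DL", 9: "UL"}
known, flexible = partition_measurement_resources(pattern)
# known -> [0, 1, 2, 3, 7, 8, 9]; flexible -> [4, 5, 6]
```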
In some cases, if the repeater205has knowledge of both TDDConfigComm and TDDConfigDedicated via a network-provided control interface, then the base station105-amay configure the first set of channel measurements on the downlink resources240and the uplink resources245indicated by TDDConfigDedicated, and may configure the second set of channel measurements on the flexible resources250indicated by TDDConfigDedicated. In some cases, the repeater205may report the resource subsets235and whether or not the resources are configured for uplink communications or downlink communications, and the base station105-amay configure channel measurements for the UE115-abased on the report. In some cases, a 5G NR CSI framework may support multiple channel measurements via a proper configuration of CSI-RS resource sets. The base station105-amay make different scheduling decisions, configurations, or both on different sets of time resources (e.g., the resource subset235-aand the resource subset235-b) for communication between the base station105-aand the UE115-abased on whether the repeater knows the TDD pattern for the communications. Additionally or alternatively, the base station105-amay make different scheduling decisions, configurations, or both on different sets of time resources (e.g., the resource subset235-aand the resource subset235-b) for communication between the base station105-aand the UE115-abased on the measurement results on the resource subsets235by the UE115-a. The scheduling decisions, configurations, or both may be based on the configuration of the repeater205and the capability220of the repeater on TDD pattern detection. The capability of the repeater205may refer to an indication of what types of configurations the repeater205is capable of using for its operation. The configuration of the repeater205may refer to an indication of which of those configurations the repeater205is currently using. The base station105-amay schedule different configurations of reference signals depending on whether the repeater205has knowledge of the TDD pattern through signaling, whether the repeater205has knowledge of the TDD pattern through detecting signals, or whether the repeater205does not have knowledge of the TDD pattern, or a combination thereof. The base station105-amay make scheduling decisions for communications between the base station105-aand the UE115-aincluding a modulation and coding scheme (MCS), a rank, a quantity of beams, or a combination thereof. In some examples, the base station105-amay configure the scheduled node (e.g., the base station105-a, the UE115-a, or both) with power control parameters, a resource configuration, or a combination thereof. The base station105-amay also transmit a configuration to the repeater205(e.g., a network-controlled repeater) including a power configuration associated with the channel, additional TDD parameters associated with the channel, or both. FIG.3illustrates an example of a process flow300that supports TDD pattern detection for repeaters in accordance with aspects of the present disclosure. The process flow300may implement aspects of wireless communications systems100and200, or may be implemented by aspects of the wireless communications systems100and200. For example, the process flow300may illustrate operations between a base station105-b, a repeater305, and a UE115-b, which may be examples of corresponding devices described with reference toFIGS.1and2.
In the following description of the process flow300, the operations between the base station105-b, the repeater305, and the UE115-bmay be transmitted in a different order than the example order shown, or the operations performed by the base station105-b, the repeater305, and the UE115-bmay be performed in different orders or at different times. Some operations may also be omitted from the process flow300, and other operations may be added to the process flow300. At310, the base station105-b(e.g., a network control node) may receive a first indication of a capability of the repeater305to detect a TDD pattern of a channel that communicates information between two nodes (e.g., the base station105-band the UE115-b) using the repeater305. Examples of different capabilities the repeater305may use to detect the TDD pattern may include the capability to decode SFI indications (e.g., in DCI), the capability to decode TDDConfigDedicated indications (e.g., in RRC reconfiguration messages), the capability to decode TDDConfigComm indications (e.g., in SIB), the capability to detect channel conditions as the UE115-band base station105-bcommunicate via the repeater305, or any combination thereof. In some cases, the first indication may include a first capability that the repeater305is capable of decoding a SIB, a second capability that the repeater305is capable of decoding an RRC reconfiguration message, a third capability that the repeater305is capable of decoding DCI, a fourth capability that the repeater305is capable of detecting whether a communication resource is used to communicate uplink information or downlink information, or a combination thereof. In some cases, the base station105-bmay configure different sets of channel measurements for the UE115-bbased on the capability of the repeater305. At315, the base station105-bmay receive a second indication of a configuration of the repeater305. For example, the repeater305may communicate to the base station105-bwhether the repeater305is operating in a mode that allows the decoding of some types of messages (e.g., decoding of SIBs, RRC messages, or DCI). The configuration of the repeater305may include other parameters being used by the repeater305, such as gain for radio frequency chains and other components. In some cases, the base station105-bmay make scheduling decisions, configurations, or both on multiple sets of time resources between the base station105-band the UE115-bbased on the capability and the configuration of the repeater305. At320, the base station105-bmay transmit a second indication of a TDD configuration of the channel. In some cases, the second indication may include an information element communicated in a SIB, which includes a common TDD configuration associated with a cell of the wireless network (e.g., a cell-specific TDDConfigCommon), an information element communicated in an RRC reconfiguration message, which includes a dedicated TDD configuration that is specific to the UE115-b(e.g., a UE-specific TDDConfigDedicated), or an SFI communicated in DCI (e.g., DCI format DCI2_0). For example, as described with reference toFIG.2, the base station105-bmay configure a first set of channel measurements on resources configured for uplink communications or downlink communications at the repeater305, and a second set of channel measurements on resources which the repeater305may not know are configured for uplink communications or downlink communications at the repeater305.
In some cases, the indication of the TDD configuration320may be broadcast to other devices generally, may be transmitted to the repeater305or may be transmitted to the UE115-b, or any combination thereof. At325, the base station105-bmay transmit, to the UE115-band via the repeater305, one or more parameters associated with the UE115-bperforming one or more channel measurements based on the capability of the repeater to detect the TDD pattern. In some cases, the base station105-bmay transmit the parameters based on the configuration of the repeater305. The parameters may comprise a power control parameter, a resource configuration, or a combination thereof. In some cases, the base station105-bmay transmit a third indication to perform a first set of channel measurements on a first set of resources indicated by the TDD pattern as uplink or downlink, and a fourth indication to perform a second set of channel measurements on a second set of resources indicated by the TDD pattern as flexibly configurable as either uplink or downlink. At330, the base station105-bmay transmit, to the UE115-bvia the repeater305, reference signals for the one or more channel measurements based on transmitting the one or more parameters. At335, the base station105-bmay schedule communication resources for communication with the UE115-bvia the repeater305based on the capability of the repeater to detect the TDD pattern of the channel. The base station may schedule the communication resources based on receiving the first indication. At340, the base station105-bmay transmit, to the UE115-band via the repeater305, a message that schedules the communication resources. As described with reference toFIG.2, the communication resources may include downlink resources, uplink resources, flexible resources, or a combination thereof. FIG.4illustrates an example of a process flow400that supports TDD pattern detection for repeaters in accordance with aspects of the present disclosure. The process flow400may implement aspects of wireless communications systems100and200, or may be implemented by aspects of the wireless communications system100and200. For example, the process flow400may illustrate operations between a base station105-c, a repeater405, and a UE115-cwhich may be examples of corresponding devices described with reference toFIGS.1and2. In the following description of the process flow400, the operations between the base station105-c, the repeater405, and the UE115-cmay be transmitted in a different order than the example order shown, or the operations performed by the base station105-c, the repeater405, and the UE115-cmay be performed in different orders or at different times. Some operations may also be omitted from the process flow400, and other operations may be added to the process flow400. At410, the base station105-c(e.g., a network control node) may receive a first indication of a capability of the repeater405(e.g., a network-controlled repeater) to detect a TDD pattern of a channel that communicates information between two nodes (e.g., the base station105-cand the UE115-c) using the repeater405. 
Examples of different capabilities the repeater405may use to detect the TDD pattern may include the capability to decode SFI indications (e.g., in DCI), the capability to decode TDDConfigDedicated indications (e.g., in RRC reconfiguration messages), the capability to decode TDDConfigComm indications (e.g., in SIB), the capability to detect channel conditions as the UE115-cand base station105-ccommunicate via the repeater405, or any combination thereof. In some cases, the first indication may include a first capability that the repeater405is capable of decoding a SIB, a second capability that the repeater405is capable of decoding an RRC reconfiguration message, a third capability that the repeater405is capable of decoding DCI, a fourth capability that the repeater405is capable of detecting whether a communication resource is used to communicate uplink information or downlink information, or a combination thereof. In some cases, the base station105-cmay configure different sets of channel measurements for the UE115-cbased on the capability of the repeater405. At415, the base station105-cmay receive a second indication of a configuration of the repeater405. For example, the repeater405may communicate to the base station105-cwhether the repeater405is operating in a mode that allows the decoding of some types of messages (e.g., decoding of SIBs, RRC messages, or DCI). The configuration of the repeater405may include other parameters being used by the repeater405, such as gain for radio frequency chains and other components. In some cases, the base station105-cmay make scheduling decisions, configurations, or both on multiple sets of time resources between the base station105-cand the UE115-cbased on the capability and the configuration of the repeater405. At420, the base station105-cmay transmit a second indication of a TDD configuration of the channel. In some cases, the second indication may include an information element communicated in a SIB, which includes a common TDD configuration associated with a cell of the wireless network (e.g., a cell-specific TDDConfigCommon), an information element communicated in an RRC reconfiguration message, which includes a dedicated TDD configuration that is specific to the UE115-c(e.g., a UE-specific TDDConfigDedicated), or an SFI communicated in DCI (e.g., DCI format DCI2_0). For example, as described with reference toFIG.2, the base station105-cmay configure a first set of channel measurements on resources configured for uplink communications or downlink communications at the repeater405, and a second set of channel measurements on resources which the repeater405may not know are configured for uplink communications or downlink communications at the repeater405. In some cases, the indication of the TDD configuration420may be broadcast to other devices generally, may be transmitted to the repeater405, or may be transmitted to the UE115-c, or any combination thereof. At425, the repeater405may detect one or more conditions of information communicated between the two nodes (e.g., the base station105-cand the UE115-c) via the repeater405. At430, the repeater405may determine whether the information includes downlink information or uplink information based on detecting the one or more conditions.
For example, as described with reference toFIG.2, the repeater405may know that a set of resources for a first set of channel measurements by the UE115-care configured for uplink communications or downlink communications, but may not know that a set of resources for a second set of channel measurements are configured for uplink communications or downlink communications. In some cases, the repeater405may know downlink and uplink information based on a TDDConfigComm, a TDDConfigDedicated, or an SFI. At435, the repeater405may adjust one or more radio frequency components of the repeater405based on the second indication of the TDD pattern. In some cases, the repeater405may adjust the one or more radio frequency resources based on determining whether the information includes downlink information or uplink information. At440, the repeater405may receive from the base station105-ca message that schedules communication resources for communication between the base station105-cand the UE115-cbased on the capability of the repeater. The repeater405may retransmit the message that schedules the communication resources to the UE115-cbased on receiving the message. In some cases, the scheduled resources may be downlink resources, uplink resources, flexible resources, or a combination thereof. FIG.5shows a block diagram500of a device505that supports TDD pattern detection for repeaters in accordance with aspects of the present disclosure. The device505may be an example of aspects of a network control node as described herein. The device505may include a receiver510, a transmitter515, and a communications manager520. The device505may also include a processor. Each of these components may be in communication with one another (e.g., via one or more buses). The receiver510may provide a means for receiving information such as packets, user data, control information, or any combination thereof associated with various information channels (e.g., control channels, data channels, information channels related to TDD pattern detection for repeaters). Information may be passed on to other components of the device505. The receiver510may utilize a single antenna or a set of multiple antennas. The transmitter515may provide a means for transmitting signals generated by other components of the device505. For example, the transmitter515may transmit information such as packets, user data, control information, or any combination thereof associated with various information channels (e.g., control channels, data channels, information channels related to TDD pattern detection for repeaters). In some examples, the transmitter515may be co-located with a receiver510in a transceiver module. The transmitter515may utilize a single antenna or a set of multiple antennas. The communications manager520, the receiver510, the transmitter515, or various combinations thereof or various components thereof may be examples of means for performing various aspects of TDD pattern detection for repeaters as described herein. For example, the communications manager520, the receiver510, the transmitter515, or various combinations or components thereof may support a method for performing one or more of the functions described herein. In some examples, the communications manager520, the receiver510, the transmitter515, or various combinations or components thereof may be implemented in hardware (e.g., in communications management circuitry). 
The hardware may include a processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic, discrete hardware components, or any combination thereof configured as or otherwise supporting a means for performing the functions described in the present disclosure. In some examples, a processor and memory coupled with the processor may be configured to perform one or more of the functions described herein (e.g., by executing, by the processor, instructions stored in the memory). Additionally or alternatively, in some examples, the communications manager520, the receiver510, the transmitter515, or various combinations or components thereof may be implemented in code (e.g., as communications management software or firmware) executed by a processor. If implemented in code executed by a processor, the functions of the communications manager520, the receiver510, the transmitter515, or various combinations or components thereof may be performed by a general-purpose processor, a DSP, a central processing unit (CPU), an ASIC, an FPGA, or any combination of these or other programmable logic devices (e.g., configured as or otherwise supporting a means for performing the functions described in the present disclosure). In some examples, the communications manager520may be configured to perform various operations (e.g., receiving, monitoring, transmitting) using or otherwise in cooperation with the receiver510, the transmitter515, or both. For example, the communications manager520may receive information from the receiver510, send information to the transmitter515, or be integrated in combination with the receiver510, the transmitter515, or both to receive information, transmit information, or perform various other operations as described herein. The communications manager520may support wireless communication at a network control node in accordance with examples as disclosed herein. For example, the communications manager520may be configured as or otherwise support a means for receiving a first indication of a capability of a repeater to detect a TDD pattern of a channel that communicates information between two nodes using the repeater. The communications manager520may be configured as or otherwise support a means for transmitting, to a UE and via the repeater, one or more parameters associated with the UE performing one or more channel measurements based on the capability of the repeater to detect the TDD pattern. The communications manager520may be configured as or otherwise support a means for transmitting, to the UE via the repeater, reference signals for the one or more channel measurements based on transmitting the one or more parameters. By including or configuring the communications manager520in accordance with examples as described herein, the device505(e.g., a processor controlling or otherwise coupled to the receiver510, the transmitter515, the communications manager520, or a combination thereof) may support techniques for TDD pattern detection for repeaters, which may increase coverage and reduce signaling overhead. Further, in some examples, the repeater capability to detect TDD patterns as described herein may support a higher amplification gain at the repeater, which may improve the overall quality of communications between wireless nodes, thereby improving latency and reliability for an improved user experience. 
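The first indication of repeater capability handled by the communications manager520could, purely as a hypothetical illustration (the field names and the bit layout are assumptions introduced here, not a defined signaling format), be represented as a small set of flags covering the capabilities described above: decoding a SIB, decoding an RRC reconfiguration message, decoding DCI, and detecting the direction of a communication resource.

```python
from dataclasses import dataclass

@dataclass
class RepeaterTddCapability:
    """Hypothetical container for a repeater's TDD pattern detection
    capability report (illustrative only; not a standardized information
    element)."""
    can_decode_sib: bool = False             # e.g., read TDDConfigCommon from SIB1
    can_decode_rrc_reconfig: bool = False    # e.g., read a UE-specific TDDConfigDedicated
    can_decode_dci: bool = False             # e.g., read a dynamic SFI
    can_detect_link_direction: bool = False  # e.g., infer DL/UL from measured conditions

    def to_bitmask(self) -> int:
        """Pack the four flags into a compact integer for reporting."""
        flags = (self.can_decode_sib, self.can_decode_rrc_reconfig,
                 self.can_decode_dci, self.can_detect_link_direction)
        return sum(1 << i for i, flag in enumerate(flags) if flag)

# Example: an autonomous smart repeater that reads broadcast SIBs and also
# performs power-based direction detection, but has no control interface.
report = RepeaterTddCapability(can_decode_sib=True, can_detect_link_direction=True)
mask = report.to_bitmask()  # 0b1001 == 9
```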
FIG.6shows a block diagram600of a device605that supports TDD pattern detection for repeaters in accordance with aspects of the present disclosure. The device605may be an example of aspects of a device505or a network control node as described herein. The device605may include a receiver610, a transmitter615, and a communications manager620. The device605may also include a processor. Each of these components may be in communication with one another (e.g., via one or more buses). The receiver610may provide a means for receiving information such as packets, user data, control information, or any combination thereof associated with various information channels (e.g., control channels, data channels, information channels related to TDD pattern detection for repeaters). Information may be passed on to other components of the device605. The receiver610may utilize a single antenna or a set of multiple antennas. The transmitter615may provide a means for transmitting signals generated by other components of the device605. For example, the transmitter615may transmit information such as packets, user data, control information, or any combination thereof associated with various information channels (e.g., control channels, data channels, information channels related to TDD pattern detection for repeaters). In some examples, the transmitter615may be co-located with a receiver610in a transceiver module. The transmitter615may utilize a single antenna or a set of multiple antennas. The device605, or various components thereof, may be an example of means for performing various aspects of TDD pattern detection for repeaters as described herein. For example, the communications manager620may include a capability reception component625, a parameter transmission component630, a reference signal transmission component635, or any combination thereof. The communications manager620may be an example of aspects of a communications manager520as described herein. In some examples, the communications manager620, or various components thereof, may be configured to perform various operations (e.g., receiving, monitoring, transmitting) using or otherwise in cooperation with the receiver610, the transmitter615, or both. For example, the communications manager620may receive information from the receiver610, send information to the transmitter615, or be integrated in combination with the receiver610, the transmitter615, or both to receive information, transmit information, or perform various other operations as described herein. The communications manager620may support wireless communication at a network control node in accordance with examples as disclosed herein. The capability reception component625may be configured as or otherwise support a means for receiving a first indication of a capability of a repeater to detect a TDD pattern of a channel that communicates information between two nodes using the repeater. The parameter transmission component630may be configured as or otherwise support a means for transmitting, to a UE and via the repeater, one or more parameters associated with the UE performing one or more channel measurements based on the capability of the repeater to detect the TDD pattern. The reference signal transmission component635may be configured as or otherwise support a means for transmitting, to the UE via the repeater, reference signals for the one or more channel measurements based on transmitting the one or more parameters. 
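The three components just described can be tied together in a brief, non-authoritative sketch of the network-control-node sequence. The callables and message formats below are placeholders introduced only for illustration; they stand in for the capability reception, parameter transmission, and reference signal transmission means and are not a defined interface.

```python
def network_node_procedure(receive_from_repeater, send_via_repeater, build_measurement_config):
    """Sketch of the sequence carried out by the capability reception,
    parameter transmission, and reference signal transmission components
    (placeholder callables; illustrative only)."""
    # 1. Receive the first indication of the repeater's TDD pattern detection capability.
    capability = receive_from_repeater(expected="capability")

    # 2. Derive and transmit, via the repeater, the parameters the UE will use
    #    for its channel measurements (e.g., split across the resource subsets
    #    the repeater does and does not resolve as downlink or uplink).
    parameters = build_measurement_config(capability)
    send_via_repeater({"type": "measurement_parameters", "body": parameters})

    # 3. Transmit, via the repeater, the reference signals for those measurements.
    for resource in parameters.get("measurement_resources", []):
        send_via_repeater({"type": "reference_signal", "resource": resource})
```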
FIG.7shows a block diagram700of a communications manager720that supports TDD pattern detection for repeaters in accordance with aspects of the present disclosure. The communications manager720may be an example of aspects of a communications manager520, a communications manager620, or both, as described herein. The communications manager720, or various components thereof, may be an example of means for performing various aspects of TDD pattern detection for repeaters as described herein. For example, the communications manager720may include a capability reception component725, a parameter transmission component730, a reference signal transmission component735, a communication resource scheduling component740, a configuration reception component745, a TDD pattern transmission component750, a feedback reception component755, a communication parameter transmission component760, a power configuration transmission component765, or any combination thereof. Each of these components may communicate, directly or indirectly, with one another (e.g., via one or more buses). The communications manager720may support wireless communication at a network control node in accordance with examples as disclosed herein. The capability reception component725may be configured as or otherwise support a means for receiving a first indication of a capability of a repeater to detect a TDD pattern of a channel that communicates information between two nodes using the repeater. The parameter transmission component730may be configured as or otherwise support a means for transmitting, to a UE and via the repeater, one or more parameters associated with the UE performing one or more channel measurements based on the capability of the repeater to detect the TDD pattern. The reference signal transmission component735may be configured as or otherwise support a means for transmitting, to the UE via the repeater, reference signals for the one or more channel measurements based on transmitting the one or more parameters. In some examples, the communication resource scheduling component740may be configured as or otherwise support a means for scheduling communication resources for communication with the UE via the repeater based on the capability of the repeater to detect the TDD pattern of the channel based on receiving the first indication. In some examples, the communication resource scheduling component740may be configured as or otherwise support a means for transmitting, to the UE via the repeater, a message that schedules the communication resources. In some examples, the configuration reception component745may be configured as or otherwise support a means for receiving a second indication of a configuration of the repeater, where transmitting the one or more parameters is based on the configuration of the repeater, where the one or more parameters associated with the UE performing the one or more channel measurements are based on the configuration of the repeater and the capability of the repeater. In some examples, the TDD pattern transmission component750may be configured as or otherwise support a means for transmitting a second indication of TDD pattern of the channel, where transmitting the one or more parameters is based on transmitting the second indication. In some examples, the second indication includes an information element communicated in a SIB, the information element including a common TDD configuration associated with a cell of a wireless network. 
In some examples, the second indication includes an information element communicated in an RRC reconfiguration message, the information element including a dedicated TDD configuration that is specific to the UE. In some examples, the second indication includes an SFI communicated in DCI, a specific slot format in the SFI associated with the UE in a serving cell. In some examples, to support transmitting the one or more parameters associated with the UE performing the one or more channel measurements, the parameter transmission component730may be configured as or otherwise support a means for transmitting, based on the second indication, a third indication to perform a first set of channel measurements on a first set of resources indicated by the TDD pattern as uplink or downlink and a fourth indication to perform a second set of channel measurements on a second set of resources indicated by the TDD pattern as flexibly configurable as either uplink or downlink. In some examples, the feedback reception component755may be configured as or otherwise support a means for receiving, from the repeater, a feedback message indicating one or more conditions associated with the repeater. In some examples, the parameter transmission component730may be configured as or otherwise support a means for transmitting, to the UE and via the repeater, the one or more parameters associated with the UE performing the one or more channel measurements based on receiving the feedback message. In some examples, the one or more parameters includes a power control parameter, a resource configuration, or a combination thereof. In some examples, the first indication includes a first capability that the repeater is capable of decoding system block information, a second capability that the repeater is capable of decoding an RRC reconfiguration message, a third capability that the repeater is capable of decoding DCI, a fourth capability that the repeater is capable of detecting whether a communication resource is used to communicate uplink information or downlink information, or a combination thereof. In some examples, the repeater is a traditional repeater configured to receive and amplify signals independent of control information about the TDD pattern of the channel, an autonomous repeater configured to identify information about the TDD pattern of the channel based on channel conditions at the repeater, or a network-controlled repeater configured to receive the control information about the TDD pattern of the channel. In some examples, the communication parameter transmission component760may be configured as or otherwise support a means for transmitting a second indication of communication parameters for communications between the UE and the network control node based on the capability of the repeater, the communication parameters including an MCS, a rank, a quantity of beams, or a combination thereof. In some examples, the power configuration transmission component765may be configured as or otherwise support a means for transmitting, to the repeater, a power configuration associated with the channel, the TDD pattern associated with the channel, or a combination thereof based on the capability of the repeater. In some examples, the network control node includes a base station, a UE, or a combination thereof. FIG.8shows a diagram of a system800including a device805that supports TDD pattern detection for repeaters in accordance with aspects of the present disclosure. 
The device805may be an example of or include the components of a device505, a device605, or a network control node as described herein. The device805may include components for bi-directional voice and data communications including components for transmitting and receiving communications, such as a communications manager820, an input/output (I/O) controller810, a transceiver815, an antenna825, a memory830, code835, and a processor840. These components may be in electronic communication or otherwise coupled (e.g., operatively, communicatively, functionally, electronically, electrically) via one or more buses (e.g., a bus845). The I/O controller810may manage input and output signals for the device805. The I/O controller810may also manage peripherals not integrated into the device805. In some cases, the I/O controller810may represent a physical connection or port to an external peripheral. In some cases, the I/O controller810may utilize an operating system such as iOS®, ANDROID®, MS-DOS®, MS-WINDOWS®, OS/2®, UNIX®, LINUX®, or another known operating system. Additionally or alternatively, the I/O controller810may represent or interact with a modem, a keyboard, a mouse, a touchscreen, or a similar device. In some cases, the I/O controller810may be implemented as part of a processor, such as the processor840. In some cases, a user may interact with the device805via the I/O controller810or via hardware components controlled by the I/O controller810. In some cases, the device805may include a single antenna825. However, in some other cases, the device805may have more than one antenna825, which may be capable of concurrently transmitting or receiving multiple wireless transmissions. The transceiver815may communicate bi-directionally, via the one or more antennas825, wired, or wireless links as described herein. For example, the transceiver815may represent a wireless transceiver and may communicate bi-directionally with another wireless transceiver. The transceiver815may also include a modem to modulate the packets, to provide the modulated packets to one or more antennas825for transmission, and to demodulate packets received from the one or more antennas825. The transceiver815, or the transceiver815and one or more antennas825, may be an example of a transmitter515, a transmitter615, a receiver510, a receiver610, or any combination thereof or component thereof, as described herein. The memory830may include random-access memory (RAM) and read-only memory (ROM). The memory830may store computer-readable, computer-executable code835including instructions that, when executed by the processor840, cause the device805to perform various functions described herein. The code835may be stored in a non-transitory computer-readable medium such as system memory or another type of memory. In some cases, the code835may not be directly executable by the processor840but may cause a computer (e.g., when compiled and executed) to perform functions described herein. In some cases, the memory830may contain, among other things, a basic input/output system (BIOS) which may control basic hardware or software operation such as the interaction with peripheral components or devices. The processor840may include an intelligent hardware device (e.g., a general-purpose processor, a DSP, a CPU, a microcontroller, an ASIC, an FPGA, a programmable logic device, a discrete gate or transistor logic component, a discrete hardware component, or any combination thereof). 
In some cases, the processor840may be configured to operate a memory array using a memory controller. In some other cases, a memory controller may be integrated into the processor840. The processor840may be configured to execute computer-readable instructions stored in a memory (e.g., the memory830) to cause the device805to perform various functions (e.g., functions or tasks supporting TDD pattern detection for repeaters). For example, the device805or a component of the device805may include a processor840and memory830coupled to the processor840, the processor840and memory830configured to perform various functions described herein. The communications manager820may support wireless communication at a network control node in accordance with examples as disclosed herein. For example, the communications manager820may be configured as or otherwise support a means for receiving a first indication of a capability of a repeater to detect a TDD pattern of a channel that communicates information between two nodes using the repeater. The communications manager820may be configured as or otherwise support a means for transmitting, to a UE and via the repeater, one or more parameters associated with the UE performing one or more channel measurements based on the capability of the repeater to detect the TDD pattern. The communications manager820may be configured as or otherwise support a means for transmitting, to the UE via the repeater, reference signals for the one or more channel measurements based on transmitting the one or more parameters. By including or configuring the communications manager820in accordance with examples as described herein, the device805may support techniques for TDD pattern detection for repeaters, which may increase coverage and reduce signaling overhead. Further, in some examples, the repeater capability to detect TDD patterns as described herein may support a higher amplification gain at the repeater, which may improve the overall quality of communications between wireless nodes, thereby improving latency and reliability for an improved user experience. In some examples, the communications manager820may be configured to perform various operations (e.g., receiving, monitoring, transmitting) using or otherwise in cooperation with the transceiver815, the one or more antennas825, or any combination thereof. Although the communications manager820is illustrated as a separate component, in some examples, one or more functions described with reference to the communications manager820may be supported by or performed by the processor840, the memory830, the code835, or any combination thereof. For example, the code835may include instructions executable by the processor840to cause the device805to perform various aspects of TDD pattern detection for repeaters as described herein, or the processor840and the memory830may be otherwise configured to perform or support such operations. FIG.9shows a block diagram900of a device905that supports TDD pattern detection for repeaters in accordance with aspects of the present disclosure. The device905may be an example of aspects of a repeater as described herein. The device905may include a receiver910, a transmitter915, and a communications manager920. The device905may also include a processor. Each of these components may be in communication with one another (e.g., via one or more buses). 
The receiver910may provide a means for receiving information such as packets, user data, control information, or any combination thereof associated with various information channels (e.g., control channels, data channels, information channels related to TDD pattern detection for repeaters). Information may be passed on to other components of the device905. The receiver910may utilize a single antenna or a set of multiple antennas. The transmitter915may provide a means for transmitting signals generated by other components of the device905. For example, the transmitter915may transmit information such as packets, user data, control information, or any combination thereof associated with various information channels (e.g., control channels, data channels, information channels related to TDD pattern detection for repeaters). In some examples, the transmitter915may be co-located with a receiver910in a transceiver module. The transmitter915may utilize a single antenna or a set of multiple antennas. The communications manager920, the receiver910, the transmitter915, or various combinations thereof or various components thereof may be examples of means for performing various aspects of TDD pattern detection for repeaters as described herein. For example, the communications manager920, the receiver910, the transmitter915, or various combinations or components thereof may support a method for performing one or more of the functions described herein. In some examples, the communications manager920, the receiver910, the transmitter915, or various combinations or components thereof may be implemented in hardware (e.g., in communications management circuitry). The hardware may include a processor, a DSP, an ASIC, an FPGA or other programmable logic device, a discrete gate or transistor logic, discrete hardware components, or any combination thereof configured as or otherwise supporting a means for performing the functions described in the present disclosure. In some examples, a processor and memory coupled with the processor may be configured to perform one or more of the functions described herein (e.g., by executing, by the processor, instructions stored in the memory). Additionally or alternatively, in some examples, the communications manager920, the receiver910, the transmitter915, or various combinations or components thereof may be implemented in code (e.g., as communications management software or firmware) executed by a processor. If implemented in code executed by a processor, the functions of the communications manager920, the receiver910, the transmitter915, or various combinations or components thereof may be performed by a general-purpose processor, a DSP, a CPU, an ASIC, an FPGA, or any combination of these or other programmable logic devices (e.g., configured as or otherwise supporting a means for performing the functions described in the present disclosure). In some examples, the communications manager920may be configured to perform various operations (e.g., receiving, monitoring, transmitting) using or otherwise in cooperation with the receiver910, the transmitter915, or both. For example, the communications manager920may receive information from the receiver910, send information to the transmitter915, or be integrated in combination with the receiver910, the transmitter915, or both to receive information, transmit information, or perform various other operations as described herein. 
The communications manager920may support wireless communication at a repeater in accordance with examples as disclosed herein. For example, the communications manager920may be configured as or otherwise support a means for transmitting, to a network control node, a first indication of a capability of the repeater to detect a TDD pattern of a channel that communicates information between two nodes using the repeater. The communications manager920may be configured as or otherwise support a means for receiving a second indication of the TDD pattern of the channel based on transmitting the capability of the repeater. The communications manager920may be configured as or otherwise support a means for adjusting one or more radio frequency components of the repeater based on the second indication of the TDD pattern. By including or configuring the communications manager920in accordance with examples as described herein, the device905(e.g., a processor controlling or otherwise coupled to the receiver910, the transmitter915, the communications manager920, or a combination thereof) may support techniques for TDD pattern detection for repeaters, which may increase coverage and reduce signaling overhead. Further, in some examples, the repeater capability to detect TDD patterns as described herein may support a higher amplification gain at the repeater, which may improve the overall quality of communications between wireless nodes, thereby improving latency and reliability for an improved user experience. FIG.10shows a block diagram1000of a device1005that supports TDD detection for repeaters in accordance with aspects of the present disclosure. The device1005may be an example of aspects of a device905or a repeater as described herein. The device1005may include a receiver1010, a transmitter1015, and a communications manager1020. The device1005may also include a processor. Each of these components may be in communication with one another (e.g., via one or more buses). The receiver1010may provide a means for receiving information such as packets, user data, control information, or any combination thereof associated with various information channels (e.g., control channels, data channels, information channels related to TDD pattern detection for repeaters). Information may be passed on to other components of the device1005. The receiver1010may utilize a single antenna or a set of multiple antennas. The transmitter1015may provide a means for transmitting signals generated by other components of the device1005. For example, the transmitter1015may transmit information such as packets, user data, control information, or any combination thereof associated with various information channels (e.g., control channels, data channels, information channels related to TDD pattern detection for repeaters). In some examples, the transmitter1015may be co-located with a receiver1010in a transceiver module. The transmitter1015may utilize a single antenna or a set of multiple antennas. The device1005, or various components thereof, may be an example of means for performing various aspects of TDD pattern detection for repeaters as described herein. For example, the communications manager1020may include a capability transmission component1025, a TDD pattern reception component1030, a radio frequency adjusting component1035, or any combination thereof. The communications manager1020may be an example of aspects of a communications manager920as described herein. 
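The corresponding repeater-side behavior of the communications manager 920 can likewise be sketched in a non-limiting way. In the following Python fragment the capability dictionary, the single-character slot encoding ("D", "U", "F"), the power threshold, and the gain figures are all invented for illustration; they merely stand in for the first indication, the second indication of the TDD pattern, and the adjustment of radio frequency components described above.

from enum import Enum
from typing import List, Optional


class SlotDirection(Enum):
    DOWNLINK = "D"
    UPLINK = "U"
    FLEXIBLE = "F"


class RepeaterController:
    """Sketch of the repeater side: report a capability, receive a TDD
    pattern, and steer the radio frequency chain slot by slot."""

    def __init__(self) -> None:
        self.tdd_pattern: Optional[List[SlotDirection]] = None

    def build_capability_indication(self) -> dict:
        # First indication transmitted to the network control node.
        return {"decode_sib": True, "decode_dci": True, "detect_direction": True}

    def on_tdd_pattern_indication(self, pattern: str) -> None:
        # Second indication, encoded here as one character per slot,
        # e.g. "DDDDDDDFUU" for a ten-slot pattern.
        self.tdd_pattern = [SlotDirection(ch) for ch in pattern]

    def adjust_rf(self, slot_index: int, measured_dl_power_dbm: float = -90.0) -> str:
        # Choose the forwarding direction for the current slot and apply a
        # direction-specific gain (numbers are illustrative only).
        direction = SlotDirection.FLEXIBLE
        if self.tdd_pattern:
            direction = self.tdd_pattern[slot_index % len(self.tdd_pattern)]
        if direction is SlotDirection.FLEXIBLE:
            # Fall back to a simple condition observed at the repeater, here a
            # received-power heuristic, to decide downlink versus uplink.
            direction = (SlotDirection.DOWNLINK
                         if measured_dl_power_dbm > -95.0
                         else SlotDirection.UPLINK)
        gain_db = 30.0 if direction is SlotDirection.DOWNLINK else 25.0
        return "slot %d: forward %s at %.0f dB" % (slot_index, direction.name, gain_db)


if __name__ == "__main__":
    repeater = RepeaterController()
    print(repeater.build_capability_indication())
    repeater.on_tdd_pattern_indication("DDDDDDDFUU")
    for slot in range(4):
        print(repeater.adjust_rf(slot))

The fallback branch for flexible slots reflects the optional capability of detecting, from conditions of the information passing through the repeater, whether a given resource carries downlink or uplink information.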
In some examples, the communications manager1020, or various components thereof, may be configured to perform various operations (e.g., receiving, monitoring, transmitting) using or otherwise in cooperation with the receiver1010, the transmitter1015, or both. For example, the communications manager1020may receive information from the receiver1010, send information to the transmitter1015, or be integrated in combination with the receiver1010, the transmitter1015, or both to receive information, transmit information, or perform various other operations as described herein. The communications manager1020may support wireless communication at a repeater in accordance with examples as disclosed herein. The capability transmission component1025may be configured as or otherwise support a means for transmitting, to a network control node, a first indication of a capability of the repeater to detect a TDD pattern of a channel that communicates information between two nodes using the repeater. The TDD pattern reception component1030may be configured as or otherwise support a means for receiving a second indication of the TDD pattern of the channel based on transmitting the capability of the repeater. The radio frequency adjusting component1035may be configured as or otherwise support a means for adjusting one or more radio frequency components of the repeater based on the second indication of the TDD pattern. FIG.11shows a block diagram1100of a communications manager1120that supports TDD pattern detection for repeaters in accordance with aspects of the present disclosure. The communications manager1120may be an example of aspects of a communications manager920, a communications manager1020, or both, as described herein. The communications manager1120, or various components thereof, may be an example of means for performing various aspects of TDD pattern detection for repeaters as described herein. For example, the communications manager1120may include a capability transmission component1125, a TDD pattern reception component1130, a radio frequency adjusting component1135, an information detection component1140, a communication resource component1145, a configuration transmission component1150, a communication parameter reception component1155, a power configuration reception component1160, or any combination thereof. Each of these components may communicate, directly or indirectly, with one another (e.g., via one or more buses). The communications manager1120may support wireless communication at a repeater in accordance with examples as disclosed herein. The capability transmission component1125may be configured as or otherwise support a means for transmitting, to a network control node, a first indication of a capability of the repeater to detect a TDD pattern of a channel that communicates information between two nodes using the repeater. The TDD pattern reception component1130may be configured as or otherwise support a means for receiving a second indication of the TDD pattern of the channel based on transmitting the capability of the repeater. The radio frequency adjusting component1135may be configured as or otherwise support a means for adjusting one or more radio frequency components of the repeater based on the second indication of the TDD pattern. In some examples, the information detection component1140may be configured as or otherwise support a means for detecting one or more conditions of information communicated between the two nodes via the repeater. 
In some examples, the information detection component1140may be configured as or otherwise support a means for determining whether the information includes downlink information or uplink information based on detecting the one or more conditions, where adjusting the one or more radio frequency components is based on the determination. In some examples, the communication resource component1145may be configured as or otherwise support a means for receiving, from the network control node, a message that schedules communication resources for communication between the network control node and a UE based on the capability of the repeater. In some examples, the communication resource component1145may be configured as or otherwise support a means for retransmitting the message that schedules the communication resources to the UE based on receiving the message. In some examples, the second indication includes an information element communicated in a SIB, the information element including a common TDD configuration associated with a cell of a wireless network. In some examples, the second indication includes an information element communicated in an RRC reconfiguration message, the information element including a dedicated TDD configuration that is specific to the UE. In some examples, the second indication includes an SFI communicated in DCI, a specific slot format in the SFI associated with the UE in a serving cell. In some examples, the configuration transmission component1150may be configured as or otherwise support a means for transmitting a third indication of a configuration of the repeater, where receiving the second indication is based on the configuration of the repeater. In some examples, the first indication includes a first capability that the repeater is capable of decoding system block information, a second capability that the repeater is capable of decoding an RRC reconfiguration message, a third capability that the repeater is capable of decoding DCI, a fourth capability that the repeater is capable of detecting whether a communication resource is used to communicate uplink information or downlink information, or a combination thereof. In some examples, the repeater includes a network-controlled repeater configured to receive control information about the TDD pattern of the channel. In some examples, the communication parameter reception component1155may be configured as or otherwise support a means for receiving a third indication of communication parameters for communications between the UE and the network control node based on the capability of the repeater, the communication parameters including an MCS, a rank, a quantity of beams, or a combination thereof. In some examples, the power configuration reception component1160may be configured as or otherwise support a means for receiving, from the network control node, a power configuration associated with the channel, the TDD pattern associated with the channel, or a combination thereof based on the capability of the repeater to detect the TDD pattern. FIG.12shows a diagram of a system1200including a device1205that supports TDD pattern detection for repeaters in accordance with aspects of the present disclosure. The device1205may be an example of or include the components of a device905, a device1005, or a repeater as described herein. 
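Before turning to the device 1205 of FIG. 12, the three carriers of the second indication noted above (a common TDD configuration in a SIB, a dedicated configuration in an RRC reconfiguration message, and an SFI in DCI) can be illustrated with a simplified, non-limiting sketch. The field names, the slot-format table, and the single-character encoding are placeholders; the actual information elements and slot-format tables defined in the applicable specifications are considerably more detailed.

from typing import Dict

# Simplified stand-ins for the three carriers of the second indication. The
# slot-format table entries and field names below are illustrative only.
SFI_TABLE: Dict[int, str] = {
    0: "DDDDDDDDDD",   # all downlink
    1: "UUUUUUUUUU",   # all uplink
    28: "DDDDDDDDFU",  # mostly downlink with one flexible and one uplink slot
}


def pattern_from_sib(ie: Dict) -> str:
    # Common, cell-wide TDD configuration carried in a SIB,
    # e.g. {"dl_slots": 7, "ul_slots": 2, "period": 10}.
    flexible = ie["period"] - ie["dl_slots"] - ie["ul_slots"]
    return "D" * ie["dl_slots"] + "F" * flexible + "U" * ie["ul_slots"]


def pattern_from_rrc(ie: Dict) -> str:
    # Dedicated, UE-specific configuration that overrides flexible slots,
    # e.g. {"base": "DDDDDDDFFU", "overrides": {7: "D", 8: "U"}}.
    slots = list(ie["base"])
    for index, direction in ie["overrides"].items():
        slots[index] = direction
    return "".join(slots)


def pattern_from_sfi(sfi_index: int) -> str:
    # Slot format indicator signalled in DCI for the serving cell.
    return SFI_TABLE[sfi_index]


if __name__ == "__main__":
    print(pattern_from_sib({"dl_slots": 7, "ul_slots": 2, "period": 10}))
    print(pattern_from_rrc({"base": "DDDDDDDFFU", "overrides": {7: "D", 8: "U"}}))
    print(pattern_from_sfi(28))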
The device1205may include components for bi-directional voice and data communications including components for transmitting and receiving communications, such as a communications manager1220, a network communications manager1210, a transceiver1215, an antenna1225, a memory1230, code1235, a processor1240, and an inter-station communications manager1245. These components may be in electronic communication or otherwise coupled (e.g., operatively, communicatively, functionally, electronically, electrically) via one or more buses (e.g., a bus1250). The network communications manager1210may manage communications with a core network130(e.g., via one or more wired backhaul links). For example, the network communications manager1210may manage the transfer of data communications for client devices, such as one or more UEs115. In some cases, the device1205may include a single antenna1225. However, in some other cases the device1205may have more than one antenna1225, which may be capable of concurrently transmitting or receiving multiple wireless transmissions. The transceiver1215may communicate bi-directionally, via the one or more antennas1225, wired, or wireless links as described herein. For example, the transceiver1215may represent a wireless transceiver and may communicate bi-directionally with another wireless transceiver. The transceiver1215may also include a modem to modulate the packets, to provide the modulated packets to one or more antennas1225for transmission, and to demodulate packets received from the one or more antennas1225. The transceiver1215, or the transceiver1215and one or more antennas1225, may be an example of a transmitter915, a transmitter1015, a receiver910, a receiver1010, or any combination thereof or component thereof, as described herein. The memory1230may include RAM and ROM. The memory1230may store computer-readable, computer-executable code1235including instructions that, when executed by the processor1240, cause the device1205to perform various functions described herein. The code1235may be stored in a non-transitory computer-readable medium such as system memory or another type of memory. In some cases, the code1235may not be directly executable by the processor1240but may cause a computer (e.g., when compiled and executed) to perform functions described herein. In some cases, the memory1230may contain, among other things, a BIOS which may control basic hardware or software operation such as the interaction with peripheral components or devices. The processor1240may include an intelligent hardware device (e.g., a general-purpose processor, a DSP, a CPU, a microcontroller, an ASIC, an FPGA, a programmable logic device, a discrete gate or transistor logic component, a discrete hardware component, or any combination thereof). In some cases, the processor1240may be configured to operate a memory array using a memory controller. In some other cases, a memory controller may be integrated into the processor1240. The processor1240may be configured to execute computer-readable instructions stored in a memory (e.g., the memory1230) to cause the device1205to perform various functions (e.g., functions or tasks supporting TDD pattern detection for repeaters). For example, the device1205or a component of the device1205may include a processor1240and memory1230coupled to the processor1240, the processor1240and memory1230configured to perform various functions described herein. 
The inter-station communications manager1245may manage communications with other base stations105, and may include a controller or scheduler for controlling communications with UEs115in cooperation with other base stations105. For example, the inter-station communications manager1245may coordinate scheduling for transmissions to UEs115for various interference mitigation techniques such as beamforming or joint transmission. In some examples, the inter-station communications manager1245may provide an X2 interface within an LTE/LTE-A wireless communications network technology to provide communication between base stations105. The communications manager1220may support wireless communication at a repeater in accordance with examples as disclosed herein. For example, the communications manager1220may be configured as or otherwise support a means for transmitting, to a network control node, a first indication of a capability of the repeater to detect a TDD pattern of a channel that communicates information between two nodes using the repeater. The communications manager1220may be configured as or otherwise support a means for receiving a second indication of the TDD pattern of the channel based on transmitting the capability of the repeater. The communications manager1220may be configured as or otherwise support a means for adjusting one or more radio frequency components of the repeater based on the second indication of the TDD pattern. By including or configuring the communications manager1220in accordance with examples as described herein, the device1205may support techniques for TDD pattern detection for repeaters, which may increase coverage and reduce signaling overhead. Further, in some examples, the repeater capability to detect TDD patterns as described herein may support a higher amplification gain at the repeater, which may improve the overall quality of communications between wireless nodes, thereby improving latency and reliability for an improved user experience. In some examples, the communications manager1220may be configured to perform various operations (e.g., receiving, monitoring, transmitting) using or otherwise in cooperation with the transceiver1215, the one or more antennas1225, or any combination thereof. Although the communications manager1220is illustrated as a separate component, in some examples, one or more functions described with reference to the communications manager1220may be supported by or performed by the processor1240, the memory1230, the code1235, or any combination thereof. For example, the code1235may include instructions executable by the processor1240to cause the device1205to perform various aspects of TDD pattern detection for repeaters as described herein, or the processor1240and the memory1230may be otherwise configured to perform or support such operations. FIG.13shows a flowchart illustrating a method1300that supports TDD pattern detection for repeaters in accordance with aspects of the present disclosure. The operations of the method1300may be implemented by a network control node or its components as described herein. For example, the operations of the method1300may be performed by a network control node as described with reference toFIGS.1through8. In some examples, a network control node may execute a set of instructions to control the functional elements of the network control node to perform the described functions. 
Additionally or alternatively, the network control node may perform aspects of the described functions using special-purpose hardware. At1305, the method may include receiving a first indication of a capability of a repeater to detect a TDD pattern of a channel that communicates information between two nodes using the repeater. The operations of1305may be performed in accordance with examples as disclosed herein. In some examples, aspects of the operations of1305may be performed by a capability reception component725as described with reference toFIG.7. At1310, the method may include transmitting, to a UE and via the repeater, one or more parameters associated with the UE performing one or more channel measurements based on the capability of the repeater to detect the TDD pattern. The operations of1310may be performed in accordance with examples as disclosed herein. In some examples, aspects of the operations of1310may be performed by a parameter transmission component730as described with reference toFIG.7. At1315, the method may include transmitting, to the UE via the repeater, reference signals for the one or more channel measurements based on transmitting the one or more parameters. The operations of1315may be performed in accordance with examples as disclosed herein. In some examples, aspects of the operations of1315may be performed by a reference signal transmission component735as described with reference toFIG.7. FIG.14shows a flowchart illustrating a method1400that supports TDD pattern detection for repeaters in accordance with aspects of the present disclosure. The operations of the method1400may be implemented by a network control node or its components as described herein. For example, the operations of the method1400may be performed by a network control node as described with reference toFIGS.1through8. In some examples, a network control node may execute a set of instructions to control the functional elements of the network control node to perform the described functions. Additionally or alternatively, the network control node may perform aspects of the described functions using special-purpose hardware. At1405, the method may include receiving a first indication of a capability of a repeater to detect a TDD pattern of a channel that communicates information between two nodes using the repeater. The operations of1405may be performed in accordance with examples as disclosed herein. In some examples, aspects of the operations of1405may be performed by a capability reception component725as described with reference toFIG.7. At1410, the method may include transmitting, to a UE and via the repeater, one or more parameters associated with the UE performing one or more channel measurements based on the capability of the repeater to detect the TDD pattern. The operations of1410may be performed in accordance with examples as disclosed herein. In some examples, aspects of the operations of1410may be performed by a parameter transmission component730as described with reference toFIG.7. At1415, the method may include transmitting, to the UE via the repeater, reference signals for the one or more channel measurements based on transmitting the one or more parameters. The operations of1415may be performed in accordance with examples as disclosed herein. In some examples, aspects of the operations of1415may be performed by a reference signal transmission component735as described with reference toFIG.7. 
At1420, the method may include scheduling communication resources for communication with the UE via the repeater based on the capability of the repeater to detect the TDD pattern of the channel based on receiving the first indication. The operations of1420may be performed in accordance with examples as disclosed herein. In some examples, aspects of the operations of1420may be performed by a communication resource scheduling component740as described with reference toFIG.7. At1425, the method may include transmitting, to the UE via the repeater, a message that schedules the communication resources. The operations of1425may be performed in accordance with examples as disclosed herein. In some examples, aspects of the operations of1425may be performed by a communication resource scheduling component740as described with reference toFIG.7. FIG.15shows a flowchart illustrating a method1500that supports TDD pattern detection for repeaters in accordance with aspects of the present disclosure. The operations of the method1500may be implemented by a network control node or its components as described herein. For example, the operations of the method1500may be performed by a network control node as described with reference toFIGS.1through8. In some examples, a network control node may execute a set of instructions to control the functional elements of the network control node to perform the described functions. Additionally or alternatively, the network control node may perform aspects of the described functions using special-purpose hardware. At1505, the method may include receiving a first indication of a capability of a repeater to detect a TDD pattern of a channel that communicates information between two nodes using the repeater. The operations of1505may be performed in accordance with examples as disclosed herein. In some examples, aspects of the operations of1505may be performed by a capability reception component725as described with reference toFIG.7. At1510, the method may include receiving a second indication of a configuration of the repeater, where transmitting the one or more parameters is based on the configuration of the repeater, where the one or more parameters associated with the UE performing the one or more channel measurements are based on the configuration of the repeater and the capability of the repeater. The operations of1510may be performed in accordance with examples as disclosed herein. In some examples, aspects of the operations of1510may be performed by a configuration reception component745as described with reference toFIG.7. At1515, the method may include transmitting a second indication of the TDD pattern of the channel, where transmitting the one or more parameters is based on transmitting the second indication. The operations of1515may be performed in accordance with examples as disclosed herein. In some examples, aspects of the operations of1515may be performed by a TDD pattern transmission component750as described with reference toFIG.7. At1520, the method may include transmitting, to a UE and via the repeater, one or more parameters associated with the UE performing one or more channel measurements based on the capability of the repeater to detect the TDD pattern. The operations of1520may be performed in accordance with examples as disclosed herein. In some examples, aspects of the operations of1520may be performed by a parameter transmission component730as described with reference toFIG.7. 
At1525, the method may include transmitting, to the UE via the repeater, reference signals for the one or more channel measurements based on transmitting the one or more parameters. The operations of1525may be performed in accordance with examples as disclosed herein. In some examples, aspects of the operations of1525may be performed by a reference signal transmission component735as described with reference toFIG.7. FIG.16shows a flowchart illustrating a method1600that supports TDD pattern detection for repeaters in accordance with aspects of the present disclosure. The operations of the method1600may be implemented by a repeater or its components as described herein. For example, the operations of the method1600may be performed by a repeater as described with reference toFIGS.1through4and9through12. In some examples, a repeater may execute a set of instructions to control the functional elements of the repeater to perform the described functions. Additionally or alternatively, the repeater may perform aspects of the described functions using special-purpose hardware. At1605, the method may include transmitting, to a network control node, a first indication of a capability of the repeater to detect a TDD pattern of a channel that communicates information between two nodes using the repeater. The operations of1605may be performed in accordance with examples as disclosed herein. In some examples, aspects of the operations of1605may be performed by a capability transmission component1125as described with reference toFIG.11. At1610, the method may include receiving a second indication of the TDD pattern of the channel based on transmitting the capability of the repeater. The operations of1610may be performed in accordance with examples as disclosed herein. In some examples, aspects of the operations of1610may be performed by a TDD pattern reception component1130as described with reference toFIG.11. At1615, the method may include adjusting one or more radio frequency components of the repeater based on the second indication of the TDD pattern. The operations of1615may be performed in accordance with examples as disclosed herein. In some examples, aspects of the operations of1615may be performed by a radio frequency adjusting component1135as described with reference toFIG.11. FIG.17shows a flowchart illustrating a method1700that supports TDD pattern detection for repeaters in accordance with aspects of the present disclosure. The operations of the method1700may be implemented by a repeater or its components as described herein. For example, the operations of the method1700may be performed by a repeater as described with reference toFIGS.1through4and9through12. In some examples, a repeater may execute a set of instructions to control the functional elements of the repeater to perform the described functions. Additionally or alternatively, the repeater may perform aspects of the described functions using special-purpose hardware. At1705, the method may include transmitting, to a network control node, a first indication of a capability of the repeater to detect a TDD pattern of a channel that communicates information between two nodes using the repeater. The operations of1705may be performed in accordance with examples as disclosed herein. In some examples, aspects of the operations of1705may be performed by a capability transmission component1125as described with reference toFIG.11. 
At1710, the method may include receiving a second indication of the TDD pattern of the channel based on transmitting the capability of the repeater. The operations of1710may be performed in accordance with examples as disclosed herein. In some examples, aspects of the operations of1710may be performed by a TDD pattern reception component1130as described with reference toFIG.11. At1715, the method may include detecting one or more conditions of information communicated between the two nodes via the repeater. The operations of1715may be performed in accordance with examples as disclosed herein. In some examples, aspects of the operations of1715may be performed by an information detection component1140as described with reference toFIG.11. At1720, the method may include determining whether the information includes downlink information or uplink information based on detecting the one or more conditions, where adjusting the one or more radio frequency components is based on the determination. The operations of1720may be performed in accordance with examples as disclosed herein. In some examples, aspects of the operations of1720may be performed by an information detection component1140as described with reference toFIG.11. At1725, the method may include adjusting one or more radio frequency components of the repeater based on the second indication of the TDD pattern. The operations of1725may be performed in accordance with examples as disclosed herein. In some examples, aspects of the operations of1725may be performed by a radio frequency adjusting component1135as described with reference toFIG.11. FIG.18shows a flowchart illustrating a method1800that supports TDD pattern detection for repeaters in accordance with aspects of the present disclosure. The operations of the method1800may be implemented by a repeater or its components as described herein. For example, the operations of the method1800may be performed by a repeater as described with reference toFIGS.1through4and9through12. In some examples, a repeater may execute a set of instructions to control the functional elements of the repeater to perform the described functions. Additionally or alternatively, the repeater may perform aspects of the described functions using special-purpose hardware. At1805, the method may include transmitting, to a network control node, a first indication of a capability of the repeater to detect a TDD pattern of a channel that communicates information between two nodes using the repeater. The operations of1805may be performed in accordance with examples as disclosed herein. In some examples, aspects of the operations of1805may be performed by a capability transmission component1125as described with reference toFIG.11. At1810, the method may include receiving a second indication of the TDD pattern of the channel based on transmitting the capability of the repeater. The operations of1810may be performed in accordance with examples as disclosed herein. In some examples, aspects of the operations of1810may be performed by a TDD pattern reception component1130as described with reference toFIG.11. At1815, the method may include adjusting one or more radio frequency components of the repeater based on the second indication of the TDD pattern. The operations of1815may be performed in accordance with examples as disclosed herein. In some examples, aspects of the operations of1815may be performed by a radio frequency adjusting component1135as described with reference toFIG.11. 
At1820, the method may include receiving, from the network control node, a message that schedules communication resources for communication between the network control node and a UE based on the capability of the repeater. The operations of1820may be performed in accordance with examples as disclosed herein. In some examples, aspects of the operations of1820may be performed by a communication resource component1145as described with reference toFIG.11. At1825, the method may include retransmitting the message that schedules the communication resources to the UE based on receiving the message. The operations of1825may be performed in accordance with examples as disclosed herein. In some examples, aspects of the operations of1825may be performed by a communication resource component1145as described with reference toFIG.11. The following provides an overview of aspects of the present disclosure: Aspect 1: A method for wireless communication at a network control node, comprising: receiving a first indication of a capability of a repeater to detect a TDD pattern of a channel that communicates information between two nodes using the repeater; transmitting, to a UE and via the repeater, one or more parameters associated with the UE performing one or more channel measurements based at least in part on the capability of the repeater to detect the TDD pattern; and transmitting, to the UE via the repeater, reference signals for the one or more channel measurements based at least in part on transmitting the one or more parameters. Aspect 2: The method of aspect 1, further comprising: scheduling communication resources for communication with the UE via the repeater based at least in part on the capability of the repeater to detect the TDD pattern of the channel based at least in part on receiving the first indication; and transmitting, to the UE via the repeater, a message that schedules the communication resources. Aspect 3: The method of any of aspects 1 through 2, further comprising: receiving a second indication of a configuration of the repeater, wherein transmitting the one or more parameters is based at least in part on the configuration of the repeater, wherein the one or more parameters associated with the UE performing the one or more channel measurements are based at least in part on the configuration of the repeater and the capability of the repeater. Aspect 4: The method of any of aspects 1 through 3, further comprising: transmitting a second indication of the TDD pattern of the channel, wherein transmitting the one or more parameters is based at least in part on transmitting the second indication. Aspect 5: The method of aspect 4, wherein the second indication comprises an information element communicated in an SIB, the information element comprising a common TDD configuration associated with a cell of a wireless network. Aspect 6: The method of any of aspects 4 through 5, wherein the second indication comprises an information element communicated in an RRC reconfiguration message, the information element comprising a dedicated TDD configuration that is specific to the UE. Aspect 7: The method of any of aspects 4 through 6, wherein the second indication comprises an SFI communicated in DCI, a specific slot format in the SFI associated with the UE in a serving cell. 
Aspect 8: The method of any of aspects 4 through 7, wherein transmitting the one or more parameters associated with the UE performing the one or more channel measurements further comprises: transmitting, based at least in part on the second indication, a third indication to perform a first set of channel measurements on a first set of resources indicated by the TDD pattern as uplink or downlink and a fourth indication to perform a second set of channel measurements on a second set of resources indicated by the TDD pattern as flexibly configurable as either uplink or downlink. Aspect 9: The method of any of aspects 1 through 8, further comprising: receiving, from the repeater, a feedback message indicating one or more conditions associated with the repeater; and transmitting, to the UE and via the repeater, the one or more parameters associated with the UE performing the one or more channel measurements based at least in part on receiving the feedback message. Aspect 10: The method of any of aspects 1 through 9, wherein the one or more parameters comprises a power control parameter, a resource configuration, or a combination thereof. Aspect 11: The method of any of aspects 1 through 10, wherein the first indication comprises a first capability that the repeater is capable of decoding system block information, a second capability that the repeater is capable of decoding an RRC reconfiguration message, a third capability that the repeater is capable of decoding DCI, a fourth capability that the repeater is capable of detecting whether a communication resource is used to communicate uplink information or downlink information, or a combination thereof. Aspect 12: The method of any of aspects 1 through 11, wherein the repeater is a traditional repeater configured to receive and amplify signals independent of control information about the TDD pattern of the channel, an autonomous repeater configured to identify information about the TDD pattern of the channel based at least in part on channel conditions at the repeater, or a network-controlled repeater configured to receive the control information about the TDD pattern of the channel. Aspect 13: The method of any of aspects 1 through 12, further comprising: transmitting a second indication of communication parameters for communications between the UE and the network control node based at least in part on the capability of the repeater, the communication parameters comprising an MCS, a rank, a quantity of beams, or a combination thereof. Aspect 14: The method of any of aspects 1 through 13, further comprising: transmitting, to the repeater, a power configuration associated with the channel, the TDD pattern associated with the channel, or a combination thereof based at least in part on the capability of the repeater. Aspect 15: The method of any of aspects 1 through 14, wherein the network control node comprises a base station, a UE, or a combination thereof. Aspect 16: A method for wireless communication at a repeater, comprising: transmitting, to a network control node, a first indication of a capability of the repeater to detect a TDD pattern of a channel that communicates information between two nodes using the repeater; receiving a second indication of the TDD pattern of the channel based at least in part on transmitting the capability of the repeater; and adjusting one or more radio frequency components of the repeater based at least in part on the second indication of the TDD pattern. 
Aspect 17: The method of aspect 16, further comprising: detecting one or more conditions of information communicated between the two nodes via the repeater; and determining whether the information comprises downlink information or uplink information based at least in part on detecting the one or more conditions, wherein adjusting the one or more radio frequency components is based at least in part on the determination. Aspect 18: The method of any of aspects 16 through 17, further comprising: receiving, from the network control node, a message that schedules communication resources for communication between the network control node and a UE based at least in part on the capability of the repeater; and retransmitting the message that schedules the communication resources to the UE based at least in part on receiving the message. Aspect 19: The method of any of aspects 16 through 18, wherein the second indication comprises an information element communicated in an SIB, the information element comprising a common TDD configuration associated with a cell of a wireless network. Aspect 20: The method of any of aspects 16 through 19, wherein the second indication comprises an information element communicated in an RRC reconfiguration message, the information element comprising a dedicated TDD configuration that is specific to the UE. Aspect 21: The method of any of aspects 16 through 20, wherein the second indication comprises an SFI communicated in DCI, a specific slot format in the SFI associated with the UE in a serving cell. Aspect 22: The method of any of aspects 16 through 21, further comprising: transmitting a third indication of a configuration of the repeater, wherein receiving the second indication is based at least in part on the configuration of the repeater. Aspect 23: The method of any of aspects 16 through 22, wherein the first indication comprises a first capability that the repeater is capable of decoding system block information, a second capability that the repeater is capable of decoding an RRC reconfiguration message, a third capability that the repeater is capable of decoding DCI, a fourth capability that the repeater is capable of detecting whether a communication resource is used to communicate uplink information or downlink information, or a combination thereof. Aspect 24: The method of any of aspects 16 through 23, wherein the repeater comprises a network-controlled repeater configured to receive control information about the TDD pattern of the channel. Aspect 25: The method of any of aspects 16 through 24, further comprising: receiving a third indication of communication parameters for communications between the UE and the network control node based at least in part on the capability of the repeater, the communication parameters comprising an MCS, a rank, a quantity of beams, or a combination thereof. Aspect 26: The method of any of aspects 16 through 25, further comprising: receiving, from the network control node, a power configuration associated with the channel, the TDD pattern associated with the channel, or a combination thereof based at least in part on the capability of the repeater to detect the TDD pattern. Aspect 27: An apparatus for wireless communication at a network control node, comprising a processor; memory coupled with the processor; and instructions stored in the memory and executable by the processor to cause the apparatus to perform a method of any of aspects 1 through 15. 
Aspect 28: An apparatus for wireless communication at a network control node, comprising at least one means for performing a method of any of aspects 1 through 15. Aspect 29: A non-transitory computer-readable medium storing code for wireless communication at a network control node, the code comprising instructions executable by a processor to perform a method of any of aspects 1 through 15. Aspect 30: An apparatus for wireless communication at a repeater, comprising a processor; memory coupled with the processor; and instructions stored in the memory and executable by the processor to cause the apparatus to perform a method of any of aspects 16 through 26. Aspect 31: An apparatus for wireless communication at a repeater, comprising at least one means for performing a method of any of aspects 16 through 26. Aspect 32: A non-transitory computer-readable medium storing code for wireless communication at a repeater, the code comprising instructions executable by a processor to perform a method of any of aspects 16 through 26. It should be noted that the methods described herein describe possible implementations, and that the operations and the steps may be rearranged or otherwise modified and that other implementations are possible. Further, aspects from two or more of the methods may be combined. Although aspects of an LTE, LTE-A, LTE-A Pro, or NR system may be described for purposes of example, and LTE, LTE-A, LTE-A Pro, or NR terminology may be used in much of the description, the techniques described herein are applicable beyond LTE, LTE-A, LTE-A Pro, or NR networks. For example, the described techniques may be applicable to various other wireless communications systems such as Ultra Mobile Broadband (UMB), Institute of Electrical and Electronics Engineers (IEEE) 802.11 (Wi-Fi), IEEE 802.16 (WiMAX), IEEE 802.20, Flash-OFDM, as well as other systems and radio technologies not explicitly mentioned herein. Information and signals described herein may be represented using any of a variety of different technologies and techniques. For example, data, instructions, commands, information, signals, bits, symbols, and chips that may be referenced throughout the description may be represented by voltages, currents, electromagnetic waves, magnetic fields or particles, optical fields or particles, or any combination thereof. The various illustrative blocks and components described in connection with the disclosure herein may be implemented or performed with a general-purpose processor, a DSP, an ASIC, a CPU, an FPGA or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described herein. A general-purpose processor may be a microprocessor, but in the alternative, the processor may be any processor, controller, microcontroller, or state machine. A processor may also be implemented as a combination of computing devices (e.g., a combination of a DSP and a microprocessor, multiple microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration). The functions described herein may be implemented in hardware, software executed by a processor, firmware, or any combination thereof. If implemented in software executed by a processor, the functions may be stored on or transmitted over as one or more instructions or code on a computer-readable medium. Other examples and implementations are within the scope of the disclosure and appended claims. 
For example, due to the nature of software, functions described herein may be implemented using software executed by a processor, hardware, firmware, hardwiring, or combinations of any of these. Features implementing functions may also be physically located at various positions, including being distributed such that portions of functions are implemented at different physical locations. Computer-readable media includes both non-transitory computer storage media and communication media including any medium that facilitates transfer of a computer program from one place to another. A non-transitory storage medium may be any available medium that may be accessed by a general-purpose or special-purpose computer. By way of example, and not limitation, non-transitory computer-readable media may include RAM, ROM, electrically erasable programmable ROM (EEPROM), flash memory, compact disk (CD) ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other non-transitory medium that may be used to carry or store desired program code means in the form of instructions or data structures and that may be accessed by a general-purpose or special-purpose computer, or a general-purpose or special-purpose processor. Also, any connection is properly termed a computer-readable medium. For example, if the software is transmitted from a website, server, or other remote source using a coaxial cable, fiber optic cable, twisted pair, digital subscriber line (DSL), or wireless technologies such as infrared, radio, and microwave, then the coaxial cable, fiber optic cable, twisted pair, DSL, or wireless technologies such as infrared, radio, and microwave are included in the definition of computer-readable medium. Disk and disc, as used herein, include CD, laser disc, optical disc, digital versatile disc (DVD), floppy disk and Blu-ray disc where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Combinations of the above are also included within the scope of computer-readable media. As used herein, including in the claims, “or” as used in a list of items (e.g., a list of items prefaced by a phrase such as “at least one of” or “one or more of”) indicates an inclusive list such that, for example, a list of at least one of A, B, or C means A or B or C or AB or AC or BC or ABC (i.e., A and B and C). Also, as used herein, the phrase “based on” shall not be construed as a reference to a closed set of conditions. For example, an example step that is described as “based on condition A” may be based on both a condition A and a condition B without departing from the scope of the present disclosure. In other words, as used herein, the phrase “based on” shall be construed in the same manner as the phrase “based at least in part on.” The term “determine” or “determining” encompasses a wide variety of actions and, therefore, “determining” can include calculating, computing, processing, deriving, investigating, looking up (such as via looking up in a table, a database or another data structure), ascertaining and the like. Also, “determining” can include receiving (such as receiving information), accessing (such as accessing data in a memory) and the like. Also, “determining” can include resolving, selecting, choosing, establishing and other such similar actions. In the appended figures, similar components or features may have the same reference label. 
Further, various components of the same type may be distinguished by following the reference label by a dash and a second label that distinguishes among the similar components. If just the first reference label is used in the specification, the description is applicable to any one of the similar components having the same first reference label irrespective of the second reference label, or other subsequent reference label. The description set forth herein, in connection with the appended drawings, describes example configurations and does not represent all the examples that may be implemented or that are within the scope of the claims. The term “example” used herein means “serving as an example, instance, or illustration,” and not “preferred” or “advantageous over other examples.” The detailed description includes specific details for the purpose of providing an understanding of the described techniques. These techniques, however, may be practiced without these specific details. In some instances, known structures and devices are shown in block diagram form in order to avoid obscuring the concepts of the described examples. The description herein is provided to enable a person having ordinary skill in the art to make or use the disclosure. Various modifications to the disclosure will be apparent to a person having ordinary skill in the art, and the generic principles defined herein may be applied to other variations without departing from the scope of the disclosure. Thus, the disclosure is not limited to the examples and designs described herein but is to be accorded the broadest scope consistent with the principles and novel features disclosed herein.
DETAILED DESCRIPTION One or more specific embodiments will be described below. In an effort to provide a concise description of these embodiments, not all features of an actual implementation are described in the specification. It should be appreciated that in the development of any such actual implementation, as in any engineering or design project, numerous implementation-specific decisions must be made to achieve the developers' specific goals, such as compliance with system-related and enterprise-related constraints, which may vary from one implementation to another. Moreover, it should be appreciated that such a development effort might be complex and time-consuming, but would nevertheless be a routine undertaking of design, fabrication, and manufacture for those of ordinary skill having the benefit of this disclosure. As used herein, the term “computing system” refers to an electronic computing device such as, but not limited to, a single computer, virtual machine, virtual container, host, server, laptop, and/or mobile device, or to a plurality of electronic computing devices working together to perform the function described as being performed on or by the computing system. As used herein, the term “medium” refers to one or more non-transitory, computer-readable physical media that together store the contents described as being stored thereon. Embodiments may include non-volatile secondary storage, read-only memory (ROM), and/or random-access memory (RAM). As used herein, the term “application” refers to one or more computing modules, programs, processes, workloads, threads and/or a set of computing instructions executed by a computing system. Example embodiments of an application include software modules, software objects, software instances and/or other types of executable code. As used herein, an “agent” refers to an administrative agent and/or a computer-generated intelligent virtual agent. As used herein, the term “script” refers to computer code implemented on a device or system within a computing network. Example embodiments include a client script and a server script. Client script refers to computer code executed in a client or local environment. Example embodiments include code executed on a portal or local application running on a client device, such as a web browser on a client's computer. Server script refers to computer code executed in the service or cloud provider environment. Example embodiments include code executed by a server in the provider environment in response to a request from the client environment, such as a request to change the appearance or behavior of a portal page running on the client device. The server script may generate the requested portal page and send it to the client portal. As discussed herein, a client using a portal to access applications in the provider environment may request to chat with an administrative agent. Once the request for the chat has been received by the provider system, the chat may typically be assigned at random. By way of example, the client may navigate through various pages and documents of the portal prior to requesting to chat with an agent. Thus, after a client has already spent time searching for information related to a topic in the portal, the client may be asked initial conversation questions related to the same search.
Accordingly, it is now appreciated that there is a need to manage routing client-agent conversations using contextual information so as to reduce or eliminate the time used to answer initial conversation questions. However, determining the relevant information used to determine the appropriate agent may be difficult to implement in practice. With the preceding in mind, the following figures relate to various types of generalized system architectures or configurations that may be employed to provide services to an organization in a cloud-computing framework and on which the present approaches may be employed. Correspondingly, these system and platform examples may also relate to systems and platforms on which providing contextual information relating to portal usage as discussed herein may be implemented or otherwise utilized. Turning now toFIG.1, a schematic diagram of an embodiment of a cloud computing system10, where embodiments of the present disclosure may operate, is illustrated. The cloud computing system10may include a client network12, a network14(e.g., the Internet), and a cloud-based platform16. In some implementations, the cloud-based platform16may be a configuration management database (CMDB) platform. In one embodiment, the client network12may be a local private network, such as local area network (LAN) having a variety of network devices that include, but are not limited to, switches, servers, and routers. In another embodiment, the client network12represents an enterprise network that may include one or more LANs, virtual networks, data centers18, and/or other remote networks. As shown inFIG.1, the client network12is able to connect to one or more client devices20A,20B, and20C so that the client devices20are able to communicate with each other and/or with the network hosting the platform16. The client devices20may be computing systems and/or other types of computing devices generally referred to as Internet of Things (IoT) devices that access cloud computing services, for example, via a web browser application, a portal, or via an edge device22that may act as a gateway between the client devices20and the platform16. In some implementations, client devices20using a portal to access the cloud computing services may include client scripts, such that the client script provides functionality (e.g., enabling selection of objects or a link) in the portal. Moreover, the client script may be used to track portal usage (e.g., user clicking on particular topics or buttons within the portal that is enabled by the client script) with respect to local or localized behaviors or actions on the respective device20, which may be used to provide contextual information to the provider system (e.g., network hosting the platform16). The contextual information may be used to optimize chat conversations using the techniques described herein. FIG.1also illustrates that the client network12includes an administration or managerial device, agent, or server, such as a management, instrumentation, and discovery (MID) server24that facilitates communication of data between the network hosting the platform16, other external applications, data sources, and services, and the client network12. Although not specifically illustrated inFIG.1, the client network12may also include a connecting network device (e.g., a gateway or router) or a combination of devices that implement a customer firewall or intrusion protection system. As depicted, the client network12may be coupled to a network14. 
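As an illustrative, non-limiting sketch of the client-script tracking mentioned above, the following fragment records local portal interactions so that they can later serve as conversational context. The event fields and the flush mechanism are hypothetical, and an actual client script would typically be browser-side code rather than the Python shown here, which is used only to keep the examples in a single language.

import json
import time
from typing import Dict, List


class PortalUsageTracker:
    """Illustrative stand-in for a client script that records local portal
    interactions so that they can later provide conversational context."""

    def __init__(self, user_id: str) -> None:
        self.user_id = user_id
        self.events: List[Dict] = []

    def record(self, event_type: str, target: str) -> None:
        # event_type might be "page_view", "click", or "download"; target is
        # the portal resource involved (names are invented for the example).
        self.events.append({
            "user": self.user_id,
            "type": event_type,
            "target": target,
            "timestamp": time.time(),
        })

    def flush(self) -> str:
        # A real client script would post this payload to the provider
        # instance; here it is simply serialized for inspection.
        payload = json.dumps(self.events)
        self.events.clear()
        return payload


if __name__ == "__main__":
    tracker = PortalUsageTracker("user-42")
    tracker.record("page_view", "kb/password-reset")
    tracker.record("click", "button/request-chat")
    print(tracker.flush())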
The network14may include one or more computing networks, such as other LANs, wide area networks (WAN), the Internet, and/or other remote networks, to transfer data between the client devices20and the network hosting the platform16. For example, client script and/or server script data indicating portal usage may be provided to the platform16via the network14, such as to provide contextual information that may be used by agents, developers, and other IT or application developing personnel. Each of the computing networks within network14may contain wired and/or wireless programmable devices that operate in the electrical and/or optical domain. For example, network14may include wireless networks, such as cellular networks (e.g., Global System for Mobile Communications (GSM) based cellular network), IEEE 802.11 networks, and/or other suitable radio-based networks. The network14may also employ any number of network communication protocols, such as Transmission Control Protocol (TCP) and Internet Protocol (IP). Although not explicitly shown inFIG.1, network14may include a variety of network devices, such as servers, routers, network switches, and/or other network hardware devices configured to transport data over the network14. InFIG.1, the network hosting the platform16may be a remote network (e.g., a cloud network) that is able to communicate with the client devices20via the client network12and network14. The network hosting the platform16provides additional computing resources to the client devices20and/or the client network12. For example, by utilizing the network hosting the platform16, users of the client devices20are able to build and execute applications for various enterprises, IT, and/or other organization-related functions. In one embodiment, the network hosting the platform16is implemented on the one or more data centers18, where each data center may correspond to a different geographic location. Each of the data centers18includes a plurality of virtual servers26(also referred to herein as application nodes, application servers, virtual server instances, application instances, or application server instances), where each virtual server26may be implemented on a physical computing system, such as a single electronic computing device (e.g., a single physical hardware server) or across multiple-computing devices (e.g., multiple physical hardware servers). Examples of virtual servers26include, but are not limited to a web server (e.g., a unitary Apache installation), an application server (e.g., unitary JAVA Virtual Machine), and/or a database server (e.g., a unitary relational database management system (RDBMS) catalog). In some implementations, server scripts may be executed on the server (e.g., webserver or virtual servers26) on the server side (e.g., network hosting the platform16), to produce a customized response to a client's request in the portal. As such, the server script executing on the server may also be used to track the client's activity in the portal (e.g., server script data), indicating the pages viewed, documents downloaded, client portal settings, or functionalities enabled within the portal by the server script. Moreover, this data may be stored in a database in the provider environment, such as the server or database that may be accessed by server. The client may not have access to this database in the provider environment. 
The server script data may be used alone or in conjunction with the client script data to determine contextual information related to portal usage. To utilize computing resources within the platform16, network operators may choose to configure the data centers18using a variety of computing infrastructures. In one embodiment, one or more of the data centers18are configured using a multi-tenant cloud architecture, such that one of the server26instances handles requests from and serves multiple customers. Data centers18with multi-tenant cloud architecture commingle and store data from multiple customers, where multiple customer instances are assigned to one of the virtual servers26. In a multi-tenant cloud architecture, the particular virtual server26distinguishes between and segregates data and other information of the various customers. For example, a multi-tenant cloud architecture may assign a particular identifier for each customer in order to identify and segregate the data from each customer. Generally, implementing a multi-tenant cloud architecture may suffer from various drawbacks, such as a failure of a particular one of the server26instances causing outages for all customers allocated to the particular server instance. In another embodiment, one or more of the data centers18are configured using a multi-instance cloud architecture to provide every customer its own unique customer instance or instances. For example, a multi-instance cloud architecture may provide each customer instance with its own dedicated application server and dedicated database server. In other examples, the multi-instance cloud architecture may deploy a single physical or virtual server26and/or other combinations of physical and/or virtual servers26, such as one or more dedicated web servers, one or more dedicated application servers, and one or more database servers, for each customer instance. In a multi-instance cloud architecture, multiple customer instances may be installed on one or more respective hardware servers, where each customer instance is allocated certain portions of the physical server resources, such as computing memory, storage, and processing power. By doing so, each customer instance has its own unique software stack that provides the benefit of data isolation, relatively less downtime for customers to access the platform16, and customer-driven upgrade schedules. An example of implementing a customer instance within a multi-instance cloud architecture will be discussed in more detail below with reference toFIG.2. As discussed herein, as part of enhancing customer experience of a computer environment, such as those described above, agents may provide customer service to users via a portal chat. Once a chat has been requested, the client script and/or server script data may be used to determine contextual information related to the client's portal usage prior to requesting the chat. As will be discussed in detail inFIGS.6-8, this data may be used to truncate the conversation, such as by automating answers for questions that may otherwise be asked routinely to start the conversation, or to improve the quality or relevance of information provided in response to an inquiry. 
FIG.2is a schematic diagram of an embodiment of a multi-instance cloud architecture40where embodiments of the present disclosure may operate.FIG.2illustrates that the multi-instance cloud architecture40includes the client network12and the network14that connect to two (e.g., paired) data centers18A and18B that may be geographically separated from one another. UsingFIG.2as an example, network environment and service provider cloud infrastructure client instance102(also referred to herein as a client instance102) is associated with (e.g., supported and enabled by) dedicated virtual servers (e.g., virtual servers26A,26B,26C, and26D) and dedicated database servers (e.g., virtual database servers104A and104B). Stated another way, the virtual servers26A-26D and virtual database servers104A and104B are not shared with other client instances and are specific to the respective client instance102. In the depicted example, to facilitate availability of the client instance102, the virtual servers26A-26D and virtual database servers104A and104B are allocated to two different data centers18A and18B so that one of the data centers18acts as a backup data center. Other embodiments of the multi-instance cloud architecture40may include other types of dedicated virtual servers, such as a web server. For example, the client instance102may be associated with (e.g., supported and enabled by) the dedicated virtual servers26A-26D, dedicated virtual database servers104A and104B, and additional dedicated virtual web servers (not shown inFIG.2). AlthoughFIGS.1and2illustrate specific embodiments of a cloud computing system10and a multi-instance cloud architecture40, respectively, the disclosure is not limited to the specific embodiments illustrated inFIGS.1and2. For instance, althoughFIG.1illustrates that the platform16is implemented using data centers, other embodiments of the platform16are not limited to data centers and may utilize other types of remote network infrastructures. Moreover, other embodiments of the present disclosure may combine one or more different virtual servers into a single virtual server or, conversely, perform operations attributed to a single virtual server using multiple virtual servers. For instance, usingFIG.2as an example, the virtual servers26A,26B,26C,26D and virtual database servers104A,104B may be combined into a single virtual server. Moreover, the present approaches may be implemented in other architectures or configurations, including, but not limited to, multi-tenant architectures, generalized client/server implementations, and/or even on a single physical processor-based device configured to perform some or all of the operations discussed herein. Similarly, though virtual servers or machines may be referenced to facilitate discussion of an implementation, physical servers may instead be employed as appropriate. The use and discussion ofFIGS.1and2are only examples to facilitate ease of description and explanation and are not intended to limit the disclosure to the specific examples illustrated therein. As may be appreciated, the respective architectures and frameworks discussed with respect toFIGS.1and2incorporate computing systems of various types (e.g., servers, workstations, client devices, laptops, tablet computers, cellular telephones, and so forth) throughout. For the sake of completeness, a brief, high level overview of components typically found in such systems is provided. 
As may be appreciated, the present overview is intended to merely provide a high-level, generalized view of components typical in such computing systems and should not be viewed as limiting in terms of components discussed or omitted from discussion. With this in mind, and by way of background, it may be appreciated that the present approach may be implemented using one or more processor-based systems such as shown inFIG.3. Likewise, applications and/or databases utilized in the present approach may be stored, employed, and/or maintained on such processor-based systems. As may be appreciated, such systems as shown inFIG.3may be present in a distributed computing environment, a networked environment, or other multi-computer platform or architecture. Likewise, systems such as that shown inFIG.3, may be used in supporting or communicating with one or more virtual environments or computational instances on which the present approach may be implemented. With this in mind, an example computer system may include some or all of the computer components depicted inFIG.3.FIG.3generally illustrates a block diagram of example components of a computing system80and their potential interconnections or communication paths, such as along one or more busses. As illustrated, the computing system80may include various hardware components such as, but not limited to, one or more processors82, one or more busses84, memory86, input devices88, a power source90, a network interface92, a user interface94, and/or other computer components useful in performing the functions described herein. The one or more processors82may include one or more microprocessors capable of performing instructions stored in the memory86. For example, instructions may include implementing a set of rules for determining a conversation topic using the client script data and/or server script data. Additionally or alternatively, the one or more processors82may include application-specific integrated circuits (ASICs), field-programmable gate arrays (FPGAs), and/or other devices designed to perform some or all of the functions discussed herein without calling instructions from the memory86. With respect to other components, the one or more busses84include suitable electrical channels to provide data and/or power between the various components of the computing system80. The memory86may include any tangible, non-transitory, and computer-readable storage media. Although shown as a single block inFIG.3, the memory86may be implemented using multiple physical units of the same or different types in one or more physical locations. The input devices88correspond to structures to input data and/or commands to the one or more processors82. For example, the input devices88may include a mouse, touchpad, touchscreen, keyboard and the like. The power source90may be any suitable source for power of the various components of the computing system80, such as line power and/or a battery source. The network interface92may include one or more transceivers capable of communicating with other devices over one or more networks (e.g., a communication channel). The network interface92may provide a wired network interface or a wireless network interface. A user interface94may include a display that is configured to display text or images transferred to it from the one or more processors82. In addition or as an alternative to the display, the user interface94may include other devices for interfacing with a user, such as lights (e.g., LEDs), speakers, and the like.
With the preceding in mind,FIG.4is a block diagram illustrating an embodiment in which a hosted instance300supports and enables the client instance102, according to one or more disclosed embodiments. More specifically,FIG.4illustrates an example of a portion of a service provider cloud infrastructure, including the cloud-based platform16discussed above. The cloud-based platform16is connected to a client device20D via the network14to provide a user interface to network applications executing within the client instance102(e.g., via a portal on the client device20D). Client instance102is supported by virtual servers26similar to those explained with respect toFIG.2, and is illustrated here to show support for the disclosed functionality described herein within the client instance102. Cloud provider infrastructures are generally configured to support a plurality of end-user devices, such as client device20D, concurrently, wherein each end-user device is in communication with the single client instance102. Also, cloud provider infrastructures may be configured to support any number of client instances, such as client instance102, concurrently, with each of the instances in communication with one or more end-user devices. As such, single or multiple client instances102or the hosted instance300may execute the server script to provide a customized response for each of the client devices using the portal and receive relevant contextual information about the portal usage. With the preceding in mind,FIG.5is a flow diagram500depicting use of a server script502and a client script506to determine a conversation topic that may be subsequently used to direct a client-agent conversation to an appropriate agent, in accordance with aspects of the present disclosure. In the depicted embodiment, server script502and client script506may be code or a set of instructions that automate execution of tasks, such as client request on the portal (e.g., request for a particular functionality on a web browser). The portal may provide a user interface to applications of the network hosting the platform16. By way of example, for client requests in the portal, such as a request for a particular portal page (e.g., webpage), the script may be instructions for the server (e.g., virtual server26ofFIG.2) to provide the requested portal page or instructions for the client device20to update the present page based on client input. More specifically, the server script502is executed on a physical or virtual server and/or database in the provider environment (e.g., server side scripting) while the client script506is executed locally on the portal running on the client device20in the client environment (e.g., client side scripting). The server script502may be executed directly on a server to generate custom and dynamic pages to fulfill a client's request and then send the pages to the portal. Moreover, the provider environment may create a path from the client's portal to the database504(e.g., virtual database server104ofFIG.2), such as to store portal usage data to the database504. Since both the server26and the database504are in the provider environment, the generated response for the client may be based on client data stored on the server26and/or database504. For example, the stored client data may indicate the client's access rights, custom portal settings, etc. As the client navigates along the portal, the server26may track and store data related to portal usage using the server script502to the database504. 
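By way of a non-limiting illustration only, the following TypeScript sketch shows one way a server script in the provider environment might record portal usage to a data store such as the database504 as pages are served. The identifiers used here ("UsageEvent", "usageDb", "recordPortalUsage", "serveRequestedPage") are hypothetical and do not appear in the figures; an actual implementation would persist the records to the virtual database server rather than to an in-memory array.

    // Hypothetical shape of one portal-usage record written by a server script.
    interface UsageEvent {
      clientId: string;          // identifies the client using the portal
      pageId: string;            // portal page that was requested or generated
      action: "page_request" | "document_download" | "settings_change";
      timestamp: number;         // epoch milliseconds
    }

    // Minimal stand-in for the provider-side database (e.g., the database504).
    const usageDb: UsageEvent[] = [];

    // Called by the server script each time it fulfills a client request,
    // so that portal usage accumulates in the provider environment.
    function recordPortalUsage(clientId: string, pageId: string,
                               action: UsageEvent["action"]): void {
      usageDb.push({ clientId, pageId, action, timestamp: Date.now() });
    }

    // Example: the server script generates a requested page and logs the request.
    function serveRequestedPage(clientId: string, pageId: string): string {
      recordPortalUsage(clientId, pageId, "page_request");
      return `<html><!-- dynamically generated content for ${pageId} --></html>`;
    }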
On the other hand, the client script506may be executed on the client's device. Since the client device20may include the client script506, or the client script506may be attached to the portal running on the client device20, the portal may be directly altered in response to the client's inputs (e.g., without sending a page request to the server). Specifically, each time the client script506is enabled, such as to respond with a particular function provided by the client script506, the response data may be communicated to the server (e.g., server26) in the provider environment via the network14or by tracking the changes in the portal. By way of example, the client script506may be used for page navigation, such as to provide clickable features, formatting, and data validation. Thus, the number of mouse clicks for an enabled button on the portal may be tracked and communicated to the server. In some embodiments, the server script502and/or the client script506data, as discussed in detail inFIGS.6-8, may be used for routing a conversation based on a conversation topic508determined for a chat request. In particular, the database504may be queried for data related to the server script502, the client's specific portal settings, etc. to determine a conversation topic. Data associated with the client script506that indicates the client's movement (e.g., the page presently viewed or the pages clicked through the client's journey in the portal) may also be used to determine the conversation topic. In some implementations, certain data may be prioritized when making conversation topic determinations. For example, the provider and/or client may set certain data to be more relevant than others or be associated with varying weights (e.g., time spent viewing a page is most relevant). Moreover, these preferential settings may be unique for each client portal based on client settings (e.g., stored in database104). Thus, the data associated with the server script502and/or client script506may be used to leverage specific context for client portals. Often, a client browsing the portal may view documents and pages in search of particular information. By way of example, the client may search for keywords or information relating to a particular router manufactured by the provider. However, after viewing multiple pages or links in the portal, the client may request to chat with an agent for assistance. In such instances, rather than starting the conversation with a general agent (e.g., not specialized) without context related to the client's search in the portal prior to the chat request, linking the client with the appropriate agent may facilitate a more efficient and productive conversation. Accordingly,FIG.6illustrates a process520for routing a chat request to the appropriate agent for a particular conversation topic based on the client script506. While the process520, and other processes described herein (e.g., process540ofFIG.7and process600ofFIG.8), are described according to a certain sequence, it should be understood that the present disclosure contemplates that the described steps may be performed in different sequences than the sequence illustrated, and certain described steps may be skipped or not performed altogether. In other embodiments, the process520may be implemented at least in part by executing instructions stored in a tangible, non-transitory, computer-readable medium, such as the memory86, using processing circuitry, such as the processor82.
Additionally or alternatively, the process520may be implemented at least in part by circuit connections and/or control logic implemented in a cloud computing system80. In some implementations, the process520may be implemented by the server26ofFIG.2, which may be associated with an underlying hardware implementation that includes a memory86and processor82. As shown, the process520may include detecting (block522) a client using the portal on the client device20. As previously mentioned, the portal may provide a user interface to applications and features provided in the network hosting the platform16. By way of example, the portal may include a web browser interface on the client device20. As such, the web browser may run the client script506installed or executed on the client device20to change the web browser pages according to inputs on the web browser. Moreover, the client script506may enable functionalities in the portal, and since the portal is an interface to the provider's platform16, the provider environment may track each functionality enablement or response. Thus, the process520may include tracking (block524) the client movement in the portal. Tracking the client movement may include, for example, tracking the pages (e.g., catalogues, technical specifications, articles, blogs) the client clicks on to view, how long the client may spend viewing such pages, where a client's mouse is located during the viewing, etc. To summarize, the tracked data associated with the client script506may include local information pertaining to the portal and client device20. After navigating through different pages in the portal, the client may request to chat with an agent for additional information, for example, for information the client was unable to find or that may be unavailable in the portal. Thus, the process520may include receiving a request (block526) to chat. A request for chatting may request the network hosting the platform16to locate an agent (e.g., virtual or non-virtual) for the client. Often, the chat may initiate preliminary questions (e.g., what can I help you with today?). However, these preliminary questions may be answered by viewing the client's movement through the portal prior to the chat request. To facilitate a contextual chat that may quickly resolve the client's reasoning for requesting the chat, the process520may include receiving (block528) the tracked movement. The tracked movement may provide relevant content, such as data related to the pages viewed, how many times the pages were viewed, etc. Using this data, the process520may determine (block530) contextual information to direct the conversation to the appropriate IT group or agent and/or to facilitate a shorter conversation by automating answers for questions that may otherwise be asked. Based on the contextual information, the process520may determine (block532) a conversation topic. Continuing with the example of the client searching for information related to the particular router manufactured by the provider, the tracked movement may indicate relevant router portal pages selected throughout the client's journey prior to the request. Thus, using this data associated with the client script506, the topic of conversation may be easily determined. For example, the pages may be related to a particular router, a family of routers and/or model type, etc., and thus, the conversation topic may be determined as a particular model of a particular router.
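As a non-limiting sketch of blocks528-532, the following TypeScript example scores tracked client-script movement by topic (page views plus dwell time) and selects the highest-scoring topic as the conversation topic. The names ("TrackedMovement", "determineConversationTopic") and the scoring weights are illustrative assumptions rather than requirements of the process520.

    // Hypothetical record of client-side tracked movement (clicks, dwell time).
    interface TrackedMovement {
      pageId: string;      // page the client clicked on to view
      topic: string;       // topic tag associated with the page (e.g., "router-model-x")
      viewMillis: number;  // how long the client spent viewing the page
    }

    // Determine contextual information (block 530) and a conversation topic (block 532)
    // from tracked movement, here by scoring topics by views and dwell time.
    function determineConversationTopic(movement: TrackedMovement[]): string | undefined {
      const scores = new Map<string, number>();
      for (const m of movement) {
        const prior = scores.get(m.topic) ?? 0;
        // One point per view plus a small bonus per minute viewed; weights are illustrative.
        scores.set(m.topic, prior + 1 + m.viewMillis / 60_000);
      }
      let best: string | undefined;
      let bestScore = -Infinity;
      for (const [topic, score] of scores) {
        if (score > bestScore) { best = topic; bestScore = score; }
      }
      return best;  // e.g., "router-model-x"
    }

    // Example usage with the router scenario described above.
    const topic = determineConversationTopic([
      { pageId: "catalog/routers", topic: "router-model-x", viewMillis: 45_000 },
      { pageId: "specs/router-x", topic: "router-model-x", viewMillis: 120_000 },
      { pageId: "blog/firewalls", topic: "firewalls", viewMillis: 10_000 },
    ]);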
After determining the conversation topic, the process520may include routing (block534) the conversation to an agent for the particular conversation topic. In this manner, the client may not be routed to a general agent or irrelevant IT group. By routing to the most appropriate agent using the client script506data, the client may save time that may otherwise be spent answering the preliminary questions to get started and/or being rerouted between different departments. In addition to selecting the appropriate agent, the agent may be provided with the client's tracked history. In some implementations, such as when data associated with the client script506is unavailable or if additional data may provide a better context for routing the conversation, server script502data may be utilized. As illustrated inFIG.7, a chat may be routed to an agent for a particular conversation topic using the server script502. In the depicted example, the process540may include detecting (block542) a client using the portal on the client device20. As previously mentioned, server side scripting may occur when the client initiates a server26request, such as for the portal to provide a particular functionality. This request may be in response to a functionality that client side scripting may not provide. For example, the client script506does not have access to databases in the provider's environment (e.g., virtual database server104ofFIG.2). Thus, portal functionality or requirements that may involve data or processing in the provider environment, such as by using the server26(e.g., virtual server26ofFIG.2) and/or database104, may be provided by server side scripting. Other examples may include providing dynamic portal pages, which may be generated by the server script502. Moreover, since the portal is connected to the server26, the functionalities provided by the server script502that is executed on the server26may be tracked and stored on the server26and/or database104. Thus, the process540may include tracking (block544) the client history in the portal. Rather than tracking the number of clicks on a particular portal page, as provided by the client script506data, the server script502data may allow tracking the particular portal the client is using (e.g., a portal with specific functionalities unique to the particular client), portal pages the client has viewed over time and prior to requesting a chat, documents downloaded during the session, etc. Thus, server side scripting may provide data that may not otherwise be available from the client script506, and this data may be stored (block546) in the database104. Next, the process540may include receiving a request (block548) to chat. Continuing with the example of the client searching for information relating to the particular router manufactured by the provider, as discussed inFIGS.5and6, the tracked movement may indicate server26requests based on the server script502, such as technical documents relating to the router that may be downloaded and are accessible by the server26. However, the client may not find the information of interest, and thus, the client may request chatting with an agent for more information. To provide context for the client's reasoning for requesting a chat, the process540may include receiving (block550) the tracked user history from the database104. In particular, information related to requests processed using the server script502and/or information related to the client may be tracked.
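Continuing the earlier sketch, and again using hypothetical names ("TrackedEvent", "getTrackedHistory", "downloadedDocuments") that do not appear in the figures, the following TypeScript fragment illustrates how the tracked server-side history, such as documents downloaded during the session, might be retrieved from the database at chat time (blocks548and550):

    // Minimal shape of a provider-side usage record (names are illustrative only).
    interface TrackedEvent {
      clientId: string;
      action: "page_request" | "document_download" | "settings_change";
      pageId: string;
      timestamp: number;  // epoch milliseconds
    }

    // At chat time, pull the client's server-side history from the database.
    function getTrackedHistory(db: TrackedEvent[], clientId: string,
                               sinceMillis: number): TrackedEvent[] {
      return db.filter(e => e.clientId === clientId && e.timestamp >= sinceMillis);
    }

    // Documents downloaded during the session are often a contextually useful signal.
    function downloadedDocuments(history: TrackedEvent[]): string[] {
      return history
        .filter(e => e.action === "document_download")
        .map(e => e.pageId);
    }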
The data may indicate the client's interest in the particular router and indicate activity that may be more contextually useful than other information, such as configuration manuals (e.g., technical documents) that were downloaded related to the router. Thus, the process540may include determining (block552) contextual information using the server script502data. Specifically, the server script502data may indicate the particular topic of interest to the client. Using the contextual information, the process may include determining (block554) conversation topic. For example, the topic may be related to router hardware, and more specifically, configuring routers. Thus, a specific conversation topic may be determined for the reasoning behind a client's request for chatting. Accordingly, the process540may include routing (block556) to an agent for the conversation topic of configuring a router. However, in some instances, although the client downloaded a particular document, which may be indicated using the server script502, the client may have also clicked on a particular portal page multiple times, which may be indicated by the client script506. In some instances, the fact that the client visited a portal page often or spent a long time viewing the page, may be more indicative contextual information. Similarly, although a client may have spent an extended time period viewing a particular page, which may be determined using the client script506, the client may also have downloaded multiple related documents of interest, which may be indicated by the server script502. For example, the client may have spent a long time on the particular page because they requested the portal page and then left the client device20idle. Thus, the client script506may indicate a long duration spent on a particular page but that data may not be the most accurate context. Accordingly, in some instances, both the server script502and client script506data may be combined to provide precise contextual information. To illustrate,FIG.8depicts a process600for routing a conversation to the appropriate agent using both the server script502and client script506. As depicted, portions of process520ofFIG.6and process540ofFIG.7may be combined to determine contextual information, which may then be used to determine conversation topic and routing the conversation to the appropriate agent. In particular, process blocks522,524,526, and528may be implemented as discussed with respect toFIG.6, and process blocks542,544,546,548, and550may be implemented as discussed with respect toFIG.7, as indicated by the dashed box. In the current embodiment, these processes may be implemented individually and up to the point of receiving tracked client data using the respective scripts502and506. After receiving tracked client data from the server side using server script502and the client side using client script506, the process600may include determining (block602) contextual information. For example, a combination of server side and client side tracked data may provide a more detailed insight into the client's movement. For example, the combined data may indicate information related to clicks on the portal pages, time spent on particular pages, client settings for the portal, documents downloaded from the portal, etc. Thus any false positives or less relevant data indicated from any one script (e.g., client script506) may be overcome by the other script (e.g., server script502). 
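The following TypeScript sketch shows one non-limiting way such a combination might be weighted; the weight values, the dwell-time cap used to discount idle viewing, and the identifiers ("ClientSignal", "ServerSignal", "combineSignals") are assumptions for illustration, and, as described above, the weights may be customized per client portal:

    // Hypothetical combined scoring of client-script and server-script signals per topic.
    interface ClientSignal { topic: string; clicks: number; viewMillis: number; }
    interface ServerSignal { topic: string; downloads: number; pagesServed: number; }

    // Per-portal weights; the values here are placeholders.
    interface Weights {
      click: number; minuteViewed: number; download: number;
      pageServed: number; maxViewMillis: number;
    }

    const defaultWeights: Weights = {
      click: 1, minuteViewed: 0.5, download: 3, pageServed: 0.5,
      maxViewMillis: 10 * 60_000,  // cap dwell time at ten minutes
    };

    function combineSignals(client: ClientSignal[], server: ServerSignal[],
                            w: Weights = defaultWeights): Map<string, number> {
      const scores = new Map<string, number>();
      const add = (topic: string, value: number) =>
        scores.set(topic, (scores.get(topic) ?? 0) + value);
      for (const c of client) {
        // Cap dwell time so a page left open on an idle device does not dominate.
        const cappedView = Math.min(c.viewMillis, w.maxViewMillis);
        add(c.topic, c.clicks * w.click + (cappedView / 60_000) * w.minuteViewed);
      }
      for (const s of server) {
        add(s.topic, s.downloads * w.download + s.pagesServed * w.pageServed);
      }
      return scores;  // the highest-scoring topic drives the conversation topic
    }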
In this manner, determining relevant information in the context of all the data may be easier and the contextual information may be more accurate. Next, the process600may include determining (block602) a conversation topic based on the contextual information. Since the relevant information may be more narrowly focused, the conversation topic may be more precisely determined. Continuing with the example of the client viewing router information in the portal ofFIGS.5-7, determining an accurate conversation topic may be possible. While neither the server script502nor the client script506alone may indicate a narrow topic of conversation, using a combination of both may indicate the client's interest in information related to a particular software configuration for the particular router model manufactured by the provider. Upon determining the conversation topic, the process600may include routing (block604) to an agent for the conversation topic. Since a particular conversation topic is determined, routing to the most appropriate or relevant agent may be possible. For example, routing using the client script506may route to an agent for router hardware while routing using the server script502may route to an agent for software configurations. Although these conversation topics may be relevant, and thus the client may still skip answering preliminary chat questions, using both the server script502and the client script506may provide a more accurate and precise indication of the conversation topic, and thus, the corresponding agent. In general, a set of rules may be used to route to an agent. For example, if a particular topic has been searched, then route to agent A, while if a different topic has been searched in conjunction with documents downloaded relating to another topic, then route to agent B, etc. Moreover, the set of rules for routing may be customized for the specific client portal. As briefly mentioned above, a client may benefit from precise conversation topic determinations since initial chat questions may be automatically populated using the contextual information. More specifically, and particularly for virtual agents, conversation questions and answers may be based on a decision tree design. For example, the agent may ask an initial question (e.g., the root of the tree) and from there, additional information may be asked to facilitate the conversation (e.g., move down tree branches) to provide the client with the information for which the request for chat was received. Thus, there may be a series of questions that may be automatically fulfilled using the contextual information determined using the techniques described herein. As such, the virtual agent may jump between branches of the question tree rather than asking each question in turn. Similarly, in a non-virtual environment, the chat may be directed to the most appropriate IT group or live agent for the topic. The specific embodiments described above have been shown by way of example, and it should be understood that these embodiments may be susceptible to various modifications and alternative forms. It should be further understood that the claims are not intended to be limited to the particular forms disclosed, but rather to cover all modifications, equivalents, and alternatives falling within the spirit and scope of this disclosure. The techniques presented and claimed herein are referenced and applied to material objects and concrete examples of a practical nature that demonstrably improve the present technical field and, as such, are not abstract, intangible or purely theoretical.
Further, if any claims appended to the end of this specification contain one or more elements designated as “means for [perform]ing [a function] . . . ” or “step for [perform]ing [a function] . . . ”, it is intended that such elements are to be interpreted under 35 U.S.C. 112(f). However, for any claims containing elements designated in any other manner, it is intended that such elements are not to be interpreted under 35 U.S.C. 112(f).
11943178 | DETAILED DESCRIPTION While various embodiments of the invention have been shown and described herein, it will be obvious to those skilled in the art that such embodiments are provided by way of example only. Numerous variations, changes, and substitutions may occur to those skilled in the art without departing from the invention. It should be understood that various alternatives to the embodiments of the invention described herein may be employed. As utilized herein, terms “component,” “system,” “interface,” “unit” and the like are intended to refer to a computer-related entity, hardware, software (e.g., in execution), and/or firmware. For example, a component can be a processor, a process running on a processor, an object, an executable, a program, a storage device, and/or a computer. By way of illustration, an application running on a server and the server can be a component. One or more components can reside within a process, and a component can be localized on one computer and/or distributed between two or more computers. Further, these components can execute from various computer readable media having various data structures stored thereon. The components can communicate via local and/or remote processes such as in accordance with a signal having one or more data packets (e.g., data from one component interacting with another component in a local system, distributed system, and/or across a network, e.g., the Internet, a local area network, a wide area network, etc. with other systems via the signal). As another example, a component can be an apparatus with specific functionality provided by mechanical parts operated by electric or electronic circuitry; the electric or electronic circuitry can be operated by a software application or a firmware application executed by one or more processors; the one or more processors can be internal or external to the apparatus and can execute at least a part of the software or firmware application. As yet another example, a component can be an apparatus that provides specific functionality through electronic components without mechanical parts; the electronic components can include one or more processors therein to execute software and/or firmware that confer(s), at least in part, the functionality of the electronic components. In some cases, a component can emulate an electronic component via a virtual machine, e.g., within a cloud computing system. Provided herein are methods and systems for providing a multi-channel messaging system. The multi-channel messaging system may comprise providing a digital assistant through multiple communication channels. The digital assistant may be a virtual assistant such as a chatbot. The method and system of the present disclosure may allow a user to have a conversation with a chatbot via existing communication channels. The user may interact with the chatbot in the same manner as establishing a communication using the respective communication channel. This beneficially provides a smooth experience to users. The multiple channels may communicate with the user using different communications protocols, rules, formats, communication interfaces or user interfaces. For example, one channel may be communicating using a text messaging protocol (e.g. SMS), one channel may be communicating using electronic mail (email) over a mobile phone network, and one channel may be communicating using an instant messaging services proprietary communications protocol over a wireless internet network (e.g.
Wi-Fi). The provided systems can support multiple channels and can be integrated with any existing industrial or enterprise systems (e.g., insurance, bank, social media, etc). A method of operating chatbots through multiple communication channels may comprise selecting one or more communication channels. The one or more communication channels may include, but not limited to, a website channel, email channel, text message channel, digital virtual assistant, smart home device such as Alexa®, interactive voice response (IVR) systems, social media channel and messenger APIs (application programming interfaces) such as Facebook channel, Twilio SMS channel, Skype channel, Slack channel, WeChat channel, Telegram channel, Viber channel, Line channel, Microsoft Team channel, Cisco Spark channel, and Amazon Chime channel, and various others. The virtual assistant may be a chatbot. The chatbot may be accessed through a social media ID/profile, an email address, a number, or other contact based on the related communication channel. The website channel may enable a chatbot to interact with a user through a website. The website may be a personal website. The website may be an enterprise/company website. The chatbot may be implemented by way of an existing communication channel. A user may be in communication with the chatbot without a need of a change in existing communication channel. For example, when a user is on a given social media channel (e.g., Facebook), the user may access the chatbot that has a social media profile, via an existing messenger API (e.g., Facebook messenger). The email channel may allow the chatbot to interact with the user through the existing user interface i.e., emails. The email may involve a sender and at least one recipient. The user may be a sender or a recipient. The chatbot may be a sender or a recipient. The email may include the information regarding, but not limited to, a name (or display name) of the sender, an address (e.g., email address) of the sender which can also be a return path address, a name (or display name) of the intended recipient, an address (e.g., email address) of the recipient, a subject (or title) of the email, the content of the email (e.g., including a message and/or attachments), and/or a combination of the above. The text message channel may allow the chatbot to interact with the user through the existing user interface i.e., text messages. The text message may involve a sender and at least one recipient. The user may be a sender or a recipient. The chatbot may be a sender or a recipient. The text messages may comprise alphabetic and numeric characters. The text messages may be sent between two or more users of mobile phones, tablets, desktops/laptops, or other devices. The text messages may be sent over a cellular network. The text messages may be sent via an internet connection. A user may interact with a chatbot via social media channels and messenger APIs. The Facebook channel may enable the chatbot to interact with the user through Facebook Messenger. The Facebook Messenger may provide free voice and text communication. The Twilio SMS may allow the chatbot to interact with the user via text messages or voice applications. The Skype channel may allow the chatbot to interact with the user through a Skype app. The Skype app may allow the chatbot to interact with the user through a video chat, voice call, text, and/or images. The Slack channel may allow the chatbot to interact with the user through a Slack app by real-time messages. 
The WeChat channel may allow the chatbot to interact with the user through a WeChat app by free messages and calls across countries. The Telegram channel may allow the chatbot to interact with the user through a Telegram app by instant messages. The Viber channel may allow the chatbot to interact with the user through a Viber app by instant messages. The Line channel may allow the chatbot to interact with the user through a Line app by messages and/or voice calls worldwide. The Microsoft Team channel may allow the user to connect with the chatbot using the user's Microsoft Team account. The Cisco Spark channel may allow the chatbot to interact with the user through the Cisco Spark app. The Cisco Spark app may enable group chats and calls, screen-sharing, file-sharing, video meetings, and/or white-boarding. The Amazon Chime channel may allow the chatbot to interact with the user through the Amazon Chime app. All of the abovementioned apps may be accessible by the user on mobile systems and/or desktop systems. The mobile systems may comprise, but not limited to, iOS, Android, Windows Phone, and Ubuntu Touch. The desktop systems may comprise, but not limited to, Windows, macOS, and Linux. A chatbot can also be accessed via any suitable conversational channels such as smart home device, voice assistance, home automation system, interactive voice response (IVR) systems and the like. For instance, a chatbot may be activated using a wake-word. To select the one or more channels, the user may need to perform one or more actions. The one or more actions may include opening one or more channels. The user's action may be opening a website, email, text message, Facebook Messenger, Twilio app, Skype app, Slack app, WeChat app, Telegram app, Viber app, Line app, Microsoft Team app, Cisco Spark app, Amazon Chime app, providing a voice command, push a button and the like. If the user's action is opening one or more channels, the one or more channels opened by the user may be selected for the chatbot communication. For instance, if the user's action is opening a website, the website channel may be selected for the user to interact with the chatbot. The one or more actions may include closing one or more channels. The user's action may be closing a website, email, text message, Facebook Messenger, Twilio app, Skype app, Slack app, WeChat app, Telegram app, Viber app, Line app, Microsoft Team app, Cisco Spark app, and Amazon Chime app. If the user's action is closing one or more channels, the one or more channels closed by the user may not be selected for the chatbot communication. For instance, if the user's action is closing a website, the website channel may not be selected for the user to interact with the chatbot. After closing one or more channels, the user may select another channel by his/her action of opening that channel. The one or more channels may host one or more chatbots. The one or more channels may host one or more chatbots simultaneously. If the one or more channels host one or more chatbots simultaneously, the communication between the user and chatbots may be stored in a communication database. In this situation, when the user closes one of the one or more channels, the user may continue his/her communication with the chatbot in the channels that remain open. The one or more channels may host one or more chatbots non-simultaneously. 
If the one or more channels host one or more chatbots non-simultaneously, the communication between the user and chatbots may be stored in a communication database constantly. In this situation, when the user closes a first channel, the communication between the user and the chatbot on the closed channel may be transitioned to a second channel so that the user can continue his/her communication with the chatbot when he/she opens the second channel. For instance, a conversation can be continued and carried over multiple different communication channels, different user interfaces, different types of communications or different servers. Each of the one or more channels may host at least one, two, three, four or more chatbots. This beneficially allows a conversation with a chatbot to be continued on different channels seamlessly using the existing user interfaces. At least one chatbot of the one or more chatbots may have a communication data structure. The communication data structure may control the direction, the content, the time of a communication between the user and the chatbot. The communication data structure may comprise a plurality of communication paths. The plurality of communication paths may navigate the communication between the user and the chatbot through a plurality of units. Different communication paths may enable the user to go through different communications with the chatbot. In each communication path, a plurality of actions may be performed. The plurality of actions may include, but not limited to, comparing the user's input with a plurality of units, selecting one unit based on the comparison between the user's input and the plurality of units, randomly selecting a unit based on a selection algorithm, providing feedback to the user, and waiting for further input from the user. The plurality of actions may be repeated many times before a communication path is exhausted. The communication path may be unidirectional. The communication path may not be unidirectional. If the communication path is not unidirectional, the communication path may be a loop. In some cases, the communication path may remain open while the conversation is transitioned across multiple channels. The communication path may comprise a plurality of units. The plurality of units may be coded with instructions to navigate among different communication paths. The plurality of units may comprise different types of units, including, but not limited to, a feedback unit, input unit, wait unit, communication identity analysis unit, and universal unit. The feedback unit may be decoded to provide a feedback to a user during a communication. The feedback may be in a form of text, HTML, image, video, audio, and avatar animation. The input unit may be a reference input unit or a variation thereof. The reference input may represent what a user may input to the chatbot during a communication. The reference inputs may be pre-defined intents, utterances, questions, answers, requests, information, or demands. The reference inputs may be updated constantly according to one or more algorithms. The one or more algorithms may be machine learning algorithms. The reference inputs may be obtained through machine learning using existing customer data. The existing customer data may be historical voice call logs or chat messages. The reference inputs may be obtained through continued learning during a live communication between the user and the chatbot. 
The reference inputs may be obtained through collecting customer feedback to measure relevance, accuracy, and precision of responses. The input unit may be the user's actual input during his/her communication with the chatbot. The wait unit may be decoded to stop and wait for the user to enter another input. The communication identity analysis unit may be decoded to produce a communication identity of the user based on one or more inputs from the user and select a unit in the communication data structure based on the communication identity. The universal unit may be decoded to match any input during a communication when a match between the user's input and a reference input is not found. The method of operating chatbots through multiple channels may further comprise receiving an input from a user through one of the one or more channels. The input may be in the form of, but not limited to, text, voice, image, and video. The input may be any type of information of the user, including, but not limited to, the first name of the user, the last name of the user, the birthday of the user, the phone number of the user, the email address of the user, the social security number of the user, the driver's license number of the user, the password of the user, the credit card information of the user, the address of the user, the zip code of the user, and the answers to security questions of the user. The input may be a request, a question, an answer, a demand, or an instruction from the user. The method of operating chatbots through multiple channels may also comprise comparing the input with the plurality of units in the communication data structure. The comparison between the input and the plurality of the units may comprise checking the received input with the plurality of units until a match is found. The comparison between the input and the plurality of the units may comprise translating the received input into a format using one or more dictionaries in the communication database, and checking the translated input with the plurality of units in the communication data structure until a match is found. The method of operating chatbots through multiple channels may comprise selecting a unit in the communication data structure based on the comparison between the input and the plurality of units in the communication data structure. If the input matches one of the plurality of units in the communication data structure, the matched unit may be selected. If the input does not match any of the plurality of units in the communication data structure, a selection algorithm may be used to select a unit. The selection algorithm may conduct an approximate matching. The approximate matching may comprise determining one or more units in the communication data structure that may approximately match the received input, providing the user with one or more feedbacks associated with one or more approximately matched units, and if the user chooses one of the feedbacks, selecting the unit associated with the chosen feedback. The received input may be recorded as a variation to a reference unit. The selection algorithm may conduct a random selection. The random selection may be conducted if the approximate matching is not successful. The random selection may determine a user's expected feedback based on the user's historical communication data and/or other users' historical communication data. The selection algorithm may be a machine learning algorithm. 
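As a non-limiting illustration of the unit types and the matching steps described above, the following TypeScript sketch checks a received input against reference inputs (after translation using a dictionary), falls back to an approximate word-overlap match, and finally to a universal unit. The identifiers ("Unit", "selectUnit", "normalize") and the overlap heuristic are illustrative assumptions; an actual selection algorithm may be a machine learning algorithm as noted above.

    // Hypothetical unit and matching sketch for the communication data structure.
    type UnitKind = "feedback" | "input" | "wait" | "identityAnalysis" | "universal";

    interface Unit {
      id: string;
      kind: UnitKind;
      referenceInputs?: string[];  // pre-defined intents/utterances for input units
      feedbackText?: string;       // text returned to the user for feedback units
      next?: string[];             // ids of units reachable along the communication path
    }

    // A small dictionary used to translate/normalize the received input before matching.
    const dictionary: Record<string, string> = { "pls": "please", "acct": "account" };

    function normalize(input: string): string {
      return input.toLowerCase().split(/\s+/)
        .map(w => dictionary[w] ?? w).join(" ");
    }

    // Check the received input against units until a match is found; fall back to an
    // approximate match (shared words) and finally to a universal unit if present.
    function selectUnit(input: string, units: Unit[]): Unit | undefined {
      const text = normalize(input);
      const inputUnits = units.filter(u => u.kind === "input" && u.referenceInputs);
      // 1. Exact match against a reference input.
      for (const u of inputUnits) {
        if (u.referenceInputs!.some(r => normalize(r) === text)) return u;
      }
      // 2. Approximate match: the unit whose reference input shares the most words.
      const words = new Set(text.split(" "));
      let best: Unit | undefined;
      let bestOverlap = 0;
      for (const u of inputUnits) {
        for (const r of u.referenceInputs!) {
          const overlap = normalize(r).split(" ").filter(w => words.has(w)).length;
          if (overlap > bestOverlap) { best = u; bestOverlap = overlap; }
        }
      }
      if (best) return best;
      // 3. The universal unit matches any input when nothing else does.
      return units.find(u => u.kind === "universal");
    }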
The method of operating chatbots through multiple channels may also comprise processing the selected unit to generate instructions coded in the unit. The instructions may comprise performing a plurality of activities. The plurality of activities may include, but not limited to, receiving one or more inputs from a user, checking one or more units in the communication data structure, determining whether the input matches one unit in the communication data structure, selecting a communication path if there is a match between the input and one unit in the communication data structure, randomly selecting a unit/communication path according to a selection algorithm if no match is found, and providing a feedback to the user based on the selected communication path. The method of operating chatbots through multiple channels may also comprise selecting a communication path based on the instructions coded in the selected unit. The communication path may comprise a plurality of units. The communication path may be an artificial intelligent communication path. The communication path may be a real human communication path. The communication path may navigate through a plurality of units to compare the user's input with a plurality of units, select one unit based on the comparison between the user's input and the plurality of units, randomly select a unit based on a selection algorithm, provide feedback to the user, and wait for further input from the user. The communication path may be ended when the user choose to close the channels hosting the chatbot communication. The communication path may be unidirectional. The communication path may be a loop. In some cases, the communication path may be optimized to seamlessly navigate users through a deviation and then come back to the original conversation without custom coding. The method of operating chatbots through multiple channels may also comprise providing a feedback to the user based on the selected communication path. The feedback may be in the form of, but not limited to, text, voice, image, and video. The feedback may be text-based, HTML, image, video, audio, and avatar animation such as smiling. The feedback may provide clickable features for the user to add code, images, video, audio, and animation. The feedback may be a request, a question, an answer, a demand, or an instruction to the user. The feedback may be any type of information related to the user, including, but not limited to, the answers to the user's general questions, the balance of the credit card of the user, the way to make a payment by the user, the phone number requested by the user, the local time, the addresses requested by the user, the email addresses requested by the user, the local weather, the latest news, and the bills of the user. In another aspect, a method of operating chatbots through multiple channels may comprise selecting one or more channels, wherein the one or more channels host one or more chatbots, wherein at least one of the one or more chatbots has a communication data structure comprising a plurality of communication paths, wherein each communication path comprises a plurality of units. The descriptions and explanations of the one or more channels, the communication data structure, the plurality of communication path, and the plurality of units disclosed herein may be similar to the descriptions and explanations in the previous paragraphs. 
The method of operating chatbots through multiple channels may further comprise receiving one or more inputs from a user through the one or more channels. The input may be in the form of, but not limited to, text, voice, image, and video. The input may include or relate to any type of information of the user, including, but not limited to, the first name of the user, the last name of the user, the birthday of the user, the nationality of the user, the race of the user, the gender of the user, the hobbies of the user, the phone number of the user, the email address of the user, the social security number of the user, the driver's license number of the user, the password of the user, the credit card information of the user, the address of the user, the zip code of the user, and the answers to security questions of the user. The input may be a request, a question, an answer, a demand, or an instruction from the user. The one or more inputs of the user may be used to obtain a communication identity of the user. The input may be processed by any suitable algorithms or methods useful for data capturing, aggregation, data cleaning, normalization, voice processing, information extraction, intent analysis or various other analysis purposes. Such algorithms or methods may include, but not limit to, statistical modeling, linguistic processing, natural language processing techniques, pattern matching, machine learning, trend analysis, and logical queries on the data and the like. The method of operating chatbots through multiple channels may further comprise producing a communication identity of the user based on one or more inputs from the user. The communication identity of the user can be extracted using suitable algorithms or techniques as described above. The communication identity of the user may comprise a plurality of identity elements. The plurality of identity elements may correlate to the user's inputs. The user's inputs may be associated to a plurality of units in the communication data structure. The plurality of units may be processed to yield instructions to update the plurality of identity elements. The plurality of identity elements may enable the chatbot to better personalize a communication with the user according to the communication identity of the user. The communication identity of the user may represent whether the user is, a credit customer, an enterprise/company, or an employee of the enterprise/company. The communication identity may be created by updating the plurality of identity elements of the user when a unit coded with instructions to update the identity elements is processed. The plurality of identity elements may comprise, but not limited to, the content of the communication, the user's interest, the user's gender, the user's occupation, the user's education, the user's nationality, the user's race, the user's hobbies, the user's personality, the user's demographic, the user's input, and an activity irrelevant to the communication. One or more algorithms may be used to update the plurality of identity elements and/or producing the communication identity of the user. To produce the communication identity of the user, the one or more algorithms may be used to analyze a set of identity elements of a user, and generate the communication identity of the user based on the analysis. The one or more algorithms may comprise machine learning algorithms, natural language processing algorithms or other information extraction algorithms or methods. 
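By way of a non-limiting sketch, the following TypeScript example shows identity elements being updated as inputs are processed and a coarse communication identity being produced from them. The element fields, the update rule, and the identifiers ("IdentityElements", "produceCommunicationIdentity", "noteInterestInRouters") are illustrative assumptions only.

    // Hypothetical identity elements accumulated from a user's inputs.
    interface IdentityElements {
      interests: string[];
      occupation?: string;
      customerType?: "credit" | "enterprise" | "employee";
      recentInputs: string[];
    }

    // A unit may be coded with instructions to update identity elements as it is processed.
    type IdentityUpdate = (e: IdentityElements, input: string) => IdentityElements;

    const noteInterestInRouters: IdentityUpdate = (e, input) =>
      /router/i.test(input)
        ? { ...e, interests: [...new Set([...e.interests, "routers"])] }
        : e;

    // Produce a coarse communication identity from the accumulated elements.
    function produceCommunicationIdentity(e: IdentityElements): string {
      if (e.customerType === "enterprise") return "enterprise";
      if (e.interests.includes("routers")) return "router-customer";
      return "general";
    }

    // Example: elements are updated as each input is received, then the identity is produced.
    let elements: IdentityElements = { interests: [], recentInputs: [] };
    for (const input of ["How do I configure my router?", "I need the firmware manual"]) {
      elements = noteInterestInRouters(
        { ...elements, recentInputs: [...elements.recentInputs, input] }, input);
    }
    const identity = produceCommunicationIdentity(elements);  // "router-customer"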
The communication identity may be obtained through machine learning using existing customer data. The existing customer data may be historical voice call logs or chat messages. The communication identity may be obtained through continued learning during a live communication between the user and the chatbot. The communication identity may be obtained through collecting customer feedback to measure relevance, accuracy, and precision of responses. The method of operating chatbots through multiple channels may further comprise selecting a unit in the communication data structure based on the communication identity. If the communication identity obtained based on user's inputs matches one of communication identities stored in the communication data structure, a unit associated with the matched communication identity may be selected. If the communication identity obtained based on user's inputs does not match any one of communication identities stored in the communication data structure, a selection algorithm may be used to select a unit. The selection algorithm may conduct an approximate matching. The approximate matching may comprise determining one or more communication identities stored in the communication data structure that may approximately match to the communication identity obtained based on user's inputs, providing the user with one or more feedbacks associated with one or more approximately matched communication identities, and if the user chooses one of the feedbacks, selecting the unit associated with the chosen communication identity. The chosen communication identity may be recorded as a variation to an existing communication identity stored in the communication data structure. The selection algorithm may conduct a random selection. The random selection may be conducted if the approximate matching is not successful. The random selection may determine a user's expected feedback based on the user's historical communication data and/or other users' historical communication data. The selection algorithm may be a machine learning algorithm. The method of operating chatbots through multiple channels may also comprise processing the selected unit to generate instructions coded in the unit. The instructions may comprise performing a plurality of activities. The plurality of activities may include, but not limited to, receiving one or more inputs from a user, checking one or more units in the communication data structure, determining whether a communication identity obtained based on user's inputs matches one communication identity of a particular user stored in communication database, selecting a communication path if there is a match between a communication identity obtained based on user's inputs and one communication identity of a particular user stored in the communication database, randomly selecting a unit according to a selection algorithm if no match is found, and providing a feedback to the user based on the selected communication path. The unit may further perform an assessment on one or more communication identities and an external source and report the result of the assessment. The method of operating chatbots through multiple channels may further comprise selecting a communication path based on the instructions coded in the selected unit. The communication path may comprise a plurality of units. The communication path may be an artificial intelligent communication path. The communication path may be a real human communication path. 
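Returning to the unit selection described above (an exact match against stored communication identities, an approximate match offered back to the user, and a random selection as a last resort), a minimal sketch follows. All names are hypothetical, and the string-similarity measure from the standard difflib module is only a stand-in for whatever approximate matching or machine learning selection algorithm an embodiment uses.

import difflib
import random

def select_unit(identity, units_by_identity):
    """Select a unit for an obtained communication identity (illustrative only)."""
    # Exact match against identities stored in the communication data structure.
    if identity in units_by_identity:
        return units_by_identity[identity]
    # Approximate matching: find the closest stored identities as candidates.
    candidates = difflib.get_close_matches(identity, units_by_identity.keys(), n=3, cutoff=0.6)
    if candidates:
        # In a live dialogue the user could confirm one candidate, and the chosen
        # phrasing could be recorded as a variation of the stored identity.
        return units_by_identity[candidates[0]]
    # Fallback: random selection according to a (trivial) selection algorithm.
    return random.choice(list(units_by_identity.values()))

units_by_identity = {
    "primary named insured": "ask_for_policy_number",
    "non-primary named insured": "ask_for_insured_details",
}
print(select_unit("primary insured", units_by_identity))  # approximate match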
The communication path may navigate through a plurality of units to compare the user's input with a plurality of units, select one unit based on the comparison between a communication identity obtained based on user's inputs and one communication identity of a particular user stored in the communication database, randomly select a unit based on a selection algorithm, provide feedback to the user, and wait for further input from the user. The communication path may be ended when the user chooses to close the channels hosting the chatbot communication. The communication path may be unidirectional. The communication path may be a loop. The method of operating chatbots through multiple channels may further comprise providing a feedback to the user based on a selected communication path. The feedback may be in the form of, but not limited to, text, voice, image, and video. The feedback may be text-based, HTML, image, video, audio, and avatar animation such as smiling. The feedback may provide clickable features for the user to add code, images, video, audio, and animation. The feedback may be a request, a question, an answer, a demand, or an instruction to the user. The feedback may be any type of information related to the user, including, but not limited to, the answers to the user's general questions, the balance of the credit card of the user, the way to make a payment by the user, the phone number requested by the user, the local time, the addresses requested by the user, the email addresses requested by the user, the local weather, the latest news, and the bills of the user. The selected communication path may be a customized communication path based on the user's communication identity. The customized communication path may comprise at least one of, but not limited to, customized content, customized layout, customized presentation, customized format, customized robot action and customized source of information. In another aspect, a chatbot system may comprise a communication database. The communication database may comprise a communication data structure. The communication database may further comprise a communication component, a dictionary component, a training component, and a universal component. The communication data structure may comprise a plurality of communication paths. The communication path may comprise a plurality of units. The communication database may store one or more chatbots. The communication database may store any information, including, but not limited to, the user's communication with the chatbot, the communication identities, the identity elements, the reference units, information obtained during the production of a chatbot, and the instructions coded in the units. The chatbot system may control the data transmission between the communication database and an application server. The application server may transmit a user's input to the communication database. The application server may display a feedback provided by the communication database to the user. The application server may host a plurality of communication channels, including, but not limited to, a website, email, text message, digital virtual assistant, smart home device, interactive voice response (IVR) systems, Facebook Messenger, Twilio app, Skype app, Slack app, WeChat app, Telegram app, Viber app, Line app, Microsoft Teams app, Cisco Spark app, Amazon Chime app and various other conversational channels.
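As one simple illustration of the customized communication path mentioned above, a path variant (with its own content and format) might be chosen per communication identity, with a default variant as fallback. The identities, greetings, and formats below are hypothetical examples, not claimed features.

def customize_path(identity, paths):
    """Pick a path variant for a communication identity, with a default fallback."""
    return paths.get(identity, paths["default"])

# Hypothetical path variants differing in customized content and format.
paths = {
    "primary named insured": {"greeting": "Welcome back. Would you like to check a claim or make a payment?", "format": "text"},
    "enterprise": {"greeting": "Hello. I can help with commercial policies and endorsements.", "format": "html"},
    "default": {"greeting": "Hello! What can I do for you?", "format": "text"},
}
print(customize_path("enterprise", paths)["greeting"])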
In another aspect, a method of building a multi-channel chatbot system may comprise receiving one or more training datasets. The one or more training datasets may be stored in the communication database. The one or more training datasets may comprise, but are not limited to, a group of questions and responses, variations of a group of questions and answers, information specific to an industry, a group of identity elements, a group of communication identities, and a communication data structure associated with another chatbot. The method of building a multi-channel chatbot system may further comprise building a communication data structure according to the one or more training datasets. Machine learning techniques may be used to build a communication data structure. The machine learning techniques may be deep neural networks (DNNs). The DNNs can include convolutional neural networks (CNNs) and recurrent neural networks (RNNs). The communication data structure may comprise a plurality of communication paths. The communication path may comprise a plurality of units. In an embodiment, the methods and systems in the present disclosure may be used by insurance companies. In this situation, the user may be a primary named insured of an insurance policy. An input to the system may include, but is not limited to, whether the user is the primary named insured, first name and last name of the user/primary named insured, birthday of the user/primary named insured, social security number of the user/primary named insured, description of a loss, risk address of the loss, risk zip code of the loss, home address of the user/primary named insured, zip code of the user/primary named insured, payment information of the user/primary named insured, and a policy number of the user/primary named insured. The communication identity may be a primary named insured or non-primary named insured. The feedback may include, but is not limited to, an answer to a question asked by the user/primary named insured, a question to the user/primary named insured, and an instruction to the user/primary named insured. Reference will now be made to the figures, wherein like numerals refer to like parts throughout. It will be appreciated that the figures and features therein are not necessarily drawn to scale. FIG. 1 shows an example of a block diagram of a multi-channel messaging platform in which a digital assistant system of the present disclosure may be implemented. The digital assistant system may be a chatbot system. The network environment may comprise one or more user devices 120, a digital assistant server 100, an application server 110, and a database 101. Each of the components 100, 110, 120, 101 may be operatively connected to one another via the network 130 or any type of communication links that allows transmission of data from one component to another. The application server 110 may host one or more channels that support the chatbot. The chatbot server 100 may host a virtual assistant such as a chatbot which is integrated with the one or more channels. A user may interact with the chatbot via the one or more communication channels using one or more user devices 120. The database 101 may be a data warehouse storing data related to a service provided through the chatbot, associated user data or other information. Although FIG. 1 illustrates a single application server and a single chatbot server, the disclosure is not limited thereto.
In some embodiments, multiple application servers may be included in the network and each application server may host a communication channel. A chatbot system may be implemented anywhere in the network environment, and/or outside of the network environment. In some embodiments, the chatbot system may be implemented on a server 100 in the network. Alternatively, the chatbot system may be implemented in a distributed computing environment, such as the cloud. In other embodiments, the chatbot may be implemented on the user device 120. In some further embodiments, a plurality of chatbot systems may be implemented on one or more servers, user devices, and/or distributed computing environments. Alternatively, the chatbot system may be implemented in one or more databases. The chatbot system may be implemented using software, hardware, or a combination of software and hardware in one or more of the above-mentioned components within the network environment. In some embodiments, the chatbot system may comprise a dialogue management module, a machine learning module and an interface module. The dialogue management module may be configured to manage one or more communication flows. The dialogue management module may comprise multiple flows and may provide logistics based on the context and intent of a user. The flows may be well-crafted to provide convenient and accurate service as requested by a user. In some cases, the dialogue management module may be configured to update a flow/path based on collected data. For instance, the dialogue management module may collect data of different users across different channels during multiple flows/paths, and an improved flow/path may be generated based on the actions taken by the users. The machine learning module may comprise the use of natural language processing (NLP), artificial intelligence (AI), or machine learning technologies to extract an intent and/or entity from a user input. The machine learning module may be configured to train a model to understand a user's intent specific to a service (e.g., insurance). In some cases, the model may be refined and updated as real-time data are collected. The interface module may be configured to interface with the multiple channels and/or the database. In some cases, the interface module may be configured to synchronize communication flows across multiple channels associated with a single user. The interface module may be configured to access a database for retrieving information involved in a flow/path. For instance, when the chatbot provides an insurance-related query service, the chatbot may access an insurance policy library and/or user profiles to provide an answer to the user. In another instance, the chatbot may access the database in order to retrieve an AI model or a flow path. In other instances, information communicated in a conversation with a chatbot on a first channel may be stored in the database, and such information may be retrieved to continue the same conversation carried on a second channel. A server may include a web server, an enterprise server, or any other type of computer server, and can be computer programmed to accept requests (e.g., HTTP, or other protocols that can initiate data transmission) from a computing device (e.g., user device and/or wearable device) and to serve the computing device with requested data. In addition, a server can be a broadcasting facility, such as free-to-air, cable, satellite, and other broadcasting facilities, for distributing data.
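For illustration only, the cooperation of the dialogue management module, the machine learning module, and the interface module described above might be sketched as follows. The class names are hypothetical, and the keyword rules in the machine learning module merely stand in for the NLP/AI model that an actual embodiment would train.

class MachineLearningModule:
    """Extracts an intent from a user input (keyword rules stand in for NLP/AI)."""
    def extract_intent(self, text):
        lowered = text.lower()
        if "payment" in lowered:
            return "make_payment"
        if "claim" in lowered:
            return "file_claim"
        return "unknown"

class DialogueManagementModule:
    """Maps an extracted intent to the next step of a communication flow."""
    flows = {
        "make_payment": "Do you have your policy number handy?",
        "file_claim": "Are you the primary insured on the policy?",
        "unknown": "I'm sorry, could you rephrase that?",
    }
    def next_step(self, intent):
        return self.flows[intent]

class InterfaceModule:
    """Relays messages between a communication channel and the other modules."""
    def __init__(self):
        self.nlu = MachineLearningModule()
        self.dialogue = DialogueManagementModule()
    def handle(self, user_input):
        return self.dialogue.next_step(self.nlu.extract_intent(user_input))

print(InterfaceModule().handle("I'd like to make a payment"))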
A server may also be a server in a data network (e.g., a cloud computing network). A server may include various computing components, such as one or more processors, one or more memory devices storing software instructions executed by the processor(s), and data. A server can have one or more processors and at least one memory for storing program instructions. The processor(s) can be a single or multiple microprocessors, field programmable gate arrays (FPGAs), or digital signal processors (DSPs) capable of executing particular sets of instructions. Computer-readable instructions can be stored on a tangible non-transitory computer-readable medium, such as a flexible disk, a hard disk, a CD-ROM (compact disk-read only memory), an MO (magneto-optical), a DVD-ROM (digital versatile disk-read only memory), a DVD RAM (digital versatile disk-random access memory), or a semiconductor memory. Alternatively, the methods can be implemented in hardware components or combinations of hardware and software such as, for example, ASICs, special purpose computers, or general purpose computers. The chatbot may be integrated into any communication channels accessed through user devices 120. User devices may be computing devices configured to perform one or more operations consistent with the disclosed embodiments. User devices may include mobile systems or desktop systems. Examples of user devices may include mobile phone, tablet, smartwatch, digital camera, personal navigation device, personal digital assistants (PDAs), laptop or notebook computers, desktop computers, media content players, television sets, video gaming station/system, virtual reality systems, augmented reality systems, microphones, smart home device, a navigation system in a vehicle (e.g. for changing the destination, turning on the headlights), a control panel on a machine (e.g. for turning on the machine, controlling the power of the engine), an artificial intelligence controller of a building (e.g. for opening a door, turning on a light, changing the temperature of the room), or any electronic device configured to enable the user to access the chatbot via a communication channel. The user device may or may not be a handheld object. The user device may or may not be portable. In some cases, the user device may be carried by a human user. In some cases, the user device may be located remotely from a human user, and the user can control the user device using wireless and/or wired communications. The user device may include a communication unit, which may permit communications with one or more other components in the network. In some instances, the communication unit may include a single communication module, or multiple communication modules. In some instances, the user device may be capable of interacting with one or more components in the network environment using a single communication link or multiple different types of communication links. The user devices 120 may interact with the chatbot system 100 via the network 130. A user device may include one or more processors that are capable of executing non-transitory computer readable media that may provide instructions for one or more operations consistent with the disclosed embodiments. The user device may include one or more memory storage devices comprising non-transitory computer readable media including code, logic, or instructions for performing the one or more operations.
In some embodiments, users may utilize the user devices120to interact with the chatbot system100by way of one or more software applications (i.e., client software) running on and/or accessed by the user devices, wherein the user devices120and the application server/chatbot server may form a client-server relationship. For example, the user devices120may run existing mobile communication applications (e.g., text/SMS) through which the user may establish a communication channel with the chatbot. The user may utilize one or more applications with the integrated chatbot to access insurance related information. In some embodiments, the client software (i.e., software applications installed on the user devices120) may be available either as downloadable mobile applications for various types of mobile devices. Alternatively, the client software can be implemented in a combination of one or more programming languages and markup languages for execution by various web browsers. For example, the client software can be executed in web browsers that support JavaScript and HTML rendering, such as Chrome, Mozilla Firefox, Internet Explorer, Safari, and any other compatible web browsers. The various embodiments of client software applications may be compiled for various devices, across multiple platforms, and may be optimized for their respective native platforms. User device may include a display. The display may be a screen. The display may or may not be a touchscreen. The display may be a light-emitting diode (LED) screen, OLED screen, liquid crystal display (LCD) screen, plasma screen, or any other type of screen. The display may be configured to show a user interface (UI) or a graphical user interface (GUI) rendered through an application (e.g., via an application programming interface (API) executed on the user device). The GUI may show graphical elements that permit a user to communicate with a chatbot within the GUI. The user device may also be configured to display webpages and/or websites on the Internet. One or more of the webpages/websites may be hosted by the application server or a third party server. User devices may be associated with one or more users. In some embodiments, a user may be associated with a unique user device. Alternatively, a user may be associated with a plurality of user devices. In an example, a user as described herein may refer to an individual or a group of individuals who are seeking insurance related information through the chatbot. For example, the user may be policy holders who would like to know the due date for the next payment, the status of the current claim, or report a loss; the user may be insurance agents who seek for live underwriting assistance, commission or guidance on creating an endorsement; the user may be “c level” executives who would like to know the written premium, loss ratio, or the amount of reported claims. The network130may be a communication pathway between the application server, the user devices120, and other components of the network. The network130may comprise any combination of local area and/or wide area networks using both wireless and/or wired communication systems. For example, the network130may include the Internet, as well as mobile telephone networks. In one embodiment, the network130uses standard communications technologies and/or protocols. 
Hence, the network 130 may include links using technologies such as Ethernet, 802.11, worldwide interoperability for microwave access (WiMAX), 2G/3G/4G or Long Term Evolution (LTE) mobile communications protocols, Infra-Red (IR) communication technologies, and/or Wi-Fi, and may be wireless, wired, asynchronous transfer mode (ATM), InfiniBand, PCI Express Advanced Switching, or a combination thereof. Other networking protocols used on the network 130 can include multiprotocol label switching (MPLS), the transmission control protocol/Internet protocol (TCP/IP), the User Datagram Protocol (UDP), the hypertext transport protocol (HTTP), the simple mail transfer protocol (SMTP), the file transfer protocol (FTP), and the like. The data exchanged over the network can be represented using technologies and/or formats including image data in binary form (e.g., Portable Network Graphics (PNG)), the hypertext markup language (HTML), the extensible markup language (XML), etc. In addition, all or some of the links can be encrypted using conventional encryption technologies such as secure sockets layers (SSL), transport layer security (TLS), Internet Protocol security (IPsec), etc. In another embodiment, the entities on the network can use custom and/or dedicated data communications technologies instead of, or in addition to, the ones described above. The network may be wireless, wired, or a combination thereof. The communication database 101 may comprise a plurality of components comprising a communication data structure 103, a communication component 104, a dictionary component 105, a universal component 106, and a training component 107. Information stored in one component may be connected with information stored in other components. One piece of information may be stored in multiple components. The communication database 101 may be in communication with the user devices, application server and/or chatbot server over a network 130. The communication database 101 may store all chatbots developed by the chatbot server 109. The communication database 101 may be configured to leverage analytics for responses to sophisticated questions or complicated conversations. The communication database 101 may also store knowledge learned during the development of the chatbots. When new chatbots are created, information of the new chatbots may be stored in the dictionary component 105. The dictionary component may be operatively coupled with the communication component 104 to update variations of reference inputs. The communication component 104 may comprise many variations of reference inputs to match a user's inputs during a communication between the user and the chatbot. For example, a way to express a greeting may comprise variations of "Hello," "Hi," and "How are you" in the communication component 104. If a user's input is "Hello there," a match would be made with the "Hello" variation in the communication component 104. The phrase "Hello there" may be stored in the communication component 104 and updated in the dictionary component 105 as one of the reference inputs. The dictionary component 105 may be accessible by the system when converting a user's input into recognizable variations stored in the communication component. The user's input may comprise misspellings, slang, or foreign-language terms. The dictionary component 105 may be responsible for connecting a user's input to variations in the communication component 104. In some embodiments, the dictionary component 105 may comprise a learned model generated using artificial intelligence techniques.
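Using the greeting example above, the variation matching and updating performed by the dictionary and communication components might be sketched as follows. The function names are hypothetical, and plain substring matching stands in for the learned model or other artificial intelligence techniques an embodiment may use.

def match_variation(user_input, variations):
    """Match a raw input to a known variation of a reference input, if any."""
    lowered = user_input.lower()
    for variation in variations:
        if variation.lower() in lowered:
            return variation
    return None

def learn_variation(user_input, variations):
    """Record an unmatched phrasing so it can serve as a reference input later."""
    if match_variation(user_input, variations) and user_input not in variations:
        variations.append(user_input)
    return variations

greeting_variations = ["Hello", "Hi", "How are you"]
print(match_variation("Hello there", greeting_variations))       # prints "Hello"
greeting_variations = learn_variation("Hello there", greeting_variations)
print(greeting_variations)  # "Hello there" stored as an additional variation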
The training component107may store templates of feedbacks to the user. The training component107may store one or more training datasets. The one or more training datasets may comprise, but not limited to, a group of questions and responses, variations of a group of questions and answers, information specific to an industry, a group of identity elements, a group of communication identities, and a communication data structure associated with another chatbot. The universal component106may store one or more universal units. The universal units may be selected based on a selection algorithm. The universal units may be selected based on a random selection algorithm. The universal units may be selected when the user's inputs do not match any of the reference units or variations thereof. The one or more databases may utilize any suitable database techniques. For instance, structured query language (SQL) or “NoSQL” database may be utilized for storing the analytics as described above. Some of the databases may be implemented using various standard data-structures, such as an array, hash, (linked) list, struct, structured text file (e.g., XML), table, JSON, NOSQL and/or the like. Such data-structures may be stored in memory and/or in (structured) files. In another alternative, an object-oriented database may be used. Object databases can include a number of object collections that are grouped and/or linked together by common attributes; they may be related to other object collections by some common attributes. Object-oriented databases perform similarly to relational databases with the exception that objects are not just pieces of data but may have other types of functionality encapsulated within a given object. If the database of the present invention is implemented as a data-structure, the use of the database of the present invention may be integrated into another component such as the component of the present invention. Also, the database may be implemented as a mix of data structures, objects, and relational structures. Databases may be consolidated and/or distributed in variations through standard data processing techniques. Portions of databases, e.g., tables, may be exported and/or imported and thus decentralized and/or integrated. In some embodiments, the chatbot system may construct the database in order to deliver the data to the users efficiently. For example, the chatbot system may provide customized algorithms to extract, transform, and load (ETL) the data. In some embodiments, the chatbot system may construct the databases using proprietary database architecture or data structures to provide an efficient database model that is adapted to large scale databases, is easily scalable, is efficient in query and data retrieval, or has reduced memory requirements in comparison to using other data structures. The application server110may provide one or more channels for the user to interact with the chatbots. The one or more channels may comprise social media channels such as Facebook®, LinkedIn®, Twitter®, YouTube, Pinterest®, Google+ or Instagram®, weblogs and the like, mobile apps and software such as Hangouts®, WhatsApp®, Snapchat®, Line®, Wechat®, Skype®, emails, smart home devices, interactive voice response (IVR) systems, enterprise website, SMS apps (e.g., iMessage®), messenger APIs and various others. In the illustrated example, the one or more channels include a company website112, a Facebook Messenger113, a smart home device (Alexa®)114, and emails115. 
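As a small illustration of how such a communication data structure might be persisted in one of the structured text formats mentioned above (JSON in this example), the following sketch writes and reads back a two-unit path using the standard json module. The path and unit contents are hypothetical and are not drawn from any actual training dataset.

import json

# A tiny communication data structure: one path made of two units.
communication_data_structure = {
    "paths": {
        "payment_due_date": {
            "units": [
                {"reference_inputs": ["payment due"], "feedback": "Do you have your policy number handy?"},
                {"reference_inputs": ["policy number"], "feedback": "What is the ZIP code of the property address?"},
            ]
        }
    }
}

with open("communication_data_structure.json", "w") as f:
    json.dump(communication_data_structure, f, indent=2)

with open("communication_data_structure.json") as f:
    loaded = json.load(f)
print(len(loaded["paths"]["payment_due_date"]["units"]))  # prints 2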
The methods and systems provided herein can be used by any business entities, enterprises, organizations, industries or be utilized in any applications. For example, the methods and systems provided herein may be used by insurance companies. FIGS. 2A-3D show examples of flows or communication paths in various situations associated with insurance service. These communication paths may be well-crafted to provide a smooth user experience. For example, when a user does not have the requested information handy (e.g., claim number), the chatbot may navigate the communication path by offering an alternative prompt or calling a fallback prompt to facilitate customer interactions. For instance, if the chatbot's prompt for a customer's information, such as a claim number, does not result in a valid response, the chatbot may provide an alternative prompt requesting other information. In another example, the communication path may be optimized to seamlessly navigate users through a deviation and then come back to the original conversation. FIG. 2A shows an example of a flow chart of a user's communication with a chatbot to check a payment due date when the chatbot is used by insurance companies. At the beginning of the communication, the chatbot may ask the user whether he/she knows the policy number 201. If the user knows the policy number, the chatbot may ask the user to provide the policy number 202. If the user does not know the policy number, the chatbot may ask the user for alternative information such as the policy owner type 203. The policy owner type may be a business or an individual. If the policy owner type is a business, the chatbot may ask the user about the business name 204. If the business name provided by the user matches any reference unit in the communication data structure, the chatbot may generate the policy number for the user. If the policy owner type is an individual, the chatbot will ask the user about his/her first name, last name, and zip code 205. The process of asking the first name, last name, and zip code may be repeated many times until a match is found in the communication data structure. If the first name, last name, and zip code match any reference unit in the communication data structure, the chatbot may generate the policy number for the user. After the policy number is generated, the chatbot may ask the user to provide the risk zip code and zip code 206. The process of asking the zip code may be repeated until a match is found. After matching the zip code, the chatbot may provide a feedback to the user. In the illustrated example, the feedback is the due date of the user's payment. The flow chart in FIG. 2A may be divided into different categories: the high priority category 207, the medium priority category 208, and the low priority category 209. FIG. 2B shows an example of a flow chart of a user's communication with a chatbot to make a payment if the chatbot is used by insurance companies. The light color square 211 may represent a user's input. The dark color square 212 may represent a chatbot's feedback. At the beginning of the communication, the chatbot may check whether the policy number is in context 213. If the policy number is in context, the chatbot may obtain the policy number and then check the risk zip code and zip code 214. The chatbot may then check whether the zip code is in context. If the zip code is in context, the chatbot may obtain an amount due 215 of the user's policy. If the zip code is not in the context, the chatbot may ask the user to enter the zip code.
The process of entering the zip code may be repeated until a match is found. If the risk zip code equals the zip code, the chatbot may obtain the amount due 215 of the user's policy. After obtaining the amount due 215, the chatbot may then obtain the information of the due date 216. If the amount due is zero, the chatbot may ask the user whether the user wants to overpay. If the user does not want to overpay, the communication may be terminated. If the user wants to overpay, the chatbot may direct the user to a payment process 217. If the amount due is larger than zero, the chatbot may ask the user whether the user wants to pay the full amount due. According to the user's answer, the chatbot may direct the user to different payment processes 217. At the beginning of the communication, if the policy number is not in context, the chatbot may ask the user whether the user knows the policy number. If the user knows the policy number, the chatbot may ask the user to provide the policy number. The process of providing the policy number may be repeated many times until a match is found. The chatbot may then check the risk zip code. The chatbot may check whether the zip code is in context. If the zip code is in context, the chatbot may obtain the amount due 215 of the user's policy. If the zip code is not in the context, the chatbot may ask the user to enter the zip code. The process of entering the zip code may be repeated until a match is found. If the risk zip code equals the zip code, then the chatbot may obtain the amount due 215 of the user's policy. After obtaining the amount due, the chatbot may obtain the information about the due date 216. If the amount due is zero, the chatbot may ask the user whether the user wants to overpay. If the user does not want to overpay, the communication may be terminated. If the user wants to overpay, the chatbot may direct the user to a payment process 217. If the user does not know the policy number, the chatbot may ask the user's first name, last name, and zip code. The process of asking the user's first name, last name, and zip code may be repeated many times until a match is found in the communication data structure. If the name and the zip code match any unit in the communication data structure, the chatbot may obtain the policy number of the user. After the policy number is obtained, the chatbot may check the risk zip code, the amount due, and the due date. The chatbot may then ask the user whether the user wants to pay the full amount due. According to the user's answer, the chatbot may direct the user to different payment processes 217. During the payment processes 217, if the user wants to pay the full amount due, the chatbot may ask the user which payment method the user wants to choose. If the user does not want to pay the full amount due, the chatbot may ask the user to provide an amount that the user wants to pay. The amount the user wants to pay may have to be above a certain threshold in order to proceed to the payment method. The payment method may include a credit card method and an electronic transfer method. The process of making the payment may be repeated many times until all the payment information is correct and a payment is received. FIG. 2C shows an example of a flow chart of a user's communication with a chatbot to file a claim if the chatbot is used by insurance companies. At the beginning of the communication, the chatbot may ask the user whether the user knows the policy number.
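Before continuing with the claim-filing flow of FIG. 2C, the make-a-payment flow of FIG. 2B just described may be outlined in code. This is only a simplified sketch: the helper callables (ask, policy_lookup, amount_due_lookup, process_payment) are hypothetical, the retry loops and the risk zip code verification shown in the figure are omitted for brevity, and the canned answers merely simulate a live conversation.

def make_payment_flow(ask, policy_lookup, amount_due_lookup, process_payment):
    """Outline of the FIG. 2B payment flow; all helpers are hypothetical."""
    if ask("Do you have your policy number handy?").lower() == "yes":
        policy = ask("Please give me your policy number.")
    else:
        first = ask("First name of the primary named insured?")
        last = ask("Last name of the primary named insured?")
        zip_code = ask("ZIP code of the insured property?")
        policy = policy_lookup(first, last, zip_code)
    amount_due = amount_due_lookup(policy)
    if amount_due == 0:
        if ask("Nothing is currently due. Would you like to overpay?").lower() != "yes":
            return "No payment made."
    if ask("Would you like to pay the full amount due?").lower() == "yes":
        amount = amount_due
    else:
        amount = float(ask("How much would you like to pay now?"))
    method = ask("Credit card or electronic transfer?")
    return process_payment(policy, amount, method)

# Canned demo answers standing in for a live conversation.
answers = iter(["yes", "P000108423", "no", "125", "credit card"])
result = make_payment_flow(lambda prompt: next(answers),
                           lambda first, last, zip_code: "P000000000",
                           lambda policy: 452.50,
                           lambda policy, amount, method: f"Paid ${amount:.2f} on {policy} by {method}.")
print(result)  # prints "Paid $125.00 on P000108423 by credit card."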
If the user knows the policy number, the chatbot may ask the user to provide the policy number and the first name of the policy primary insured. The chatbot may then obtain the last name of the policy primary insured, risk zip code, and zip code and check whether there is a match. If no match is found, the communication may be terminated. If a match is found, the chatbot may direct the user to a claim filing process. If the user does not know the policy number, the chatbot may ask user about the policy owner type. The policy owner type may be a business or an individual. If the policy owner type is a business, the chatbot may ask the business name, the risk street address, and zip code. If the policy owner type is an individual, the chatbot may ask the first name, last name, risk street address, and zip code of the policy primary insured. The processing of asking the user's information may be repeated many times until a match is found in the communication data structure. If a match is found, then the chatbot may obtain the policy number and risk zip code. After the policy number is obtained, the chatbot may ask the user to provide the zip code and direct the user to the claim filing process. During the claim filing process, the chatbot may ask the user to provide a loss description221, a loss date222and a loss time223. The chatbot may then check whether the loss is created by a catastrophe224. If the loss is created by a catastrophe, the chatbot may obtain a catastrophe ID. If the loss is not created by a catastrophe, the chatbot may check the cause of damage. If the cause of damage is found, the chatbot may obtain a cause of damage code. The chatbot may further check whether the insured has to move out225because of the loss. If the insured has to move out because of the loss, the chatbot may ask the user to provide a choice of contact. If the choice of contact is not the user himself or herself, the chatbot may ask the user to provide the first name, last name, and contact method of the choice of contact. If the choice of contact is the user himself or herself, the chatbot may ask the contact method. The user may choose the contact method226among emails, cell phone, home phone, and office phone. The chatbot may then obtain the email addresses and/or phone numbers from the user for the contact method. There may be distinctions between the claim's primary contact and policy primary insured. The first name and last name of the claim's primary contact may be part of the newly created claim. The first name and last name of the policy primary insured may be used to find a policy when the policy number is not available. FIG.3Ashows another example of a flow chart of a user's communication with a chatbot to create a claim if the chatbot is used by insurance companies. At the beginning of the communication, the chatbot may ask the user to provide an intent301for the communication. The user may answer “I'd like to report a loss” or “I'd like to file a claim.” The chatbot may then state “I can certainly help you with that. Are you the primary insured on the policy?” The user may answer yes or no. The chatbot may then state “Have policy number?”302. If the user has a policy number, the chatbot may ask the user to enter the policy number. The chatbot may then search its communication database to decide whether the policy number is valid303. The process of entering and checking the policy number may be repeated many times until a match is found. 
If there is no match after maximum retries, the communication may be terminated304. If the user does not have a policy number, the chatbot may ask “Is policyholder a business or individual?” If the user is a business, then the chatbot may ask the user to enter the business name and/or address. If the user is an individual, then the chatbot may ask the user to enter the first name, last name, and address. The chatbot may then search305its database to check whether there is a match. If there is no match, then the communication may be terminated304. If there is a match, the chatbot may check whether it can authorize an access to the policy306. If the chatbot cannot authorize the access to the policy, the chatbot may answer “Let me get you someone to help” and direct the communication with the user to a human representative307. At this point, the communication between the user and the chatbot may be terminated304. FIG.3Bshows another example of a flow chart of a user's communication with a chatbot to create a claim if the chatbot is used by an insurance company. After the access to the policy is authorized, the chatbot may ask the user to enter a risk address, zip code, and the date of birth of the primary named insured for a given policy number311. After user enters the information312, the chatbot may then check whether the entered data by the user is valid313. If the entered data is valid, the chatbot may proceed to a success mode314. If the entered data is invalid, the chatbot may proceed to a failure mode315. The communication may be terminated316. FIG.3Cshows another example of a flow chart of a user's communication with the chatbot to create a claim if the chatbot is used by an insurance company. After proceeding to the success mode, the chatbot may state “describe what happened” and “where did it happen?” The user may choose to either use the risk address as the loss location321or provide loss location details322. The chatbot may ask the user “when did the damage occur” and “whether the damage is caused by a catastrophe?” If the damage is caused by a catastrophe, the chatbot may ask the user to select a catastrophe event323. If the damage is not caused by a catastrophe, the chatbot may ask the user to select a loss type324. The chatbot may then ask the user “what caused the damage” and “do you have to move out?”325. Based on the user's answers, the chatbot may ask “who is the primary contact for claim?”326. The user may select that the primary contact is himself/herself, someone else, or an attorney/mortgagee. If the primary contact is someone else or an attorney/mortgagee, the chatbot may ask their relationships to the primary insured327and the contact name328. The chatbot may ask the contact method. The user may choose either email or phone. After this selection, the chatbot may ask the user whether the user is ready to file a claim329. If the user is not ready to file a claim, the chatbot may enter a to-be-decided mode330. If the user is ready to file a claim, the chatbot may enter the claim filing process. The claim filing process may comprise creating the claim331, showing the claim number and any other details to the user332, and displaying a thank you message333. Then the communication may be terminated334. FIG.3Dshows an example of a flow chart of a user's communication with a chatbot to check a claim status if the chatbot is used by insurance companies. At the beginning of the communication, the chatbot may ask the user “what is your intent?”341. 
The user may answer “I'd like to check my claim status.”342. The chatbot may ask the user whether he/she has the claim number343. If the user has the claim number, the chatbot may ask the user to enter the claim number and check whether the entered claim number is valid344. The process of entering the claim number and checking the entered claim number may be repeated many times until a match is found. Then the chatbot may determine the claim status345and return the claim status to the user346. If the user does not have the claim number, the chatbot may ask the user whether the user has the policy number347. If the user has the policy number, the chatbot may ask the user to enter the policy number and check whether the entered policy number is valid348. The process of entering the policy number and checking the entered policy number may be repeated many times until a match is found. If the user does not have the policy number, the chatbot may ask the user whether the user has the risk address349. If the user has the risk address, the chatbot may ask the user to enter the first name, last name and the risk address350. Then the chatbot may enter the authorization flow351, find all open claims352, and ask user to select a specific claim353. After the user selects a specific claim, the chatbot may determine the claim status345and return the claim status to the user346. If the user does not select any claim, the chatbot may ask the user if the user is calling about a closed or old claim354. If the user is calling about a closed or old claim, the chatbot may find all closed or old claims355and ask the user to select a specific claim353. The process of asking the user to select a claim may be repeated many times until the user selects a claim. If the user does not select a claim but asks for help or the user does not have the risk address, the chatbot may connect the user with a human representative356and the communication between the chatbot and the user may be terminated357. After returning the claim status to the user, the chatbot may ask whether the user has anything else358. If the user's answer is yes, the chatbot may ask user's new intent359. If the user's answer is no, then the chatbot may provide user with a thank you message360. The communication may be terminated357. The chatbot may also check whether the user is authorized to access a claim361. If the user is authorized to access a claim, the chatbot may enter a success mode362. FIGS.4A-4Lshow examples of communication with a chatbot over an existing communication channel. These examples illustrate multiple user interfaces of an app on a mobile electronic device regarding a user's communication with the chatbot to make a payment if the chatbot is used by insurance companies. In the illustrated examples, the user interfaces may be similar to a text message interface on a mobile electronic device.FIG.4Ashows that any users may communicate with the chatbot account by internet or by phone through the app. The communication bubbles aligned to the left401may represent the feedback provided by the chatbot system. The communication bubbles aligned to the right402may represent the input of the user.FIG.4Bshows the chatbot may first state “Hello! What can I do for your?” The user may then answer “I'd like to know when is my payment due.”FIG.4Cshows that the chatbot may state “I can certainly help you with that. 
Do you have your policy number handy?”FIG.4Dshows the user may answer “Yes, it is P000111969.” The chatbot may then state that “OK, I found that policy. Before we proceed, I'd like to ask you a question for your security. What is the ZIP code of the property address for this policy?”FIG.4Eshows the user may answer “32837” and the chatbot may reply with “Thank you. You next payment in the amount of $452.50 is due on Jun. 19, 2018. Is there anything else I can help you with?” FIG.4Fshows a scenario when a user may make a payment. The user may state “I'd like to make a payment.” The chatbot may state “I can certainly help you with that. Do you have your policy number handy?” The user may answer “Yes.” The chatbot may state “Great. Please go ahead and give me your policy number.”FIG.4Gshows the user may answer “P000108423.” The chatbot may state “How much would you like to pay now?” The user may answer“125.” The chatbot may state “OK. How would you like to pay this amount? I can handle payments by a major credit card or by electronic transfer.”FIG.4Hshows that after the user selects a payment method, provides the credit card number and provides the expiration date of the credit card, the chatbot may ask the security code on the back of the credit card. The user answers “123.” The chatbot states “OK. I now have all the needed information to make the payment. Shall I proceed to pay $125, crediting policy P000108423?” The user answers “Yes.” FIG.4Ishows another scenario when the user may not remember the policy number. The chatbot may state “I can certainly help you with that. Do you have your policy number handy?” The user may answer “No.” The chatbot may then answer “Not a problem. Let's look up your policy by different means. First please tell me the first name of the primary named insured.”FIG.4Jshows that after the user provides the first name, the chatbot may ask “Thank you. Now, please tell me the last name of the primary named insured.” The user may answer “Simmons.” The chatbot may state “Finally, please specify the ZIP code in which the insured property is located.” The user may answer “32084.” FIG.4Kshows another scenario that the user may not remember the policy number covering the loss in question. The chatbot may state “Not a problem. Do you know the number of the policy covering the loss question?” The user may answer “No.” The chatbot may state “OK, that's not a problem either. First question: in which city the loss located?”FIG.4Lshows that the user may answer “Holiday.” The chatbot may then answer “Thank you. Now, what is the 5 digit ZIP code in which the loss is located?” The user may answer “34691.” The chatbot may then answer “Almost there! What is the first name of primary insured?” FIGS.5A-5Eshow examples of chatbots integrated in a social media channel.FIG.5Ashows an image of a Facebook page of a chatbot system used by insurance companies. The chatbot window501may be minimized.FIG.5Bshows another image of the Facebook page of the chatbot system. The chatbot window511may be opened by the user. The user may enter “hi.” The chatbot may ask the user “Hello, what can I do for you?” The communication may continue in the chatbot window521inFIG.5C. InFIG.5C, the user may state “I need to report a loss.” The chatbot may answer “I can certainly help you with that. Are you the primary insured on the policy?” The user may answer “yes.” The chatbot may then ask “do you have your policy number handy?” The communication may continue to the chatbot window531inFIG.5D. 
In FIG. 5D, the user may answer "no." The chatbot may then state "Not a problem. Let's look up your policy by different means. First, please tell me your first name." If the communication between the user and the chatbot continues, the communication may show in the chatbot window 541 in FIG. 5E. In FIG. 5E, the chatbot may state "please describe what happened." The user may enter "water heater broke and flooded the garage." The chatbot may ask "What date did the damage occur?" The user may enter "yesterday." The chatbot may then ask "At which approximate time did the damage occur?" After the user provides more details of the damage, the communication may continue in the chatbot window 551 in FIG. 5F. In FIG. 5F, the chatbot may state "Your claim has been submitted, and its number is 139840. We will be in touch with you soon to discuss your claim in more details within 5 business days. If you have any questions, feel free to contact me at anytime. Is there anything else I can help you with?" FIG. 6 is an example of a process flow diagram between a user and a chatbot server via an application server. In FIG. 6, the process(es) carried out by or involving a user 601 are represented by a contact with a vertical line 610, the process(es) carried out by or involving an application server 611 are represented by a contact with a vertical line 620, and the process(es) carried out by or involving a chatbot server 621 are represented by a contact with a vertical line 630. The application server 611 may host one or more communication channels as described elsewhere herein. Although FIG. 6 illustrates a single application server, the disclosure is not limited thereto. For example, the user may be in communication with the chatbot on different channels supported by multiple application servers. In some embodiments, multiple application servers may be in communication with the chatbot server concurrently. In some embodiments, one or more applications may be running on one or more user devices 611, 613. The one or more user devices can be the same as the user devices as described in FIG. 1. A conversation between the user 601 and a chatbot may be carried over multiple user devices and/or multiple applications. The application servers may also be referred to as communication servers throughout the specification. The application servers and/or the chatbot server can be any other type of network components as described elsewhere herein. For example, the application servers may host software applications that can be deployed as a cloud service, such as in a web services model. The application servers may be private or commercial (e.g., public) cloud service providers that can provide a cloud-computing service that may comprise, for example, an IaaS, PaaS, or SaaS. In another example, the chatbot may be an application that runs in a cloud provider environment (e.g., Amazon AWS, Microsoft Azure, or Google GCE). Alternatively or in addition, the chatbot may be an application that runs in an on-premises environment. The application server or the chatbot server may utilize a cloud-computing resource that may be a physical or virtual computing resource (e.g., virtual machine).
The cloud-computing resource may include a storage resource (e.g., Storage Area Network (SAN), Network File System (NFS), or Amazon S3®), a network resource (e.g., firewall, load-balancer, or proxy server), an internal private resource, an external private resource, a secure public resource, an infrastructure-as-a-service (IaaS) resource, a platform-as-a-service (PaaS) resource, or a software-as-a-service (SaaS) resource. In an example conversation between the user and the chatbot, the user601may provide602an input to the chatbot server621through an application running on a user device and the application may be hosted on one of the application servers611. After receiving the user's input, the application server611may transmit612the user's input to the chatbot server621. The chatbot server621may perform the following tasks: receiving one or more inputs from the user, comparing the input with the plurality of units in the communication data structure, selecting a unit in the communication data structure based on the comparison between the input and the plurality of units in the communication data structure, producing a communication identity of the user based on the one or more inputs from the user, selecting a unit in the communication data structure based on the communication identity, processing the unit to generate instructions coded in the unit, selecting a communication path based on the instructions generated in the unit, and providing a feedback613to the user based on the selected communication path. The chatbot server621may provide feedback613to the application server611regarding the user's input. After receiving the feedback from the chatbot server621, the application server611may display a feedback603to the user. In some situations, the same conversation may be continued on a different channel, through a different application, or on a different user device. For example, a user may provide the first user input602on a desktop613, then switch to a portable device611(e.g., smart phone) to provide a second user input604in order to continue the same conversation with the chatbot621. In another example, a user may provide a first user input through a first application or a first channel (e.g., Facebook messenger) supported by a first application server, then send a second input604through a second application or a second channel (e.g., text message) supported by a second application server. The second application server may transmit the user's input614to the chatbot server621. Similarly, by navigating the communication path, a feedback in response to the second user input may be generated and provided to the user615via the second channel. Computer Control Systems The present disclosure provides computer control systems that are programmed to implement methods of the disclosure.FIG.7shows a computer system701that is programmed or otherwise configured to facilitate the communication between a user and a chatbot. The computer system701can regulate various aspects of the present disclosure, such as, for example, receiving one or more inputs from the user, comparing the input with the plurality of units in the communication data structure, and producing a communication identity for the user based on the one or more inputs from the user. The computer system701can be an electronic device of a user or a computer system that is remotely located with respect to the electronic device. The electronic device can be a mobile electronic device. 
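For illustration only, the FIG. 6 exchange described above (a user input relayed by an application server to the chatbot server, feedback returned to the user, and the same conversation continued on a second channel) might be sketched as follows. The class and method names are hypothetical, and an in-memory dictionary keyed by user stands in for the communication database an actual embodiment would use to carry the conversation across channels.

class ChatbotServer:
    """Keeps per-user conversation state so a conversation can span channels."""
    def __init__(self):
        self.state = {}  # user_id -> list of (channel, user_input, feedback) tuples

    def handle(self, user_id, channel, user_input):
        history = self.state.setdefault(user_id, [])
        # Trivial stand-in for navigating the communication path.
        feedback = "Do you have your policy number handy?" if not history else "Thanks, let me look that up."
        history.append((channel, user_input, feedback))
        return feedback

class ApplicationServer:
    """Hosts one channel and relays messages to the chatbot server."""
    def __init__(self, channel, chatbot_server):
        self.channel = channel
        self.chatbot_server = chatbot_server

    def relay(self, user_id, user_input):
        return self.chatbot_server.handle(user_id, self.channel, user_input)

chatbot = ChatbotServer()
messenger = ApplicationServer("facebook_messenger", chatbot)
sms = ApplicationServer("text_message", chatbot)
print(messenger.relay("user-1", "I'd like to make a payment"))  # first channel
print(sms.relay("user-1", "P000108423"))                        # same conversation, second channel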
The computer system701includes a central processing unit (CPU), a graphic processing unit (GPU), or a general-purpose processing unit705, which can be a single core or multi core processor, or a plurality of processors for parallel processing. The computer system701also includes memory or memory location710(e.g., random-access memory, read-only memory, flash memory), electronic storage unit715(e.g., hard disk), communication interface720(e.g., network adapter) for communicating with one or more other systems, and peripheral devices725, such as cache, other memory, data storage and/or electronic display adapters. The memory710, storage unit715, interface720and peripheral devices725are in communication with the CPU705through a communication bus (solid lines), such as a motherboard. The storage unit715can be a data storage unit (or data repository) for storing data. The computer system701can be operatively coupled to a computer network (“network”)730with the aid of the communication interface720. The network730can be the Internet, an internet and/or extranet, or an intranet and/or extranet that is in communication with the Internet. The network730in some cases is a telecommunication and/or data network. The network730can include one or more computer servers, which can enable distributed computing, such as cloud computing. The network730, in some cases with the aid of the computer system701, can implement a peer-to-peer network, which may enable devices coupled to the computer system701to behave as a client or a server. The CPU705can execute a sequence of machine-readable instructions, which can be embodied in a program or software. The instructions may be stored in a memory location, such as the memory710. The instructions can be directed to the CPU705, which can subsequently program or otherwise configure the CPU705to implement methods of the present disclosure. Examples of operations performed by the CPU705can include fetch, decode, execute, and writeback. The CPU705can be part of a circuit, such as an integrated circuit. One or more other components of the system701can be included in the circuit. In some cases, the circuit is an application specific integrated circuit (ASIC). The storage unit715can store files, such as drivers, libraries and saved programs. The storage unit715can store user data, e.g., user preferences and user programs. The computer system701in some cases can include one or more additional data storage units that are external to the computer system701, such as located on a remote server that is in communication with the computer system701through an intranet or the Internet. The computer system701can communicate with one or more remote computer systems through the network730. For instance, the computer system701can communicate with a remote computer system of a user. Examples of remote computer systems include personal computers (e.g., portable PC), slate or tablet PC's (e.g., Apple® iPad, Samsung® Galaxy Tab), telephones, Smart phones (e.g., Apple® iPhone, Android-enabled device, Blackberry®), or personal digital assistants. The user can access the computer system701via the network730. Methods as described herein can be implemented by way of machine (e.g., computer processor) executable code stored on an electronic storage location of the computer system701, such as, for example, on the memory710or electronic storage unit715. The machine executable or machine readable code can be provided in the form of software. During use, the code can be executed by the processor705. 
In some cases, the code can be retrieved from the storage unit715and stored on the memory710for ready access by the processor705. In some situations, the electronic storage unit715can be precluded, and machine-executable instructions are stored on memory710. The code can be pre-compiled and configured for use with a machine having a processor adapted to execute the code, or can be compiled during runtime. The code can be supplied in a programming language that can be selected to enable the code to execute in a pre-compiled or as-compiled fashion. Aspects of the systems and methods provided herein, such as the computer system701, can be embodied in programming. Various aspects of the technology may be thought of as “products” or “articles of manufacture” typically in the form of machine (or processor) executable code and/or associated data that is carried on or embodied in a type of machine readable medium. Machine-executable code can be stored on an electronic storage unit, such as memory (e.g., read-only memory, random-access memory, flash memory) or a hard disk. “Storage” type media can include any or all of the tangible memory of the computers, processors or the like, or associated modules thereof, such as various semiconductor memories, tape drives, disk drives and the like, which may provide non-transitory storage at any time for the software programming. All or portions of the software may at times be communicated through the Internet or various other telecommunication networks. Such communications, for example, may enable loading of the software from one computer or processor into another, for example, from a management server or host computer into the computer platform of an application server. Thus, another type of media that may bear the software elements includes optical, electrical and electromagnetic waves, such as used across physical interfaces between local devices, through wired and optical landline networks and over various air-links. The physical elements that carry such waves, such as wired or wireless links, optical links or the like, also may be considered as media bearing the software. As used herein, unless restricted to non-transitory, tangible “storage” media, terms such as computer or machine “readable medium” refer to any medium that participates in providing instructions to a processor for execution. Hence, a machine readable medium, such as computer-executable code, may take many forms, including but not limited to, a tangible storage medium, a carrier wave medium or physical transmission medium. Non-volatile storage media include, for example, optical or magnetic disks, such as any of the storage devices in any computer(s) or the like, such as may be used to implement the databases, etc. shown in the drawings. Volatile storage media include dynamic memory, such as main memory of such a computer platform. Tangible transmission media include coaxial cables; copper wire and fiber optics, including the wires that comprise a bus within a computer system. Carrier-wave transmission media may take the form of electric or electromagnetic signals, or acoustic or light waves such as those generated during radio frequency (RF) and infrared (IR) data communications. 
Common forms of computer-readable media therefore include, for example: a floppy disk, a flexible disk, hard disk, magnetic tape, any other magnetic medium, a CD-ROM, DVD or DVD-ROM, any other optical medium, punch cards, paper tape, any other physical storage medium with patterns of holes, a RAM, a ROM, a PROM and an EPROM, a FLASH-EPROM, any other memory chip or cartridge, a carrier wave transporting data or instructions, cables or links transporting such a carrier wave, or any other medium from which a computer may read programming code and/or data. Many of these forms of computer-readable media may be involved in carrying one or more sequences of one or more instructions to a processor for execution. The computer system701can include or be in communication with an electronic display735that comprises a user interface (UI)740for providing, for example, the feedback regarding a user's input. Examples of UIs include, without limitation, a graphical user interface (GUI) and a web-based user interface. Methods and systems of the present disclosure can be implemented by way of one or more algorithms. An algorithm can be implemented by way of software upon execution by the central processing unit705. The algorithm can, for example, compare the input with the plurality of units in the communication data structure, produce a communication identity for the user based on the one or more inputs from the user, select a unit in the communication data structure based on the communication identity, select a unit in the communication data structure based on the comparison between the input and the plurality of units in the communication data structure, process the unit to generate instructions coded in the unit, select a communication path based on the instructions generated in the unit, and provide a feedback to the user based on the selected communication path. While preferred embodiments of the present invention have been shown and described herein, it will be obvious to those skilled in the art that such embodiments are provided by way of example only. It is not intended that the invention be limited by the specific examples provided within the specification. While the invention has been described with reference to the aforementioned specification, the descriptions and illustrations of the embodiments herein are not meant to be construed in a limiting sense. Numerous variations, changes, and substitutions will now occur to those skilled in the art without departing from the invention. Furthermore, it shall be understood that all aspects of the invention are not limited to the specific depictions, configurations or relative proportions set forth herein, which depend upon a variety of conditions and variables. It should be understood that various alternatives to the embodiments of the invention described herein may be employed in practicing the invention. It is therefore contemplated that the invention shall also cover any such alternatives, modifications, variations or equivalents. It is intended that the following claims define the scope of the invention and that methods and structures within the scope of these claims and their equivalents be covered thereby. | 89,919 |
11943179 | DETAILED DESCRIPTION FIG.1illustrates a system100configured for presenting graphical user interfaces corresponding to users and including portions of one or more chat sessions the users are participants in, the chat sessions facilitating synchronous textual communication between the users that takes place through a chat system, in accordance with one or more implementations. In some implementations, system100may include one or more servers102. Server(s)102may be configured to communicate with one or more client computing platforms104according to a client/server architecture and/or other architectures. Client computing platform(s)104may be configured to communicate with other client computing platforms via server(s)102and/or according to a peer-to-peer architecture and/or other architectures. Users may access system100via client computing platform(s)104. Server(s)102may be configured by machine-readable instructions106. Machine-readable instructions106may include one or more instruction components. The instruction components may include computer program components. The instruction components may include one or more of an environment state component108, chat component110, user selection component112, unit of work component114, graphical user interface component116, and/or other instruction components. Environment state component108may be configured to manage environment state information to maintain a collaboration environment. The environment state information may include user records and work unit records. The environment state information may define a state of the collaboration environment including user states, work unit states, and/or other states. The user states may be defined by the user records that define values of user parameters associated with users interacting with and/or viewing the collaboration environment. Individual ones of the user records may correspond to individual ones of the users. The work unit states may be defined by the work unit records that define values of work unit parameters for units of work managed, created, and/or assigned within the collaboration environment. Individual ones of the work unit records may correspond to individual ones of the units of work and/or be associated with one or more users and/or other units of work. The environment state information may include user records, work unit records, and/or other records. The environment state information may be continuously generated and/or updated based on the state of the collaboration environment representing the users' interactions with the collaboration environment. The state of the collaboration environment may include a user state, a work unit state, and/or other states. The user state may be defined by the user records. The user records may define values of user parameters associated with users interacting with and/or viewing the collaboration environment. The work unit state may be defined by the work unit records. The work unit records may define values of work unit parameters for units of work managed, created, and/or assigned within the collaboration environment. In some implementations, the work unit state may include a project state, a task state, a sub-task state, and/or other states. The work unit records may include project records, task records, sub-task records, and/or other records. 
The work unit parameters for work units managed, created, and/or assigned within the collaboration environment may include parameters describing one or more work units managed, created, and/or assigned within the collaboration environment and/or via the collaboration work management platform, and/or the metadata associated with the one or more work units. Individual ones of the work units may be associated with individual ones of the work unit records. A work unit record may define values of the work unit parameters associated with a given work unit managed, created, and/or assigned within the collaboration environment and/or via the collaboration work management platform. A given work unit may have one or more owners and/or one or more team members working on the given work unit. Work units may include one or more to-do items, action items, objectives, and/or other units of work one or more users should accomplish and/or plan on accomplishing. Units of work may be created by a given user for the given user and/or created by the given user and assigned to one or more other users. A given work unit may include one or more projects, tasks, sub-tasks, and/or other units of work possibly assigned to and/or associated with one or more users. The work unit parameters may, by way of non-limiting example, include one or more of: one or more units of work, one or more user comment parameters (e.g., a creator, a recipient, one or more followers, one or more other interested parties, content, one or more times, up-votes, other hard-coded responses, etc.), a work unit name, a work unit description, one or more work unit dates (e.g., a start date, a due date, a completion date, and/or other work unit dates), one or more members associated with a unit of work (e.g., an owner, one or more other project/task members, member access information, and/or other work unit members and/or member information), a status and/or progress (e.g., an update, a hardcoded status update, a measured status, quantity of work units remaining in a given project, completed work units in a given project, and/or other status parameter), one or more attachments, notification settings, privacy, an associated URL, one or more interaction parameters (e.g., sources of the interactions, context of the interactions, content of the interactions, time for the interactions, and/or other interaction parameters), updates, ordering of units of work within a given unit of work (e.g., tasks within a project, sub-tasks within a task, etc.,), state of a workspace for a given unit of work (e.g., application state parameters, application status, application interactions, user information, and/or other parameters related to the state of the workspace for a unit of work), dependencies between one or more work units, one or more custom fields (e.g., priority, cost, stage, and/or other custom fields), other work unit parameters for the given work units, and/or other work unit parameters, and/or user parameters for one or more users and/or work units the given project is associated with. The user parameters associated with the users interacting with and/or viewing the collaboration environment may include parameters describing the users, their actions within the collaboration environment, their settings, and/or other user information; and/or metadata associated with the users, their actions within the environment, their settings, and/or other user information. Individual ones of the users may be associated with individual ones of the user records. 
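The work unit parameters enumerated above can be pictured as fields of a record. The sketch below is a hypothetical, heavily abridged representation of a work unit record; the field names are illustrative choices rather than the platform's actual schema, and only a small subset of the listed parameters is shown.

```python
from dataclasses import dataclass, field
from datetime import date
from typing import Optional

@dataclass
class WorkUnitRecord:
    """Abridged, hypothetical work unit record defining values of work unit parameters."""
    work_unit_id: str
    name: str                                   # work unit name
    description: str = ""                       # work unit description
    owner_id: Optional[str] = None              # owner of the work unit
    member_ids: list = field(default_factory=list)      # other members working on it
    start_date: Optional[date] = None           # work unit dates
    due_date: Optional[date] = None
    completion_date: Optional[date] = None
    status: str = "in_progress"                 # status and/or progress
    attachments: list = field(default_factory=list)
    parent_id: Optional[str] = None             # nesting (task within a project, etc.)
    dependencies: list = field(default_factory=list)    # dependencies between work units
    custom_fields: dict = field(default_factory=dict)   # e.g., priority, cost, stage

task = WorkUnitRecord(work_unit_id="task-1", name="Draft launch plan",
                      owner_id="user-1", member_ids=["user-1", "user-2"],
                      due_date=date(2024, 1, 15),
                      custom_fields={"priority": "high"})
print(task.name, task.status)
```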
A user record may define values of the user parameters associated with a given user interacting with and/or viewing the collaboration environment. The user parameters may, by way of non-limiting example, include one or more of: a user name, a group parameter, a subset parameter, a user account, a user role, a user department, descriptive user content, a to-email, a from-email, a photo, an organization, a workspace, one or more projects (which may include project parameters defined by one or more work unit records), one or more items of work (which may include one or more unit of work parameters defined by one or more unit of work records), one or more user comments, one or more teams the user belongs to, one or more of the user display settings (e.g., colors, size, project order, task order, other work unit order, etc.), one or more authorized applications, one or more presence/interaction parameters (e.g., indicating presence and/or interaction level at an environment level, work unit level, project level, task level, application level, etc.), one or more notification settings, one or more progress parameters, status information for one or more work units the user is associated with, one or more statistics related to a given user (e.g., how many units of work the user has completed, how quickly the user completed the units of work, how quickly the user completes certain types of work units, the efficiency of the user, bandwidth of the user, activity level of the user, etc.), application access information (e.g., username/password for one or more third-party applications), one or more favorites and/or priorities, other user parameters for the given user, and/or other user parameters and/or work unit parameters for one or more work units the given user is associated with. Chat component110may be configured to obtain chat information characterizing participants in the chat sessions. Chat sessions may include synchronous and/or semi-synchronous textual conversations between two or more users via a chat system and/or a chat interface. In some implementations, the chat sessions may facilitate textual communication and/or non-textual communication between two or more users. For example, audio communication, video communication, and/or other types of communication may be facilitated by the chat sessions. The chat information may include user information characterizing the participants in the chat sessions, text information representing the textual communications exchanged during the chat sessions, time information indicating a date and/or time the textual communications were exchanged, natural language processing information characterizing the audio and/or video communications that occurred during the chat sessions, and/or other information. Chat component110may be configured to perform natural language processing of any audio and/or video communications that occur during the chat sessions to generate natural language processing information. The chat sessions may include a first chat session, a second chat session, a third chat session, a fourth chat session, a fifth chat session, and/or any other chat session. By way of non-limiting example, the first chat session may facilitate synchronous textual communication between a first user, a second user, and a third user. As such, the first chat information may characterize the first user, the second user, and the third user as participants in the first chat session. The first chat information may be obtained by chat component110.
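The chat information described above can likewise be pictured as a simple structure per chat session. The sketch below is a hypothetical shape, with illustrative field names, covering the participant, text, time, and natural-language-processing facets mentioned above.

```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class ChatMessage:
    """A single textual communication exchanged during a chat session."""
    author_id: str
    text: str                       # text information
    sent_at: datetime               # time information

@dataclass
class ChatInfo:
    """Hypothetical chat information characterizing one chat session."""
    session_id: str
    participant_ids: list                         # user information (participants)
    messages: list = field(default_factory=list)  # textual communications
    nlp_summary: str = ""                         # NLP of any audio/video communications

first_chat = ChatInfo(session_id="chat-1",
                      participant_ids=["user-1", "user-2", "user-3"],
                      messages=[ChatMessage("user-1", "Kickoff at 10?",
                                            datetime(2024, 1, 8, 9, 0))])
second_chat = ChatInfo(session_id="chat-2", participant_ids=["user-1", "user-2"])
print(first_chat.participant_ids)
```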
The first chat information may be obtained by chat component110so it may be included in graphical user interfaces corresponding to the first user, the second user, and/or the third user, responsive to the first user, the second user, and/or the third user selecting one of the first user, the second user, and/or the third user. The second chat session may facilitate synchronous textual communication between the first user and the second user such that second chat information characterizing the first user and the second user as participants in the second chat session is obtained. In some implementations, chat component110may be configured to implement an instance of a chat session to facilitate the synchronous communication between the users within the collaboration environment. Implementing instances of the chat sessions may include transmitting the textual communications that make up the chat sessions to the client computing platforms for presentation through graphical chat interfaces. In some implementations, chat component110may be configured to receive and/or obtain chat information from a chat system external to the collaboration environment. By way of non-limiting example, a chat system external to the collaboration environment may integrate with and/or communicate with the collaboration environment via an Application Program Interface (API). User selection component112may be configured to receive user input indicating selection of one or more users. The one or more users may be selected by one or more other users. A given user may select another user to view a graphical user interface corresponding to the other user and the given user. The graphical user interface corresponding to the other user may include information associated with both the given user and the other user. The information associated with both the given user and the other user may include portions of one or more chat sessions the given user is a participant in with the other user, one or more units of work both the given user and the other user are associated with, and/or other information. User selection component112may be configured to receive user input indicating a selection of the first user by the second user. User input indicating selection of the first user by the second user may be received from a client computing platform associated with the second user. In some implementations, user selection component112may be configured to receive and/or execute search queries for one or more users. Responsive to receiving a search query, user selection component112may identify one or more users corresponding to the search query and initiate presentation of results for the search query. The results for the search query may include one or more users matching the search query. User selection component112may be configured to receive user input indicating selection of a user from the results for the search query. In some implementations, unit of work component114may be configured to identify one or more of the units of work associated with the selected users and/or the selecting users. Units of work associated with one or more users may include units of work one or more users assigned to another user, are working on, assigned to, responsible for, overseeing, managing, and/or otherwise associated with. 
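As described above, the graphical user interface corresponding to a selected user draws on the chat sessions that the selected user and the selecting user are both participants in. Before turning to the units of work side of that lookup, the chat-session side can be sketched as a simple filter; the function name and the dictionary shapes below are illustrative assumptions rather than the component's actual implementation.

```python
def chat_sessions_shared_by(selected_id, selecting_id, chat_infos):
    """Return chat sessions in which both the selected and selecting users participate."""
    return [info for info in chat_infos
            if selected_id in info["participant_ids"]
            and selecting_id in info["participant_ids"]]

chat_infos = [
    {"session_id": "chat-1", "participant_ids": ["user-1", "user-2", "user-3"]},
    {"session_id": "chat-2", "participant_ids": ["user-1", "user-2"]},
    {"session_id": "chat-3", "participant_ids": ["user-2", "user-4"]},
]

# When user-2 selects user-1, only sessions with both of them are surfaced.
print([info["session_id"] for info in chat_sessions_shared_by("user-1", "user-2", chat_infos)])
# -> ['chat-1', 'chat-2']
```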
For example, the identified units of work may include units of work associated with both the selected user and the selecting user. Unit of work component114may be configured to identify one or more units of work associated with the first user based on the work unit records and/or the user records. By way of another example, unit of work component114may be configured to identify the one or more units of work that are associated with both the first user and the second user. A unit of work associated with both the first user and the second user may include a unit of work assigned to the first user by the second user, a unit of work the first user is responsible for completing and the second user is overseeing, a unit of work assigned to both the first user and the second user, a unit of work for a team both the first user and the second user are a part of, and/or other configurations. Work information corresponding to the units of work identified by unit of work component114may be included in the graphical user interfaces for individual ones of the users. Graphical user interface component116may be configured to effectuate presentation of a graphical user interface corresponding to the user selected via the user input. The graphical user interfaces corresponding to the users may include stated information provided by the users, portions of one or more chat sessions the selected users and the selecting users (that selected to view the GUIs associated with the selected users) are participants in, and/or other information. In some implementations, graphical user interface component116may be configured to effectuate presentation of a first graphical user interface responsive to receiving user input indicating selection of the first user by the second user. The first graphical user interface corresponding to the first user may be presented via a client computing platform associated with the second user responsive to the second user selecting the first user. The first graphical user interface may represent chat sessions and/or units of work the first user and the second user have in common and/or share. By way of non-limiting example, the first graphical user interface (provided in response to the second user selecting to view the GUI associated with the first user) may include first stated information provided by the first user, portions of one or more chat sessions the first user is a participant in with the second user, and/or other information. As such, the first graphical user interface may display the first stated information characterizing the first user, at least a portion of the first chat session, and at least a portion of the second chat session. Stated information may include information inputted and/or selected by the users that characterizes the users and/or will be included in the graphical user interfaces associated with the users. The first stated information may include information selected and/or input by the first user that characterizes the first user. By way of non-limiting example, the first stated information may include username information, title information, department information, status information, personal information, and/or other information. A user may write, select, and/or determine what stated information is included in their graphical user interface regardless of the selecting user viewing their graphical user interface.
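A comparable filter can identify units of work associated with both the selected user and the selecting user, as unit of work component114is described as doing above. The sketch below is an illustrative assumption about how such a lookup might read against simple record dictionaries; it is not the component's actual implementation.

```python
def shared_units_of_work(first_user_id, second_user_id, work_unit_records):
    """Return work units associated with both users (as owner, assignee, or team member)."""
    def associated(record, user_id):
        return (user_id == record.get("owner_id")
                or user_id in record.get("assignee_ids", [])
                or user_id in record.get("member_ids", []))

    return [record for record in work_unit_records
            if associated(record, first_user_id) and associated(record, second_user_id)]

work_unit_records = [
    {"work_unit_id": "task-1", "name": "Task1", "owner_id": "user-2", "assignee_ids": ["user-1"]},
    {"work_unit_id": "task-3", "name": "Task3", "owner_id": "user-3", "assignee_ids": ["user-4"]},
    {"work_unit_id": "proj-A", "name": "Project A", "member_ids": ["user-1", "user-2"]},
]

print([r["name"] for r in shared_units_of_work("user-1", "user-2", work_unit_records)])
# -> ['Task1', 'Project A']
```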
In some implementations, effectuating presentation of the graphical user interfaces corresponding to the users may include presenting work information for the one or more units of work identified by unit of work component114as being associated with the selected user and/or the selecting user. By way of non-limiting example, effectuating presentation of the first graphical user interface may include presenting work information for the one or more units of work identified as being associated with both the first user and the second user (that selected the first user), or presenting work information for the one or more units of work identified as being associated with only the first user. Graphical user interface component116may be configured to effectuate presentation of a second graphical user interface corresponding to the second user. Presentation of the second graphical user interface may be effectuated responsive to receiving user input indicating selection of the second user by the first user. The second graphical user interface may be presented via a client computing platform associated with the first user. By way of non-limiting example, the second graphical user interface may include second stated information provided by the second user, portions of one or more chat sessions the second user is a participant in with the first user, and/or other information. As such, the second graphical user interface may display the second stated information, at least a portion of the first chat session, at least a portion of the second chat session, and/or other information. The graphical user interfaces may display portions of the chat sessions both the selected user and the selecting user are participants in. In some implementations, the graphical user interfaces may display portions of group chat sessions between other users in addition to the selected user and the selecting user. Graphical user interface component116may be configured to effectuate presentation of a third graphical user interface corresponding to the third user responsive to receiving user input indicating a selection of the third user by the second user. The third graphical user interface corresponding to the third user may be presented via a client computing platform associated with the second user responsive to user selection component112receiving user input from the second user indicating a selection of the third user. The third graphical user interface may include third stated information provided by the third user, portions of one or more chat sessions the third user is a participant in with the second user, and/or other information. As such, the third graphical user interface may display the third stated information, at least a portion of the first chat session (between the first user, the second user, and the third user), and/or other information. The portions of the one or more chat session displayed via one or more graphical user interfaces may include text information representing one or more of the communications input by an individual ones of the participants. For example, the portion of the first chat session displayed by the first graphical user interface may include one or more of the communications input by the first user, one or more communications input by the second user, and/or one or more communications input by the third user. 
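Pulling together the stated information, shared chat portions, and shared work information described above, a server might assemble the data behind a graphical user interface corresponding to a selected user as a single payload. The sketch below is a hypothetical composition of the filters sketched earlier; every name, field, and shape in it is an illustrative assumption rather than the actual interface contract.

```python
def build_profile_payload(selected_user, selecting_user_id, chat_infos, work_unit_records):
    """Assemble the hypothetical contents of a profile-style GUI for the selected user."""
    selected_id = selected_user["user_id"]

    # Portions of chat sessions both the selected and selecting users participate in.
    chat_portions = [
        {"session_id": c["session_id"], "recent_messages": c.get("messages", [])[-3:]}
        for c in chat_infos
        if selected_id in c["participant_ids"] and selecting_user_id in c["participant_ids"]
    ]

    # Work information (titles) for units of work associated with both users.
    work_information = [
        r["name"] for r in work_unit_records
        if selected_id in r.get("member_ids", []) and selecting_user_id in r.get("member_ids", [])
    ]

    return {
        # Stated information provided by, and controlled by, the selected user.
        "stated_information": {
            "username": selected_user["user_name"],
            "title": selected_user.get("title", ""),
            "department": selected_user.get("department", ""),
            "about": selected_user.get("about", ""),
        },
        "chat_portions": chat_portions,
        "work_information": work_information,
    }

selected = {"user_id": "user-1", "user_name": "User1", "title": "Engineer", "department": "Platform"}
chats = [{"session_id": "chat-1", "participant_ids": ["user-1", "user-2"],
          "messages": ["Kickoff at 10?", "Works for me."]}]
work = [{"work_unit_id": "proj-A", "name": "Project A", "member_ids": ["user-1", "user-2"]}]
print(build_profile_payload(selected, "user-2", chats, work))
```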
The portion of the second chat session displayed by the first graphical user interface may include one or more of the communications input by the first user and/or one or more of the communications input by the second user. FIG.2illustrates a graphical user interface corresponding to user1and including portions of one or more chat sessions user1and user2are participants in, in accordance with one or more implementations. Graphical user interface200may correspond to User1. User2may select User1to view graphical user interface200. Graphical user interface200may include stated information202. Stated information202may include a username (e.g., “User1”), a department, a title (e.g., a job title), an about me section, and/or other information provided by and/or input by User1. Graphical user interface200may include a portion of chat session204, a portion of chat session206, a portion of chat session208, and/or other portions of other chat sessions User1and User2are participants in. User1and User2may be participants in chat session204. User1, User2, and User3may be participants in chat session206. User1, User2, and User5may be participants in chat session208. All of the chat sessions displayed via graphical user interface200may include at least User1and User2as participants responsive to User2selecting to view graphical user interface200corresponding to User1. FIG.3illustrates a graphical user interface corresponding to user1and including portions of one or more chat sessions user1and user2are participants in, in accordance with one or more implementations. Graphical user interface300may correspond to User1. User2may select User1to view graphical user interface300. Graphical user interface300may include stated information302. Stated information302may include a username (e.g., “User1”), a department, a title (e.g., a job title), an about me section, and/or other information provided by and/or input by User1. Graphical user interface300may include a portion of chat session304, a portion of chat session306, a portion of chat session308, and/or other portions of other chat sessions User1and User2are participants in. User1and User2may be participants in chat session304. User1, User2, and User3may be participants in chat session306. User1, User2, and User5may be participants in chat session308. All of the chat sessions displayed via graphical user interface300may include at least User1and User2as participants responsive to User2selecting to view graphical user interface300corresponding to User1. Graphical user interface300may include work information310for units of work312. Work information310may comprise titles for units of work312both User1and User2are associated with. User1and User2may both be working on, assigned to, and/or associated with units of work312. By way of example, User2may be the project manager for Project A and Project C (which User1is associated with). By way of another example, User2may have assigned one or more of Task1, Task2, Task6, Task9, and Task10to User1. Returning toFIG.1, in some implementations, server(s)102, client computing platform(s)104, and/or external resources118may be operatively linked via one or more electronic communication links. For example, such electronic communication links may be established, at least in part, via a network such as the Internet and/or other networks.
It will be appreciated that this is not intended to be limiting, and that the scope of this disclosure includes implementations in which server(s)102, client computing platform(s)104, and/or external resources118may be operatively linked via some other communication media. A given client computing platform104may include one or more processors configured to execute computer program components. The computer program components may be configured to enable an expert or user associated with the given client computing platform104to interface with system100and/or external resources118, and/or provide other functionality attributed herein to client computing platform(s)104. By way of non-limiting example, the given client computing platform104may include one or more of a desktop computer, a laptop computer, a handheld computer, a tablet computing platform, a NetBook, a Smartphone, a gaming console, and/or other computing platforms. External resources118may include sources of information outside of system100, external entities participating with system100, and/or other resources. In some implementations, some or all of the functionality attributed herein to external resources118may be provided by resources included in system100. Server(s)102may include electronic storage120, one or more processors122, and/or other components. Server(s)102may include communication lines, or ports to enable the exchange of information with a network and/or other computing platforms. Illustration of server(s)102inFIG.1is not intended to be limiting. Server(s)102may include a plurality of hardware, software, and/or firmware components operating together to provide the functionality attributed herein to server(s)102. For example, server(s)102may be implemented by a cloud of computing platforms operating together as server(s)102. Electronic storage120may comprise non-transitory storage media that electronically stores information. The electronic storage media of electronic storage120may include one or both of system storage that is provided integrally (i.e., substantially non-removable) with server(s)102and/or removable storage that is removably connectable to server(s)102via, for example, a port (e.g., a USB port, a firewire port, etc.) or a drive (e.g., a disk drive, etc.). Electronic storage120may include one or more of optically readable storage media (e.g., optical disks, etc.), magnetically readable storage media (e.g., magnetic tape, magnetic hard drive, floppy drive, etc.), electrical charge-based storage media (e.g., EEPROM, RAM, etc.), solid-state storage media (e.g., flash drive, etc.), and/or other electronically readable storage media. Electronic storage120may include one or more virtual storage resources (e.g., cloud storage, a virtual private network, and/or other virtual storage resources). Electronic storage120may store software algorithms, information determined by processor(s)122, information received from server(s)102, information received from client computing platform(s)104, and/or other information that enables server(s)102to function as described herein. Processor(s)122may be configured to provide information processing capabilities in server(s)102. As such, processor(s)122may include one or more of a digital processor, an analog processor, a digital circuit designed to process information, an analog circuit designed to process information, a state machine, and/or other mechanisms for electronically processing information. 
Although processor(s)122is shown inFIG.1as a single entity, this is for illustrative purposes only. In some implementations, processor(s)122may include a plurality of processing units. These processing units may be physically located within the same device, or processor(s)122may represent processing functionality of a plurality of devices operating in coordination. Processor(s)122may be configured to execute components108,110,112,114, and/or116, and/or other components. Processor(s)122may be configured to execute components108,110,112,114, and/or116, and/or other components by software; hardware; firmware; some combination of software, hardware, and/or firmware; and/or other mechanisms for configuring processing capabilities on processor(s)122. As used herein, the term “component” may refer to any component or set of components that perform the functionality attributed to the component. This may include one or more physical processors during execution of processor readable instructions, the processor readable instructions, circuitry, hardware, storage media, or any other components. It should be appreciated that although components108,110,112,114, and/or116are illustrated inFIG.1as being implemented within a single processing unit, in implementations in which processor(s)122includes multiple processing units, one or more of components108,110,112,114, and/or116may be implemented remotely from the other components. The description of the functionality provided by the different components108,110,112,114, and/or116described below is for illustrative purposes, and is not intended to be limiting, as any of components108,110,112,114, and/or116may provide more or less functionality than is described. For example, one or more of components108,110,112,114, and/or116may be eliminated, and some or all of its functionality may be provided by other ones of components108,110,112,114, and/or116. As another example, processor(s)122may be configured to execute one or more additional components that may perform some or all of the functionality attributed below to one of components108,110,112,114, and/or116. FIG.4illustrates a method400for presenting graphical user interfaces corresponding to users and including portions of one or more chat sessions the users are participants in, the chat sessions facilitating synchronous textual communication between the users that takes place through a chat system, in accordance with one or more implementations. The operations of method400presented below are intended to be illustrative. In some implementations, method400may be accomplished with one or more additional operations not described, and/or without one or more of the operations discussed. Additionally, the order in which the operations of method400are illustrated inFIG.4and described below is not intended to be limiting. In some implementations, method400may be implemented in one or more processing devices (e.g., a digital processor, an analog processor, a digital circuit designed to process information, an analog circuit designed to process information, a state machine, and/or other mechanisms for electronically processing information). The one or more processing devices may include one or more devices executing some or all of the operations of method400in response to instructions stored electronically on an electronic storage medium. 
The one or more processing devices may include one or more devices configured through hardware, firmware, and/or software to be specifically designed for execution of one or more of the operations of method400. An operation402may include obtaining chat information characterizing participants in the chat sessions. The chat sessions may include a first chat session and a second chat session. The first chat session may facilitate synchronous textual communication between a first user, a second user, and a third user, such that first chat information characterizing the first user, the second user, and the third user as participants in the first chat session is obtained. The second chat session may facilitate synchronous textual communication between the first user and the second user such that second chat information characterizing the first user and the second user as participants in the second chat session is obtained. Operation402may be performed by one or more hardware processors configured by machine-readable instructions including a component that is the same as or similar to chat component110, in accordance with one or more implementations. An operation404may include effectuating presentation, responsive to receiving user input indicating a selection of the first user by the second user, of a first graphical user interface corresponding to the first user via a client computing platform associated with the second user. The first graphical user interface may include first stated information provided by the first user and portions of one or more chat sessions the first user is a participant in with the second user. As such, the first graphical user interface may display the first stated information characterizing the first user, at least a portion of the first chat session, and at least a portion of the second chat session. Operation404may be performed by one or more hardware processors configured by machine-readable instructions including a component that is the same as or similar to graphical user interface component116, in accordance with one or more implementations. FIG.5illustrates a method500for presenting graphical user interfaces corresponding to users and including portions of one or more chat sessions the users are participants in, the chat sessions facilitating synchronous textual communication between the users that takes place through a chat system, in accordance with one or more implementations. The operations of method500presented below are intended to be illustrative. In some implementations, method500may be accomplished with one or more additional operations not described, and/or without one or more of the operations discussed. Additionally, the order in which the operations of method500are illustrated inFIG.5and described below is not intended to be limiting. In some implementations, method500may be implemented in one or more processing devices (e.g., a digital processor, an analog processor, a digital circuit designed to process information, an analog circuit designed to process information, a state machine, and/or other mechanisms for electronically processing information). The one or more processing devices may include one or more devices executing some or all of the operations of method500in response to instructions stored electronically on an electronic storage medium. 
The one or more processing devices may include one or more devices configured through hardware, firmware, and/or software to be specifically designed for execution of one or more of the operations of method500. An operation502may include managing environment state information for maintaining a collaboration environment. The environment state information may include user records, work unit records, and/or other records. The environment state information may define a state of the collaboration environment including user states, a work unit state, and/or other states. The user states may be defined by the user records that define values of user parameters associated with users interacting with and/or viewing the collaboration environment. The work unit states may be defined by the work unit records that define values of work unit parameters for units of work managed, created, and/or assigned within the collaboration environment. Operation502may be performed by one or more hardware processors configured by machine-readable instructions including a component that is the same as or similar to environment state component108, in accordance with one or more implementations. Operation504may include obtaining chat information characterizing participants in the chat sessions. The chat sessions may include a first chat session and a second chat session. The first chat session may facilitate synchronous textual communication between a first user, a second user, and a third user, such that first chat information characterizing the first user, the second user, and the third user as participants in the first chat session is obtained. The second chat session may facilitate synchronous textual communication between the first user and the second user such that second chat information characterizing the first user and the second user as participants in the second chat session is obtained. Operation504may be performed by one or more hardware processors configured by machine-readable instructions including a component that is the same as or similar to chat component110, in accordance with one or more implementations. Operation506may include receiving user input indicating selection of a first user by the second user. Operation506may be performed by one or more hardware processors configured by machine-readable instructions including a component that is the same as or similar to user selection component112, in accordance with one or more implementations. Operation508may include identifying one or more units of work associated with the first user. The one or more units of work associated with the first user may be identified based on the work unit records and/or the user records. Operation508may be performed by one or more hardware processors configured by machine-readable instructions including a component that is the same as or similar to unit of work component114, in accordance with one or more implementations. An operation510may include effectuating presentation, responsive to receiving user input indicating a selection of the first user by the second user, of a first graphical user interface corresponding to the first user via a client computing platform associated with the second user. The first graphical user interface may include first stated information provided by the first user, portions of one or more chat sessions the first user is a participant in with the second user, and work information for the one or more units of work identified as being associated with the first user. 
As such, the first graphical user interface may display the first stated information characterizing the first user, at least a portion of the first chat session, at least a portion of the second chat session, and work information for the one or more units of work identified as being associated with the first user. Operation510may be performed by one or more hardware processors configured by machine-readable instructions including a component that is the same as or similar to graphical user interface component116, in accordance with one or more implementations. Although the present technology has been described in detail for the purpose of illustration based on what is currently considered to be the most practical and preferred implementations, it is to be understood that such detail is solely for that purpose and that the technology is not limited to the disclosed implementations, but, on the contrary, is intended to cover modifications and equivalent arrangements that are within the spirit and scope of the appended claims. For example, it is to be understood that the present technology contemplates that, to the extent possible, one or more features of any implementation can be combined with one or more features of any other implementation. | 38,923 |
11943180 | DETAILED DESCRIPTION Draft message object collaboration for a communication platform is described herein. The communication platform, which, in some examples can be a group-based communication platform, a channel-based communication platform, a permission-based communication platform, a channel-based messaging platform, and/or any other platform for facilitating communication between and among users, can enable users to exchange message objects and/or other data via the communication platform. In existing techniques, a single user can draft a message object and post the message object to a virtual space of the communication platform. Such a virtual space can comprise a channel, direct message, board, and/or the like. In some examples, however, multiple users may desire to collaborate on a message object before posting the message object to a virtual space. For instance, with reference to a press release, announcement, communication to a business partner, and/or the like, multiple users may desire to collaborate on such a message object. Techniques described herein are directed to techniques for facilitating such collaboration. For instance, techniques described herein relate to a composition user interface wherein a user can add other users or entities to collaborate on drafting a message object. When the message object is ready to post, the message object can be posted to one or more virtual platforms with an indication that the sender of the message object is the group of users that collaborated on the drafting of the message object. In some examples, such users can have permissions to edit the posted message object collaboratively (i.e., together, in real-time or near real-time, via respective instances of the composition user interface). As such, techniques described herein enable multiple users to collaborate on a message object in a communication platform. For purposes of this discussion, a “message object” can refer to any electronically generated digital object, provided by a user using a client associated with the communication platform, that is configured for display within a channel, direct message, and/or other virtual space of the communication platform for facilitating communications as described herein. A message object may include any text, image, video, audio, or combination thereof provided by a user (using a client). For instance, a message object can include text, as well as an image and/or a video, as message object contents. In some examples, a message object can be associated with an image and/or a video that is embedded in (e.g., via a link) or attached to the message object. In at least one example, the text, image, and/or video can comprise the message object. In some examples, message objects can be associated with one or more files or other attachments, emojis, reactjis, and/or the like. In some examples, message objects can include or be associated with one or more links to content stored by the communication platform or a third-party platform or service. 
Each message object can include metadata comprising one or more sending user identifiers (e.g., identifying sender(s) or originator(s) of the message object), one or more recipient user identifiers (e.g., identifying user(s) and/or entity(s) that is/are intended to receive the message object), a message object identifier (e.g., identifying the message object itself), a group identifier (e.g., identifying a group associated with the sender(s), the recipient(s), and/or the message object), a channel identifier (e.g., identifying a channel associated with the sender(s), the recipient(s), and/or the message object), a direct message identifier (e.g., identifying a direct message associated with the sender(s), the recipient(s), and/or the message object), a virtual space identifier (e.g., identifying a virtual space associated with the sender(s), the recipient(s), and/or the message object), and/or the like. In at least one example, each of the foregoing identifiers may comprise American Standard Code for Information Interchange (ASCII) text, a pointer, a memory address, or the like. In some examples, message object metadata can include message object content, permission(s) associated with the message object (e.g., viewing permission(s), editing permission(s), etc.), and/or the like in addition to the one or more identifiers listed above. In an example, the communication platform can receive a request, from a first client of a first user, to generate a new message object. The communication platform can cause an instance of a composition user interface to be presented via the first client. The first user can invite second user(s) to collaborate on the new message object. The communication platform can cause additional instance(s) of the composition user interface to be presented via second client(s) of the second user(s). As such, the first user and the second user(s) can collaborate, in real-time or near real-time, to generate and/or edit a draft of a message object. That is, the first user and second user(s) can provide comments and/or other feedback, make revisions, additions, and/or other modifications, accept changes, and/or the like via the respective instances of the composition user interface in real-time or near real-time. When the message object is ready to be posted, or otherwise presented, one of the users can provide an input (e.g., to a corresponding instance of the composition user interface) to cause the message object to be posted to, or otherwise presented via, one or more virtual spaces. In such an example, the message object can be posted to, or otherwise presented via, the one or more virtual spaces and can be identified as having been sent and/or having originated from the first user and the second user(s). Such techniques can be useful for drafting press releases, announcements, business client communications, and/or the like. As described above, in at least one example, the first user can invite second user(s) to collaborate on a draft of a message object. In some examples, a first entity can invite second entity(s) to collaborate on a draft of a message object. An example of such an entity can comprise a user, a role (e.g., marketing team member, engineer, front desk staff, security team member, administrator, etc.), an application, a channel (i.e., members of the channel), a direct message (i.e., members of the direct message), a board (i.e., members of the board), and/or the like.
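The metadata enumerated above lends itself to a simple record shape. The sketch below is a hypothetical, abridged message object with illustrative field names; note the plural sending user identifiers, which is what allows a collaboratively drafted message object to be attributed to every collaborator.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class MessageObject:
    """Abridged, hypothetical message object carrying the metadata described above."""
    message_object_id: str
    sending_user_ids: list                      # one or more senders/originators
    recipient_user_ids: list = field(default_factory=list)
    group_id: Optional[str] = None
    channel_id: Optional[str] = None
    direct_message_id: Optional[str] = None
    virtual_space_id: Optional[str] = None
    content: str = ""                           # text; media can be attached or embedded by reference
    attachments: list = field(default_factory=list)
    permissions: dict = field(default_factory=dict)   # e.g., viewing/editing permissions
    is_draft: bool = True                       # drafts are composed before being posted

draft = MessageObject(message_object_id="msg-42",
                      sending_user_ids=["user-1", "user-2"],
                      channel_id="channel-press",
                      content="Draft announcement ...",
                      permissions={"edit": ["user-1", "user-2"]})
print(draft.sending_user_ids, draft.is_draft)
```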
As such, the first and second entities can collaborate in drafting the new message object. Such collaborators (i.e., the first and second entities) can be associated with a first set of actions that are different than other users and/or entities that are not collaborators when the message object is posted to, or presented via, the virtual space. That is, in at least one example, the first and second entities can have access to view, edit, interact, etc. with the new message object when posted, or otherwise presented, and/or can be notified when another user and/or entity responds to the new message object (e.g., reaction, thread message object, etc.). That is, such collaborators can have access to the new message object as if any one of the collaborators individually generated and posted the new message object. While techniques described herein are described with reference to “message objects,” techniques described herein can be similarly applicable to any other object that can be generated via the communication platform. Examples of such objects can comprise documents, posts, channel descriptions, user profiles, board content, and/or the like. Furthermore, techniques described herein are described in the context of enabling collaboration for message objects and/or other objects to be posted to the communication platform. However, in some examples, techniques described herein can be similarly applicable to generating message objects and/or other objects for posting on third-party platforms, such as social media platforms, email platforms, and/or the like. That is, in some examples, techniques described herein can be applicable to draft message object collaboration within the communication platform for a message object that is posted or otherwise presented via a third-party platform that is integrated with the communication platform (e.g., via one or more application programming interfaces (APIs) and/or software development kits (SDKs)). Additional details are described below. Techniques described herein can streamline message object drafting and/or editing in view of current techniques. As described above, the communication platform can utilize permissions to maintain security and privacy for users of the communication platform. However, such permissions can restrict the ability of multiple users or entities to collaborate on drafting message objects or other objects. Due to current limitations of existing technology, users are required to collaborate via another platform (e.g., a word processing platform, an email platform, or the like) to draft a message object or other object. When the message object or other object is ready on the other platform, one of the collaborators can copy the text or other data as generated in the other platform and paste the text or other data into the communication platform. In some examples, when such text or other data is pasted into the communication platform, the text or other data loses formatting or is introduced in a format that is inconsistent with the communication platform. In some examples, the communication platform can enable a user to re-format the text or other data. Nevertheless, with current techniques, collaborators are required to use multiple platforms if they want to collaborate on a message object or other object prior to posting the message object or other object to the communication platform. Once posted, editing permissions may be limited to the posting user. 
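One way to read the collaborator behavior described above is as a difference in permitted actions: collaborators retain edit access and receive notifications after posting, while non-collaborators get ordinary viewing and response access. The following is a minimal sketch of that idea under those assumptions; the permission names and the posting function are illustrative, not the platform's actual API.

```python
COLLABORATOR_ACTIONS = {"view", "edit", "post", "receive_response_notifications"}
NON_COLLABORATOR_ACTIONS = {"view", "react", "reply_in_thread"}

def post_message_object(message_object, virtual_space_ids):
    """Post a collaboratively drafted message object and attribute it to all collaborators."""
    message_object["is_draft"] = False
    message_object["posted_to"] = list(virtual_space_ids)
    # The message object is identified as having originated from every collaborator.
    message_object["displayed_sender"] = " & ".join(message_object["sending_user_ids"])
    return message_object

def permitted_actions(message_object, user_id):
    """Collaborators keep edit access as if each had authored the message individually."""
    if user_id in message_object["sending_user_ids"]:
        return COLLABORATOR_ACTIONS
    return NON_COLLABORATOR_ACTIONS

draft = {"message_object_id": "msg-42", "sending_user_ids": ["user-1", "user-2"],
         "content": "Announcement ...", "is_draft": True}
posted = post_message_object(draft, ["channel-press"])
print(posted["displayed_sender"])          # user-1 & user-2
print(permitted_actions(posted, "user-2")) # collaborator actions, including edit
print(permitted_actions(posted, "user-9")) # non-collaborator actions
```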
As such, the other collaborators may not be permitted to edit the message object or other object without going through the posting user. This can cause multiple interactions, with multiple platforms, which can be computationally expensive and cause network congestion. Techniques described herein enable improved techniques that reduce consumption of computing resources and decrease network congestion. Additional or alternative improvements to those described above are described below with reference to the figures. FIG.1illustrates an example environment100for performing techniques described herein. In at least one example, the example environment100can include one or more server computing devices (or “server(s)”)102. In at least one example, the server(s)102can include one or more servers or other types of computing devices that can be embodied in any number of ways. For example, in the case of a server, the functional components and data can be implemented on a single server, a cluster of servers, a server farm or data center, a cloud-hosted computing service, a cloud-hosted storage service, and so forth, although other computer architectures can additionally or alternatively be used. In at least one example, the server(s)102can be associated with a communication platform that can leverage a network-based computing system to enable users of the communication platform to exchange data. In at least one example, the communication platform can be “group-based” such that the platform, and associated systems, channels, message objects, and/or virtual spaces, have security (that can be defined by permissions) to limit access to defined groups of users, such a defined group of users having, for instance, sole access to a given channel, message object, and/or virtual space. In some examples, such groups of users can be defined by identifiers, which can be associated with common access credentials, domains, or the like. In some examples, the communication platform can be a hub, offering a secure and private virtual space to enable users to chat, meet, call, collaborate, transfer files or other data, message object, or otherwise communicate between or among each other, within secure and private virtual spaces, such as channel(s), direct message(s), board(s), and/or the like. In some examples, each group can be associated with an organization, which can be associated with an organization identifier. Users associated with the organization identifier can chat, meet, call, collaborate, transfer files or other data, message, or otherwise communicate between or among each other in a secure and private virtual space available via the communication platform. In some examples, each group can be associated with a workspace, associated with a workspace identifier. Users associated with the workspace identifier can chat, meet, call, collaborate, transfer files or other data, message, or otherwise communicate between or among each other in a secure and private virtual space available via the communication platform. In some examples, a group can be associated with multiple organizations and/or workspaces. In some examples, an organization can be associated with multiple workspaces or a workspace can be associated with multiple organizations. In at least one example, the server(s)102can communicate with a user computing device104via one or more network(s)106. 
That is, the server(s)102and the user computing device104can transmit, receive, and/or store data (e.g., content, message objects, data, or the like) using the network(s)106, as described herein. In some examples, the user computing device104can comprise a “client” associated with a user. The user computing device104can be any suitable type of computing device, e.g., portable, semi-portable, semi-stationary, or stationary. Some examples of the user computing device104can include a tablet computing device, a smart phone, a mobile communication device, a laptop, a netbook, a desktop computing device, a terminal computing device, a wearable computing device, an augmented reality device, an Internet of Things (IOT) device, or any other computing device capable of sending communications and performing the functions according to the techniques described herein. While a single user computing device104is shown, in practice, the example environment100can include multiple (e.g., tens of, hundreds of, thousands of, millions of) user computing devices. In at least one example, user computing devices, such as the user computing device104, can be operable by users to, among other things, access communication services via the communication platform. A user can be an individual, a group of individuals, an employer, an enterprise, an organization, or the like. In some examples, users can be associated with designated roles (e.g., based at least in part on an organization chart) and/or types (e.g., administrator, verified, etc.). The network(s)106can include, but are not limited to, any type of network known in the art, such as a local area network or a wide area network, the Internet, a wireless network, a cellular network, a local wireless network, Wi-Fi and/or close-range wireless communications, Bluetooth®, Bluetooth Low Energy (BLE), Near Field Communication (NFC), a wired network, or any other such network, or any combination thereof. Components used for such communications can depend at least in part upon the type of network, the environment selected, or both. Protocols for communicating over such network(s)106are well known and are not discussed herein in detail. In at least one example, the server(s)102can include one or more processors108, computer-readable media110, one or more communication interfaces112, and input/output devices114. In at least one example, each processor of the processor(s)108can be a single processing unit or multiple processing units, and can include single or multiple computing units or multiple processing cores. The processor(s)108can be implemented as one or more microprocessors, microcomputers, microcontrollers, digital signal processors, central processing units (CPUs), graphics processing units (GPUs), state machines, logic circuitries, and/or any devices that manipulate signals based on operational instructions. For example, the processor(s)108can be one or more hardware processors and/or logic circuits of any suitable type specifically programmed or configured to execute the algorithms and processes described herein. The processor(s)108can be configured to fetch and execute computer-readable instructions stored in the computer-readable media, which can program the processor(s) to perform the functions described herein. 
The computer-readable media110can include volatile, nonvolatile, removable, and/or non-removable memory or other media implemented in any type of technology for storage of data, such as computer-readable instructions, message objects, program modules, or other data. Such computer-readable media110can include, but is not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, optical storage, solid state storage, magnetic tape, magnetic disk storage, RAID storage systems, storage arrays, network attached storage, storage area networks, cloud storage, or any other medium that can be used to store the desired data and that can be accessed by a computing device. Depending on the configuration of the server(s)102, the computer-readable media110can be a type of computer-readable storage media and/or can be a tangible non-transitory media to the extent that when mentioned, non-transitory computer-readable media exclude media such as energy, carrier signals, electromagnetic waves, and signals per se. The computer-readable media110can be used to store any number of functional components that are executable by the processor(s)108. In many implementations, these functional components comprise instructions or programs that are executable by the processor(s)108and that, when executed, specifically configure the processor(s)108to perform the actions attributed above to the server(s)102. Functional components stored in the computer-readable media can optionally include a message object management component116, a channel management component118, a direct message management component119, an operating system120, and a datastore122. In at least one example, the message object management component116can manage generating, posting and/or presenting, editing, and/or the like of message objects associated with the communication platform. In at least one example, the message object management component116can receive a request to generate a new message object from a client of a user or other entity. In at least one example, the message object management component116can generate a message object in response to receiving the request to generate the new message object. In some examples, while a message associated with the message object is being composed, the message object can be associated with an indication that the message object is a draft and/or is not posted or otherwise presented. In at least one example, the message object management component116can cause a composition user interface to be presented via the client to enable the user to associate data with the message object (e.g., text, image(s), video(s), link(s), file(s), etc.). In some examples, as the user or other entity associates data with the message object, the client can send such data to the message object management component116. The message object management component116can store such data in association with the message object. As such, the message object can persist on the server(s)102, which can allow for draft synchronization and/or message object collaboration, as described herein. In at least one example, the user or other entity can request to add at least one other user or other entity as a collaborator to collaborate with the other user or other entity. As such, the message object can be associated with multiple collaborators, each which can provide input for composing the message object via respective instances of the composition user interface. 
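By way of example, and not limitation, the following sketch illustrates one way in which a server-side component such as the message object management component 116 could represent a draft message object that persists on the server(s) 102 and is associated with one or more collaborators. The names used below (e.g., MessageObject, create_draft) and the in-memory store standing in for the datastore 122 are illustrative assumptions only and do not describe any particular implementation.

```python
import uuid
from dataclasses import dataclass, field


@dataclass
class MessageObject:
    """Illustrative draft message object persisted server-side (hypothetical structure)."""
    object_id: str
    author_id: str                      # entity that requested the new message object
    collaborator_ids: set = field(default_factory=set)
    content: str = ""                   # data associated via the composition user interface
    is_draft: bool = True               # indication that the object is not yet posted
    target_virtual_space_ids: list = field(default_factory=list)


# In-memory stand-in for the datastore 122; a real deployment would use durable storage.
DRAFTS: dict = {}


def create_draft(author_id: str) -> MessageObject:
    """Generate a new draft message object in response to a client request."""
    draft = MessageObject(object_id=str(uuid.uuid4()), author_id=author_id,
                          collaborator_ids={author_id})
    DRAFTS[draft.object_id] = draft
    return draft


def associate_content(object_id: str, text: str) -> None:
    """Store data sent from a composition user interface in association with the draft."""
    DRAFTS[object_id].content = text


if __name__ == "__main__":
    draft = create_draft(author_id="user_a")
    associate_content(draft.object_id, "Quarterly update: ")
    print(draft.is_draft, sorted(draft.collaborator_ids))
```

Because the draft in this sketch is keyed by an identifier held on the server rather than on any one client, any client holding that identifier can synchronize against the same record, which is consistent with the draft synchronization and collaboration described herein.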
As described above, each of the collaborators can comprise a user, a role, an application, a channel (e.g., members associated therewith), a direct message (e.g., members associated therewith), a board (e.g., members associated therewith), and/or the like. In some examples, the collaborators can be associated with two or more organizations. In at least one example, the message object can be associated with identifiers of each of the collaborators. In some examples, prior to associating another user or other entity with the message object as a collaborator, the message object management component116can send an invitation to a client of the other user or entity, inviting them to collaborate on the message object. In such examples, the message object management component116can wait to associate identifier(s) of the other collaborator(s) with the message object until an acceptance of the invitation is received. That is, in at least one example, the message object management component116can associate the identifier(s) of the invited collaborator(s) (e.g., user(s) and/or entity(s)) with the message object in response to receiving an acceptance of the invitation. In at least one example, each of the collaborators can interact with respective instances of the composition user interface to provide comments and/or other feedback, add new content to a message object, remove contents of a message object, revise contents of the message object, view a history of modifications (e.g., additions, revisions, etc.), accept proposed modifications, and/or the like. In at least one example, the composition user interface can enable the collaborators to generate and/or edit the message object in real-time or near real-time. In at least one example, when a client of one of the collaborators receives an input associated with the message object, the input can be sent to the message object management component116, which can update the message object based at least in part on such an input. In at least one example, such an input can be pushed to other instances of the composition user interface using techniques such as operational transform or the like. That is, in at least one example, the message object management component116can enable real-time or near real-time collaboration on message objects or other objects. In at least one example, one of the collaborators can provide an input, via the collaboration user interface, to post the message object to, or otherwise present the message object via, one or more virtual spaces. In such an example, the message object management component116can receive an indication of such an input and can cause the message object to be posted to, or otherwise presented via, the one or more virtual spaces. In some examples, receipt of such an indication can cause the message object management component116to modify the indication that the message object is a draft and/or is not posted or otherwise presented to an indication that the message object is posted or otherwise presented (i.e., and no longer a draft). In at least one example, the message object management component116can push, or otherwise provide, the message object to client(s) of each of the member(s) associated with the target virtual space(s) and the message object can be presented in association with the target virtual space(s) on each of the client(s). 
For example, if a message object is posted to a channel, the message object management component116can push, or otherwise provide, the message object to respective clients of each of the members of the channel. As another example, if a message object is posted to a board, the message object management component116can push, or otherwise provide, the message object to respective clients of each of the members of the board. In at least one example, the message object can be posted to, or otherwise presented via, the virtual space(s) with an indication of each of the user(s) and/or entity(s) that collaborated on the composition of the message object. That is, the message object can be associated with an indication that the message object originated from each of the collaborators. In some examples, prior to posting a message object that is associated with multiple collaborators, the message object management component116can initiate an approval process. That is, individual message objects can be associated with approval processes that indicate whether approval is required to post or otherwise present the message object and/or from whom approval is required. In some examples, the approval process can be associated with a priority in which individual of the collaborators are to approve the draft message object. In some examples, a particular collaborator can be required to approve the draft message object. In some examples, each of the collaborators can be required to approve the draft message object. In at least one example, the collaborators can be associated with permission(s) that enable the collaborators to perform a set of actions associated with the message object after it has been posted or otherwise presented. In some examples, non-collaborators may not be associated with the same permission(s) and/or set of actions. In at least one example, the message object management component116can receive a request to edit a posted message object. The message object management component116can determine whether a user and/or client associated with the request has permission to edit the posted message object. So long as the user and/or client associated with the request has permission to edit the posted message object (e.g., the user and/or client was one of the original collaborators or has otherwise been added as a collaborator), the message object management component116can cause an instance of the composition user interface to be presented via the requesting client. As such, the requesting client can receive an input associated with an edit or other modification to the message object and can provide an indication of such an input to the message object management component116. The message object management component116can cause the posted message object to be updated based at least in part on receiving the indication of the input. That is, the message object management component116can update the message object and push, or otherwise provide, an update to one or more recipients of the posted message object (e.g., member(s) of the virtual space(s) to which the message object is posted or otherwise presented). In some examples, other collaborators can receive a notification that one of the collaborators has requested to edit the message object such that one or more of the other collaborators can opt to open respective instance(s) of the composition user interface for collaborative editing. 
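As a further non-limiting illustration, the sketch below shows one way an approval process of the kind described above could be evaluated before a draft message object is posted. The ApprovalProcess record, its field names, and the ordering semantics are assumptions made for the example; the communication platform described herein is not limited to this structure.

```python
from dataclasses import dataclass, field


@dataclass
class ApprovalProcess:
    """Illustrative per-message-object approval configuration (hypothetical)."""
    required_approver_ids: set          # collaborators whose approval is required
    ordered: bool = False               # whether approvals must follow a priority order
    priority: list = field(default_factory=list)
    received: list = field(default_factory=list)   # approver ids in the order received


def record_approval(process: ApprovalProcess, approver_id: str) -> None:
    """Record an approval from a collaborator, ignoring duplicates."""
    if approver_id not in process.received:
        process.received.append(approver_id)


def may_post(process: ApprovalProcess) -> bool:
    """Return True only when every required approver has approved (in order, if ordered)."""
    if not process.required_approver_ids <= set(process.received):
        return False
    if process.ordered:
        filtered = [a for a in process.received if a in process.required_approver_ids]
        expected = [a for a in process.priority if a in process.required_approver_ids]
        return filtered == expected
    return True


if __name__ == "__main__":
    process = ApprovalProcess(required_approver_ids={"user_a", "user_b"},
                              ordered=True, priority=["user_a", "user_b"])
    record_approval(process, "user_a")
    print(may_post(process))            # False: user_b's approval is still outstanding
    record_approval(process, "user_b")
    print(may_post(process))            # True: both required approvals received in order
```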
In some examples, an approval process, as described above, may be implemented prior to posting or otherwise presenting an edited message. In some examples, collaborators can be added to a message object after the message object has been posted. That is, in at least one example, a collaborator associated with the message object can request another user or other entity be added as a collaborator after the message object has been posted to or otherwise presented via a virtual space. In such an example, the message object management component116can associate the other user or other entity with the message as a collaborator. In such an example, the message object management component116can cause an indication of the new collaborator to be presented in association with the message object presented via the user interface, as described herein. In some examples, a user or entity that is not a collaborator at the time the message object is posted or otherwise presented can request to be added as a collaborator. In such examples, the message object management component116can receive the request and may add the requesting user or entity as a collaborator. In some examples, the message object management component116can utilize permissions to determine whether the requesting user or entity can be added and/or can execute an authorization process for one of the current collaborators to authorize the addition of the requesting user or entity as a collaborator. In some examples, the message object management component116can exchange data with the channel management component118and/or the direct message (DM) component119to post message objects to virtual spaces managed by each of the components. In some examples, the message object management component116can exchange data with other component(s) of the server(s)102to post message objects to virtual space(s) managed by such component(s). In some examples, as described above, techniques described herein can be applicable to objects other than message objects. For instance, in at least one example, techniques described herein can be used to enable collaboration on documents, posts, channel descriptions, user profiles, and/or the like, which can be managed by the message object management component116or another component of the server(s)102. In at least one example, the channel management component118can manage channels of the communication platform. In at least one example, the communication platform can be “channel-based” such that the platform can be organized into channels having security (that can be defined by permissions) to limit access to defined groups of users (e.g., members of the channels). A channel, or virtual space, can be a data route used for exchanging data between and among systems and devices associated with the communication platform such as, for example, content and/or message objects. In some examples, a channel may be “public,” which may allow any user within a group (e.g., associated with an organization identifier, associated with a workspace identifier, etc.) with which the channel is associated to join and participate in the data sharing through the channel. In some examples, a channel may be “private,” which may restrict data communications in the channel to certain users or users having particular roles (e.g., managers, administrators, etc.) and/or types (e.g., verified, administrator, etc.). 
In some examples, a channel may be an “announcement” channel, which may restrict communication in the channel to announcements or may otherwise be associated with announcements instead of other more granular topics of other channels. In at least one example, a channel can be associated with a defined group of users within the same organization. Such a channel can be an “internal channel” or an “internally shared channel.” In some examples, a channel may be “shared” or “externally shared,” which may allow users associated with two or more different groups (e.g., entities associated with two or more different organization and/or workspace identifiers) to join and participate in the data sharing through the channel. A shared channel may be public such that it is accessible to any user of groups associated with the shared channel, or may be private such that it is restricted to access by certain users or users having particular roles and/or types. A “shared channel” or an “externally shared channel” can enable two or more organizations, such as a first organization and a second organization to share data, exchange communications, and the like (hence, a “shared” channel or an “externally shared channel” can refer to a channel which is accessible across different organizations, whereas an “internal channel” can refer to a communication channel which is accessible within a same organization). In an example, the first organization and the second organization can be associated with different organization identifiers, can be associated with different business entities, have different tax identification numbers, and/or otherwise can be associated with different permissions such that users associated with the first organization and users associated with the second organization are not able to access data associated with the other organization, without the establishment of an externally shared channel. In some examples, a shared channel can be shared with one or more different workspaces and/or organizations that, without having a shared channel, would not otherwise have access to each other's data by the nature of the permission-based and/or group-based configuration of the communication platform described herein. In at least one example, the channel management component118can receive a request to generate a channel. In some examples, the request can include a name that is to be associated with the channel, one or more users to invite to join the channel, and/or permissions associated with the channel. In at least one example, one or more user identifiers associated with one or more users and/or one or more user accounts can be mapped to, or otherwise associated with, a channel (e.g., a channel identifier associated therewith). User(s) associated with a channel can be “members” of the channel. Members of a channel can communicate with other members via the channel. That is, in at least one example, the channel management component118can establish a channel between and among various user computing devices associated with user identifiers associated with the channel, allowing the user computing devices to communicate and share data between and among each other. As described herein, in some examples, such communication and/or sharing of data can be via one or more message objects that can be exchanged via a channel. In at least one example, the channel management component118can manage such communications and/or sharing of data. 
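By way of example, and not limitation, the following sketch shows one hypothetical way a channel management component could record a requested channel, map invited members (and their organization identifiers) to it, and derive whether the channel is internally or externally shared. All class and function names below are illustrative assumptions rather than a description of the channel management component 118.

```python
from dataclasses import dataclass, field


@dataclass
class Channel:
    """Illustrative channel record; field names are hypothetical."""
    channel_id: str
    name: str
    is_private: bool
    member_org_ids: dict = field(default_factory=dict)   # user_id -> organization identifier


def create_channel(channel_id: str, name: str, members: dict, is_private: bool = False) -> Channel:
    """Map invited user identifiers (and their organizations) to a new channel."""
    return Channel(channel_id=channel_id, name=name, is_private=is_private,
                   member_org_ids=dict(members))


def is_externally_shared(channel: Channel) -> bool:
    """A channel whose members span two or more organization identifiers is 'shared'."""
    return len(set(channel.member_org_ids.values())) > 1


def can_access(channel: Channel, user_id: str) -> bool:
    """Private channels restrict access to mapped members in this simplified sketch."""
    if channel.is_private:
        return user_id in channel.member_org_ids
    return True   # simplified: public channels admit any user in the associated group


if __name__ == "__main__":
    channel = create_channel("C1", "vendor-collab",
                             {"user_a": "org_1", "user_b": "org_2"}, is_private=True)
    print(is_externally_shared(channel))   # True: members span two organizations
    print(can_access(channel, "user_c"))   # False: not a member of the private channel
```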
In some examples, data associated with a channel can be presented via a user interface. As described above, in at least one example, one or more permissions can be mapped to, or otherwise associated with, a channel and/or members associated therewith. Such permission(s) can indicate which user(s) have permission to access the channel, actions and/or message objects permitted in the channel, which user(s) and/or type(s) of users are permitted to add or remove members, which user(s) and/or types of users are permitted to share the channel with other users, a retention policy associated with data in the channel, whether the channel is public or private, or the like. In at least one example, the direct message management component119can manage “direct messages,” which can comprise communications with individual users or multiple specified users (e.g., instead of all, or a subset of, members of an organization). In at least one example, a “direct message” can comprise a data route, or virtual space, used for exchanging data between and among systems and devices associated with the communication platform (e.g., content and/or message objects). In some examples, a direct message can be a private message object between two or more users of the communication platform. In some examples, a direct message may be “shared,” which may allow users associated with two or more different groups (e.g., entities associated with two or more different organization and/or workspace identifiers) to join and participate in the data sharing through the direct message. In at least one example, the direct message management component119can receive a request to generate a direct message. In some examples, the request can include identifiers associated with one or more users that are intended recipient(s) (e.g., recipient user(s)) of the direct message. In at least one example, one or more user identifiers associated with one or more users and/or one or more user accounts can be mapped to, or otherwise associated with, a direct message (e.g., or direct message identifier associated therewith). User(s) associated with a direct message can communicate with one another and/or otherwise share data with one another via the direct message. As described herein, in some examples, such communication and/or sharing of data can be via one or more message objects that can be exchanged via the direct message. In at least one example, the direct message management component119can manage such communications and/or sharing of data. In some examples, data associated with a direct message can be presented via a user interface. In at least one example, the operating system120can manage the processor(s)108, computer-readable media110, hardware, software, etc. of the server(s)102. In at least one example, the datastore122can be configured to store data that is accessible, manageable, and updatable. In some examples, the datastore122can be integrated with the server(s)102, as shown inFIG.1. In other examples, the datastore122can be located remotely from the server(s)102and can be accessible to the server(s)102and/or user device(s), such as the user device104. The datastore122can comprise one or multiple databases, which can include user data124, permission data126, channel data128, and direct message (DM) data130. Additional or alternative data may be stored in the datastore and/or one or more other datastores. In at least one example, the user data124can store data associated with users of the communication platform. 
In at least one example, the user data124can store data in user profiles (which can also be referred to as “user accounts”). In some examples, a user can be associated with a single user profile. In some examples, a user can be associated with multiple user profiles. A user profile can store data associated with a user, including, but not limited to, one or more user identifiers associated with multiple, different organizations, groups, or entities with which the user is associated, one or more group identifiers for groups (or, organizations, teams, entities, or the like) with which the user is associated, one or more channel identifiers associated with channels to which the user has been granted access, an indication whether the user is an owner or manager of any channels, an indication whether the user has any channel restrictions, one or more direct message identifiers associated with direct messages with which the user is associated, one or more board identifiers associated with boards with which the user is associated, a plurality of message objects, a plurality of emojis, a plurality of conversations, a plurality of conversation topics, an avatar, an email address, a real name (e.g., John Doe), a username (e.g., j doe), a password, a time zone, a status, and the like. In some examples, the user data124can store indications of user preferences, which can be explicitly indicated or learned. In some examples, the user data124of a user can indicate a role or position of a user, which can be determined based at least in part on an organizational chart and/or learned. In some examples, the communication platform can analyze messaging and/or other interaction data to determine relationships between users and/or relative ranks and can infer organizational charts. In some examples, the user data124of a user can indicate a user type of the user, for example, whether the user is an administrator, a verified user, and/or the like. In at least one example, user type can be a designation provided by the communication platform (e.g., wherein roles can be designated by organizations, workspaces, teams, and/or other groups). In some examples, the communication platform can store indications of which users and/or virtual spaces a user communicates with and/or in, a frequency of such communication, topics associated with such communications, reactions and/or feedback associated with such communications and/or the like. In at least one example, the permission data126can store data associated with permissions of individual users of the communication platform. In some examples, permissions can be set automatically or by an administrator of the communication platform, an employer, enterprise, organization, or other entity that utilizes the communication platform, a team leader, a group leader, or other entity that utilizes the communication platform for communicating with team members, group members, or the like, an individual user, or the like. In some examples, permissions associated with an individual user can be mapped to, or otherwise associated with, a profile and/or account associated with the user data124. In some examples, permissions can indicate which users can communicate directly with other users, which channels a user is permitted to access, restrictions on individual channels, which workspaces the user is permitted to access, restrictions on individual workspaces, and the like. 
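Purely as a hypothetical illustration of how per-user permission data of the kind just described might be consulted, the sketch below models a user's permission record and two simple checks. The field and function names are assumptions chosen for the example and do not describe the permission data 126 itself.

```python
from dataclasses import dataclass, field


@dataclass
class UserPermissions:
    """Illustrative per-user permission record (hypothetical fields)."""
    user_id: str
    accessible_channel_ids: set = field(default_factory=set)
    accessible_workspace_ids: set = field(default_factory=set)
    direct_message_allowed_user_ids: set = field(default_factory=set)


def may_access_channel(perms: UserPermissions, channel_id: str) -> bool:
    """Gate channel access on the user's stored channel permissions."""
    return channel_id in perms.accessible_channel_ids


def may_direct_message(perms: UserPermissions, other_user_id: str) -> bool:
    """Gate direct messaging on which users this user is permitted to contact."""
    return other_user_id in perms.direct_message_allowed_user_ids


if __name__ == "__main__":
    perms = UserPermissions(user_id="user_f",
                            accessible_channel_ids={"channel_d"},
                            direct_message_allowed_user_ids={"user_a"})
    print(may_access_channel(perms, "channel_d"))   # True
    print(may_direct_message(perms, "user_z"))      # False
```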
In at least one example, the permissions can support the communication platform by maintaining security for limiting access to a defined group of users. In some examples, such users can be defined by common access credentials, group identifiers, or the like, as described above. In some examples, the permission data126can store data associated with permissions of groups associated with the communication platform. In some examples, permissions can be set automatically or by an administrator of the communication platform, an employer, enterprise, organization, or other entity that utilizes the communication platform, a team leader, a group leader, or other entity that utilizes the communication platform for communicating with team members, group members, or the like, an individual user, or the like. In some examples, permissions associated with a group can be mapped to, or otherwise associated with, data associated with the group. In some examples, permissions can indicate restrictions on individual groups, restrictions on channel(s) associated with individual groups, restrictions on user(s) associated with individual groups, and the like. In at least one example, the permissions can support the communication platform by maintaining security for limiting access to a defined group of users. In some examples, such groups can be defined by common access credentials, group identifiers, or the like, as described above. In some examples, the permission data126can store data associated with permissions of individual channels. In some examples, permissions can be set automatically or by an administrator of the communication platform, an employer, enterprise, organization, or other entity that utilizes the communication platform, a team leader, a group leader, or other entity that utilizes the communication platform for communicating with team members, group members, or the like, an individual user, or the like. In some examples, permissions associated with a channel can be mapped to, or otherwise associated with, data associated with the channel in the channel data128. In some examples, permissions can indicate restrictions on individual channels, restrictions on user(s) associated with individual channels, and the like. In some examples, the permission data126can store data associated with permissions of individual message objects or other objects. In some examples, permissions can be set automatically or by an administrator of the communication platform, an employer, enterprise, organization, or other entity that utilizes the communication platform, a team leader, a group leader, or other entity that utilizes the communication platform for communicating with team members, group members, or the like, an individual user (e.g., the originator of the message object), or the like. In some examples, permissions associated with a message object or other object can be mapped to, or otherwise associated with, data associated with the message object or other object. In some examples, permissions can indicate viewing permissions, access permissions, editing permissions, etc. In at least one example, the channel data128can store data associated with individual channels. In at least one example, the channel management component118can establish a channel between and among various user computing devices, allowing the user computing devices to communicate and share data between and among each other. 
In at least one example, a channel identifier may be assigned to a channel, which indicates the physical address in the channel data128where data related to that channel is stored. Individual message objects or other objects posted to a channel can be stored in association with the channel data128. In at least one example, the DM data130can store data associated with individual direct messages. In at least one example, the direct message management component119can establish a direct message between and among various user computing devices, allowing the user computing devices to communicate and share data between and among each other via the direct message. In at least one example, a direct message identifier may be assigned to a direct message, which indicates the physical address in the DM data130where data related to that direct message is stored. Individual message objects or other objects posted to a direct message can be stored in association with the DM data130. As described above, message objects posted, or otherwise sent and/or received, via channels, direct messages, etc. can be stored in associated with the channel data128and/or DM data130. In some examples, such message objects can additionally or alternatively be stored in association with the user data124. In some examples, message objects can be associated with individual permissions, as described herein. The datastore122can store additional or alternative types of data, which can include, but is not limited to board data (e.g., data posted to or otherwise associated with boards of the communication platform), interaction data (e.g., data associated with additional or alternative interactions with the communication platform), model(s), etc. In some examples, the datastore122can be partitioned into discrete items of data that may be accessed and managed individually (e.g., data shards). Data shards can simplify many technical tasks, such as data retention, unfurling (e.g., detecting that message object contents include a link, crawling the link's metadata, and determining a uniform summary of the metadata), and integration settings. In some examples, data shards can be associated with groups (e.g., organizations, workspaces), channels, direct messages, users, or the like. In some examples, individual groups can be associated with a database shard within the datastore122that stores data related to a particular group identification. For example, a database shard may store electronic communication data associated with members of a particular group, which enables members of that particular group to communicate and exchange data with other members of the same group in real time or near-real time. In this example, the group itself can be the owner of the database shard and has control over where and how the related data is stored and/or accessed. In some examples, a database shard can store data related to two or more groups (e.g., as in a shared channel, such as an externally shared channel). In some examples, a channel can be associated with a database shard within the datastore122that stores data related to a particular channel identification. For example, a database shard may store electronic communication data associated with the channel, which enables members of that particular channel to communicate and exchange data with other members of the same channel in real time or near-real time. 
In this example, a group or organization can be the owner of the database shard and can control where and how the related data is stored and/or accessed. In some examples, a direct message can be associated with a database shard within the datastore122that stores data related to a particular direct message identification. For example, a database shard may store electronic communication data associated with the direct message, which enables a user associated with a particular direct message to communicate and exchange data with other users associated with the same direct message in real time or near-real time. In this example, a group or organization can be the owner of the database shard and can control where and how the related data is stored and/or accessed. In some examples, individual users can be associated with a database shard within the datastore122that stores data related to a particular user account. For example, a database shard may store electronic communication data associated with an individual user, which enables the user to communicate and exchange data with other users of the communication platform in real time or near-real time. In some examples, the user itself can be the owner of the database shard and has control over where and how the related data is stored and/or accessed. The communication interface(s)112can include one or more interfaces and hardware components for enabling communication with various other devices (e.g., the user computing device104), such as over the network(s)106or directly. In some examples, the communication interface(s)112can facilitate communication via Web sockets, Application Programming Interfaces (APIs) (e.g., using API calls), Hyper Text Transfer Protocols (HTTPs), etc. The server(s)102can further be equipped with various input/output devices114(e.g., I/O devices). Such I/O devices114can include a display, various user interface controls (e.g., buttons, joystick, keyboard, mouse, touch screen, etc.), audio speakers, connection ports and so forth. In at least one example, the user computing device104can include one or more processors132, computer-readable media134, one or more communication interfaces136, and input/output devices138. In at least one example, each processor of the processor(s)132can be a single processing unit or multiple processing units, and can include single or multiple computing units or multiple processing cores. The processor(s)132can comprise any of the types of processors described above with reference to the processor(s)108and may be the same as or different than the processor(s)108. The computer-readable media134can comprise any of the types of computer-readable media134described above with reference to the computer-readable media110and may be the same as or different than the computer-readable media110. Functional components stored in the computer-readable media can optionally include at least one application140and an operating system142. In at least one example, the application140can be a mobile application, a web application, or a desktop application, which can be provided by the communication platform or which can be an otherwise dedicated application. In at least one example, the application140can be a native application associated with the communication platform. 
In some examples, individual user computing devices associated with the environment100can have an instance or versioned instance of the application140, which can be downloaded from an application store, accessible via the Internet, or otherwise executable by the processor(s)132to perform operations as described herein. That is, the application140can be an access point, enabling the user computing device104to interact with the server(s)102to access and/or use communication services available via the communication platform. In at least one example, the application140can facilitate the exchange of data between and among various other user computing devices, for example via the server(s)102. In at least one example, the application140can present user interfaces, as described herein. In at least one example, a user can interact with the user interfaces via touch input, keyboard input, mouse input, spoken input, or any other type of input. Additional or alternative access points, such as a web browser, can be used to enable the user computing device104to interact with the server(s)102as described herein. That is, in examples where the application140is described as performing an operation below, in an additional or alternative example, such an operation can be performed by another access point, such as a web browser or the like. In at least one example, the user computing device104can correspond to a “client” of a user. In some examples, the user computing device104can be associated with multiple “clients,” in which case, each instance of an application or other access point can be its own client. For example, a user can be signed into a first client (e.g., the application140) and a second client (e.g., a web browser), both of which can be associated with the user computing device104. In another example, the user can be signed into a first client (e.g., the application140) and a second client, each of which can be on separate user computing devices. As described above, a client, which can be associated with the user computing device104, can present one or more user interfaces. A non-limiting example of a user interface144is shown inFIG.1. As illustrated inFIG.1, the user interface144can present data associated with one or more channels, direct messages, or other virtual spaces. In some examples, the user interface144can include a first section146(e.g., which can be a portion, pane, or other partitioned unit of the user interface144), that includes user interface element(s) representing data associated with channel(s), direct message(s), etc. with which the user (e.g., account of the user) is associated. Additional details associated with the first section146and user interface element(s) are described below with reference toFIG.2A. In at least one example, the user interface144can include a second section148(e.g., which can be a portion, pane, or other partitioned unit of the user interface144) that can be associated with a data feed (or, “feed”) indicating message objects posted to and/or actions taken with respect to one or more channels, direct messages, and/or other virtual spaces for facilitating communications (e.g., a virtual space associated with event(s) and/or action(s), etc.) as described herein. In at least one example, data associated with the second section148can be associated with the same or different workspaces. That is, in some examples, the second section148can present data associated with the same or different workspaces via an integrated data feed. 
In some examples, the data can be organized and/or is sortable by date, time (e.g., when associated data is posted or an associated operation is otherwise performed), type of action and/or data, workspace, channel, user, topic, relevance metric, and/or the like. In some examples, such data can be associated with an indication of which user (e.g., member of the channel) posted the message object and/or performed an action. In examples where the second section148presents data associated with multiple workspaces, at least some data can be associated with an indication of which workspace the data is associated with. In at least one example, the first section146and the second section148, in combination, can be associated with a “group-based communication user interface” from which a user can interact with the communication platform. Additional details associated with the user interface144, the first section146, and the second section148, are described below with reference toFIG.2A. In at least one example, a composition user interface150, which can enable multiple entities (e.g., users, roles, applications, channels, direct messages, boards, and/or the like) to collaborate on drafting or editing of a message object can be presented via the user interface144. The composition user interface150can be presented as a pop-up, as shown, or an overlay, a portion of the user interface144, in association with another input mechanism of the user interface144, a new user interface, and/or the like. In some examples, the composition user interface150can be triggered for presentation in response to a request to generate a new message object. In some examples, the composition user interface150can be triggered for presentation in response to a request to edit a previously posted message object. In some examples, aspects of the composition user interface150can be integrated into other features of the user interface144, as described below. Additional details associated with the user interface144and/or differentiated presentation of message objects and/or user interface(s) are described below with reference toFIGS.2A-4. In at least one example, the operating system142can manage the processor(s)132, computer-readable media134, hardware, software, etc. of the user computing device104. The communication interface(s)136can include one or more interfaces and hardware components for enabling communication with various other devices (e.g., the user computing device104), such as over the network(s)106or directly. In some examples, the communication interface(s)136can facilitate communication via Websockets, APIs (e.g., using API calls), HTTPs, etc. The user computing device104can further be equipped with various input/output devices138(e.g., I/O devices). Such I/O devices138can include a display, various user interface controls (e.g., buttons, joystick, keyboard, mouse, touch screen, etc.), audio speakers, microphones, cameras, connection ports and so forth. While techniques described herein are described as being performed by the message object management component116, the channel management component118, the direct message management component119, and the application140, techniques described herein can be performed by any other component, or combination of components, which can be associated with the server(s)102, the user computing device104, or a combination thereof. FIG.2Aillustrates an example user interface200presented via a communication platform, as described herein. 
The user interface200can correspond to the user interface144described above with reference toFIG.1. As described above, in some examples, a user interface200presented via the communication platform can include a first section202(which can correspond to the first section146described above with reference toFIG.1) that includes user interface element(s) representing virtual space(s) associated with the workspace(s) with which the user (e.g., account of the user) is associated. In at least one example, the first section202can include one or more sub-sections, which can represent different virtual spaces. For example, a first sub-section204can include user interface elements representing virtual spaces that can aggregate data associated with a plurality of channels and/or workspaces. In at least one example, each virtual space can be associated with a user interface element in the first sub-section204. In some examples, a user interface element can be associated with an actuation mechanism, that when actuated, can cause the application140to present data associated with the corresponding virtual space via a second section206of the user interface200(which can correspond to the second section148described above with reference toFIG.1). In at least one example, a virtual space can be associated with all unread data associated with each of the workspaces with which the user is associated. That is, in some examples, if the user requests to access the virtual space associated with “unreads,” all data that has not been read (e.g., viewed) by the user can be presented in the second section206, for example in a feed. In another example, “drafts” can be associated with message objects or other objects that have not yet been posted to a virtual space or otherwise sent to a receiving entity. In at least one example, a message object, while being composed, can be associated with an indicator indicating that the message object is a draft and can therefore be associated with the “drafts” referenced in the second sub-section208. In another example, “threads” can be associated with message objects, files, etc. posted in threads to message objects posted in a channel and/or a virtual space associated with “mentions and reactions” (e.g., “M & R”) can be associated with message objects or threads where the user (e.g., User F) has been mentioned (e.g., via a tag) or another user has reacted (e.g., via an emoji, reaction, or the like) to a message object or thread posted by the user. In some examples, if the first sub-section204includes a user interface element representative of a virtual space associated with “snippets of content” (e.g., stories) that is actuated by a user, snippets of content associated with the user, which can be associated with different channels and/or virtual spaces, can be presented via the second section206. In some examples, such snippets of content can be presented via a feed. For the purpose of this discussion, a snippet of content can correspond to audio and/or video content provided by a user associated with the communication platform. In another example, a virtual space can be associated with “boards” with which the user is associated. In at least one example, if the user requests to access the virtual space associated with “boards,” one or more boards with which the user is associated can be presented via the user interface200(e.g., in the second section206). 
In at least one example, boards, as described herein, can be associated with individual groups and/or communication channels to enable users of the communication platform to create, interact with, and/or view data associated with such boards. That is, a board, which can be an “electronic board,” can be a virtual space, canvas, page, or the like for collaborative communication and/or organization within the communication platform. In at least one example, a board can support editable text and/or objects that can be ordered, added, deleted, modified, and/or the like. In some examples, a board can be associated with permissions defining which users of a communication platform can view and/or edit the board. In some examples, a board can be associated with a communication channel and at least some members of the communication channel can view and/or edit the board. In some examples, a board can be sharable such that data associated with the board is accessible to and/or interactable for members of the multiple communication channels, workspaces, organizations, and/or the like. In at least one example, a board can include section(s) and/or object(s). In some examples, each section can include one or more objects. In at least one example, an object can be associated with an object type, which can include, but is not limited to, text (e.g., which can be editable), a task, an event, an image, a graphic, a link to a local object, a link to a remote object, a file, and/or the like. In some examples, the sections and/or objects can be reordered and/or otherwise rearranged, new sections and/or objects can be added or removed, and/or data associated with such sections and/or objects can be edited and/or modified. That is, boards can be created and/or modified for various uses. That is, users can customize and/or personalize boards to serve individual needs as described herein. As an example, sections and/or objects can be arranged to create a project board that can be used to generate and/or assign tasks, track progress, and/or otherwise manage a project. Further, in some examples, boards can present company metrics and also enable access to company goals so that such data can be stored and/or accessed via a single location. In some examples, boards can be used to keep track of work progress and/or career growth, which can be used by managers or supervisors for managing and/or supervising employees, agents, and/or other workers. In at least one example, a board can be used to track incidents, incoming customer service requests, and/or the like. Additional details associated with boards are provided in U.S. patent application Ser. No. 16/993,859, filed on Aug. 14, 2020, the entire contents of which are incorporated by reference herein. In some examples, data presented via the second section can be organized and/or is sortable by date, time (e.g., when associated data is posted or an associated operation is otherwise performed), type of action and/or data, workspace, channel, user, topic, relevance metric, and/or the like. In some examples, such data can be associated with an indication of which user(s) (e.g., member(s) of a channel) posted a message object, performed an action, and/or the like. Additional details are described below. In at least one example, the first section202of the user interface200can include a second sub-section208that includes user interface elements representing channels to which the user (i.e., user profile) has access. 
In some examples, the channels can include public channels, private channels, shared channels (e.g., between workspaces or organizations), single workspace channels, cross-workspace channels, announcement channels, combinations of the foregoing, or the like. In some examples, the channels represented can be associated with a single workspace. In some examples, the channels represented can be associated with different workspaces (e.g., cross-workspace). In some examples, the channels represented can be associated with combinations of channels associated with a single workspace and channels associated with different workspaces. In some examples, the second sub-section208can depict all channels, or a subset of all channels, that the user has permission to access (e.g., as determined by the permission data126). In such examples, the channels can be arranged alphabetically, based on most recent interaction, based on frequency of interactions, based on channel type (e.g., public, private, shared, cross-workspace, announcement, etc.), based on workspace, in user-designated sections, or the like. In some examples, the second sub-section208can depict all channels, or a subset of all channels, that the user is a member of, and the user can interact with the user interface200to browse or view other channels that the user is not a member of but are not currently displayed in the second sub-section208. In some examples, a new channel, generated subsequent to a request received at the channel management component118inFIG.1and accessible to the user, can be added to the second sub-section208. The new channel can be generated by the user or added to the second sub-section208in response to acceptance of an invite sent to the user to join a new channel. In some examples, different types of channels (e.g., public, private, shared, etc.) can be in different sections of the second sub-section208, or can have their own sub-sections or sub-sections in the user interface200. In some examples, channels associated with different workspaces can be in different portions of the second sub-section208, or can have their own sections or sub-sections in the user interface200. In some examples, the indicators can be associated with user interface elements that visually differentiate types of channels. For example, Channel B is associated with a double square user interface element instead of a circle user interface element. As a non-limiting example, and for the purpose of this discussion, the double square user interface element can indicate that the associated channel (e.g., Channel B) is an externally shared channel. In some examples, such a user interface element can be the same for all externally shared channels. In other examples, such a user interface element can be specific to the other group with which the externally shared channel is associated. In some examples, additional or alternative graphical elements can be used to differentiate between public channels, private channels, shared channels, channels associated with different workspaces, and the like. In other examples, channels that the user is not a current member of may not be displayed in the second sub-section208of the user interface200. In such examples, the user may navigate to a different interface (not shown) to browse additional channels that are accessible to the user but to which the user is not currently a member. 
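As a non-limiting illustration of the sidebar behavior described above, the following sketch filters a list of channels down to those a user may access and arranges them alphabetically, by most recent interaction, or by channel type. The view-model and ordering keys are assumptions chosen for the example.

```python
from dataclasses import dataclass


@dataclass
class SidebarChannel:
    """Illustrative view-model for a channel entry in the sidebar (hypothetical)."""
    name: str
    channel_type: str          # e.g., "public", "private", "externally_shared"
    last_interaction: float    # timestamp of the user's most recent interaction
    accessible: bool           # result of a check against stored permission data


def channels_for_display(channels, order_by="alphabetical"):
    """Filter to channels the user may access, then arrange them for display."""
    visible = [c for c in channels if c.accessible]
    if order_by == "recent":
        return sorted(visible, key=lambda c: c.last_interaction, reverse=True)
    if order_by == "type":
        return sorted(visible, key=lambda c: (c.channel_type, c.name.lower()))
    return sorted(visible, key=lambda c: c.name.lower())


if __name__ == "__main__":
    channels = [
        SidebarChannel("Channel D", "public", 1_700_000_500.0, True),
        SidebarChannel("Channel B", "externally_shared", 1_700_000_400.0, True),
        SidebarChannel("Channel X", "private", 1_700_000_100.0, False),
    ]
    print([c.name for c in channels_for_display(channels)])              # alphabetical order
    print([c.name for c in channels_for_display(channels, "recent")])    # most recent first
```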
In addition to the second sub-section208, the first section202can include a third sub-section210that can include user interface elements representative of direct messages. That is, the third sub-section210can include user interface elements representative of virtual spaces that are associated with private message objects between one or more users, as described above. As described above, in at least one example, the user interface200can include a second section206that can be associated with data associated with virtual spaces of the communication platform. In some examples, data presented via the second section206can be presented as a feed indicating message objects posted to and/or actions taken with respect to a channel and/or other virtual space (e.g., a virtual space associated with direct message communication(s), a virtual space associated with event(s) and/or action(s), etc.) for facilitating communications. As described above, in at least one example, data associated with the second section206can be associated with the same or different workspaces. That is, in some examples, the second section206can present data associated with the same or different workspaces via an integrated feed. In some examples, the data can be organized and/or is sortable by date, time (e.g., when associated data is posted or an associated operation is otherwise performed), type of action and/or data, workspace, channel, user, topic, relevance metric, and/or the like. In some examples, such data can be associated with an indication of which user(s) and/or entity(s) posted the message object and/or performed an action. As described above, a "message object" can refer to any electronically generated digital object provided by a user using the user computing device104and that is configured for display within a channel, a direct message, and/or another virtual space as described herein. In some examples, a user can comment on a message object in a "thread." A thread can be a message object associated with another message object that is not posted to a channel, direct message, or other virtual space, but instead is maintained within an object associated with the original message object. Message objects and/or threads can be associated with file(s), emoji(s), reactji(s), application(s), etc. A channel, direct message, or other virtual space can be associated with data and/or content other than message objects, or data and/or content that is associated with message objects. For example, non-limiting examples of additional data and/or content that can be presented via the second section206of the user interface144include members added to and/or removed from the channel, file(s) (e.g., file attachment(s)) uploaded and/or removed from the channel, application(s) added to and/or removed from the channel, post(s) (data that can be edited collaboratively, in near real-time by one or more members of a channel) added to and/or removed from the channel, description added to, modified, and/or removed from the channel, modifications of properties of the channel, etc. In some examples, the second section206can comprise a feed associated with a single channel. In such examples, data associated with the channel can be presented via the feed. In at least one example, data associated with a channel can be viewable to at least some of the users of a group of users associated with a same group identifier.
In some examples, for members of a channel, the content of the channel (e.g., messaging communications and/or objects) can be displayed to each member of the channel. For instance, a common set of group-based messaging communications can be displayed to each member of the channel such that the content of the channel (e.g., messaging communications and/or objects) may not vary per member of the channel. In some examples, messaging communications associated with a channel can appear differently for different users (e.g., based on personal configurations, group membership, permissions, policies, etc.). In at least one example, the format of the individual channels or virtual spaces may appear differently to different users. In some examples, the format of the individual channels or virtual spaces may appear differently based on which workspace or organization a user is currently interacting with or most recently interacted with. In some examples, the format of the individual channels or virtual spaces may appear differently for different users (e.g., based on personal configurations, group membership, permission(s), etc.). In at least one example, the user interface200can include a search mechanism212, wherein a user can input a search term and the server(s)102can perform a search associated with the communication platform. In some examples, the search can be performed across each group with which the user is associated, or the search can be restricted to a particular group, based on a user specification. The search may be performed with one or more shards associated with each group across which the search is performed. InFIG.2A, the user can interact with the user interface element that corresponds to Channel D in the second sub-section208and as such, a feed associated with the channel can be presented via the second section206of the user interface. In some examples, the second section206can be associated with a header that includes user interface elements214representing data associated with Channel D. Furthermore, the second section206can include user interface elements216,218, and220which each represent message objects posted to the channel. As illustrated, the user interface elements representative of the message objects216-220can include an indication of user(s) and/or entity(s) that posted the message object, a time when the message object was posted, content associated with the message object, reactions associated with the message object (e.g., emojis, reactjis, etc.), and/or the like. In at least one example, the second section206can include an input mechanism222, which can be associated with a composition user interface to enable a user to compose a message object to be posted to the channel. That is, in at least one example, a user can provide input via the input mechanism222(e.g., type, speak, etc.) to generate a new message object. In some examples, message objects can be generated by applications and/or automatically by the communication platform. In some examples, the second section206can include user interface elements representative of other objects and/or data associated with the channel (or other virtual space). In some examples, the user interface200can include a user interface element224that can trigger presentation of a composition user interface226. For example, the user interface element224can be associated with a request to generate a new message object. As illustrated inFIG.2A, the composition user interface226can comprise a pop-up. 
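The per-group shard search mentioned above might look roughly like the sketch below, which fans a query out to one shard per group (or to a single group the user specified) and merges ranked results. The SearchShard class and its substring-count scoring are hypothetical stand-ins for a real search index.

```python
from typing import Dict, List, Optional, Tuple

class SearchShard:
    """Minimal stand-in for a per-group search index."""
    def __init__(self, documents: Dict[str, str]):
        self.documents = documents  # doc_id -> text

    def search(self, query: str) -> List[Tuple[float, dict]]:
        q = query.lower()
        return [(float(text.lower().count(q)), {"doc_id": doc_id, "text": text})
                for doc_id, text in self.documents.items() if q in text.lower()]

def search_across_groups(query: str,
                         user_group_ids: List[str],
                         shards: Dict[str, SearchShard],
                         restrict_to_group: Optional[str] = None) -> List[Tuple[float, dict]]:
    """Query one shard per group and merge the ranked results."""
    group_ids = [restrict_to_group] if restrict_to_group else user_group_ids
    merged: List[Tuple[float, dict]] = []
    for group_id in group_ids:
        shard = shards.get(group_id)
        if shard is not None:
            merged.extend(shard.search(query))
    merged.sort(key=lambda pair: pair[0], reverse=True)
    return merged
```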
In some examples, the composition user interface226can comprise an overlay or a new user interface. In some examples, input to the input mechanism222can trigger presentation of the composition user interface226. In some examples, aspects described in association with the composition user interface226can be integrated with the input mechanism222or another aspect of the user interface200. As described above, a first entity can request to generate a new message object (e.g., via an interaction with the user interface element224, input to the input mechanism222, etc.). As described above, an entity can comprise a user, a role, an application, a channel, a direct message, a board, and/or the like. In at least one example, when an entity is a role, that role can be associated with one or more user profiles. That is, one or more user profiles can be associated with a role, and each of the one or more user profiles can be associated with a message object as a collaborator. In an example where an entity is a channel, a direct message, a board, or the like, user profile(s) of member(s) of the channel, the direct message, the board, or the like can be associated with a message object as a collaborator. The message object management component116can receive the request (i.e., to generate the new message object) and cause the composition user interface226to be presented via the user interface200. In at least one example, the composition user interface226can include an actuation mechanism228to add a collaborator. In at least one example, based at least in part on receiving an input associated with the actuation mechanism228, another user interface element can be presented to enable the first entity to identify one or more second entities to add as collaborators. In some examples, the first entity can add collaborators by tagging the collaborators, dragging and dropping an identifier of a collaborator into the composition user interface226, or the like. In at least one example, the first entity can invite one or more second entities to collaborate on the composition of the message object. That is, the first entity can add one or more second entities to the composition user interface226so that each of the second entity(s) can collaborate with the first entity in drafting the message object. As such, the first entity and the second entity(s) can be referred to as “collaborators.” In at least one example, the composition user interface226can include one or more user interface elements230that represent each of the collaborators associated with the message object and/or the composition user interface226. In at least one example, the composition user interface226can comprise a free form text box232or other input mechanism to enable the collaborators to generate contents of the message object. In some examples, the collaborators can attach and/or embed files, videos, images, resource locators (e.g., links), and/or the like to the message object. In at least one example, the composition user interface226can include one or more cursors or other indicators to indicate where each of the collaborators is editing in the free form text box232or are otherwise interacting with the composition user interface226. In at least one example, each of the collaborators can see additions, deletions, or other modifications to the content of the message object in real-time or near real-time. 
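A minimal sketch of how an entity might be expanded into collaborator user profiles, under the assumption that roles and virtual spaces map to member lists as described for the composition user interface (226). The function name and lookup tables are illustrative only, not the platform's actual data stores.

```python
from typing import Dict, List, Set

def add_collaborators(collaborator_ids: Set[str],
                      entity_id: str,
                      entity_type: str,
                      role_members: Dict[str, List[str]],
                      space_members: Dict[str, List[str]]) -> None:
    """Associate an entity with a draft message object as collaborator(s).

    A user maps to itself; a role, channel, direct message, or board
    expands to the user profiles behind it."""
    if entity_type == "user":
        collaborator_ids.add(entity_id)
    elif entity_type == "role":
        collaborator_ids.update(role_members.get(entity_id, []))
    elif entity_type in ("channel", "direct_message", "board"):
        collaborator_ids.update(space_members.get(entity_id, []))

collaborators: Set[str] = {"user_a"}
add_collaborators(collaborators, "marketing", "role",
                  role_members={"marketing": ["user_b", "user_c"]},
                  space_members={})
# collaborators is now {"user_a", "user_b", "user_c"}
```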
In at least one example, each of the collaborators can see comments and/or other feedback provided by other of the collaborators in real-time or near real-time. In at least one example, the composition user interface226can include an actuation mechanism234(e.g., “save”) to enable the message object to be saved (e.g., as a draft) and an actuation mechanism236(e.g., “post”) to enable the message object to be posted or otherwise delivered to a recipient of the message object. In at least one example, based at least in part on detecting an input associated with the actuation mechanism234, the message object can be saved as a draft (i.e., not published). In at least one example, based at least in part on detecting an input associated with the actuation mechanism236, the message object management component116can cause the message object to be posted to, or otherwise presented via, one or more virtual spaces. In at least one example, based at least in part on detecting an input associated with the actuation mechanism236, as illustrated inFIG.2B, the message object management component116can cause a user interface element238to be presented via the composition user interface226. In at least one example, the user interface element238can enable a collaborator to select one or more virtual spaces to which the message object is to be posted or otherwise presented, as illustrated inFIG.2C. As described above, the one or more virtual spaces can comprise channels, direct messages, boards, and/or the like. In some examples, a message object can be posted to or otherwise presented via two channels, a channel and a direct message, two direct messages, a channel and a board, two boards, a direct message and a board, and/or any combination of the foregoing. In at least one example, a collaborator can perform a search of the virtual spaces to which the message object can be posted or otherwise presented. In some examples, a collaborator can scroll or pan to view additional or alternative options than those presented via the user interface element238. In some examples, the user interface element238can comprise a pop-up, as illustrated, an overlay, or the like. In at least one example, the user interface element238can include an actuation mechanism240(e.g., “post”) that can enable a collaborator to provide an input to post the message object to, or otherwise present the object via, the selected virtual space(s). In at least one example, based at least in part on receiving an input to post the message object to one or more virtual spaces, the message object management component116can update the indicator associated with the message object representative of the message object to indicate that the message object is posted and can cause the message object to be presented via instances of the user interface200presented by client(s) of member(s) of the target virtual space(s).FIG.2Dillustrates an example of such, wherein the message object is posted to an instance of the user interface200presented via a client of a user of the channel. As shown in the second section206, a new user interface element242is presented as part of the feed in the second section206. In at least one example, the user interface element242can include one or more other user interface elements244that are representative of the collaborators associated with the message object. 
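One hedged way to picture the post step is the sketch below, which marks the draft as posted and fans the message object out to members of each selected virtual space with an indication of all collaborators as senders. The dictionary shapes and the push callback are assumptions for the example rather than the platform's actual transport.

```python
def post_message_object(message, target_space_ids, space_members, push):
    """Mark a collaborative draft as posted and deliver it to the members
    of each selected virtual space (channel, direct message, board).

    `push(user_id, payload)` is a placeholder for whatever transport
    updates each member's instance of the user interface (200)."""
    message["state"] = "posted"
    for space_id in target_space_ids:
        payload = {
            "message_id": message["message_id"],
            "space_id": space_id,
            "content": message["content"],
            # The posted object indicates every collaborator it originated from.
            "senders": sorted(message["collaborators"]),
        }
        for user_id in space_members.get(space_id, ()):
            push(user_id, payload)
```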
That is, the message object, as presented via the user interface 200, can include indication(s) of which user(s) and/or entity(s) generated the message object or from which the message object otherwise originated. In at least one example, when the message object is posted to, or otherwise presented via, the user interface 200, the message object can retain formatting and/or other aspects as drafted. That is, unlike existing techniques, wherein collaboration can happen via another platform and message object content can be copied and pasted into the communication platform, the message object content can be presented as drafted by the collaborators. As described above, in some examples, collaborators can be associated with permissions that enable the collaborators to perform a set of actions relative to the message object, wherein other members of the virtual space (e.g., Channel D) do not have the same permissions and/or set of actions. For example, as illustrated in FIG. 2D, the user interface element 242 is associated with an actuation mechanism 246 (e.g., "edit") to enable the user (User Z) to edit the message object associated therewith. As illustrated, User Z is one of the collaborators, so User Z has permission to edit the message object. However, as presented via another instance of the user interface 200 on a client of a member of the channel (e.g., Channel D) that is not one of the collaborators, such a user interface element may not be presented. In at least one example, if a user or other entity responds to the message object and/or reacts to the message object, each of the collaborators can be notified. That is, when the message object management component 116 receives an indication that another user or other entity has responded to and/or reacted to the message object (e.g., via a respective instance of the user interface 200 on their client), the message object management component 116 can send a notification to respective clients of each of the collaborators. As described above, in some examples, a user or entity that is not a collaborator can request to be added as a collaborator after the message object has been posted or otherwise presented via the channel. Further, in some examples, an existing collaborator can add another user or entity as a collaborator after the message object has been posted or otherwise presented via the channel. As illustrated in FIG. 2E, the message object can be posted to a second virtual space (e.g., Channel N). In FIG. 2E, the message object is posted to an instance of the user interface 200 presented via a client of a user of a second channel (e.g., Channel N). As shown in the second section 206, a user interface element 248 is presented as part of the feed. In at least one example, the user interface element 248 can include one or more other user interface elements 250 that are representative of the collaborators associated with the message object. That is, the message object, as presented via the user interface 200, can include indication(s) of which user(s) and/or entity(s) generated the message object or from which the message object otherwise originated. In some examples, if a collaborator is not a member of a channel or other virtual space, the message object may not be posted or otherwise presented via the channel or other virtual space (e.g., such a channel or virtual space may not be presented via the user interface element 238).
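A short, hypothetical sketch of the collaborator-specific behavior described above: collaborators are notified of replies and reactions, and only collaborators are offered the edit control. The event and message dictionaries are assumed shapes, not the platform's actual data model.

```python
def on_message_event(message, event, notify):
    """Notify every collaborator when someone replies to or reacts to the
    posted message object. `notify(user_id, text)` is a placeholder for
    the platform's real notification path."""
    if event["type"] in ("reply", "reaction"):
        for collaborator_id in message["collaborators"]:
            if collaborator_id != event["actor_id"]:
                notify(collaborator_id,
                       f"{event['actor_id']} responded to a message you collaborated on")

def can_show_edit_control(message, viewer_id):
    """Only collaborators see the 'edit' actuation mechanism (e.g., 246)."""
    return viewer_id in message["collaborators"]
```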
In some examples, if a collaborator is not a member of a channel or other virtual space that is the target recipient of a message object, the message object management component 116 can associate the collaborator with the channel or other virtual space as a full member or temporary member of the channel or other virtual space. In some examples, if a collaborator is not a member of a channel or other virtual space that is the target recipient of a message object, the message object management component 116 can refrain from associating the collaborator with the channel or other virtual space as a full member or temporary member, but may grant the collaborator permission to view the message object, message objects associated with the message object (e.g., threads), reactions associated with the message object, and/or the like. FIG. 3A illustrates an example of two instances, 302 and 304, of the user interface 200 of FIG. 2A presented via two clients, as described herein, for enabling a first user to invite a second user to collaborate on a draft message object. In some examples, the composition user interface 226 can be presented via the first instance 302 of the user interface 200 in response to a request, from a client associated with the first user, to generate or edit a message object. As described above, if the request is associated with a request to generate a new message object, the message object management component 116 can generate a new message object. The message object can be associated with an identifier of the first user. In at least one example, the message object can include an indicator that the message object is a draft and/or is not posted or otherwise presented (e.g., is associated with a draft state). In an example where such a request is associated with a request to edit an existing message object, the message object may already exist. In such an example, the message object management component 116 can retrieve the message object and can associate an indicator with the message object indicating that the message object is being edited (e.g., is associated with an editing state). As described above, in at least one example, the composition user interface 226, presented via the first instance 302 of the user interface 200, can include an actuation mechanism 228 that enables a first user to add a collaborator to the composition user interface 226 for collaborating on a message object. In at least one example, based at least in part on receiving an indication of an interaction with the actuation mechanism 228, the message object management component 116 can cause another user interface element 306 to be presented via the composition user interface 226. The other user interface element 306 can enable the first user to tag or otherwise identify a second user as a collaborator. In at least one example, the user interface element 306 can include an actuation mechanism 308 (e.g., "add") that can cause the input associated with the user interface element 306 to be sent to the server(s) 102. As described above, in some examples, collaborators can be added to a message object by additional or alternative means, such as by tagging a user or entity, or the like.
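The two membership policies described at the start of this passage could be sketched as follows, assuming a simple dictionary-based space record; the policy names and fields are invented for the illustration.

```python
def grant_collaborator_access(space, collaborator_id, policy="view_only"):
    """Handle a collaborator who is not a member of a target virtual space:
    add them as a full or temporary member, or grant view-only access to the
    message object and its threads/reactions without joining the space."""
    if collaborator_id in space["member_ids"]:
        return
    if policy == "full_member":
        space["member_ids"].add(collaborator_id)
    elif policy == "temporary_member":
        space["temporary_member_ids"].add(collaborator_id)
    else:  # "view_only"
        space["view_only_ids"].add(collaborator_id)

channel_n = {"member_ids": {"user_a"}, "temporary_member_ids": set(), "view_only_ids": set()}
grant_collaborator_access(channel_n, "user_b", policy="view_only")
```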
In at least one example, based at least in part on receiving an indication of a second user to add to the composition user interface226, the message object management component116can cause an invitation to be sent to a client of the second user.FIG.3Aillustrates an example of such an invitation being presented via the second instance304of the user interface200. In at least one example, a user interface element310representative of the invitation can be presented via the second instance304of the user interface200. In at least one example, the user interface element310can include one or more actuation mechanisms312,314(e.g., “accept” or “decline”). In at least one example, the second user can interact with one of the actuation mechanisms to accept (312) or decline (314) the invitation. In at least one example, based at least in part on receiving an indication that the second user accepts the invitation, the message object management component116can cause an instance of the composition user interface226to be presented via the first instance302of the user interface200and an instance of the composition user interface226to be presented via the second instance304of the user interface200, as illustrated inFIG.3B. As such, the respective instances of the composition user interface226can enable collaboration between the first user and the second user in real-time or near real-time. In some examples, based at least in part on receiving the indication that the second user accepts the invitation, the message object management component116can associate a user identifier of the second user with the message object. As such, the second user can comprise a collaborator of the message object. In at least one example, each of the collaborators can interact with respective instances of the composition user interface226to provide comments and/or other feedback, add new content to a message object, delete content from a message object, revise contents of the message object, view a history of modifications (e.g., additions, revisions, etc.), accept proposed modifications, and/or the like. In at least one example, the composition user interface226can enable the collaborators to generate and/or edit the message object in real-time or near real-time. In at least one example, when a client of one of the collaborators receives an input associated with the message object, the input can be sent to the message object management component116, which can update the message object based at least in part on such an input. In at least one example, such an input can be pushed to other instances of the composition user interface226using techniques such as operational transform or the like. That is, in at least one example, the message object management component116can enable real-time or near real-time collaboration on message objects or other objects. FIG.4illustrates an example of the user interface200ofFIG.2A, wherein a subsidiary channel is presented in association with draft message object collaboration. As described above, in at least one example, each of the collaborators associated with a message object can interact with respective instances of the composition user interface226to provide comments and/or other feedback, add new content to a message object, delete content from the message object, revise contents of the message object, view a history of modifications (e.g., additions, revisions, etc.), accept proposed modifications, and/or the like. 
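Operational transform is named above only as one possible synchronization technique, and a production implementation is considerably more involved. The fragment below shows just the core idea for two concurrent insertions so that both clients converge on the same draft content; all names and the tie-breaking rule are chosen for the example.

```python
from dataclasses import dataclass

@dataclass
class Insert:
    pos: int
    text: str

def apply(doc: str, op: Insert) -> str:
    return doc[:op.pos] + op.text + doc[op.pos:]

def transform(incoming: Insert, applied: Insert) -> Insert:
    """Shift an incoming insert past an insert that was already applied,
    so that both clients converge on the same content."""
    if applied.pos <= incoming.pos:
        return Insert(incoming.pos + len(applied.text), incoming.text)
    return incoming

# Two collaborators edit the draft "hello" concurrently.
base = "hello"
a = Insert(5, " world")   # collaborator A appends
b = Insert(0, "oh, ")     # collaborator B prepends
client_1 = apply(apply(base, a), transform(b, a))
client_2 = apply(apply(base, b), transform(a, b))
assert client_1 == client_2 == "oh, hello world"
```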
In at least one example, the composition user interface226can enable the collaborators to generate and/or edit the message object in real-time or near real-time. In some examples, individual contributions of individual of the collaborators can be presented in a subsidiary channel or feed, as illustrated inFIG.4. That is, in some examples, the message object management component116can cause a user interface element400to be presented via the user interface200that includes descriptions of one or more actions taken with respect to the message object (e.g., additions, deletions, and/or other modifications), comments and/or other feedback provided, and/or the like. In some examples, the user interface element400representative of the subsidiary channel can be associated with the message object such that each time the message object is edited, the user interface element400representative of the subsidiary channel can be presented via the user interface200. FIGS.1-4make reference to “user interface elements.” A user interface element can be any element of the user interface that is representative of an object, message object, virtual space, and/or the like. A user interface element can be a text element, a graphical element, a picture, a logo, a symbol, and/or the like. In some examples, a user interface element can be presented as a pop-up, overlay, new sections of the user interface200, a new user interface, part of another user interface element, and/or the like. In at least one example, individual of the user interface elements can be associated with actuation mechanisms. Such actuation mechanisms can make the corresponding user interface elements selectable or otherwise interactable. That is, actuation of an actuation mechanism as described herein can, in some examples, indicate a selection of a corresponding user interface element. In at least one example, the application140can receive an indication of an interaction with a user interface element (e.g., indication of a selection and/or actuation of an actuation mechanism) and can send an indication of such to the server(s)102. In some examples, the server(s)102can send data and/or instructions to the application140to generate new user interfaces and/or update the user interface200, as described herein. The example user interfaces and user interface elements described above are provided for illustrative purposes. In some examples, such user interfaces and user interface elements can include additional or alternative data, which can be presented in additional or alternative configurations. That is, the user interfaces and user interface elements should not be construed as limiting. FIGS.5-6are flowcharts showing example processes involving techniques as described herein. The processes illustrated inFIGS.5-6are described with reference to components of the environment100shown inFIG.1for convenience and ease of understanding. However, the processes illustrated inFIGS.5-6are not limited to being performed using the components described above with reference to the environment100. Moreover, the components described above with reference to the environment100are not limited to performing the processes illustrated inFIGS.5-6. The processes inFIGS.5-6are illustrated as collections of blocks in logical flow graphs, which represent sequences of operations that can be implemented in hardware, software, or a combination thereof. 
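The subsidiary channel of individual contributions described above might be maintained as a simple append-only feed, as in this sketch; the entry fields are assumptions for illustration.

```python
from datetime import datetime, timezone

def record_draft_action(subsidiary_feed, actor_id, action, detail):
    """Append a human-readable entry to the subsidiary channel (the feed
    represented by user interface element 400) each time a collaborator
    modifies or comments on the draft message object."""
    subsidiary_feed.append({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "actor": actor_id,
        "action": action,       # e.g., "added", "deleted", "commented"
        "detail": detail,
    })

feed = []
record_draft_action(feed, "user_z", "added", "inserted a closing paragraph")
record_draft_action(feed, "user_y", "commented", "suggested a shorter opening line")
```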
In the context of software, the blocks represent computer-executable instructions stored on one or more computer-readable storage media that, when executed by processor(s), perform the recited operations. Generally, computer-executable instructions include routines, programs, objects, components, message objects, and the like that perform particular functions or implement particular abstract data types. The order in which the operations are described is not intended to be construed as a limitation, and any number of the described blocks can be combined in any order and/or in parallel to implement the processes. In some embodiments, one or more blocks of the process can be omitted entirely. Moreover, the processes inFIGS.5-6can be combined in whole or in part with each other or with other processes. FIG.5illustrates an example process500for draft message object collaboration, as described herein. At operation502, the message object management component116can receive a request to generate a message object from a first client of a first entity associated with a communication platform. In at least one example, the message object management component116can receive a request to generate a new message object from a first client of a first entity. As described above, the first entity can be any of a user, a role (e.g., marketing team member, engineer, front desk staff, security team member, administrator, etc.), an application, a channel (i.e., members of the channel), a direct message (i.e., members of the direct message), a board (i.e., members of the board), and/or the like. In at least one example, the message object management component116can generate a message object in response to receiving the request to generate the new message object. As described above, the message object can persist on the server(s)102, which can allow for draft synchronization and/or message object collaboration, as described herein. At operation504, the message object management component116can cause a first instance of a composition user interface to be presented via the first client. In at least one example, the message object management component116can cause a composition user interface to be presented via the first client to enable the first entity to associate data with the message object (e.g., text, image(s), video(s), link(s), file(s), etc.). In some examples, as the first user associates data with the message object, the first client can send such data to the message object management component116. The message object management component116can store such data in association with the message object. At operation506, the message object management component116can receive, via the first instance of the composition user interface, a request to associate a second entity with the message object as a collaborator. In at least one example, the first entity can request to add at least a second entity as a collaborator. That is, in at least one example, the first entity can interact with the first instance of the composition user interface to invite at least the second entity to collaborate in drafting of the message object. In at least one example, based at least in part on receiving the request to add at least the second entity as a collaborator, the message object management component116can associate the second entity with the message object. In some examples, the message object management component116can associate an identifier of the second entity with the message object. 
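Operations 502 through 506 could be outlined, very roughly, as the two handlers below. The in-memory store stands in for the server-side persistence that allows draft synchronization, and all identifiers are hypothetical.

```python
import uuid

def handle_generate_request(store, first_entity_id):
    """Create a persistent draft message object for the requesting entity
    and return the identifier the client uses to open its composition
    user interface."""
    message_id = str(uuid.uuid4())
    store[message_id] = {
        "message_id": message_id,
        "owner": first_entity_id,
        "collaborators": {first_entity_id},
        "content": "",
        "state": "draft",
    }
    return message_id

def handle_add_collaborator_request(store, message_id, second_entity_id):
    """Associate a second entity with the draft once the first entity
    requests to add them (or once an invitation is accepted)."""
    store[message_id]["collaborators"].add(second_entity_id)

drafts = {}
draft_id = handle_generate_request(drafts, "user_a")
handle_add_collaborator_request(drafts, draft_id, "user_b")
```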
In some examples, prior to associating at least the second entity with the message object as a collaborator, the message object management component116can send an invitation to a second client of the second entity, inviting them to collaborate on the message object. In such examples, the message object management component116can wait to associate identifier(s) of the other collaborator(s) with the message object until an acceptance of the invitation is received. That is, in at least one example, the message object management component116can associate the identifier(s) of the invited collaborator(s) (e.g., user(s) and/or entity(s)) with the message object associated with the message object in response to receiving an acceptance of the invitation. At operation508, the message object management component116can cause a second instance of the composition user interface to be presented via a second client associated with the second entity. As described above, in at least one example, each of the collaborators can interact with respective instances of the composition user interface to provide comments and/or other feedback, add new content to a message object, revise contents of the message object, view a history of modifications (e.g., additions, revisions, etc.), accept proposed modifications, and/or the like. In at least one example, the message object management component116can cause a second instance of the composition user interface to be presented via a second client associated with the second entity. As such, the respective instance of the composition user interface can enable the collaborators to generate and/or edit the message object in real-time or near real-time. At operation510, the message object management component116can receive, from at least one of the first client or the second client, an indication of an input associated with a respective instance of the composition user interface. In at least one example, the first entity and/or the second entity can interact with the composition user interface to add content to the message object, remove content from the message object, and/or otherwise modify the message object. In some examples, such an input can be associated with a request to save the message object (e.g., as a draft) or post the message object (e.g., to one or more virtual spaces). In some examples, when a client of one of the collaborators receives an input associated with the message object, the input can be sent to the message object management component116. That is, in response to at least one of the first entity or the second entity providing an input to a respective instance of the composition user interface, the message object management component116can receive an indication of such an input. At operation512, the message object management component116can determine whether the input is associated with a modification to the message object. As described, in some examples, the first entity and/or the second entity can interact with respective instances of the composition user interface to provide an input. In some examples, the input can be the addition of content (e.g., text, image(s), video(s), link(s), file(s), etc.), removal of content, and/or another modification. Based at least in part on a determination that the input is associated with a modification to the message object, the message object management component116can modify the message object, as illustrated at operation514. 
In at least one example, the message object management component116can update the message object based at least in part on such an input. In at least one example, such an input can be pushed to other instances of the composition user interface using techniques such as operational transform or the like. That is, in at least one example, the message object management component116can enable real-time or near real-time collaboration on message objects or other objects. In at least one example, the process can return to operation510. At operation516, the message object management component116can determine whether the input is a post or a save request. In at least one example, the composition user interface can include one or more actuation mechanisms to enable the message object to be saved (e.g., as a draft) and/or posted or otherwise delivered to a recipient of the message object. Based at least in part on a determination that the input is associated with a post request, the message object management component116can cause the message to be presented via a user interface of the communication platform, as illustrated at operation518. That is, in at least one example, based at least in part on a determination that the input is associated with a post request, the message object management component116can cause the message object to be posted to, or otherwise presented via, one or more virtual spaces. In at least one example, the message object management component116can push, or otherwise provide, the message object to client(s) of each of the member(s) associated with the target virtual space(s) and the message object can be presented in association with the target virtual space(s) on each of the client(s). In at least one example, the message object can be posted to, or otherwise presented via, the virtual space(s) with an indication of each of the user(s) and/or entity(s) that collaborated on the composition of the message object. That is, the message object can be associated with an indication that the message object originated from each of the collaborators. In some examples, prior to posting a message object that is associated with multiple collaborators, the message object management component116can initiate an approval process. That is, individual message objects can be associated with approval processes that indicate whether approval is required to post or otherwise present the message object and/or from whom approval is required. In some examples, the approval process can be associated with a priority in which individual of the collaborators are to approve the draft message object. In some examples, a particular collaborator can be required to approve the draft message object. In some examples, each of the collaborators can be required to approve the draft message object. Based at least in part on a determination that the input is associated with a save request, the message object management component116can cause the message to be stored as a draft message, as illustrated at operation520. That is, in at least one example, based at least in part on a determination that the input is associated with a save request, the message object can be saved as a draft (i.e., not published). In such an example, the process can return to operation510. As described above, in at least one example, the collaborators can be associated with permission(s) that enable the collaborators to perform a set of actions associated with the message object after it has been posted or otherwise presented. 
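The optional approval process mentioned above could be checked before posting with something like the following sketch, where the policy dictionary (nobody, a specific collaborator, or all collaborators must approve) is an assumed shape rather than the platform's actual configuration.

```python
def ready_to_post(message, approvals, policy):
    """Evaluate an optional approval process before a collaborative draft
    is posted (operation 518)."""
    mode = policy.get("mode", "none")
    if mode == "none":
        return True
    if mode == "specific":
        return policy["approver"] in approvals
    if mode == "all":
        return set(message["collaborators"]) <= set(approvals)
    return False

draft = {"collaborators": {"user_a", "user_b"}}
assert ready_to_post(draft, {"user_a"}, {"mode": "specific", "approver": "user_a"})
assert not ready_to_post(draft, {"user_a"}, {"mode": "all"})
```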
In some examples, non-collaborators may not be associated with the same permission(s) and/or set of actions.FIG.6illustrates an example process600for enabling differentiated editing permissions associated with a posted message object that was generated by multiple users, as described herein. At operation602, the message object management component116can cause a message object associated with a plurality of collaborators to be presented via a virtual space of a communication platform. As described above with reference to the process500, a message object can be generated by a plurality of collaborators and can be posted or otherwise presented via a virtual space, such as a channel, direct message, board, or the like. Such a message object can be referred to as a “posted message object.” At operation604, the message object management component116can receive a request to edit a posted message object. In at least one example, a client associated with a user or other entity can send a request to edit the posted message object, which can be received by the message object management component116. In at least one example, the message object management component116can determine whether the request is associated with a collaborator of the plurality of collaborators, as illustrated at operation606. That is, the message object management component116can determine whether the user and/or entity associated with the request has permission to edit the posted message object. Based at least in part on a determination that the request is associated with a collaborator of the plurality of collaborators, the message object management component116can cause an instance of the composition user interface to be presented via a client of the requesting user and/or entity to enable the requesting user and/or entity to edit the posted message object, as illustrated at operation608. That is, so long as the user and/or client associated with the request has permission to edit the posted message object (e.g., the user and/or client was one of the original collaborators or has otherwise been added as a collaborator), the message object management component116can cause an instance of the composition user interface to be presented via the requesting client. As such, the requesting client can receive an input associated with an edit or other modification to the message object and can provide an indication of such an input to the message object management component116. The message object management component116can cause the posted message object to be updated based at least in part on receiving the indication of the input. That is, the message object management component116can update the message object and push, or otherwise provide, an update to one or more recipients of the posted message object (e.g., member(s) of the virtual space(s) to which the message object is posted or otherwise presented). In some examples, other collaborators can receive a notification that one of the collaborators has requested to edit the message object such that one or more of the other collaborators can opt to open respective instance(s) of the composition user interface for collaborative editing. In some examples, an approval process, as described above, may be implemented prior to posting or otherwise presenting an edited message. 
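Process 600's permission gate reduces, in sketch form, to a membership test over the collaborator set. The two callbacks below stand in for presenting the composition user interface or the not-permitted notification; they are assumptions made for the example.

```python
def handle_edit_request(message, requester_id, open_composition_ui, notify_denied):
    """Only a collaborator of the posted message object is routed into the
    composition user interface; anyone else receives a not-permitted
    notification."""
    if requester_id in message["collaborators"]:
        open_composition_ui(requester_id, message["message_id"])
        return True
    notify_denied(requester_id, "You are not permitted to edit this message object.")
    return False
```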
In an example where the request is not associated with a collaborator of the plurality of collaborators, the message object management component116can cause a notification that the requesting user and/or entity is not permitted to edit the message object to be presented via an instance of a user interface associated with the communication platform. As described above, however, in some examples, collaborators can be added to a message object after the message object has been posted. In some examples, a user or entity that is not a collaborator at the time the message object is posted or otherwise presented can request to be added as a collaborator. In such examples, the message object management component116can receive the request and may add the requesting user or entity as a collaborator. In some examples, the message object management component116can utilize permissions to determine whether the requesting user or entity can be added and/or can execute an authorization process for one of the current collaborators to authorize the addition of the requesting user or entity as a collaborator. That is, in at least one example, a user or entity may not be a collaborator at the time the request associated with operation604is received but can be added as a collaborator based at least in part on having requested to edit the message object. FIGS.1-6describe techniques related to draft message object collaboration for a communication platform. While techniques described herein are described with reference to “message objects,” techniques described herein can be similarly applicable to any other object that can be generated via the communication platform. Examples of such objects can comprise documents, posts, channel descriptions, user profiles, board content, and/or the like. Furthermore, techniques described herein are described in the context of enabling collaboration for message objects and/or other objects to be posted to the communication platform. However, in some examples, techniques described herein can be similarly applicable to generating message objects and/or other objects for posting on third-party platforms, such as social media platforms, email platforms, and/or the like. That is, in some examples, techniques described herein can be applicable to draft message object collaboration within the communication platform for a message object that is posted or otherwise presented via a third-party platform that is integrated with the communication platform (e.g., via one or more application programming interfaces (APIs) and/or software development kits (SDKs)). Further, as described above, techniques described herein can streamline message object drafting and/or editing in view of current techniques. As described above, the communication platform can utilize permissions to maintain security and privacy for users of the communication platform. However, such permissions can restrict the ability of multiple users or entities to collaborate on drafting message objects or other objects. Due to current limitations of existing technology, users are required to collaborate via another platform (e.g., a word processing platform, an email platform, or the like) to draft a message object or other object. When the message object or other object is ready on the other platform, one of the collaborators can copy the text or other data as generated in the other platform and paste the text or other data into the communication platform. 
That is, with current techniques, collaborators are required to use multiple platforms if they want to collaborate on a message object or other object prior to posting the message object or other object to the communication platform. Once posted, editing permissions may be limited to the posting user. As such, the other collaborators may not be permitted to edit the message object or other object without going through the posting user. This can cause multiple interactions with multiple platforms, which can be computationally expensive and cause network congestion. Techniques described herein reduce consumption of computing resources and decrease network congestion. That is, by enabling collaboration via respective instances of a composition user interface as described herein, techniques described herein can offer improvements over existing techniques that reduce consumption of computing resources and decrease network congestion. Further, techniques described herein enable an improved user experience for users of the communication platform. Conclusion While one or more examples of the techniques described herein have been described, various alterations, additions, permutations and equivalents thereof are included within the scope of the techniques described herein. In the description of examples, reference is made to the accompanying drawings that form a part hereof, which show by way of illustration specific examples of the claimed subject matter. It is to be understood that other examples can be used and that changes or alterations, such as structural changes, can be made. Such examples, changes or alterations are not necessarily departures from the scope with respect to the intended claimed subject matter. While the steps herein can be presented in a certain order, in some cases the ordering can be changed so that certain inputs are provided at different times or in a different order without changing the function of the systems and methods described. The disclosed procedures could also be executed in different orders. Additionally, various computations that are described herein need not be performed in the order disclosed, and other examples using alternative orderings of the computations could be readily implemented. In addition to being reordered, the computations could also be decomposed into sub-computations with the same results. Example Clauses A. 
A method comprising: receiving, from a first client associated with a first entity of a group-based communication platform, a request to generate a message object; causing display, on the first client, of a first instance of a draft associated with the message object via a first instance of a composition user interface presented via a first instance of a group-based communication user interface associated with the group-based communication platform; receiving, via the first instance of the composition user interface and from the first client, a request to add at least one second entity as a collaborator to enable the first entity and the second entity to collaborate on the draft of the message object; causing display, on a second client associated with the second entity, of a second instance of the draft associated with the message object in a second instance of a composition user interface presented via a second instance of the group-based communication user interface; receiving, from at least one of the first client or the second client, a modification to the draft associated with the message object; and in response to receiving the modification to the draft associated with the message object, causing the first instance of the draft associated with the message object to be updated as presented via the first instance of the composition user interface and the second instance of the draft associated with the message object to be updated as presented via the second instance of the composition user interface. B. The method of paragraph A, further comprising: in response to receiving the request to generate the message, generating the message object and associating a first identifier of the first entity with the message object; and in response to receiving the request to add the at least one second entity as a collaborator, associating a second identifier of the second entity with the message object, wherein the message object is associated with an indicator indicating that the message is not presented. C. The method of paragraph A or B, wherein the first entity is a first user and the second entity is a second user, an application, or a channel. D. The method of any of claims A-C, wherein the first instance of the composition user interface and the second instance of the composition user interface enable the first entity and the second entity to collaborate on the generation of the draft associated with the message object in real-time. E. The method of any of claims A-D, further comprising: receiving, from at least one of the first client or the second client, a request to present the message object via at least one of a channel, a direct message, or a board associated with the group-based communication platform; causing display of the message object via the first instance of the group-based communication user interface; and causing display of the message object via the second instance of the group-based communication user interface, wherein the message object, when presented, comprises an indication that the first entity and the second entity are senders of the message object. F. 
The method of any of claims A-E, wherein, after the message object is presented via respective instances of the group-based communication user interface, the first entity and the second entity are associated with a first set of actions associated with the message object that are different than a second set of actions associated with a third entity that did not collaborate on the generation of the draft of the message object. G. The method of any of claims A-F, further comprising: after the message object is presented via respective instances of the group-based communication user interface, receiving a request to edit the message object; and receiving, from at least one of the first client or the second client, a request to add a third entity as a collaborator to enable the first entity, the second entity, and the third entity to collaborate on editing of the message object, wherein the first entity, the second entity, and the third entity are represented as senders of the message object when the message object is presented via the respective instances of the group-based communication user interface. H. A system comprising: one or more processors; and one or more non-transitory computer-readable media storing instructions that, when executed by the one or more processors, cause the system to perform operations comprising: receiving, from a first client associated with a first entity of a group-based communication platform, a request to generate a message object; causing display, on the first client, of a first instance of a draft associated with the message object via a first instance of a composition user interface presented via a first instance of a group-based communication user interface associated with the group-based communication platform; receiving, via the first instance of the composition user interface and from the first client, a request to add at least one second entity as a collaborator to enable the first entity and the second entity to collaborate on the draft of the message object; causing display, on a second client associated with the second entity, of a second instance of the draft associated with the message object in a second instance of a composition user interface presented via a second instance of the group-based communication user interface; receiving, from at least one of the first client or the second client, a modification to the draft associated with the message object; and in response to receiving the modification to the draft associated with the message object, causing the first instance of the draft associated with the message object to be updated as presented via the first instance of the composition user interface and the second instance of the draft associated with the message object to be updated as presented via the second instance of the composition user interface. I. The system of paragraph H, the operations further comprising: in response to receiving the request to generate the message, generating the message object and associating a first identifier of the first entity with the message object; and in response to receiving the request to add the at least one second entity as a collaborator, associating a second identifier of the second entity with the message object, wherein the message object is associated with an indicator indicating that the message is not presented. J. The system of paragraph H or I, wherein the first entity is a first user and the second entity is a second user, an application, or a channel. K. 
The system of any of claims H-J, wherein the first instance of the composition user interface and the second instance of the composition user interface enable the first entity and the second entity to collaborate on the generation of the draft associated with the message object in real-time. L. The system of any of claims H-K, further comprising: receiving, from at least one of the first client or the second client, a request to present the message object via at least one of a channel, a direct message, or a board associated with the group-based communication platform; causing display of the message object via the first instance of the group-based communication user interface; and causing display of the message object via the second instance of the group-based communication user interface, wherein the message object, when presented, comprises an indication that the first entity and the second entity are senders of the message object. M. The system of any of claims H-L, wherein, after the message object is presented via respective instances of the group-based communication user interface, the first entity and the second entity are associated with a first set of actions associated with the message object that are different than a second set of actions associated with a third entity that did not collaborate on the generation of the draft of the message object. N. The system of any of claims H-M, the operations further comprising: after the message object is presented via respective instances of the group-based communication user interface, receiving a request to edit the message object; and receiving, from at least one of the first client or the second client, a request to add a third entity as a collaborator to enable the first entity, the second entity, and the third entity to collaborate on editing of the message object, wherein the first entity, the second entity, and the third entity are represented as senders of the message object when the message object is presented via the respective instances of the group-based communication user interface. O. 
One or more non-transitory computer-readable media storing instructions that, when executed by one or more processors, cause the one or more processors to perform operations comprising: receiving, from a first client associated with a first entity of a group-based communication platform, a request to generate a message object; causing display, on the first client, of a first instance of a draft associated with the message object via a first instance of a composition user interface presented via a first instance of a group-based communication user interface associated with the group-based communication platform; receiving, via the first instance of the composition user interface and from the first client, a request to add at least one second entity as a collaborator to enable the first entity and the second entity to collaborate on the draft of the message object; causing display, on a second client associated with the second entity, of a second instance of the draft associated with the message object in a second instance of a composition user interface presented via a second instance of the group-based communication user interface; receiving, from at least one of the first client or the second client, a modification to the draft associated with the message object; and in response to receiving the modification to the draft associated with the message object, causing the first instance of the draft associated with the message object to be updated as presented via the first instance of the composition user interface and the second instance of the draft associated with the message object to be updated as presented via the second instance of the composition user interface. P. The one or more non-transitory computer-readable media of paragraph O, the operations further comprising: in response to receiving the request to generate the message, generating the message object and associating a first identifier of the first entity with the message object; and in response to receiving the request to add the at least one second entity as a collaborator, associating a second identifier of the second entity with the message object, wherein the message object is associated with an indicator indicating that the message is not presented. Q. The one or more non-transitory computer-readable media of paragraph O or P, wherein the first instance of the composition user interface and the second instance of the composition user interface enable the first entity and the second entity to collaborate on the generation of the draft associated with the message object in real-time. R. The one or more non-transitory computer-readable media of any of claims O-Q, further comprising: receiving, from at least one of the first client or the second client, a request to present the message object via at least one of a channel, a direct message, or a board associated with the group-based communication platform; causing display of the message object via the first instance of the group-based communication user interface; and causing display of the message object via the second instance of the group-based communication user interface, wherein the message object, when presented, comprises an indication that the first entity and the second entity are senders of the message object. S. 
The one or more non-transitory computer-readable media of any of claims O-R, wherein, after the message object is presented via respective instances of the group-based communication user interface, the first entity and the second entity are associated with a first set of actions associated with the message object that are different than a second set of actions associated with a third entity that did not collaborate on the generation of the draft of the message object. T. The one or more non-transitory computer-readable media of any of claims O-S, the operations further comprising: after the message object is presented via respective instances of the group-based communication user interface, receiving a request to edit the message object; and receiving, from at least one of the first client or the second client, a request to add a third entity as a collaborator to enable the first entity, the second entity, and the third entity to collaborate on editing of the message object, wherein the first entity, the second entity, and the third entity are represented as senders of the message object when the message object is presented via the respective instances of the group-based communication user interface. While the example clauses described above are described with respect to one particular implementation, it should be understood that, in the context of this document, the content of the example clauses can also be implemented via a method, device, system, a computer-readable medium, and/or another implementation. Additionally, any of examples A-T may be implemented alone or in combination with any other one or more of the examples A-T. | 129,239 |
11943181 | Like reference numbers and designations in the various drawings indicate like elements. DETAILED DESCRIPTION According to the described technologies, a computing system receives an item of digital content from a user device, such as a digital image that depicts a particular item. The system generates one or more labels that indicate attributes of the item of digital content. For example, the labels can include words or text phrases that are descriptive of the particular item depicted in the digital image. The system generates one or more conversational replies to the item of digital content based on the one or more labels that indicate attributes of the item of digital content. The system selects a conversational reply from among the one or more conversational replies and provides the conversational reply for output to the user device. FIG.1is an example computing system100for generating one or more conversational replies. System100generally includes user device102. Example user devices102can include smartphones, mobile computing devices, laptop/desktop computers, tablet devices, smart televisions, gaming consoles, or other related computing devices. User device102can include a digital camera, and a user of device102can use the digital camera to capture image content. In the context of system100, the captured image content can be an item of digital content, such as a digital image, digital photo, or electronic picture that includes or depicts a particular item/content item104. As shown inFIG.1, the user may be located in Paris, France and the captured image content may include a particular content item104such as the Eiffel tower that is also located in Paris, FR. User device102can execute program code for enabling a virtual device assistant of the device. In some implementations, a device assistant of user device102can be configured to generate one or more replies105based on input107received by user device102or based on an input107corresponding to image content captured by user device102. For example, a camera of user device102can capture image content and a computing system of user device102can cause the device assistant to generate reply content105based on the captured image content. Current device assistants, or other conventional application programs that process inputs, may generate example reply content that can be perceived by a user as lacking "personality." For example, current device assistants are often configured to provide (or suggest) machine-generated replies that oftentimes are not perceived by a user as being delightful or conversational in tone, nature, or substance. In particular, current device assistants, or related application programs that process digital image content, may provide or suggest a reply such as "I can see images." Although this reply is not inaccurate given the received input, such reply content may be perceived by a user as overly terse and lacking a certain conversational feel. Thus, this reply might not attract the interest of a user and, hence, may not elicit a response or additional queries from the user. Referring again toFIG.1, according to technologies described herein, system100can be configured to generate one or more conversational replies that are, for example, at least descriptive of an item of digital content (e.g., a digital photo). For example, system100can receive an item of digital content that corresponds to input107.
The item of digital content can be a digital image that depicts a particular item104, e.g., the Eiffel tower. In contrast to current device assistants or other current programs, a generated conversational reply105of system100can either identify the particular item104, or can indicate an attribute of the particular item. For example, a conversational reply105can be "I am no architect, but the Eiffel tower seems like quite a construction!" In particular, conversational reply105identifies the particular item104as being the Eiffel tower, and includes text content that is descriptive or indicative of an attribute of the Eiffel tower, e.g., that the Eiffel tower is a "construction," such as a physical structure or building. Further, in this implementation, reply105is not overly terse and includes content that may be perceived by a user as having more of a conversational tone. As described in more detail below, in some implementations, in addition to indicating an attribute of an item of digital content, conversational reply105can include text or image content that may be perceived as delightful, interesting, pleasant, or pleasing to a user. One or more conversational replies generated by components or devices of system100can be provided for output to user device102and may be generated for presentation to a user via display103of user device102. As shown, system100includes a computing device/server106that receives data signals, e.g., non-transitory propagating signals, from at least one user device102. As shown, server106can include an image recognition module108, a previous replies module110, a media content replies module112, a predetermined replies module114, and a reply selection module116. In some implementations, server106can include additional or fewer modules and system100can include one or more additional servers or computing devices. Module108depicted inFIG.1is generally representative of image or data analysis, image feature extraction, and label generation functions that can be executed or performed by server106. An output of module108can include at least one of: i) one or more labels that indicate attributes of an item of digital content provided by, or received from, user device102; or ii) image data or image pixel data associated with digital image content corresponding to the item of digital content. Labels and/or image pixel data output by module108can be provided to, or received by, one or more of modules110,112, and114. As used herein, labels generated by module108can be individual words or text phrases that indicate one or more attributes of an item of digital content or that describe one or more features of an item of digital content. As described in more detail below, each word or text phrase can be assigned a relevance or confidence score that indicates a relevance of a particular word or text phrase (e.g., a label) with regard to attributes or features of a received item of digital content. Each of modules110,112, and114depicted inFIG.1are generally representative of data analysis and data signal processing functions that can be executed by server106to generate one or more conversational replies based on label or image pixel data received from module108. For example, each of modules110,112, and114can include one or more databases having multiple content items and can also include program code or logic configured to access the databases and to use the content items to generate respective sets of conversational replies.
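To make the relationship between these modules concrete, the following is a minimal, illustrative sketch of the data flowing through such a pipeline: labels produced by an image-recognition stage, candidate replies produced by independent generator modules, and a selection stage that picks the highest-scoring candidate. All class, field, and function names here are assumptions introduced for illustration and are not taken from the patent's implementation.

```python
from dataclasses import dataclass
from typing import List, Protocol


@dataclass
class Label:
    """A word or text phrase describing an attribute of the digital content."""
    text: str
    confidence: float  # relevance of the label to the extracted image features


@dataclass
class CandidateReply:
    """A conversational reply proposed by one candidate-generation module."""
    text: str
    confidence: float  # relevance of the reply to the labels / pixel data
    source: str        # e.g. "previous_replies", "media_content", "predetermined"


class ReplyGenerator(Protocol):
    """Interface shared by candidate-generation modules (analogous to modules 110, 112, 114)."""
    def generate(self, labels: List[Label], pixel_data: bytes) -> List[CandidateReply]:
        ...


def select_reply(candidates: List[CandidateReply]) -> CandidateReply:
    """Pick the candidate with the highest confidence score (the selection module's role)."""
    return max(candidates, key=lambda reply: reply.confidence)
```

A concrete system would back each generator with its own database of previous replies, media quotes, or curated predetermined replies, and could weight individual sources before selection, as discussed below.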
As described in more detail below, each conversational reply of the respective sets of conversational replies can be assigned a confidence score that indicates a relevance of a particular conversational reply with regard to labels or pixel data received from module108. Reply selection module116includes program code or logic that can analyze scoring and/or ranking data associated with respective sets of conversational replies generated by each of modules110,112, and114. One or more conversational replies selected by module116can be provided for output to user device102by server106. Modules108,110,112,114, and116are each described in more detail below with reference toFIG.2. As used in this specification, the term "module" is intended to include, but is not limited to, one or more computers configured to execute one or more software programs that include program code that causes a processing unit(s) of the computer to execute one or more functions. The term "computer" is intended to include any data processing device, such as a desktop computer, a laptop computer, a mainframe computer, a personal digital assistant, a server, a handheld device, a tablet device, or any other device able to process data. FIG.2illustrates a diagram including an example module grouping200associated with computing server106of system100. Module grouping200can be disposed within server106or can include independent computing devices that collectively are coupled to, and in data communication with, server106. Module grouping200generally includes modules108,110,112,114, and116discussed briefly above with reference toFIG.1, and an entity relatedness module232. In general, described actions or functions of user device102, server106, and modules of module grouping200can be enabled by computing logic or instructions that are executable by a processor and memory associated with these electronic devices. For example, each of user device102, server106, and module grouping200(collectively "devices of system100") can include one or more processors, memory, and data storage devices that cooperatively form a computing system of each device. Execution of the stored instructions can cause one or more of the actions described herein to be performed by devices of system100. In other implementations, multiple processors may be used, as appropriate, along with multiple memories and types of memory. For example, user device102or server106may be connected with multiple other computing devices, with each device (e.g., a server bank, groups of servers, or a multi-processor system) performing portions of the actions or operations associated with the various processes or logical flows described in this specification. Referring again toFIG.2, image recognition module108can generate one or more labels that indicate attributes of an item of digital content or that describe characteristics of a particular item depicted in the item of digital content. For example, image recognition module108can execute program code to analyze a digital image of an item of digital content. In response to analyzing the digital image, module108can use feature extraction logic204to extract one or more features of the digital image. Module108can then use label generation logic206to generate at least one label that indicates attributes of the digital image or that describes characteristics of a particular item depicted in the digital image.
For example, if a digital image received by module108includes a particular content item(s) such as the Eiffel tower, and/or a dog standing in front of the Eiffel tower, then module108can use logic204to extract image features, or pixel data, that correspond to at least one of: a) the Eiffel tower, or b) the dog. Module108can then use label generation logic206to generate one or more labels (e.g., words, or text phrases) based on extracted features for Eiffel tower and dog. Example extracted features that correspond to the Eiffel tower may cause logic206to generate one or more example labels such as "Eiffel," "Eiffel tower," "tower," "Paris," "France," or "iron lattice tower." Likewise, example extracted features that correspond to the dog may cause logic206to generate one or more example labels such as "dog," "golden retriever," "cocker spaniel," "cute dog," "big cute golden retriever," or "cute cocker spaniel." Module108further includes scoring/ranking logic208. Logic208is used to analyze multiple labels generated using logic206and, based on the analysis, generate respective confidence scores for each label of the multiple labels. Each label can be assigned a confidence score that indicates a relevance of a particular word or text phrase (e.g., a label) with regard to attributes, or extracted image features, of a received item of digital content. In some implementations, labels that are more definitive or descriptive of particular attributes or extracted image features of an item of digital content may be assigned a higher confidence score relative to labels that are more generic. For example, referencing the above extracted features for the Eiffel tower and the dog, descriptive labels such as "Eiffel" or "Eiffel tower" may receive higher confidence scores when compared to more generic labels such as "tower" or "Paris." Likewise, descriptive labels such as "golden retriever" or "cute cocker spaniel" may receive higher confidence scores when compared to more generic labels such as "dog" or "cute dog." In some implementations, module108can execute program code to generate at least one boundary box that bounds at least one feature of a received digital image or item of digital content. In some instances, at least one label may be generated by module108prior to module108generating a boundary box that bounds at least one feature of the digital image. In this instance, module108can determine if the at least one generated label is descriptive of a feature that is bounded by the boundary box. Labels that are descriptive of features of a boundary box can receive higher confidence scores relative to labels that are not descriptive of features of a boundary box. In other implementations, a digital image can include at least two features and a first feature can be more prominent within the image than a second feature. A first boundary box can bound the first more prominent feature of the digital image, e.g., the Eiffel tower, while a second boundary box can bound the second less prominent feature of the digital image, e.g., the dog. Labels that are descriptive of the first more prominent feature of the first boundary box can receive higher confidence scores relative to labels that are descriptive of the second less prominent feature of the second boundary box. Module108can generate multiple labels and can use logic208to rank each label based on a respective confidence score that is assigned to each label to form a subset of ranked labels.
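As an illustration of the label scoring and ranking just described, the sketch below assigns each label a confidence score, boosts labels that describe a feature bounded by a (more prominent) boundary box, and keeps only the ranked labels above a threshold. The boost amounts, threshold, and example scores are assumptions chosen for illustration, not values taken from the patent.

```python
from dataclasses import dataclass
from typing import List, Optional


@dataclass
class ScoredLabel:
    text: str
    confidence: float
    bounded: bool = False      # label describes a feature inside a boundary box
    prominence: float = 0.0    # relative prominence of that bounded feature, 0..1


def rank_labels(labels: List[ScoredLabel],
                threshold: float = 0.5,
                top_k: Optional[int] = None) -> List[ScoredLabel]:
    """Boost labels tied to bounded (and more prominent) features, then rank and filter."""
    boosted = []
    for label in labels:
        score = label.confidence
        if label.bounded:
            # Labels describing bounded features score higher; labels for more
            # prominent features (e.g. the Eiffel tower vs. the dog) are boosted further.
            score += 0.1 + 0.1 * label.prominence
        boosted.append(ScoredLabel(label.text, min(score, 1.0), label.bounded, label.prominence))
    ranked = sorted(boosted, key=lambda l: l.confidence, reverse=True)
    ranked = [l for l in ranked if l.confidence >= threshold]
    return ranked[:top_k] if top_k else ranked


# Example: descriptive labels outrank generic ones.
labels = [
    ScoredLabel("Eiffel tower", 0.85, bounded=True, prominence=0.9),
    ScoredLabel("tower", 0.55),
    ScoredLabel("cute cocker spaniel", 0.7, bounded=True, prominence=0.4),
    ScoredLabel("dog", 0.5),
]
print([l.text for l in rank_labels(labels, threshold=0.6)])
# -> ['Eiffel tower', 'cute cocker spaniel']
```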
In some implementations, a subset of ranked labels can include at least two labels that have the highest confidence scores from among the respective confidence scores assigned to each of the multiple labels. In other implementations, a subset of ranked labels can include one or more labels having confidence scores that exceed a threshold confidence score. As noted above, each respective confidence score indicates a relevance of a particular label to an attribute or extracted image feature of the item of digital content. Module108can select at least one label based on a confidence score of the at least one label exceeding a threshold confidence score. Module108can provide the selected at least one label to one or more of modules110,112, and114. Alternatively, module108can select at least one label, of the subset of ranked labels, and provide the selected at least one label to one or more of modules110,112, and114. Previous replies module110generally includes machine learning logic210, content extraction database212, and scoring/ranking logic214. Module110can receive at least one of: i) image data or image pixel data associated with digital image content for an item of digital content received by server106from user device102; or ii) one or more labels from module108that indicate attributes of an item of digital content received by server106from user device102. Content extraction database212can include multiple other items of digital content ("chat content") that have been extracted from a variety of electronic conversations, or electronic "chats," that occur between at least two users. In some implementations, the electronic conversations can occur via an example messaging or chat application and can include a communication message provided by at least a first user and a reply message provided by at least a second user. Extracted chat content can include multiple digital content items such as texts, words, text phrases, or digital image data. Module110can generate one or more conversational replies based on a similarity between image pixel data received from module108and at least one content item of the extracted chat content stored in database212. In alternative implementations, module110can generate one or more conversational replies based on a similarity between at least one label received from module108and at least one content item of the extracted chat content. For example, referencing the above extracted features for the Eiffel tower and the dog, pixel data can indicate that the Eiffel tower and the dog are particular items included in a digital image received by server106. Module110can then scan or analyze database212to identify texts, words, text phrases, or image data having an apparent relation to the Eiffel tower or the dog. The words, text phrases, and digital pictures/images can be previous replies and other chat messages mined or extracted over time by system100and stored in database212. The words or text phrases stored in database212can include content items such as: "Eiffel tower," "Paris," "France," "golden retriever," "cocker spaniel," or "cute dog." Digital pictures or image data stored in database212can include images of a variety of dogs, images of the Eiffel tower, or images of a variety of locations in Paris, France. In some implementations, module110uses machine learning logic210to compute inferences using an example neural network of system100.
The computed inferences are used to determine digital content items of database212that are similar or relevant to the image pixel data of the item of digital content received from user device102. Module110can use scoring/ranking logic214to determine at least one similarity score that indicates a similarity between: i) image pixel data of an item of digital content; and ii) at least one content item of chat content extracted from an electronic conversation. For example, module110can determine a similarity score between pixel data for the Eiffel tower and respective images of the Eiffel tower accessed from database212. Likewise, module110can determine a similarity score between pixel data for the Eiffel tower and respective words or text phrases accessed from database212that are descriptive of the Eiffel tower. Module110can also determine whether similarity scores exceed a threshold similarity score. In response to determining that one or more similarity scores exceed a threshold similarity score, module110can generate one or more conversational replies and a confidence score for each conversational reply. Module110generates the conversational replies based on at least one content item of chat content (e.g., another item of digital content) accessed from database212. In some implementations, conversational replies generated by module110include digital image data from database212, text data such as words or text phrases from database212, or a combination of image and text data from database212. Module110can analyze one or more determined similarity scores and, based on the analysis, generate respective confidence scores for each conversational reply. Each conversational reply can be assigned a confidence score that indicates a relevance between the conversational reply and the image pixel data for the received item of digital content. In some implementations, determined similarity scores can indicate an extent to which a content item of database212is similar or relevant to image pixel data of the received item of digital content. For example, determining the similarity scores can correspond to determining a relevance parameter that indicates a relevance between a conversational reply and an item of digital content received by server106from user device102. Hence, module110can generate a confidence score based on a determined relevance parameter. Similarity scores for content items accessed from database212can be ranked based on a numerical value of the score such that scores with larger numerical values (e.g., high similarity scores) are ranked higher than scores with lower numerical values (e.g., low similarity scores). Conversational replies generated from content items of database212that have high similarity scores may be assigned higher confidence scores relative to conversational replies generated from content items of database212that have low similarity scores. Module110can generate a set of conversational replies and each conversational reply in the set can have a corresponding confidence score. Further, module110can use logic214to rank each conversational reply in the set based on the corresponding confidence score for the reply. For example, module110can generate a first set of conversational replies.
This example first set of conversational replies can include: i) a first reply that includes a close-up image of the Eiffel tower, and/or text that states "wow the Eiffel tower looks really tall up close, don't you think?"; ii) a second reply that includes an image taken several miles away from the Eiffel tower and that shows multiple other buildings in the city of Paris, France, and/or text that states "Paris has so many cool places that surround the tower."; and iii) a third reply that includes an image taken from within the Eiffel tower showing multiple other buildings in the city of Paris, France but the image does not show the Eiffel tower, and/or text that states "Look at all the really nice places to visit that are around the Eiffel tower." Further, regarding this first set of conversational replies, the first reply might receive an example confidence score of 0.8, the second reply might receive an example confidence score of 0.6, and the third reply might receive an example confidence score of 0.3. The first set of conversational replies can include: a) the first reply being ranked highest, e.g., ranked first out of the three replies, based on the confidence score of 0.8; b) the second reply being ranked between the first reply and the third reply, e.g., ranked second out of the three replies, based on the confidence score of 0.6; and c) the third reply being ranked after the first reply and the second reply, e.g., ranked third or last out of the three replies, based on the confidence score of 0.3. As described in more detail below, a set of conversational replies generated by module110, and the corresponding confidence scores for each reply, are provided to module116for analysis and selection of a particular conversational reply from among multiple conversational replies. In some implementations, the example first set of conversational replies described above can be provided to module116along with ranking data that indicates a ranking of a particular conversational reply relative to other replies in the first set. Media content replies module112generally includes machine learning logic218, media content database220, and scoring/ranking logic222. Module112can receive one or more labels from module108that indicate attributes of an item of digital content received by server106from user device102. Media content database220can include multiple other items of digital content ("media content") that have been extracted or reproduced from a variety of different types of media content such as films or video data, music or audio data, books/articles/publications text data, or other forms of digital text, image or video data. Media content items of database220include multiple quotes, e.g., texts, words, or text phrases, relating to content and data extracted or produced from digital text, image or video data stored in database220. Module112can generate one or more conversational replies based on a similarity between labels received from module108and at least one media content item stored in database220.
For example, referencing the above extracted features for the Eiffel tower and the dog, one or more labels can include text phrases or words such as "Eiffel tower" and "cocker spaniel." Module112can then scan or analyze database220to identify quotes (e.g., text content) or other media content relating to texts, words, or text phrases that have an apparent relation to "Eiffel tower" or "cocker spaniel." Quotes, words, text phrases, or other media content of database220can be content items, e.g., from movies, television shows, songs, books, or magazines, that have been mined or extracted over time by system100. The quotes, words, or text phrases stored in database220can include content items such as: "Eiffel tower," "Paris," "France," "golden retriever," "cocker spaniel," or "cute dog." Thus, at least one media content item can be related to, or descriptive of, particular items depicted in a digital image received from user device102. Further, as noted above, the at least one media content item can be another item of digital content that is distinct from the item of digital content received by server106from user device102. In some implementations, module112uses machine learning logic218to compute inferences using an example neural network of system100. The computed inferences are used to determine media content items of database220that are similar or relevant to the labels generated by module108, and that indicate an attribute of the item of digital content received from user device102. Module112can use scoring/ranking logic222to determine at least one similarity score that indicates a similarity between: i) the one or more labels indicating an attribute of an item of digital content; and ii) at least one media content item stored in database220. For example, module112can determine a similarity score between a label including "Eiffel tower" and respective quotes, words, text phrases, or other media content relating to Eiffel tower accessed from database220. In some implementations, module112can execute program code for data matching and data comparison processes such as entity matching, n-gram similarity, phrase matching, and feature similarity, to indicate a threshold level of similarity between labels provided by module108and media content items accessed from database220. Module112can then generate a similarity score based on an outcome of the data matching process. Module112can also determine whether similarity scores exceed a threshold similarity score. In response to determining that one or more similarity scores exceed a threshold similarity score, module112can generate one or more conversational replies and a confidence score for each conversational reply. Module112generates the conversational replies based on at least one media content item (e.g., a quote or other item of digital content) accessed from database220. In some implementations, conversational replies generated by module112include quotes or text data accessed from database220. Module112can analyze one or more determined similarity scores and, based on this analysis, generate respective confidence scores for each conversational reply. Each conversational reply can be assigned a confidence score that indicates a relevance between the conversational reply and labels from module108that indicate an attribute of the received item of digital content. In some implementations, determined similarity scores can indicate an extent to which a media content item of database220is similar or relevant to labels generated by module108.
For example, determining the similarity scores can correspond to determining a relevance parameter that indicates a relevance between a conversational reply and an item of digital content received by server106from user device102. Hence, similar to module110, module112can also generate a confidence score based on a determined relevance parameter. Module112can generate conversational replies using quotes, words, text phrases, or other media content that are associated with particularly high similarity scores (e.g., as indicated by a corresponding relevance parameter for the similarity score). Such high similarity scores can indicate that these quotes, words, or text phrases have substantial relevance to the labels generated by module108. Similarity scores for media content items accessed from database220can be ranked based on a numerical value of the score such that scores with larger numerical values (e.g., high similarity scores) are ranked higher than scores with lower numerical values (e.g., low similarity scores). Conversational replies generated from media content items of database220that have high similarity scores may be assigned higher confidence scores relative to conversational replies generated from media content items of database220that have low similarity scores. Module112can generate a set of conversational replies and each conversational reply in the set can have a corresponding confidence score. Further, module112can use logic222to rank each conversational reply in the set based on the corresponding confidence score for the reply. For example, module112can generate a second set of conversational replies relative to the example first set generated by module110. This example second set of conversational replies can include: i) a first reply that includes text stating "wow the Eiffel tower looks really tall up close, don't you think?", where the text is a quote from a song by a singer and the reply further includes an image of the singer standing in front of the Eiffel tower; ii) a second reply that includes text stating "Paris has so many cool places that surround the tower," where the text is a quote from a movie and the reply further includes an image from a scene of the movie that shows the Eiffel tower and multiple buildings that surround the tower; and iii) a third reply that includes text stating "Paris has nice places to visit around the Eiffel tower," where the text is a quote from a web-based article. Further, regarding this second set of conversational replies, the first reply might receive an example confidence score of 0.88, the second reply might receive an example confidence score of 0.7, and the third reply might receive an example confidence score of 0.2. The second set of conversational replies can include: a) the first reply being ranked highest, e.g., ranked first out of the three replies, based on the confidence score of 0.88; b) the second reply being ranked between the first reply and the third reply, e.g., ranked second out of the three replies, based on the confidence score of 0.7; and c) the third reply being ranked after the first reply and the second reply, e.g., ranked third or last out of the three replies, based on the confidence score of 0.2. As described in more detail below, a set of conversational replies generated by module112, and the corresponding confidence scores for each reply, are provided to module116for analysis and selection of a particular conversational reply from among multiple conversational replies.
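In schematic form, a candidate-generation module of this kind turns similarity scores (the relevance parameter) into a confidence-scored, ranked reply set before handing it to the selection module. The sketch below illustrates that step only; the threshold, the confidence mapping, and the field names are illustrative assumptions, and the example texts and scores simply reuse the second set described above.

```python
from dataclasses import dataclass
from typing import List


@dataclass
class ScoredReply:
    text: str
    similarity: float    # similarity between the source content item and the labels/pixel data
    confidence: float = 0.0


def build_reply_set(candidates: List[ScoredReply],
                    similarity_threshold: float = 0.15) -> List[ScoredReply]:
    """Keep candidates above the similarity threshold, derive confidence scores, and rank."""
    kept = [c for c in candidates if c.similarity >= similarity_threshold]
    for reply in kept:
        # Here confidence is taken directly from the similarity (relevance) score;
        # a real module could apply any monotone mapping instead.
        reply.confidence = reply.similarity
    return sorted(kept, key=lambda r: r.confidence, reverse=True)


second_set = build_reply_set([
    ScoredReply("wow the Eiffel tower looks really tall up close, don't you think?", 0.88),
    ScoredReply("Paris has so many cool places that surround the tower.", 0.70),
    ScoredReply("Paris has nice places to visit around the Eiffel tower.", 0.20),
])
# All three example candidates clear the illustrative 0.15 threshold and are
# ranked 0.88, then 0.70, then 0.20, mirroring the second set described above.
```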
In some implementations, the example second set of conversational replies described above can be provided to module116along with ranking data that indicates a ranking of a particular conversational reply relative to other replies in the second set. Predetermined replies module114generally includes machine learning logic226, predetermined replies database228, and scoring/ranking logic230. Module114can receive one or more labels from module108that indicate attributes of an item of digital content received by server106from user device102. Predetermined replies database228can include multiple predetermined conversational replies and at least one conversational reply generated by module114can be selected from among the multiple predetermined conversational replies of database228. The predetermined replies can be curated such that each predetermined conversational reply stored in database228includes at least a portion of text/words, text phrases, or image content that may have a likelihood of being perceived as delightful, pleasing, pleasant, or interesting to a user. Module114can generate one or more conversational replies based on a similarity between labels received from module108and at least one content item stored in database228. The content item can include one or more of: i) words/text included in predetermined replies stored in database228; ii) text phrases included in predetermined replies stored in database228; and iii) predetermined replies stored in database228. For example, referencing the above extracted features for the Eiffel tower and the dog, one or more labels can include text phrases or words such as "Eiffel tower" and "cocker spaniel." Module114can then scan or analyze database228to identify predetermined replies or other content items relating to texts, words, or text phrases that have an apparent relation to "Eiffel tower" or "cocker spaniel." Predetermined replies, words, or text phrases of database228can be content items, e.g., a string of curated text/words forming snippets of descriptive and interesting content, that have been drafted by computer-based or human reply drafters. The predetermined replies, words, or text phrases stored in database228can include content items such as: "Eiffel tower," "Paris," "wow the Eiffel tower seems really cool, I'd like to visit Paris," "cocker spaniel," "that cocker spaniel seems really small compared to the Eiffel," or "I am no architect, but the Eiffel tower seems like quite a construction!" Thus, at least one content item of database228can be related to, substantially related to, or descriptive of, particular items depicted in a digital image received from user device102. In some implementations, module114uses machine learning logic226to compute inferences using an example neural network of system100. The computed inferences are used to determine content items of database228that are similar or relevant to the labels generated by module108, and that indicate an attribute of the item of digital content received from user device102. Module114can use scoring/ranking logic230to determine at least one similarity score that indicates a similarity between: i) the one or more labels indicating an attribute of an item of digital content; and ii) at least one content item that includes predetermined replies stored in database228. For example, module114can determine a similarity score between a label including "Eiffel tower" and respective predetermined replies, words, or text phrases relating to Eiffel tower accessed from database228.
In some implementations, module114can execute program code for data matching and data comparison processes such as entity matching, n-gram similarity, phrase matching, and feature similarity, to indicate a threshold level of similarity between labels provided by module108and predetermined replies or other content items accessed from database228. Module114can then generate a similarity score based on an outcome of the data matching process. Module114can also determine whether similarity scores exceed a threshold similarity score. In response to determining that one or more similarity scores exceed a threshold similarity score, module114can generate one or more conversational replies and a confidence score (described below) for each conversational reply. Module114generates the conversational replies based on at least one content item (e.g., a word or text phrase included in a predetermined reply) accessed from database228. In some implementations, conversational replies generated by module114can include various combinations of content items accessed from database228. For example, module114can generate a conversational reply that is the same as, similar to, or substantially similar to, a predetermined reply stored in database228. In some instances, module114generates a conversational reply by modifying an existing predetermined reply to include one or more words or text phrases from another predetermined reply of database228. In related instances, module114generates a conversational reply by using individual words or text phrases from existing predetermined replies to form new replies that are then stored in database228as new predetermined replies. Referring now to the confidence scores, module114can analyze one or more determined similarity scores and, based on this analysis, generate respective confidence scores for each conversational reply. Each conversational reply can be assigned a confidence score that indicates a relevance between the conversational reply and labels from module108that indicate an attribute of the received item of digital content. In some implementations, determined similarity scores can indicate an extent to which a content item of database228is similar or relevant to the labels generated by module108. For example, determining the similarity scores can correspond to determining a relevance parameter that indicates a relevance between a conversational reply and an item of digital content received by server106from user device102. Hence, similar to modules110and112, module114can also generate a confidence score based on a determined relevance parameter. Module114can generate conversational replies using words or text phrases that are associated with particularly high similarity scores (e.g., as indicated by a corresponding relevance parameter for the similarity score). Such high similarity scores can indicate that these words or text phrases have substantial relevance to the labels generated by module108. Similarity scores for content items accessed from database228can be ranked based on a numerical value of the score such that scores with larger numerical values (e.g., high similarity scores) are ranked higher than scores with lower numerical values (e.g., low similarity scores). Conversational replies generated from content items of database228that have high similarity scores may be assigned higher confidence scores relative to conversational replies generated from content items of database228that have low similarity scores.
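As a rough illustration of the kind of data matching mentioned above (n-gram similarity and phrase matching between generated labels and stored content items), the sketch below computes a simple Jaccard similarity over word n-grams. This is only one of many possible matching functions and is an assumption introduced for illustration, not the patent's matching algorithm.

```python
from typing import Set


def word_ngrams(text: str, n: int = 2) -> Set[str]:
    """Lower-cased word n-grams, falling back to single words for short texts."""
    words = text.lower().split()
    if len(words) < n:
        return set(words)
    return {" ".join(words[i:i + n]) for i in range(len(words) - n + 1)}


def ngram_similarity(label: str, candidate: str, n: int = 2) -> float:
    """Jaccard similarity between the n-gram sets of a label and a candidate reply."""
    a, b = word_ngrams(label, n), word_ngrams(candidate, n)
    if not a or not b:
        return 0.0
    return len(a & b) / len(a | b)


# A label such as "Eiffel tower" scores higher against a stored reply that
# contains the same phrase than against one that does not mention it at all.
print(ngram_similarity("Eiffel tower",
                       "I am no architect, but the Eiffel tower seems like quite a construction!"))
print(ngram_similarity("Eiffel tower",
                       "wow Paris seems really cool, I'd like to visit"))
```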
Module114can generate a set of conversational replies and each conversational reply in the set can have a corresponding confidence score. Further, module114can use logic230to rank each conversational reply in the set based on the corresponding confidence score for the reply. For example, module114can generate a third set of conversational replies relative to the example first and second sets generated by modules110and112, respectively. This example third set of conversational replies can include: i) a first reply that includes predetermined reply text stating "I am no architect, but the Eiffel tower seems like quite a construction!"; and ii) a second reply that includes predetermined reply text stating "wow the Eiffel tower seems really cool, I'd like to visit Paris." Further, regarding this third set of conversational replies, the first reply might receive an example confidence score of 0.92 and the second reply might receive an example confidence score of 0.65. The third set of conversational replies can include: a) the first reply being ranked highest, e.g., ranked first out of the two replies, based on the confidence score of 0.92; and b) the second reply being ranked after the first reply, e.g., ranked second out of the two replies, based on the confidence score of 0.65. As described in more detail below, a set of conversational replies generated by module114, and the corresponding confidence scores for each reply, are provided to module116for analysis and selection of a particular conversational reply from among multiple conversational replies. In some implementations, the example third set of conversational replies described above can be provided to module116along with ranking data that indicates a ranking of a particular conversational reply relative to other replies in the third set. Reply selection module116receives respective sets of conversational replies generated by each of modules110,112, and114. For each set of conversational replies, module116can also receive respective confidence scores for each conversational reply in the set as well as any associated ranking data that indicates a ranking of confidence scores. Module116can include program code or logic to analyze the respective confidence scores, and ranking data, for each conversational reply in the sets of conversational replies generated by each of modules110,112, and114. In some implementations, analyzing the respective confidence scores includes ranking each conversational reply based on the corresponding confidence score for the reply. Conversational replies can be ranked based on a numerical value of their associated confidence score such that replies having scores with larger numerical values (e.g., high confidence scores) are ranked higher than replies having scores with lower numerical values (e.g., low confidence scores). In other implementations, module116can assign a weighting or boosting parameter to at least one of modules110,112,114. The weighting parameter can be used to boost numerical values of confidence scores for conversational replies generated by the module that was assigned the weighting parameter. Conversational replies generated by a module110,112,114that was assigned a particular weighting parameter can be ranked higher relative to replies generated by another module110,112,114that was not assigned a particular weighting or boosting parameter.
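The merging, boosting, and ranking behavior just described can be illustrated with a short sketch. The following is a minimal, illustrative example only; the module names, the boost value, and the candidate scores are assumptions introduced for illustration (the scores reuse the top replies of the three example sets above) and do not come from the patent's implementation.

```python
from dataclasses import dataclass
from typing import Dict, List


@dataclass
class Candidate:
    text: str
    confidence: float
    source: str  # which generator module produced the reply


def select_replies(sets_by_module: Dict[str, List[Candidate]],
                   boosts: Dict[str, float],
                   top_k: int = 1) -> List[Candidate]:
    """Merge all candidate sets, apply per-module boosts, and return the top_k replies."""
    merged: List[Candidate] = []
    for module, candidates in sets_by_module.items():
        boost = boosts.get(module, 1.0)
        for c in candidates:
            merged.append(Candidate(c.text, c.confidence * boost, module))
    return sorted(merged, key=lambda c: c.confidence, reverse=True)[:top_k]


selected = select_replies(
    {
        "previous_replies": [Candidate("wow the Eiffel tower looks really tall up close, don't you think?", 0.80, "previous_replies")],
        "media_content": [Candidate("wow the Eiffel tower looks really tall up close, don't you think?", 0.88, "media_content")],
        "predetermined": [Candidate("I am no architect, but the Eiffel tower seems like quite a construction!", 0.92, "predetermined")],
    },
    boosts={"predetermined": 1.05},  # example boosting parameter assigned to one module
    top_k=2,
)
# With these illustrative scores, the predetermined reply (0.92 boosted to about 0.97)
# ranks first, followed by the media-content reply at 0.88.
```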
Based on analysis of the respective confidence scores, module116can select a particular number of conversational replies from among the replies included in the respective sets of replies generated by modules110,112, and114. Module116can select one or more conversational replies that have the highest confidence scores among the replies included in the respective sets of replies. For example, referencing the above described first, second, and third sets of conversational replies, module116can select the first reply of the third set of conversational replies generated by module114and that has a corresponding confidence score of 0.92. Likewise, module116can also select the first reply of the second set of conversational replies generated by module112and that has a corresponding confidence score of 0.88. Module116selects the first reply of the third set based on the reply's corresponding confidence score of 0.92 being the highest among scores for all replies of the respective sets. Further, module116selects the first reply of the second set based on the reply's corresponding confidence score of 0.88 being the second highest among scores for all replies of the respective sets. One or more conversational replies selected by module116are provided for output to user device102by server106. For example, server106can provide the selected first reply of the third set based on the reply's corresponding confidence score of 0.92 being the highest among scores for all replies of the respective sets. In some implementations, selected conversational replies can be provided to user device102as a suggested reply to an item of digital content provided to server106from user device102. In other implementations, the selected conversational reply can be provided to user device102in response to user device102receiving an item of digital content as a communication message of an electronic conversation generated by a messaging application. For example, user device102may include a messaging application used to exchange data communications between at least two users that are associated with an electronic conversation. The messaging application can receive a communication message that includes an item of digital content, e.g., a digital image. User device102can provide the digital image to server106and server106can generate a conversational reply based on the digital image and according to the technologies described herein. Server106provides the generated conversational reply for output to user device102. User device102may suggest or output the conversational reply to at least one user as a reply message to the communication message of the electronic conversation. Further, user device102outputs the conversational reply via a graphical display of the device that presents a graphical interface showing the electronic conversation. Entity relatedness module232receives one or more labels from module108that indicate attributes of an item of digital content received by server106from user device102. In response to receiving a label, module232can access knowledge graph234and use the label to generate one or more related entities that have a threshold relevance to the item of digital content. At least one of modules110,112,114can receive one or more related entities from module232and use the related entities to generate one or more conversational replies.
For example, predetermined replies module114can generate one or more conversational replies based on a similarity between: i) labels received from module108and at least one content item stored in database228; and ii) one or more related entities received from module232and at least one content item stored in database228. For example, referencing the above extracted features for the Eiffel tower and the dog, one or more labels can include text phrases or words such as "Eiffel tower" and "cocker spaniel." Module232can then use knowledge graph234to identify related entities such as content items including texts, words, or text phrases that have an apparent relation to "Eiffel tower" or "cocker spaniel." Example related entities provided by graph234can include content items such as: "Paris," "Paris, France," "English spaniel," or "American spaniel." Thus, at least one content item of knowledge graph234can be an entity that is related to, or substantially related to, a label generated by module108. Further, at least one of modules110,112,114can generate one or more conversational replies based on a similarity between two or more of: i) labels received from module108, ii) image pixel data received from module108, iii) related entities received from module232, or iv) content items stored in a respective database of the module. Module114can use scoring/ranking logic230to determine at least one similarity score that indicates a similarity between two or more of: i) labels received from module108, ii) image pixel data received from module108, iii) related entities received from module232, or iv) content items stored in a respective database of the module. Module114can also determine whether the similarity scores exceed a threshold similarity score. In response to determining that the similarity score exceeds a threshold similarity score, module114can generate one or more conversational replies and a confidence score for each conversational reply. Module114can generate the conversational replies based on the related entity and based on at least one content item (e.g., a word or text phrase included in a predetermined reply) accessed from database228. In some implementations, in response to determining that the similarity score exceeds a threshold similarity score, module114can select, from database228, a content item or a predetermined conversational reply for inclusion with one or more conversational replies generated by modules110and112of system100. Knowledge graph234is a collection of data representing entities and relationships between entities. The data is logically described as a graph, in which each distinct entity is represented by a respective node and each relationship between a pair of entities is represented by an edge between the nodes. Each edge is associated with a relationship and the existence of the edge represents that the associated relationship exists between the nodes connected by the edge. For example, if a node A represents a person alpha, a node B represents a person beta, and an edge E is associated with the relationship "is the father of," then having the edge E connect the nodes in the direction from node A to node B in the graph represents the fact that alpha is the father of beta. A knowledge graph can be represented by any of a variety of convenient physical data structures.
For example, a knowledge graph can be represented by triples that each represent two entities in order and a relationship from the first to the second entity; for example, [alpha, beta, is the father of], or [alpha, is the father of, beta], are alternative ways of representing the same fact. Each entity and each relationship can be and generally will be included in multiple triples. Alternatively, each entity can be stored as a node once, as a record or an object, for example, and linked through a linked list data structure to all the relationships the entity has and all the other entities to which the entity is related. More specifically, a knowledge graph can be stored as an adjacency list in which the adjacency information includes relationship information. It is generally advantageous to represent each distinct entity and each distinct relationship with a unique identifier. The entities represented by a knowledge graph need not be tangible things or specific people. The entities can include particular people, places, things, artistic works, concepts, events, or other types of entities. Thus, a knowledge graph can include data defining relationships between people, e.g., co-stars in a movie; data defining relationships between people and things, e.g., a particular singer recorded a particular song; data defining relationships between places and things, e.g., a particular type of wine comes from a particular geographic location; data defining relationships between people and places, e.g., a particular person was born in a particular city; and other kinds of relationships between entities. In some implementations, each node has a type based on the kind of entity the node represents; and the types can each have a schema specifying the kinds of data that can be maintained about entities represented by nodes of the type and how the data should be stored. So, for example, a node of a type for representing a person could have a schema defining fields for information such as birth date, birth place, and so on. Such information can be represented by fields in a type-specific data structure, or by triples that look like node-relationship-node triples, e.g., [person identifier, was born on, date], or in any other convenient predefined way. Alternatively, some or all of the information specified by a type schema can be represented by links to nodes in the knowledge graph; for example, [one person identifier, child of, another person identifier], where the other person identifier is a node in the graph. FIG.3is a flow diagram of an example process300for generating one or more conversational replies. In some implementations, process300may be performed or executed by one or more electronic devices, modules, or components of system100described above. At block302of process300, server106of system100receives an item of digital content from user device102. The item of digital content can include a digital image that depicts a particular item. At block304, system100generates one or more labels that indicate attributes of the item of digital content or that describe characteristics of the particular item. For example, image recognition module108can execute program code to analyze the digital image of the item of digital content. In response to analyzing the digital image, module108can extract one or more features of the image and use the extracted features to generate the one or more labels that indicate attributes of the item of digital content. 
At block306of process300, system100generates one or more conversational replies to the item of digital content based on the one or more labels that at least indicate attributes of the item of digital content. For example, server106can use one or more of modules110,112, or114to generate the one or more conversational replies based on at least one label generated by module108. In some implementations, rather than generating conversational replies based on the one or more labels, system100can instead use module110to generate conversational replies based on image data of the item of digital content. At block308, system100selects a conversational reply from among the one or more conversational replies that are generated by the one or more modules of server106. System100can use reply selection module116to select a particular conversational reply from among multiple conversational replies that are generated by at least one module of server106. At block310of process300, system100can cause server106to provide the selected conversational reply for output to user device102. Embodiments of the subject matter and the functional operations described in this specification can be implemented in digital electronic circuitry, in tangibly-embodied computer software or firmware, in computer hardware, including the structures disclosed in this specification and their structural equivalents, or in combinations of one or more of them. Embodiments of the subject matter described in this specification can be implemented as one or more computer programs, i.e., one or more modules of computer program instructions encoded on a tangible non-transitory program carrier for execution by, or to control the operation of, data processing apparatus. Alternatively or in addition, the program instructions can be encoded on an artificially-generated propagated signal, e.g., a machine-generated electrical, optical, or electromagnetic signal that is generated to encode information for transmission to suitable receiver apparatus for execution by a data processing apparatus. The computer storage medium can be a machine-readable storage device, a machine-readable storage substrate, a random or serial access memory device, or a combination of one or more of them. A computer program, which may also be referred to or described as a program, software, a software application, a module, a software module, a script, or code, can be written in any form of programming language, including compiled or interpreted languages, or declarative or procedural languages, and it can be deployed in any form, including as a stand-alone program or as a module, component, subroutine, or other unit suitable for use in a computing environment. A computer program may, but need not, correspond to a file in a file system. A program can be stored in a portion of a file that holds other programs or data, e.g., one or more scripts stored in a markup language document, in a single file dedicated to the program in question, or in multiple coordinated files, e.g., files that store one or more modules, sub-programs, or portions of code. A computer program can be deployed to be executed on one computer or on multiple computers that are located at one site or distributed across multiple sites and interconnected by a communication network. The processes and logic flows described in this specification can be performed by one or more programmable computers executing one or more computer programs to perform functions by operating on input data and generating output.
The processes and logic flows can also be performed by, and apparatus can also be implemented as, special purpose logic circuitry, e.g., an FPGA (field programmable gate array) or an ASIC (application-specific integrated circuit). FIG.4is a block diagram of computing devices400,450that may be used to implement the systems and methods described in this document, as either a client or as a server or plurality of servers. Computing device400is intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. Computing device450is intended to represent various forms of mobile devices, such as personal digital assistants, cellular telephones, smartphones, smartwatches, head-worn devices, and other similar computing devices. The components shown here, their connections and relationships, and their functions, are meant to be exemplary only, and are not meant to limit implementations described and/or claimed in this document. Computing device400includes a processor402, memory404, a storage device406, a high-speed interface408connecting to memory404and high-speed expansion ports410, and a low speed interface412connecting to low speed bus414and storage device406. Each of the components402,404,406,408,410, and412, are interconnected using various busses, and may be mounted on a common motherboard or in other manners as appropriate. The processor402can process instructions for execution within the computing device400, including instructions stored in the memory404or on the storage device406to display graphical information for a GUI on an external input/output device, such as display416coupled to high speed interface408. In other implementations, multiple processors and/or multiple buses may be used, as appropriate, along with multiple memories and types of memory. Also, multiple computing devices400may be connected, with each device providing portions of the necessary operations, e.g., as a server bank, a group of blade servers, or a multi-processor system. The memory404stores information within the computing device400. In one implementation, the memory404is a computer-readable medium. In one implementation, the memory404is a volatile memory unit or units. In another implementation, the memory404is a non-volatile memory unit or units. The storage device406is capable of providing mass storage for the computing device400. In one implementation, the storage device406is a computer-readable medium. In various different implementations, the storage device406may be a floppy disk device, a hard disk device, an optical disk device, or a tape device, a flash memory or other similar solid state memory device, or an array of devices, including devices in a storage area network or other configurations. In one implementation, a computer program product is tangibly embodied in an information carrier. The computer program product contains instructions that, when executed, perform one or more methods, such as those described above. The information carrier is a computer- or machine-readable medium, such as the memory404, the storage device406, or memory on processor402. The high speed controller408manages bandwidth-intensive operations for the computing device400, while the low speed controller412manages lower bandwidth-intensive operations. Such allocation of duties is exemplary only.
In one implementation, the high-speed controller408is coupled to memory404, display416, e.g., through a graphics processor or accelerator, and to high-speed expansion ports410, which may accept various expansion cards (not shown). In the implementation, low-speed controller412is coupled to storage device406and low-speed expansion port414. The low-speed expansion port, which may include various communication ports, e.g., USB, Bluetooth, Ethernet, wireless Ethernet, may be coupled to one or more input/output devices, such as a keyboard, a pointing device, a scanner, or a networking device such as a switch or router, e.g., through a network adapter. The computing device400may be implemented in a number of different forms, as shown in the figure. For example, it may be implemented as a standard server420, or multiple times in a group of such servers. It may also be implemented as part of a rack server system424. In addition, it may be implemented in a personal computer such as a laptop computer422. Alternatively, components from computing device400may be combined with other components in a mobile device (not shown), such as device450. Each of such devices may contain one or more of computing device400,450, and an entire system may be made up of multiple computing devices400,450communicating with each other. Computing device450includes a processor452, memory464, an input/output device such as a display454, a communication interface466, and a transceiver468, among other components. The device450may also be provided with a storage device, such as a microdrive or other device, to provide additional storage. Each of the components450,452,464,454,466, and468, are interconnected using various buses, and several of the components may be mounted on a common motherboard or in other manners as appropriate. The processor452can process instructions for execution within the computing device450, including instructions stored in the memory464. The processor may also include separate analog and digital processors. The processor may provide, for example, for coordination of the other components of the device450, such as control of user interfaces, applications run by device450, and wireless communication by device450. Processor452may communicate with a user through control interface458and display interface456coupled to a display454. The display454may be, for example, a TFT LCD display or an OLED display, or other appropriate display technology. The display interface456may include appropriate circuitry for driving the display454to present graphical and other information to a user. The control interface458may receive commands from a user and convert them for submission to the processor452. In addition, an external interface462may be provided in communication with processor452, so as to enable near area communication of device450with other devices. External interface462may provide, for example, for wired communication, e.g., via a docking procedure, or for wireless communication, e.g., via Bluetooth or other such technologies. The memory464stores information within the computing device450. In one implementation, the memory464is a computer-readable medium. In one implementation, the memory464is a volatile memory unit or units. In another implementation, the memory464is a non-volatile memory unit or units. Expansion memory474may also be provided and connected to device450through expansion interface472, which may include, for example, a SIMM card interface. 
Such expansion memory474may provide extra storage space for device450, or may also store applications or other information for device450. Specifically, expansion memory474may include instructions to carry out or supplement the processes described above, and may include secure information also. Thus, for example, expansion memory474may be provided as a security module for device450, and may be programmed with instructions that permit secure use of device450. In addition, secure applications may be provided via the SIMM cards, along with additional information, such as placing identifying information on the SIMM card in a non-hackable manner. The memory may include for example, flash memory and/or MRAM memory, as discussed below. In one implementation, a computer program product is tangibly embodied in an information carrier. The computer program product contains instructions that, when executed, perform one or more methods, such as those described above. The information carrier is a computer- or machine-readable medium, such as the memory464, expansion memory474, or memory on processor452. Device450may communicate wirelessly through communication interface466, which may include digital signal processing circuitry where necessary. Communication interface466may provide for communications under various modes or protocols, such as GSM voice calls, SMS, EMS, or MMS messaging, CDMA, TDMA, PDC, WCDMA, CDMA2000, or GPRS, among others. Such communication may occur, for example, through radio-frequency transceiver468. In addition, short-range communication may occur, such as using a Bluetooth, WiFi, or other such transceiver (not shown). In addition, GPS receiver module470may provide additional wireless data to device450, which may be used as appropriate by applications running on device450. Device450may also communicate audibly using audio codec460, which may receive spoken information from a user and convert it to usable digital information. Audio codec460may likewise generate audible sound for a user, such as through a speaker, e.g., in a handset of device450. Such sound may include sound from voice telephone calls, may include recorded sound, e.g., voice messages, music files, etc., and may also include sound generated by applications operating on device450. The computing device450may be implemented in a number of different forms, as shown in the figure. For example, it may be implemented as a cellular telephone480. It may also be implemented as part of a smartphone482, personal digital assistant, or other similar mobile device. Various implementations of the systems and techniques described here can be realized in digital electronic circuitry, integrated circuitry, specially designed ASICs, computer hardware, firmware, software, and/or combinations thereof. These various implementations can include implementation in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be special or general purpose, coupled to receive data and instructions from, and to transmit data and instructions to, a storage system, at least one input device, and at least one output device. These computer programs, also known as programs, software, software applications or code, include machine instructions for a programmable processor, and can be implemented in a high-level procedural and/or object-oriented programming language, and/or in assembly/machine language. 
As used herein, the terms “machine-readable medium” and “computer-readable medium” refer to any computer program product, apparatus and/or device, e.g., magnetic discs, optical disks, memory, Programmable Logic Devices (PLDs) used to provide machine instructions and/or data to a programmable processor, including a machine-readable medium that receives machine instructions as a machine-readable signal. The term “machine-readable signal” refers to any signal used to provide machine instructions and/or data to a programmable processor. To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having a display device, e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor, for displaying information to the user and a keyboard and a pointing device, e.g., a mouse or a trackball, by which the user can provide input to the computer. Other kinds of devices can be used to provide for interaction with a user as well; for example, feedback provided to the user can be any form of sensory feedback, e.g., visual feedback, auditory feedback, or tactile feedback; and input from the user can be received in any form, including acoustic, speech, or tactile input. The systems and techniques described here can be implemented in a computing system that includes a back-end component, e.g., as a data server, or that includes a middleware component such as an application server, or that includes a front-end component such as a client computer having a graphical user interface or a Web browser through which a user can interact with an implementation of the systems and techniques described here, or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication, such as a communication network. Examples of communication networks include a local area network (“LAN”), a wide area network (“WAN”), and the Internet. The computing system can include clients and servers. A client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. Further to the descriptions above, a user may be provided with controls allowing the user to make an election as to both if and when systems, programs or features described herein may enable collection of user information (e.g., information about a user's social network, social actions or activities, profession, a user's preferences, or a user's current location), and if the user is sent content or communications from a server. In addition, certain data may be treated in one or more ways before it is stored or used, so that personally identifiable information is removed. For example, in some embodiments, a user's identity may be treated so that no personally identifiable information can be determined for the user, or a user's geographic location may be generalized where location information is obtained (such as to a city, ZIP code, or state level), so that a particular location of a user cannot be determined. Thus, the user may have control over what information is collected about the user, how that information is used, and what information is provided to the user. A number of embodiments have been described.
Nevertheless, it will be understood that various modifications may be made without departing from the spirit and scope of the invention. For example, various forms of the flows shown above may be used, with steps re-ordered, added, or removed. Also, although several applications of the systems and methods have been described, it should be recognized that numerous other applications are contemplated. Accordingly, other embodiments are within the scope of the following claims. While this specification contains many specific implementation details, these should not be construed as limitations on the scope of what may be claimed, but rather as descriptions of features that may be specific to particular embodiments. Certain features that are described in this specification in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable subcombination. Moreover, although features may be described above as acting in certain combinations and even initially claimed as such, one or more features from a claimed combination can in some cases be excised from the combination, and the claimed combination may be directed to a subcombination or variation of a subcombination. Similarly, while operations are depicted in the drawings in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order, or that all illustrated operations be performed, to achieve desirable results. In certain circumstances, multitasking and parallel processing may be advantageous. Moreover, the separation of various system modules and components in the embodiments described above should not be understood as requiring such separation in all embodiments, and it should be understood that the described program components and systems can generally be integrated together in a single software product or packaged into multiple software products. Particular embodiments of the subject matter have been described. Other embodiments are within the scope of the following claims. For example, the actions recited in the claims can be performed in a different order and still achieve desirable results. As one example, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some cases, multitasking and parallel processing may be advantageous. | 71,662 |
11943182 | DETAILED DESCRIPTION A communication services platform, such as a Software as a Service (SaaS) platform, can offer various communication services to users. For example, a SaaS platform can offer messaging service tools that facilitate messaging conversations, e.g., the sending and/or receiving of messages, such as SMS messages, MMS messages, and/or IM messages, to and from devices via various communication channels. A communication channel can refer to a form of communication that uses one or more of a particular protocol, a particular underlying technology or is provided by a particular entity (e.g., third-party entity). Different communication channels can refer to different forms of communication that can use one or more of different communication protocols, different underlying technologies (e.g., SMS vs IP), or are provided by different entities, such as a third-party entity, that offer services, software or hardware (or a combination thereof) through which messages can be exchanged between recipient devices. For instance, the SaaS platform may send a text message (e.g., SMS message) to a recipient device using a communication channel, such as a telecommunications carrier network or send an instant message to a recipient device using an IM communication channel (e.g., using an application programming interface (API) to communicate with the IM communication channel). Examples of channels can include Public Switched Telephone Network (PSTN) based channels such as SMS or MMS, Internet Protocol (IP) based channels, and proprietary channels (e.g., proprietary social media messaging applications). In another example, the communication services offered by a SaaS platform can include voice services, such as routing inbound and outbound voice calls. In addition to routing voice calls, voice services can include transcription services, conference call services, recording services, interactive voice response (IVR) services, text-to-speech services, among others. For instance, the SaaS platform can provision telephone numbers to an organization (e.g., entity) and the provisioned telephone numbers can be assigned, often dynamically, to various user accounts of the organization. The organization may, via APIs, configure user-defined routing logic that can specify rules detailing how the SaaS platform is to route particular voice calls and/or execute particular voice services before, during, and/or after the voice calls. Customers of the communications services offered by a SaaS platform may subscribe to a variety of different communication services, and in particular subscribe to both messaging services and voice services, and further desire that the requested communication services are provided as an integrated service where the user experience is seamless and cohesive when transferring between voice services and messaging services. However, in some cases, voice systems that offer voice services and message conversation systems that offer messaging services can be distinct communication systems that may not be interoperable, which can pose challenges in integrating the communication services and in providing a seamless user experience. Aspects of the disclosure address the above-mentioned and other challenges by integrating user-configurable voice services and user-configurable messaging services that both can be accessed using a single application executing at the client device.
In some embodiments, a user of client device (e.g., customer of the SaaS platform) can exchange messages (e.g., SMS message) of a messaging conversation (also referred to as “messaging” herein) with an end user (e.g., customer's end user). The user of the client device may desire to place a call to the end user's device (e.g., outbound voice call) concurrently with the exchange of messages or even some time after the last message of the messaging conversation has been exchanged (e.g., days). In some embodiments, to provide enhanced security to sensitive data, such as telephone numbers, the telephone numbers of end users may not be provided to the client devices of the user (e.g., phones of the employees of the organization). Rather, the telephone numbers of end users can be stored at the communication services platform (e.g., SaaS platform) and associated with a particular messaging conversation to enhance security of the sensitive data. In some embodiments, to place a call to an end user device associated with the end user, the client device can send an API request to the communication services platform to request placement of a voice call. The voice call request may identify a command (e.g., API command) requesting a voice call be placed. The voice call request may identify a messaging conversation identifier of the messaging conversation between the user of the client device and the end user. In some embodiments, the messaging conversation identifier can be a unique identifier, used by the communication services platform, to reference data (e.g., messaging conversation data, such as participants, telephone numbers, etc.) associated with the messaging conversation. A messaging conversation identifier can typically be associated with messaging services of a messaging conversation system, but can also be leveraged to integrate the voice services with messaging services. In some embodiments, the voice call request may not include the telephone number of the end user device (e.g., recipient telephone number) associated with the end user and/or may not explicitly identify which of the participants of the messaging conversation is the recipient of the voice call (e.g., end user). In some embodiments, the communication services platform can use the messaging conversation identifier to obtain messaging conversation data related to the messaging conversation. The messaging conversation data can include identifiers of the participants of the messaging conversation. In some embodiments, the messaging conversation may have two or more participants. For instance, the messaging conversation may have been transferred among different users of the organization and even among one or more chat bots. In some embodiments, the communication services platform can perform a filtering operation to identify, among the multiple messaging conversation participants, a participant that is the voice call recipient. For example, communication services platform may filter out all the participants of a message conversation that are registered (e.g., associated with user accounts) at the communication services platform to identify the participant that is the voice call recipient. In some embodiments, the identified participant of the messaging conversation (e.g., voice call recipient) can be associated with a telephone number, which is stored securely at the communication services platform. 
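The following Python sketch illustrates, under stated assumptions, how such a voice call request might be resolved server-side: the request carries only a messaging conversation identifier, and the platform filters the conversation participants to find the unregistered participant and that participant's securely stored telephone number. The data structures, identifiers, and function names are hypothetical and are not the platform's actual schema.

```python
# Hypothetical conversation store keyed by messaging conversation identifier.
CONVERSATIONS = {
    "CONV-123": {
        "participants": [
            {"id": "user-account-A", "registered": True},   # agent of the organization
            {"id": "chatbot-1", "registered": True},         # chat bot participant
            {"id": "end-user-42", "registered": False},      # end user (call recipient)
        ],
    },
}

# Telephone numbers are kept server-side and are never returned to the client device.
SECURE_NUMBERS = {"end-user-42": "+15555550100"}


def resolve_call_recipient(conversation_id: str) -> str:
    """Return the recipient telephone number for a voice call request."""
    conversation = CONVERSATIONS[conversation_id]
    # Filtering operation: drop participants that have accounts at the platform.
    external = [p for p in conversation["participants"] if not p["registered"]]
    if len(external) != 1:
        raise ValueError("could not unambiguously identify the voice call recipient")
    return SECURE_NUMBERS[external[0]["id"]]


# A voice call request from the client device carries only the conversation identifier.
request = {"command": "place_voice_call", "conversation_id": "CONV-123"}
recipient_number = resolve_call_recipient(request["conversation_id"])
# The platform now places the call to recipient_number without disclosing it to the client.
```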
Communication services platform can retrieve the associated telephone number and place the voice call to the telephone number of the end user device and further connect the call to the client device without disclosing the telephone number of the end user device to the client device. As noted, a technical problem addressed by embodiments of the disclosure is integrating a voice system offering voice services with a messaging conversation system offering messaging services while keeping sensitive data, such as telephone numbers, secure. Another technical problem addressed by the embodiments of the disclosure is the integration of voice services with messaging conversation services in a manner that provides a seamless and cohesive user experience. A technical solution to the above-identified technical problem may include using a messaging conversation identifier associated with a messaging conversation to facilitate the integration of voice services of a voice system with messaging services of a messaging conversation system. The messaging conversation identifier can be used to reference a messaging conversation and to obtain secured end user telephone numbers and identify participants of the messaging conversation from the corresponding messaging conversation data. Communication services platform can filter the participants to identify the recipient of the voice call and the corresponding recipient telephone number to which the voice call is to be placed. Thus, the technical effect may include developing and modifying technical infrastructure that allows a voice system offering user-configurable voice services to be integrated with a messaging conversation system offering user-configurable messaging services and that protects sensitive information. Further, the technical effect may include providing a user-facing application to interface with the technical infrastructure and that allows for a user to seamlessly access both voice services and messaging services provided by the communication services platform. FIG.1illustrates an example system architecture100, in accordance with some embodiments of the disclosure. The system architecture100(also referred to as “system” herein) includes a communication services platform120, a data store106, client devices110A-110Z connected to a network104, client devices112A-112Z communicatively coupled to communication services platform120, and communication channels114A-114Z coupled to the network104(or otherwise communicatively coupled to other elements of the system100). In embodiments, network104may include a public network (e.g., the Internet), a private network (e.g., a local area network (LAN) or wide area network (WAN)), a wired network (e.g., Ethernet network), a wireless network (e.g., an 802.11 network or a Wi-Fi network), a cellular network (e.g., a Long Term Evolution (LTE) network), routers, hubs, switches, server computers, and/or a combination thereof. In some embodiments, data store106is a persistent storage that is capable of storing data as well as data structures to tag, organize, and index the data. Data store106may be hosted by one or more storage devices, such as main memory, magnetic or optical storage based disks, tapes or hard drives, NAS, SAN, and so forth.
In some embodiments, data store106may be a network-attached file server, while in other embodiments data store106may be some other type of persistent storage such as an object-oriented database, a relational database, and so forth, that may be hosted by communication services platform120or one or more different machines coupled to the communication services platform120via the network104. The client devices110A-110Z (generally referred to as “client device(s)110” herein) may each include a type of computing device such as a desktop personal computer (PC), laptop computer, mobile phone, tablet computer, netbook computer, wearable device (e.g., smart watch, smart glasses, etc.), network-connected television, smart appliance (e.g., video doorbell), any type of mobile device, etc. In some embodiments, client devices110can be one or more computing devices (such as a rackmount server, a router computer, a server computer, a personal computer, a mainframe computer, a laptop computer, a tablet computer, a desktop computer, etc.), data stores (e.g., hard disks, memories, databases), networks, software components, or hardware components. In some embodiments, client devices110A through110Z may also be referred to as “user devices.” In some embodiments, a client device, such as client device110Z, can implement or include one or more applications, such as application154(also referred to as “client application154” herein) executed at client device110Z. In some embodiments, application154can be used to communicate (e.g., send and receive information) with communication services platform120. In some embodiments, application154can implement user interfaces (e.g., graphical user interfaces (GUIs)) that may be webpages rendered by a web browser and displayed on the client device110Z in a web browser window. In another embodiment, the user interfaces of client application154may be included in a stand-alone application downloaded to the client device110Z and natively running on the client device110Z (also referred to as a “native application” or “native client application” herein). In some embodiments, client devices110can communicate with communication services platform120using one or more function calls, such as application programming interface (API) function calls (also referred to as “API calls” herein). For example, the one or more function calls can be identified in a request using one or more application layer protocols, such as HyperText Transfer Protocol (HTTP) (or HTTP secure (HTTPS)), and that are sent to the communication services platform120from the client device110Z implementing application154. In some embodiments, the communication services platform120can respond to the requests from the client device110Z by using one or more API responses using an application layer protocol. Similarly, communication services platform120can communicate with one or more communication channels114A-114Z using API function calls. In some embodiments, one or more of client devices110can be identified by a uniform resource identifier (URI), such as a uniform resource locator (URL). For example, communication services platform120can send an API call to client device110Z addressed to a URL specific to the client device110Z. In some embodiments, the communication services platform120can be identified by a URI. For instance, the API call sent by a client device110to communication services platform120can be directed to the URL of communication services platform120.
In some embodiments, the APIs used to access the conversations system122of the communication services platform120can be different from the APIs used to access the voice system124of communication services platform120. In some embodiments, conversations system122and voice system124can communicate between one another using APIs. In some embodiments, the APIs used to communicate between conversations system122and voice system124may be private APIs that are not accessible by client devices110(or client devices112). In some embodiments, client devices112A-112Z (generally referred to as “client device(s)112” herein) may be similar to client devices110. In some embodiments, client devices112can include one or more telephony devices. A telephony device can include a Public Switched Telephone Network (PSTN)-connected device, such as a landline phone, cellular phone, or satellite phone, for example. In some embodiments, a telephony device can also include an internet addressable voice device (e.g., non-PSTN telephony device), such as Voice-Over-Internet-Protocol (VOIP) phones, or Session Initiation Protocol (SIP) devices, for example. In some embodiments, a telephony device can include one or more messaging devices, such as a Short Message Service (SMS) network device that, for example, uses a cellular service to exchange SMS messages or Multimedia Messaging Service (MMS) messages. In some embodiments, the communication services platform120may include one or more computing devices (such as a rackmount server, a router computer, a server computer, a personal computer, a mainframe computer, a laptop computer, a tablet computer, a desktop computer, etc.), data stores (e.g., hard disks, memories, databases), networks, software components, or hardware components that may be used to provide a user with access to data or services. Such computing devices may be positioned in a single location or may be distributed among many different geographical locations. For example, communication services platform120may include a plurality of computing devices that together may comprise a hosted computing resource, a grid computing resource or any other distributed computing arrangement. In some embodiments, communication services platform120may correspond to an elastic computing resource where the allotted capacity of processing, network, storage, or other computing-related resources may vary over time. In some embodiments, communication services platform120provides one or more API endpoints166that can expose services, functionality or content of the communication services platform120to one or more of client devices110or communication channels114A-114Z. In some embodiments, an API endpoint166can be one end of a communication channel, where the other end can be another system, such as a client device110Z or communication channel114Z. In some embodiments, the API endpoint166can include or be accessed using a resource locator, such as a uniform resource locator (URL), of a server or service. The API endpoint166can receive requests from other systems, and in some cases, return a response with information responsive to the request. In some embodiments, HTTP or HTTPS methods can be used to communicate to and from API endpoint166. In some embodiments, the API endpoint166(also referred to as a “request interface” herein) can function as a computer interface through which communication requests, such as message requests, are received and/or created.
The communication services platform120may include one or more types of API endpoints. In some embodiments, the API endpoint166can include a messaging API and/or voice API whereby external entities or systems can send a communication to create message content and/or request sending of a message and/or request voice services that are provided via voice system124. The API (e.g., message API and/or voice API) may be used in programmatically creating message content and/or requesting sending of one or more messages. In some embodiments, the API is implemented in connection with a multitenant communication service wherein different accounts (e.g., authenticated entities) can submit independent requests. These requests made through the API can be managed with consideration of other requests made within an account and/or across multiple accounts on the communication service. In some embodiments, the API of the API endpoint166may be used in initiating general messaging or communication requests. For example, a messaging request may indicate one or more destination endpoints (e.g., recipient phone numbers), message content (e.g., text and/or media content), and possibly an origin endpoint (e.g., a phone number to use as the “sending” phone number). In some embodiments, the API of the API endpoint166may be any suitable type of API such as a REST (Representational State Transfer) API, a GraphQL API, a SOAP (Simple Object Access Protocol) API, and/or any suitable type of API. In some embodiments, the communication services platform120can expose through the API, a set of API resources which when addressed may be used for requesting different actions, inspecting state or data, and/or otherwise interacting with the communication platform. In some embodiments, a REST API and/or another type of API may work according to an application layer request and response model. An application layer request and response model may use HTTP (Hypertext Transfer Protocol), HTTPS (Hypertext Transfer Protocol Secure), SPDY, or any suitable application layer protocol. Herein HTTP-based protocol is described for purposes of illustration rather than limitation. The disclosure should not be interpreted as being limited to the HTTP protocol. HTTP requests (or any suitable request communication) to the communication services platform120may observe the principles of a RESTful design or the protocol of the type of API. RESTful is understood in this document to describe a Representational State Transfer architecture. The RESTful HTTP requests may be stateless, thus each message communicated contains all necessary information for processing the request and generating a response. The API service can include various resources, which act as endpoints that can specify requested information or requesting particular actions. The resources can be expressed as URI's or resource paths. The RESTful API resources can additionally be responsive to different types of HTTP methods such as GET, PUT, POST and/or DELETE. In some embodiments, the API endpoint166can include a request instruction module that can be called within an application, script, or other computer instruction execution. For example, a computing platform may support the execution of a set of program instructions where at least one instruction within a script or other application logic is used in specifying a message request and communicating that request. 
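As an illustration of such a message request, the following Python snippet posts a request to a hypothetical REST-style messaging endpoint over HTTPS. The URL, resource path, field names, and credentials are placeholder assumptions rather than the platform's actual API surface.

```python
# Illustrative sketch of a programmatic (A2P-style) message request to a REST-style endpoint.
import requests

API_URL = "https://api.example-platform.com/v1/messages"  # hypothetical endpoint URL

payload = {
    "to": "+15555550100",      # destination endpoint (recipient phone number)
    "from": "+15555550199",    # origin endpoint (the "sending" phone number)
    "body": "Hello, friend!",  # message content (text and/or media references)
}

# A stateless HTTPS POST that carries all information needed to process the request.
response = requests.post(API_URL, json=payload, auth=("account_id", "api_token"))
response.raise_for_status()
print(response.json())  # e.g., an identifier and status for the created message
```

Equivalent requests could target other addressable API resources (e.g., conversation or voice resources) using GET, PUT, POST, or DELETE methods, consistent with the RESTful design described above.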
In some embodiments, the API endpoint166can include a console, administrator interface, or other suitable type of user interface. Such a user-facing interface can be a graphical user interface. Such a user interface may additionally work in connection with a programmatic interface. In some embodiments, a request, such as a message request, can include a data object characterizing the properties of a message. In some embodiments, the communication services platform120is associated with message requests that are programmatically initiated (e.g., an application-to-person (A2P) message). In some embodiments, the message request could be one initiated from an inbound received message. In some embodiments, a request (e.g., message request and/or voice request) can include one or more of one or more destination endpoints, one or more origin endpoints, and message content. In some embodiments, one or more of these properties may be specified indirectly such as through system or account configuration or messaging conversation identifier. For example, all messages may be automatically assigned an origin endpoint that is associated with an account. In some embodiments, the message content can include any suitable type of media content including text, audio, image data, video data, multimedia, interactive media, data, and/or any suitable type of message content. In an illustrative example, used for illustration rather than limitation, communication services platform120can include a Software as a Service (SaaS) platform that can at least in part provide one or more services, such as communication services, to one or more clients. The SaaS platform may deploy services, such as software applications, to one or more clients for use as an on-demand service. For example, the SaaS platform may deliver and/or license software applications on a subscription basis while also hosting, at least in part, the software application. The licensed software applications can, at least in part, be hosted on the infrastructure, such as the cloud computing resources of the SaaS platform. In some embodiments, communication services platform120, as noted above, can provide communication services that include, but are not limited to, voice services, messaging services (e.g., SMS services or MMS services), email services, video services, chat messaging services (e.g., internet-based chat messaging services), or a combination thereof. Communication operations using the communication services can use one or more of a communication network (e.g., Internet), telecommunications network (e.g., such as a cellular network, satellite communication network, or landline communication network), or a combination thereof, to transfer communication data between parties. In some embodiments, the conversations system122can function to interface with one or more communication network(s) and/or service(s) for communication of a conversation (e.g., a messaging conversation, such as SMS, MMS, and/or chat messaging). In some embodiments, the conversations system122can include an interface to one or more carrier-based communication routes used in sending SMS, MMS, and/or other carrier-based messages. There may be multiple carrier-based communication routes that serve as different optional “routes” when sending communications over a carrier-based network (e.g., a mobile network).
The conversations system122may additionally or alternatively include an interface to one or more over-the-top (OTT) communication channels which may be offered by a third-party messaging platform (e.g., proprietary social media messaging, messaging applications, etc.). A route can refer to a communication delivery path, defined by a series of one or more of computers, routers, gateways and/or carrier networks through which the communication is transferred from a source computer to a destination computer (e.g., through which the transmission of a message occurs). For example, the same route may be used to transfer messages using different communication channels, and the same communication channel may be used to transfer messages using different routes. In some example embodiments, different channels correspond to different applications on a receiving device. For example, a smart phone may have one application to handle SMS messages, another application to handle email, and a third application to handle voicemail. Alternatively, some applications may handle multiple communication channels. For example, one application may handle both SMS and MMS messages. In some embodiments, when the conversations system122elects to send a message using a carrier-based channel, the message is communicated to an appropriate carrier connection for routing to the destination endpoint. Carrier-based channels can use SMPP (Short Message Peer-to-Peer protocol) for communicating to an aggregator or another suitable gateway such that the SMS/MMS message is transferred over a carrier network. Once transmitted to the carrier network, the message can be relayed appropriately to arrive at the intended destination. A message in transit may have multiple routing segments that are used in the delivery to an end destination device. For example, the conversations system122can include an interface to one or more SMS Gateways that enable a computer to send and receive SMS text messages to and from a SMS capable device over the global telecommunications network (normally to a mobile phone). The SMS Gateway translates the message sent and makes it compatible for delivery over the network to be able to reach the recipient. The different SMS gateways (or more generally message gateways) can serve as different route options when the conversations system122is determining a channel and/or route to be used for one or more message transmissions. In some embodiments, SMS Gateways can route SMS text messages to the telco networks via an SMPP interface that networks expose, either directly or via an aggregator that sells messages to multiple networks. SMPP, or Short Message Peer-to-Peer, is a protocol for exchanging SMS messages between Short Message Service Centers (SMSCs) and/or External Short Messaging Entities (ESMEs). In some embodiments, the destination of a message may be used in determining the candidate message routes (and/or channels). For example, a phone number of a destination endpoint or another identifier associated with the intended recipient of the message may be used to identify the destination network of the intended recipient. Each destination network may be assigned a Mobile Country Code (MCC)/Mobile Network Code (MNC) pair that identifies the specific destination network. In some embodiments, communication services platform120includes a conversations system122that can use the phone number associated with the intended recipient of the message to look up the MCC/MNC pair identifying the destination network.
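A minimal Python sketch of this lookup is shown below, assuming hypothetical in-memory directories; the prose example in the next paragraph elaborates the same MCC/MNC directory and routing provider directory lookups.

```python
# Hypothetical MCC/MNC directory mapping phone-number prefixes to destination networks.
MCC_MNC_DIRECTORY = {
    "+1310": ("310", "260"),   # (MCC, MNC) pair identifying a destination network
    "+1512": ("310", "410"),
}

# Hypothetical routing provider directory: candidate routes per MCC/MNC pair.
ROUTING_PROVIDER_DIRECTORY = {
    ("310", "260"): ["provider-a/route-1", "provider-b/route-2"],
    ("310", "410"): ["provider-c/route-1"],
}


def candidate_routes(recipient_number: str) -> list:
    """Look up the MCC/MNC pair for a recipient and return the candidate routes."""
    for prefix, mcc_mnc in MCC_MNC_DIRECTORY.items():
        if recipient_number.startswith(prefix):
            return ROUTING_PROVIDER_DIRECTORY.get(mcc_mnc, [])
    return []


print(candidate_routes("+15125550123"))  # -> ["provider-c/route-1"]
```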
For example, the conversations system122can determine the MCC/MNC pair using an MCC/MNC directory that lists the MCC/MNC pair corresponding to each phone number. In some embodiments, the MCC/MNC directory may be stored in a routing provider storage. Alternatively, the MCC/MNC directory may be stored at some other network accessible location. In either case, the conversations system122can use the phone number associated with the intended recipient of the message to query the MCC/MNC directory and identify the MCC/MNC pair that identifies the corresponding destination network. In some embodiments, the conversations system122can use the MCC/MNC pair retrieved from the MCC/MNC directory to identify candidate routing providers and routes that are available to deliver a message to the destination network identified by the MCC/MNC pair. For example, the routing provider storage may include a routing provider directory that lists each MCC/MNC pair serviced by the conversations system122and the corresponding routing providers and routes available for use with each MCC/MNC pair. That is, the routing provider directory can list the routing providers and routes that are available to the conversations system122to deliver messages to the destination network identified by each MCC/MNC pair listed in the routing provider directory. In some embodiments, voice system124of communication services platform120can enable the placement of an outbound voice call and/or routing of an inbound voice call. A voice call (also referred to as a “call” herein) can refer to a telephone call between at least two user devices to communicate two-way voice data (e.g., voice sound) in real-time. An outbound voice call can refer to a voice call from a client device110associated with an account (e.g., one or more of an organization's account or user account) of the communication services platform120, and to another device that may not be associated with an account. An inbound voice call can refer to a voice call from a device that may not be associated with an account, and to a client device110associated with an account. It can be appreciated that a voice call between two client devices110that are associated with an account can be performed using communication services platform120. Such voice calls can be considered inbound or outbound voice calls relative to the particular client device110. In some embodiments, voice system124can include one or more voice services used in conjunction with a voice call. In some embodiments, the one or more voice services can include a transcription service that transcribes speech to text. In some embodiments, the one or more voice services can include a recording service that can record the audio data of the voice call. In some embodiments, the one or more voice services can include a voice call queue service that can queue inbound voice calls and release the queued voice call pursuant to user-defined logic. In some embodiments, the one or more voice services can include voice mailbox services that store voice messages of at least inbound calls. In some embodiments, the one or more voice services can include an interactive voice response (IVR) service that interacts with callers and gathers information from them by giving the callers choices via a menu, and then performs the actions based on the answers of the caller through the telephone keypad or through voice response.
For example, the IVR service can allow a caller to interact with the back-end telephony system, such as voice system124, by pressing keys that emit dual-tone multi-frequency (DTMF) signals or saying words that are processed by a speech recognition system. In some embodiments, the one or more voice services can include a conference call service that can connect three or more devices in a single call. In some embodiments, communication services platform120can include a multitenant system. Multitenancy can refer to a mode of operation of software applications where multiple independent instances of one or multiple applications operate in a shared computer environment. In some embodiments, the instances (tenants) can be logically isolated, but physically integrated. The degree of logical isolation can be complete, but the degree of physical integration can vary. The tenants (application instances) can be representations of organizations that obtain access to the multitenant system. The tenants may also be multiple applications competing for shared underlying resources. Multiple organizations can access the resources of communication services platform120without any indication that the resources are shared between the multiple organizations. The data of each of the organizations can be logically isolated from one another such that each organization has access to their own data but not the data of other organizations in the multitenant system. In some embodiments, communication services platform120can include a single tenant system. An organization can be an example of an entity, such as a legal entity, that includes multiple people and that has a particular purpose. A non-limiting example of an organization includes a corporation (e.g., authorized by law to act as a single entity or legal entity). In some embodiments, multiple organizations can include one or more organizations that are independent or distinct from the other organizations. For example, a first organization can be corporation A and a second organization can be corporation B. Corporation A can be considered an independent legal entity from corporation B. Each of corporation A and corporation B can make independent decisions and have a different legal or corporate structure. In some embodiments, a “user” may be represented as a single individual. However, other embodiments of the disclosure encompass a “user” being an entity controlled by a set of users and/or an automated source. For example, a set of individual users federated as one or more departments in an organization may be considered a “user.” In general, functions described in one embodiment as being performed by the communication services platform120can also be performed on the client devices110A through110Z in other embodiments (and vice versa), if appropriate. In addition, the functionality attributed to a particular component can be performed by different or multiple components operating together. The communication services platform120can also be accessed as a service provided to other systems or devices through appropriate APIs. As noted above and in some embodiments, a communication channel can refer to an entity, such as a third-party entity (e.g., organizations different from communication services platform120), that offers services, software or hardware (or a combination thereof) through which messages can be sent to recipient devices.
A third party can refer to an entity, such as organization or business (e.g., a different legal entity than communication services platform120) that is distinct from another entity, such as the entity controlling or owning the communication services platform120. In some embodiments, the communication services offered by communication channels114A-114Z can be integrated into communication services platform120. In some embodiments, the communication services offered by communication channels114A-114Z can include messaging services. In some embodiments, messaging services can include one or more of a short messaging service (SMS) offered by an SMS channel, a multimedia messaging service (MMS) offered by an MMS channel, or an instant messaging service (e.g., chat messaging) offered by an instant messaging service channel. In some embodiments, communication channels114A-114Z can also include a voice channel. For example, the voice channel may implement an application to send or receive calls. In another example, the voice channel may include a telecommunication service provider and/or PSTN voice services. In some embodiments, an instant messaging service is different from an electronic mail (email) service. In some embodiments, the communication channels114A-114Z can include an email channel. In some embodiments, the communication channels114A-114Z exclude an email channel. In some embodiments, email messages can use a standard protocol for sending and receiving email messages. The standard protocol can be used across different platforms. In some embodiments, instant messages can use protocols specific to a platform that may or may not be compatible with other platforms. In some embodiments, instant messaging can differ from email in that conversations over instant messaging can happen in real-time, while conversations over email are not in real-time. In another illustrative example, client device110Z may want to send an SMS message. In some embodiments, communication services platform120and/or client devices110include an instance of messaging conversation and voice integration module151. In some embodiments, messaging conversation and voice integration module151of client device110Z, of communication services platform120, or a combination thereof can perform one or more aspects of the disclosure. In some embodiments, an entity (e.g., organization) can be associated with an account of communication services platform120. Within the particular account of the organization, one or more user accounts of the communication services platform120may be associated with different users of the organization. In some embodiments, communication services platform120can provision telephone numbers (e.g., 10-digit long code or short code) to an organization's account and assign the telephone numbers to various user accounts associated with the organization. The assignment of telephone numbers can be flexible such that the assignment of a telephone number can be one to one (e.g., one telephone number to one user account) or one to many (e.g., one telephone number to many user accounts). In some embodiments, communication services platform120can dynamically assign or transfer the telephone numbers. For example, user account A may be assigned telephone number A. Telephone number A can be transferred and assigned to another user account Z and unassigned from user account A, or can be assigned to user account Z and user account A, for instance. 
In some embodiments, voice calls and messages can be dynamically routed or sent to and from different telephone numbers. For instance, a user account A may be assigned telephone number A. Telephone number A may have an area code corresponding to Texas. User account A, via application154of client device110A, sends, via communication services platform120, a message A to an end user device. The end user device can be associated with a telephone number with an area code associated with the state of California. Communication services platform120can associate a telephone number with a California area code to the message conversation and send message A to end user device from the associated telephone number with a California area code. From the perspective of the end user device, the message A can appear to be sent from the telephone number with a California area code, rather than from the telephone number A with a Texas area code. In some embodiments, the telephone number of the client device110(e.g., telephone number assigned to the client device110by the telecommunications carrier) can be different than the telephone number that is assigned to the user account associated with the client device110. In some embodiments, the client device110may not have a telephone number assigned by a telecommunications carrier. For instance, the client device110A may be a desktop computer. In some embodiments, the client device110A can be identified by an internet protocol (IP) address and can send messages of the message conversation using a protocol such as HTTP over TCP/IP (transmission control protocol) or can place a voice call using a Voice over IP (VoIP) protocol (e.g. SIP) via application154, for example. Although embodiments of the disclosure are discussed in terms of communication service platforms, embodiments may also be generally applied to any type of platform, system or service. In situations in which the systems discussed here collect personal information about users, or may make use of personal information, the users may be provided with an opportunity to control whether the communication services platform120collects user information, or to control whether and/or how to receive content from the communication services platform120that may be more relevant to the user. In addition, certain data may be treated in one or more ways before it is stored or used, so that personally identifiable information is removed. For example, a user's identity may be treated so that no personally identifiable information can be determined for the user, or a user's geographic location may be generalized where location information is obtained (such as to a city, ZIP code, or state level), so that a particular location of a user cannot be determined. Thus, the user may have control over how information is collected about the user and used by the communication services platform120. FIG.2illustrates an example system architecture200used for message conversations and inbound and outbound voice calls, in accordance with some embodiments of the disclosure. Components ofFIG.1are used to help describe aspects ofFIG.2. The system architecture200(also referred to as “system” herein) includes a communication services platform120, a data store106, one or more client devices110A through110Z, one or more communication channels114Z and one or more end user devices210. End user devices210can be similar to client devices110A-110Z or client devices112A-112Z, as described with respect toFIG.1. 
As illustrated, client devices110can communicate with communication services platform120using application154. In some embodiments, instances of application154are provided by communication services platform120to client devices110A through110Z to facilitate messaging conversations and/or voice calls between one or more of client device110and end user devices210. In an illustrative example, users of client devices110can be part of an organization that uses one or more communication services provided by communication services platform120. The users of the client devices110can each be assigned a user account (e.g., unique user account) that is associated with the organization and that allows access to the communication services provided by communication services platform120. The organization may use the communication services platform120to facilitate messaging conversations and/or voice calls with end users (e.g., customers represented by end user device210). For instance, client device110A, via application154, can conduct a text messaging conversation and/or a voice call with a customer (e.g., end user device210). As described above, a user of client device110A (associated with user account A) using application154may be participating in a messaging conversation (e.g., exchanging text messages) with an end user associated with end user device210. The user of client device110A may want to initiate an outbound voice call via the application154to the end user of the messaging conversation. To such end, the user of client device110A may initiate an outbound voice call via application154. Outbound voice calls are further described with respect toFIG.3A. In some embodiments, an end user device210may place an inbound call to a telephone number that is associated with an organization consuming communication services offered by communication services platform120. Inbound voice calls are further described with respect toFIGS.3B and3C. In some embodiments, data store106can include messaging conversation data232(also referred to as “conversation data” herein). Messaging conversation data232can include message content236of one or more messaging conversations (e.g., the message content of the individual messages of a message conversation). For example, the message content236can include a series of text message exchanges between participants of the message conversation (e.g., the text content that is displayed at one or more of the sender's device or recipient's device (e.g., “Hello, friend!”)). In some embodiments, the messaging conversation data232can include metadata234associated with the one or more messaging conversations. Metadata234associated with a messaging conversation can include metadata associated with the messaging conversation generally and/or metadata associated with individual messages of a messaging conversation. For example, the metadata234can include the names of the participants of the conversation, the telephone numbers of the devices used in the messaging conversation, time and date indicating when each individual message of the message conversation was sent and/or received, and so forth. In some embodiments, the metadata234can include an identifier of a destination endpoint of a message (e.g., a recipient's telephone number or IP address, etc.). In some embodiments, the metadata234can include an identifier of an origin start point (also referred to as “start point identifier” herein) of a message or voice call (e.g., a sender's carrier telephone number or IP address).
In some embodiments, the metadata234can include an identifier of the communication services platform120, such as a URL that identifies the communication services platform120or a service thereof. In some embodiments, the metadata234can include one or more user account identifiers identifying user accounts that participated or are participating in the messaging conversation. In some embodiments, the metadata234can include the user names of the participants of the messaging conversation. In some embodiments, the metadata234can include the time and date the messaging conversation was created. In some embodiments, the metadata234can include an identifier of the user (e.g., user account) that created the messaging conversation. In some embodiments, the metadata234can include the time and date each message of a messaging conversation was sent. In some embodiments, each message conversation and/or the associated metadata can be associated with a messaging conversation identifier238. In some embodiments, the messaging conversation identifier238can include a unique identifier that is specific to the particular messaging conversation and the associated metadata. In some embodiments, the messaging conversation identifier238can be used as an entry of a record, such as a look up table, such that the messaging conversation identifier238can be used to identify the corresponding messaging conversation and/or associated metadata. In some embodiments, data store106can include user-defined routing logic240. User-defined routing logic240can be configurable by a user and can include instructions or rules indicating the particular communication services (e.g., voice call services and/or message conversation services) to invoke before, during or after a voice call or message conversation. For example, the user-defined routing logic240can include instructions to transcribe a call, to implement IVR during a particular part of the call, or to invoke a voice message during another part of the call (e.g., “Welcome”). In some embodiments, user-defined routing logic240can include instructions indicating how a voice call is to be routed. For example, the user-defined routing logic240can instruct communication services platform120to use a local telephone number when placing an outbound call so that the area code of the caller telephone number is the same area code as the recipient telephone number. In another example, the user-defined routing logic240can include instructions indicating to which user account an inbound call is to be routed. In some embodiments, user-defined routing logic240is configurable by users via APIs made available by communication services platform120. In some embodiments, the user-defined routing logic240can be retrieved from a server (e.g., third-party client server) and stored at communication services platform120. In some embodiments, message conversations and voice calls between a client device110and end user device210are conducted via communication services platform120. For example, the client device110Z can be a mobile phone that has a telephone number assigned by the carrier (e.g., carrier telephone number Z). Additionally, the user account associated with client device110Z may be assigned a telephone number provisioned by communication services platform120(e.g., provisioned telephone number Z). Client device110Z (e.g., user account Z) can send a message of the messaging conversation, via communication services platform120to the end user device210using HTTP over TCP/IP.
The message can be addressed to the communication services platform120using a communication services platform identifier, such as a URL. The communication services platform120can determine the destination endpoint (e.g., telephone number B of the end user device210) based on the messaging conversation identifier. For instance, using the messaging conversation identifier, communication services platform120can identify, from a record (e.g., lookup table), corresponding metadata that includes the telephone number of the end user device210. Communication services platform120can identify the user-defined routing logic associated with the user account Z to determine from which provisioned telephone number the message is to be sent. Communication services platform120can send the message to the telephone number of the end user device210and from the provisioned telephone number identified using the user-defined routing logic, which may or may not be the telephone number Z assigned to client device110Z. In another example, the client device110Z can receive a message of a messaging conversation from end user device210. The message can be received by communication services platform120. Communication services platform120can use the provided telephone number to identify the destination endpoint identifier (also referred to as “endpoint identifier” herein). For instance, the endpoint identifier can include one or more of the user account associated with the messaging conversation or the identifier of the client device110Z associated with the user account (e.g., carrier telephone number of the client device110Z, telephone number assigned to client device110Z by communication services platform120, or IP address associated with the client device110Z). The communication services platform120can use a record, such as a lookup table, that associates the provided telephone number with the endpoint identifier. Communication services platform120can send the message to client device110Z using the endpoint identifier (e.g., carrier telephone number of client device110Z or IP address). As noted above, the carrier telephone number of the client device110Z may be different from the telephone number assigned to the user account by the communication services platform120, in some embodiments. Elements ofFIG.1andFIG.2are used with respect toFIG.3A,FIG.3B, andFIG.3Cto help describe features of diagrams300,350, and375. The operations described with respect toFIG.3A,FIG.3BandFIG.3Care shown to be performed serially for the sake of illustration, rather than limitation. Although shown in a particular sequence or order, unless otherwise specified, the order of the operations can be modified. Thus, the illustrated embodiments should be understood only as examples, and the illustrated operations can be performed in a different order, while some operations can be performed in parallel. Additionally, one or more operations can be omitted in some embodiments. Thus, not all illustrated operations are required in every embodiment, and other process flows are possible. In some embodiments, the same, different, fewer, or greater operations can be performed. In some embodiments, operations shown in one or more of a particular diagram300,350, and375can be combined with operations of another one of diagram300,350and375. Referring toFIGS.3A,3B, and3C, the respective diagrams300,350and375illustrate communication services platform120, client device110A, server311, and end user device210.
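Before turning to the sequence diagrams ofFIGS.3A-3C, the two lookups just described can be pictured as simple table lookups: outbound messages resolve a messaging conversation identifier to the end user's telephone number, and inbound messages resolve the provisioned telephone number they were sent to back to an endpoint identifier for the user account. The record layout below is an assumption for illustration only.

```python
# Outbound: messaging conversation identifier -> destination endpoint
# (the end user device's telephone number, kept server-side).
conversation_to_destination = {
    "conv-001": "+14155550123",
}

# Inbound: provisioned telephone number -> endpoint identifier for the
# associated user account (carrier number, assigned number, or IP address).
provisioned_to_endpoint = {
    "+14155550999": {"user_account": "user-account-Z", "endpoint": "10.0.0.42"},
}

def resolve_outbound_destination(conversation_id: str) -> str:
    """Where should a message sent in this conversation be delivered?"""
    return conversation_to_destination[conversation_id]

def resolve_inbound_endpoint(provisioned_number: str) -> dict:
    """Which user account and device endpoint should receive a message
    addressed to this provisioned telephone number?"""
    return provisioned_to_endpoint[provisioned_number]
```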
The server311can include a server machine or any other type of client device as described above with respect to client device110. In some embodiments, one or more of communication services platform120, client device110A using application154, and in particular messaging conversation and voice integration module151can implement the operations depicted in diagrams300,350and375. In some embodiments, client device110A using client account A conducts a messaging conversation and/or voice call with end user device210. FIG.3Aillustrates a sequence diagram of communications between components of a system used for messaging conversations and outbound voice calls, in accordance with embodiments of the disclosure. In some embodiments, the messaging conversation can include messages using one of a short messaging service (SMS) channel, a multimedia messaging service (MMS) channel, or an instant messaging service channel. In some embodiments, the messaging conversation excludes communication using an electronic mail (e-mail) channel. In some embodiments, at operation302client device110A (e.g., client account A) sends a message A, such as an SMS message, of a messaging conversation to end user device210via communication services platform120. In some embodiments, the message is sent by client device110A using an API call directed to a URL associated with the communication services platform120. In some embodiments, to send message A via communication services platform120, client device110A can send message information that includes one or more of the messaging conversation identifier, the message A (e.g., message content) and metadata. For example, the client device110A can send the message content, a messaging conversation identifier, and some metadata to the communication services platform120. For instance, the metadata can include the account identifier of the user account associated with the client device110A. In some embodiments, client device110A can display a GUI, via application154, having a messaging element GUI that allows a user of client device110A to send and receive messages on behalf of the corresponding user account (e.g., user account A). The user can enter message content into the messaging element GUI and send the message content to a designated recipient. At operation304, communication services platform120can store the message A (e.g., message content) and the metadata at data store106, for example. The message A and the metadata can be associated with the corresponding messaging conversation identifier using a record, such as a look up table. At operation306, communication services platform120can send message A to end user device210. In some embodiments, communication services platform120can use the messaging conversation identifier to identify a corresponding endpoint identifier, such as the telephone number of end user device210. It can be noted that in some embodiments, a destination endpoint, such as a telephone number of end user device210is not stored or otherwise available at the client device110A. By not making the telephone number of the end user device210available to the client device110A, greater security of the telephone numbers of end users can be implemented, in accordance with some embodiments. In other embodiments, the destination endpoint, such as the telephone number of the end user device210can be stored at the client device110A and the metadata provided by the client device110A to the communication services platform120can include the destination endpoint. 
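Operation302, in which the client sends a message through the platform rather than directly to the end user, can be pictured as an HTTP request carrying the conversation identifier, the message content, and metadata, but no recipient telephone number. The endpoint URL, field names, and use of the requests library below are illustrative assumptions; the platform's actual API is not specified here.

```python
import requests  # assumed HTTP client; any HTTP library would serve

# Hypothetical payload for operation302: conversation identifier, message
# content, and metadata identifying the sending user account. Note that the
# end user's telephone number is absent; the platform resolves it server-side.
payload = {
    "conversation_id": "conv-001",
    "content": "Hello, friend!",
    "metadata": {"user_account": "user-account-A"},
}

# Placeholder URL standing in for whatever address identifies the platform.
response = requests.post(
    "https://communication-services.example.com/v1/messages",
    json=payload,
    timeout=10,
)
response.raise_for_status()
```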
For example, the communication services platform120can use the messaging conversation identifier to identify the destination endpoint (e.g., telephone number of end user device210) from the metadata associated with the messaging conversation and stored at data store106. At operation308, end user device210sends a message (e.g., message B) to client device110A, via communication services platform120. End user device210sends the message to the telephone number identified in the received message A. As described herein, the telephone number identified in the received message A can be a telephone number that is provisioned by communication services platform120. At operation310, communication services platform120can store the message (e.g., message B) and the metadata at data store106, for example. For example, the message (e.g., message content) and metadata can be stored and associated with the messaging conversation identifier. At operation312, communication services platform120can identify a destination endpoint for message B. In some embodiments, communication services platform120can use the telephone number associated with message B to identify the user account and/or the endpoint identifier (e.g., carrier telephone number, assigned telephone number, or IP address of client device110A, etc.) associated with the user account. In some embodiments, the destination telephone number associated with message B can be the same number that is assigned to client device110A or different than the number assigned to client device110A by communication services platform120. Additional details regarding identifying a destination endpoint associated with a message are further described above with respect toFIG.2. At operation314, communication services platform120sends the message (e.g., message B) to client device110A using the endpoint identifier. Communication services platform120can also send metadata with the message, such as the name of the end user for display at the client device110A and/or the messaging conversation identifier. At operation316, the client device110A receives a request to place an outbound call. For example, the user of client device110A may use the GUI of application154to place a voice call. In an example, a user of client device110A may be participating in a messaging conversation with the user of end user device210and may desire to place a call. The user of client device110A may select a GUI element of the GUI of the application154to request that a call associated with the message conversation be placed. In another example, a message conversation that includes the recipient of the voice call as a participant may not exist. The user of client device110A may retrieve a contact (e.g., name) from the contact list of application154and select a GUI element of the GUI of the application154to place a call to the contact. At operation318, if no messaging conversations that have the recipient of the voice call as a participant are identified, client device110A requests the creation of a new message conversation. In some embodiments, client device110A can send a request to the communication services platform120to create a new message conversation. In response to the request, communication services platform120can generate a new messaging conversation identifier and return the messaging conversation identifier to the client device110A.
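Operations316through318can be read as follows: when no existing conversation includes the intended call recipient, the client asks the platform for a new conversation and receives back a freshly generated messaging conversation identifier. A sketch of that exchange, with hypothetical names throughout (the request/response round trip is collapsed into a direct function call for brevity):

```python
import uuid

# Server side (communication services platform): generate and remember a new
# messaging conversation identifier for the requesting user account.
conversations: dict[str, dict] = {}

def create_conversation(user_account: str, contact_name: str) -> str:
    conversation_id = f"conv-{uuid.uuid4().hex[:8]}"
    conversations[conversation_id] = {
        "created_by": user_account,
        "participants": [user_account, contact_name],
    }
    return conversation_id

# Client side (application 154): the returned identifier is what the client
# later includes in its request to place the voice call.
new_id = create_conversation("user-account-A", "Daniel")
print(new_id)   # e.g. conv-3f2a9c1d
```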
At operation320, if a message conversation that has the recipient of the voice call as a participant is identified, client device110A can identify the messaging conversation identifier that is associated with the identified message conversation. At operation322, responsive to the request to place an outbound voice call (operation316), client device110A can send a request, such as an API request, to communication services platform120to place an outbound voice call. The request to place the outbound call can be responsive to the initial request as described with respect to operation316. In some embodiments, the request identifies or includes the messaging conversation identifier. In some embodiments, the request can include a command (e.g., API command) indicating that a voice call is to be placed to a participant (e.g., unidentified participant) of the message conversation. In some embodiments, the telephone number of the end user device210is absent (e.g., not identified or included) from the request to place the voice call. In some embodiments, the user name of the recipient of the voice call is absent from the request to place the voice call. At operation324, communication services platform120determines whether the user account associated with the voice call request is authorized to place a voice call via communication services platform120. For example, an organization may subscribe to voice services. The voice services, provided by communication services platform120, may be accessible to one or more user accounts of the organization. Communication services platform120can identify the user account information from the metadata associated with the messaging conversation identifier and determine whether the particular user account is authorized to place a voice call. If the user account is authorized to place a voice call, communication services platform120can proceed to operation326. If the user account is not authorized to place a voice call, communication services platform120can send a notification back to client device110A indicating that the requested voice services are not authorized for the particular account, for example. At operation326, communication services platform120identifies user-defined routing logic240associated with the user account and/or the account of the corresponding organization. In some embodiments, the user-defined routing logic240may be stored at communication services platform120. In some embodiments, the user-defined routing logic240may be stored at a third-party server, such as server311. At operation328, in instances that the user-defined routing logic240is stored at a third-party server, such as server311, communication services platform120can request the server311for the user-defined routing logic240. In some embodiments, communication services platform120may not locally store the user-defined routing logic240and may request the user-defined routing logic240from server311for every voice call instance. In some embodiments, communication services platform120can obtain the user-defined routing logic240from the server311and store the user-defined routing logic240at communication services platform120for future use. At operation330, communication services platform120can retrieve the messaging conversation data using the messaging conversation identifier. As described above with respect to at leastFIG.2, communication services platform120can associate or index the messaging conversation data of a messaging conversation with the messaging conversation identifier.
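Operations322onward can be summarized as follows: the client requests a call by conversation identifier alone; the platform checks that the user account is entitled to voice services, obtains the user-defined routing logic (locally or from server311), and retrieves the conversation data. The sketch below also anticipates the recipient-filtering and call-placement steps described next; every name in it is an assumption made for illustration, not an interface defined by the disclosure.

```python
authorized_voice_accounts = {"user-account-A"}   # accounts entitled to voice services
cached_routing_logic: dict[str, dict] = {}       # routing logic stored at the platform
conversation_participants = {
    "conv-001": [
        {"name": "Agent A", "registered": True, "account": "user-account-A"},
        {"name": "Daniel", "registered": False, "number": "+14155550123"},
    ],
}

def fetch_routing_logic(account_id: str) -> dict:
    """Stand-in for requesting user-defined routing logic240from server311."""
    return {"caller_number": "+14155550999", "voice_services": ["transcription"]}

def handle_place_call(account_id: str, conversation_id: str) -> dict:
    # Operation 324: is this user account authorized to place voice calls?
    if account_id not in authorized_voice_accounts:
        raise PermissionError("voice services are not authorized for this account")

    # Operations 326/328: use stored routing logic if present, otherwise
    # obtain it from the third-party server and keep it for future use.
    logic = cached_routing_logic.get(account_id)
    if logic is None:
        logic = fetch_routing_logic(account_id)
        cached_routing_logic[account_id] = logic

    # Operation 330: retrieve the conversation data for the identifier.
    participants = conversation_participants[conversation_id]

    # Operation 332 (detailed below): the participant not registered at the
    # platform is treated as the recipient of the voice call.
    recipient = next(p for p in participants if not p["registered"])

    # Operations 334/336: apply the routing logic and place the call.
    return {"call_to": recipient["number"], "call_from": logic["caller_number"]}

print(handle_place_call("user-account-A", "conv-001"))
```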
In some embodiments, the messaging conversation data can identify the participants of the messaging conversation. In some embodiments, the messaging conversation data can include the information corresponding to the recipient identifiers and sender identifiers associated with the particular messaging conversation. For example, the information can include the telephone numbers of the recipient devices and/or sender devices. The information can include the names (e.g., friendly name, such as “John”) of the recipients and senders of the messages of the messaging conversation. In another example, the information can include the user account identifier(s) of the user accounts participating in the messaging conversation. At operation332, communication services platform120identifies the voice call recipient information (e.g. voice call recipient identifier) from the messaging conversation data. In some embodiments, the voice call recipient information can include the telephone number of the end user device210. In some embodiments, the voice call recipient information can include the name of the user of the end user device210. It can be noted, and as described above, that in some embodiments the telephone number of the end user device210is not made available to the client device110A. In some embodiments, the recipient telephone number can be secured at the communication services platform120and not provided to client device110A. It can also be noted that in some embodiments two or more participants can participate in a messaging conversation. For example, an end user, one or more employees of the organization (e.g., associated with different user accounts and/or client devices), and/or one or more bots (e.g., chat bots, such as a program that can interact, often autonomously, with end users) can be participants in a messaging conversation. In some embodiments, communication services platform120performs a filtering operation that implements filtering criteria to identify the recipient of the voice call and/or the corresponding recipient telephone number. In some embodiments, communication services platform120filters, among the participants of the messaging conversation, any participants that are registered at the communication services platform120(e.g., have user accounts and/or in the case of the bots have associated unique identifiers). The participants that are registered with the communication services platform120can be filtered from the participants of the messaging conversation, which can leave the remaining participant as the recipient of the voice call. In some embodiments, one or more of the recipient telephone number or the recipient name (e.g., “Daniel”) can be identified from the messaging conversation data. In other embodiments, other filtering techniques can be implemented to determine the recipient of the voice call. At operation334, communication services platform120executes user-defined routing logic identified at operation326or operation328. For example, communication services platform120can determine the provisioned telephone number that is to be used as the caller telephone number, which may or may not be different than the telephone number provisioned to client device110A. It can be noted that in some embodiments the user-defined routing logic includes instructions as to which voice services are to be executed before, during or after the call. At operation336, communication services platform120can place the voice call to the recipient telephone number. 
For example, the voice call can be placed over the PSTN. In some embodiments, additional metadata can be included with the voice call, including the caller's telephone number and/or caller's name. The caller's telephone number can be determined from the user-defined routing logic, as described above. At operation338, communication services platform120can route the voice call to client device110A. In some embodiments, the voice call is routed pursuant to user-defined routing logic. For example, the voice call can be routed in response to the recipient (e.g., user of end user device210) answering the voice call or in response to the voice call going to a voice mailbox. In another example, the voice call can be routed immediately following placement of the voice call to the end user device210so that the ring tones, if any, can be transmitted to the client device110A. At operation340, communication services platform120can create and store a voice call message and voice call metadata. In some embodiments, the voice call metadata can include one or more of the time of the voice call, the duration of the voice call, the participants of the voice call, the telephone number of the recipient of the voice call, the telephone number assigned to the client device110A (or the corresponding user account), and/or status identifiers of the voice call. The status identifiers of the voice call can indicate the status of the voice call attempt. For example, a completed identifier can indicate that the called recipient answered the call and was connected to the caller (e.g., user of client device110A). A busy identifier can indicate that a busy signal was received when trying to connect to the called recipient. A no-answer identifier can indicate that the called recipient did not pick up before the timeout period (e.g., specified in user-defined routing logic) passed. A failed identifier can indicate that communication services platform120was not able to place the voice call to the end user device210or route the call to the client device110A. In some embodiments, a message (e.g., voice call message) can be created as further described below with respect to operation342. In some embodiments, the voice call message can be a message that is part of the message conversation. In some embodiments, the voice call message can be associated with the messaging conversation identifier and stored at data store106of communication services platform120. For example, the message can be a text message that includes information corresponding to the voice call. For instance, a text message (e.g., voice call message) can include the called recipient's name and the time and/or duration of the voice call and/or an indication that a voice call was placed. At operation342, a message (e.g., voice call message) can be sent to client device110A. For example, an SMS message can be sent to the client device110A. The SMS message can include information pertaining to the voice call. In some embodiments, the voice call message can be displayed along with other messages of the message conversation (e.g., message thread). FIG.3Billustrates a sequence diagram of communications between components of a system used for messaging conversations and inbound voice calls for new messaging conversations, in accordance with embodiments of the disclosure. At operation352, communication services platform120receives an inbound voice call from end user device210.
In some embodiments, the inbound voice call can include the telephone number of the caller device (e.g., telephone number of end user device210) and the telephone number of called device. In some embodiments, the telephone number of the called device is a telephone number provisioned by communication services platform120. As noted herein, the provisioned telephone number may or may not be assigned to client device110A. At operation354, communication services platform120searches for relevant messaging conversation data based on the inbound voice call. In some embodiments, communication services platform120can use one or more of the telephone number of the caller device and the telephone number of the called device to search the messaging conversation data to identify a messaging conversation having telephone number(s) that correspond to the telephone number of the caller device and/or the telephone number of the called device. For example, communication services platform120can determine whether any of the messaging conversations associated with the organization are messaging conversations between the two telephone numbers (e.g., the caller device telephone number and the called device telephone number). It can be noted that the inbound call from end user device210may not include a messaging conversation identifier. In such instances, and in some embodiments, the communication services platform120can search the messaging conversation data to find a corresponding messaging conversation without using the messaging conversation identifier. At operation356, communication services platform120returns no relevant messaging conversation data. In some embodiments, a messaging conversation does not exist for the two telephone numbers associated with the inbound call. In such cases, communication services platform120may not identify any relevant messaging conversations (and no relevant messaging conversation identifier) that correspond to the telephone number(s) of the inbound call. At operation358, communication services platform120creates a new messaging conversation responsive to determining that no messaging conversation exists for the two telephone numbers. In some embodiments, creating a new messaging conversation can include creating a new messaging conversation identifier. In some embodiments, creating a new messaging conversation can also include creating a message with participants. The participants can be identified by the caller device telephone number and the called device telephone number. The two telephone numbers can be associated with the messaging conversation, in some embodiments. In some embodiments, the new messaging conversation can be stored at communication services platform120(e.g., at data store106). At operation360, communication services platform120can request server311for user-defined routing logic and server311can return the requested user-defined routing logic. Operation360can be similar to operation328ofFIG.3A, the features of which can apply here but are not repeated for the sake of brevity. In some embodiments, the communication services platform120can request user-defined routing logic instructing communication services platform120as to how to handle the inbound call to the particular telephone number. In some embodiments, the server311can identify the name and/or email of the user that associated with the called device telephone number, and send the information to communication services platform120responsive to the request. 
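Operations354through358amount to searching the stored conversations for one whose telephone numbers match the caller and called numbers, and creating a new conversation, with a new identifier, when no match is found. The sketch below assumes a simplified in-memory store and uuid-style identifiers; neither is mandated by the disclosure.

```python
import uuid

# Simplified view of stored messaging conversation data:
# conversation identifier -> the set of telephone numbers participating in it.
conversations_by_numbers: dict[str, set[str]] = {
    "conv-001": {"+14155550123", "+14155550999"},
}

def find_or_create_conversation(caller_number: str, called_number: str) -> str:
    """Return the conversation between these two numbers, creating one if needed."""
    wanted = {caller_number, called_number}
    for conversation_id, numbers in conversations_by_numbers.items():
        if numbers == wanted:          # operation 354: relevant conversation found
            return conversation_id
    new_id = f"conv-{uuid.uuid4().hex[:8]}"   # operation 358: new identifier
    conversations_by_numbers[new_id] = wanted
    return new_id

print(find_or_create_conversation("+14155550123", "+14155550999"))  # conv-001
print(find_or_create_conversation("+12065550111", "+14155550999"))  # newly created
```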
In some embodiments, the server311can identify to which user account the incoming call is to be routed. In some embodiments, the server311can identify information related to the caller device telephone number, such as an email address or name, and send the information to communication services platform120responsive to the request for user-defined routing logic240. At operation362, communication services platform120stores the user-defined routing logic. For example, communication services platform120can store the user-defined routing logic at data store106. In some embodiments, the stored user-defined routing logic can be used for subsequent voice calls, such as inbound voice calls, to the called device telephone number. At operation364, communication services platform120can add information obtained from the server311, such as the name of the employee or name of the end user, to the messaging conversation. For example, an email address or friendly name (e.g., “Joyce”) of the caller and/or called party can be added to the messaging conversation and/or associated message. At operation366, communication services platform120executes user-defined routing logic. Operation366can be similar to operation334ofFIG.3A, and implement similar features to operation334, which are not reproduced here for the sake of brevity. In some embodiments, the one or more voice services that correspond to or are defined by the user-defined routing logic can be executed before, during, and/or after the voice call. At operation368, communication services platform120can route the call to the client device110A pursuant to the user-defined routing logic. For example, the user-defined routing logic can indicate to which user account the inbound call is to be routed. Communication services platform120can identify the corresponding provisioned telephone number, which may be the same as or different from the called device telephone number, and route the inbound voice call to the provisioned telephone number or other address or telephone number. At operation370, communication services platform120can connect the end user device210to the client device110A. At operation372, communication services platform120can create and store a message (e.g., voice call message) and voice call metadata. Operation372can be similar to operation340ofFIG.3Aand include similar features, which are not repeated here for the sake of brevity. At operation374, communication services platform120can send a message that includes information pertaining to the voice call to client device110A. In some embodiments, the message can be an initial message of the messaging conversation. In some embodiments, the communication services platform120can also send a messaging conversation identifier associated with the new messaging conversation. Operation374can be similar to operation342ofFIG.3Aand include similar features, which are not repeated here for the sake of brevity. FIG.3Cillustrates a sequence diagram of communications between components of a system used for messaging conversations and inbound voice calls associated with existing messaging conversations, in accordance with embodiments of the disclosure. At operation376, client device110A and end user device210can conduct a messaging conversation via communication services platform120. In some embodiments, one or more of operations302through314ofFIG.3Acan be performed at operation376. At operation378, communication services platform120receives an inbound voice call from end user device210.
Operation378can be similar to operation352ofFIG.3Band include similar features, which are not repeated here for the sake of brevity. At operation380, communication services platform120searches for relevant messaging conversation data based on the inbound voice call. In some embodiments, communication services platform120can use one or more of the telephone number of the caller device and the telephone number of the called device to search the messaging conversation data to identify a messaging conversation having telephone number(s) that correspond to the telephone number of the caller device and/or the telephone number of the called device. Operation380can be similar to operation354ofFIG.3Band include similar features, which are not repeated here for the sake of brevity. At operation382, communication services platform120returns relevant messaging conversation data. In some embodiments, a messaging conversation does exist for the two telephone numbers associated with the inbound call. In such cases, communication services platform120may identify a corresponding messaging conversation (e.g., operation380) and return the messaging conversation identifier associated with the identified messaging conversation. At operation384, communication services platform120identifies the voice call recipient identifier from the messaging conversation data. In some embodiments, the voice call recipient identifier can include the telephone number assigned to the client device110A or another identifier, such as an email or user account identifier of the user of the client device110A. In some embodiments, to identify the voice call recipient identifier, the communication services platform120can use the messaging conversation identifier to identify all the participants in the messaging conversation. In some embodiments, communication services platform120can perform a filtering operation based on one or more filter criteria to identify, among the participants, the recipient of the voice call and/or the corresponding voice call recipient identifier. In some embodiments, communication services platform120filters, among the participants of the messaging conversation, any participants that are not registered at the communication services platform120(e.g., are not associated with a user account). The participants that are not registered with the communication services platform120can be filtered from the participants of the messaging conversation, which can leave the remaining participant as the recipient of the voice call. In some embodiments, if multiple participants remain subsequent to the filtering operation (e.g., several user accounts participating in the messaging conversation), communication services platform120can implement additional filtering criteria. For example, the communication services platform120can filter out participants by a time criterion such that the participant that last communicated with the end user of end user device210can be identified as the recipient of the voice call. At operation386, communication services platform120can identify the name of the voice call recipient. In some embodiments, to identify the voice call recipient name, communication services platform120can identify from the messaging conversation data the name that is associated with the identified recipient. At operation388, communication services platform120retrieves user-defined routing logic240.
Operation388can be similar to operation366ofFIG.3Band include similar features, which are not repeated here for the sake of brevity. At operation390, communication services platform120routes the voice call per user-defined routing logic. Operation390can be similar to operation368ofFIG.3Band include similar features, which are not repeated here for the sake of brevity. At operation392, communication services platform120connects end user device210to the voice call. Operation392can be similar to operation370ofFIG.3Band include similar features, which are not repeated here for the sake of brevity. At operation394, communication services platform120creates and stores a message, such as a voice call message, and voice call metadata. Operation394can be similar to operation372ofFIG.3Band include similar features, which are not repeated here for the sake of brevity. At operation396, communication services platform120sends the message, such as a voice call message, to client device110A. In some embodiments, the voice call message can be part of an existing messaging conversation between end user device210and client device110A. Operation396can be similar to operation374ofFIG.3Band include similar features, which are not repeated here for the sake of brevity. FIG.4illustrates method400. Method400and/or each of the aforementioned method's individual functions, routines, subroutines, or operations can be performed by a processing device, having one or more processing units (CPU) and memory devices communicatively coupled to the CPU(s). In some embodiments, the method400can be performed by a single processing thread or alternatively by two or more processing threads, each thread executing one or more individual functions, routines, subroutines, or operations of the method. Method400as described below can be performed by processing logic that can include hardware (e.g., processing device, circuitry, dedicated logic, programmable logic, microcode, hardware of a device, integrated circuit, etc.), software (e.g., instructions run or executed on a processing device), or a combination thereof. In some embodiments, method400is performed by messaging conversation and voice integration module151described inFIGS.1and2. Although shown in a particular sequence or order, unless otherwise specified, the order of the operations can be modified. Thus, the illustrated embodiments should be understood only as examples, and the illustrated operations can be performed in a different order, while some operations can be performed in parallel. Additionally, one or more operations can be omitted in some embodiments. Thus, not all illustrated operations are required in every embodiment, and other process flows are possible. In some embodiments, the same, different, fewer, or greater operations can be performed. It may be noted that elements ofFIG.1-3Cmay be used herein to help describeFIG.4. FIG.4depicts a flow diagram of an example method400for placing an outbound voice call, in accordance with some embodiments of the disclosure. In some embodiments, method400can be performed by communication services platform120, and in particular messaging conversation and voice integration module151of communication services platform120. At operation402, processing logic receives a first request to place a voice call. 
In some embodiments, processing logic receives, via a first application programming interface (API) call from a first client device associated with a first user account of a communication services platform, a first request to place a voice call. The first request can include a messaging conversation identifier that identifies a messaging conversation. In some embodiments, the messaging conversation includes first messages using one of a short messaging service (SMS) channel, a multimedia messaging service (MMS) channel, or an instant messaging service channel. At operation404, processing logic determines that the first user account is authorized to place the voice call via the communication services platform. At operation406, processing logic obtains conversation data (also referred to as “messaging conversation data”) associated with the messaging conversation identifier. In some embodiments, processing logic obtains conversation data associated with the messaging conversation identifier and stored in a data store, where the conversation data identifies multiple participants of the messaging conversation. At operation408, processing logic identifies, among the multiple participants of the messaging conversation, a recipient of the voice call. In some embodiments, processing logic identifies, among the multiple participants of the messaging conversation, a recipient of the voice call from the first client device based on filter criteria. In some embodiments, to identify, among the multiple participants of the messaging conversation, the recipient of the voice call from the first client device based on the filter criteria, processing logic filters, among the multiple participants, any participants that are registered at the communication services platform. In some embodiments, the identified participant (e.g., recipient) can be associated with a telephone number. The identified participant can be used to identify the associated telephone number of the end user device. At operation410, processing logic places the voice call to a telephone number of an end user device associated with the recipient. In some embodiments, the voice call placed to the telephone number of the end user device is from a different telephone number than the telephone number assigned to the first user account of the first client device. In some embodiments, the telephone number of the end user device is absent from the first request to place the voice call. In some embodiments, a user name of the recipient is absent from the first request to place the voice call. At operation412, processing logic routes the voice call to the first client device. In some embodiments, processing logic routes the voice call to the first client device subsequent to placing the voice call to the telephone number of the end user device. In some embodiments, routing the voice call to the first client device is responsive to an answering of the voice call at the end user device. At operation414, processing logic executes one or more voice services in accordance with the user-defined routing logic. In some embodiments, the user-defined routing logic identifies the one or more voice services of the communication services platform to implement for the voice call. At operation416, processing logic sends, to the first client device, a user name of the recipient for display concurrent with the voice call. At operation418, processing logic generates a new message comprising information related to the voice call.
At operation420, processing logic sends, to the first client device, the new message for display as part of the messaging conversation. FIG.5is a block diagram illustrating an exemplary computer system500, in accordance with an embodiment of the disclosure. The computer system500executes one or more sets of instructions that cause the machine to perform any one or more of the methodologies discussed herein. Set of instructions, instructions, and the like may refer to instructions that, when executed by computer system500, cause computer system500to perform one or more operations of messaging conversation and voice integration module151. The machine may operate in the capacity of a server or a client device in a client-server network environment, or as a peer machine in a peer-to-peer (or distributed) network environment. The machine may be a personal computer (PC), a tablet PC, a set-top box (STB), a personal digital assistant (PDA), a mobile telephone, a web appliance, a server, a network router, switch or bridge, or any machine capable of executing a set of instructions (sequential or otherwise) that specify actions to be taken by that machine. Further, while only a single machine is illustrated, the term “machine” shall also be taken to include any collection of machines that individually or jointly execute the sets of instructions to perform any one or more of the methodologies discussed herein. The computer system500includes a processing device502, a main memory504(e.g., read-only memory (ROM), flash memory, dynamic random access memory (DRAM) such as synchronous DRAM (SDRAM) or Rambus DRAM (RDRAM), etc.), a static memory506(e.g., flash memory, static random access memory (SRAM), etc.), and a data storage device516, which communicate with each other via a bus508. The processing device502represents one or more general-purpose processing devices such as a microprocessor, central processing unit, or the like. More particularly, the processing device502may be a complex instruction set computing (CISC) microprocessor, reduced instruction set computing (RISC) microprocessor, very long instruction word (VLIW) microprocessor, or a processing device implementing other instruction sets or processing devices implementing a combination of instruction sets. The processing device502may also be one or more special-purpose processing devices such as an application specific integrated circuit (ASIC), a field programmable gate array (FPGA), a digital signal processor (DSP), network processor, or the like. The processing device502is configured to execute instructions of the system architecture100and messaging conversation and voice integration module151for performing the operations discussed herein. The computer system500may further include a network interface device522that provides communication with other machines over a network518, such as a local area network (LAN), an intranet, an extranet, or the Internet. The computer system500also may include a display device510(e.g., a liquid crystal display (LCD) or a cathode ray tube (CRT)), an alphanumeric input device512(e.g., a keyboard), a cursor control device514(e.g., a mouse), and a signal generation device520(e.g., a speaker). The data storage device516may include a non-transitory computer-readable storage medium524on which is stored the sets of instructions of the system architecture100of messaging conversation and voice integration module151embodying any one or more of the methodologies or functions described herein. 
The sets of instructions of the system architecture100and of messaging conversation and voice integration module151may also reside, completely or at least partially, within the main memory504and/or within the processing device502during execution thereof by the computer system500, the main memory504and the processing device502also constituting computer-readable storage media. The sets of instructions may further be transmitted or received over the network518via the network interface device522. While the example of the computer-readable storage medium524is shown as a single medium, the term “computer-readable storage medium” can include a single medium or multiple media (e.g., a centralized or distributed database, and/or associated caches and servers) that store the sets of instructions. The term “computer-readable storage medium” can include any medium that is capable of storing, encoding or carrying a set of instructions for execution by the machine and that causes the machine to perform any one or more of the methodologies of the disclosure. The term “computer-readable storage medium” can include, but not be limited to, solid-state memories, optical media, and magnetic media. In the foregoing description, numerous details are set forth. It will be apparent, however, to one of ordinary skill in the art having the benefit of this disclosure, that the disclosure may be practiced without these specific details. In some instances, well-known structures and devices are shown in block diagram form, rather than in detail, in order to avoid obscuring the disclosure. Some portions of the detailed description have been presented in terms of algorithms and symbolic representations of operations on data bits within a computer memory. These algorithmic descriptions and representations are the means used by those skilled in the data processing arts to most effectively convey the substance of their work to others skilled in the art. An algorithm is here, and generally, conceived to be a self-consistent sequence of operations leading to a desired result. The operations are those requiring physical manipulations of physical quantities. Usually, though not necessarily, these quantities take the form of electrical or magnetic signals capable of being stored, transferred, combined, compared, and otherwise manipulated. It has proven convenient at times, principally for reasons of common usage, to refer to these signals as bits, values, elements, symbols, characters, terms, numbers, or the like. It may be borne in mind, however, that all of these and similar terms are to be associated with the appropriate physical quantities and are merely convenient labels applied to these quantities. Unless specifically stated otherwise, it is appreciated that throughout the description, discussions utilizing terms such as “authenticating”, “providing”, “receiving”, “identifying”, “determining”, “sending”, “enabling” or the like, refer to the actions and processes of a computer system, or similar electronic computing device, that manipulates and transforms data represented as physical (e.g., electronic) quantities within the computer system memories or registers into other data similarly represented as physical quantities within the computer system memories or registers or other such information storage, transmission or display devices. The disclosure also relates to an apparatus for performing the operations herein. 
This apparatus may be specially constructed for the required purposes, or it may include a general purpose computer selectively activated or reconfigured by a computer program stored in the computer. Such a computer program may be stored in a computer readable storage medium, such as, but not limited to, any type of disk including a floppy disk, an optical disk, a compact disc read-only memory (CD-ROM), a magnetic-optical disk, a read-only memory (ROM), a random access memory (RAM), an erasable programmable read-only memory (EPROM), an electrically erasable programmable read-only memory (EEPROM), a magnetic or optical card, or any type of media suitable for storing electronic instructions. The words “example” or “exemplary” are used herein to mean serving as an example, instance, or illustration. Any aspect or design described herein as “example” or “exemplary” is not necessarily to be construed as preferred or advantageous over other aspects or designs. Rather, use of the words “example” or “exemplary” is intended to present concepts in a concrete fashion. As used in this application, the term “or” is intended to mean an inclusive “or” rather than an exclusive “or.” That is, unless specified otherwise, or clear from context, “X includes A or B” is intended to mean any of the natural inclusive permutations. That is, if X includes A; X includes B; or X includes both A and B, then “X includes A or B” is satisfied under any of the foregoing instances. In addition, the articles “a” and “an” as used in this application and the appended claims may generally be construed to mean “one or more” unless specified otherwise or clear from context to be directed to a singular form. Moreover, use of the term “an implementation” or “one implementation” or “an embodiment” or “one embodiment” throughout is not intended to mean the same implementation or embodiment unless described as such. The terms “first,” “second,” “third,” “fourth,” etc. as used herein are meant as labels to distinguish among different elements and may not necessarily have an ordinal meaning according to their numerical designation. For simplicity of explanation, methods herein are depicted and described as a series of acts or operations. However, acts in accordance with this disclosure can occur in various orders and/or concurrently, and with other acts not presented and described herein. Furthermore, not all illustrated acts may be required to implement the methods in accordance with the disclosed subject matter. In addition, those skilled in the art will understand and appreciate that the methods could alternatively be represented as a series of interrelated states via a state diagram or events. Additionally, it should be appreciated that the methods disclosed in this specification are capable of being stored on an article of manufacture to facilitate transporting and transferring such methods to computing devices. The term article of manufacture, as used herein, is intended to encompass a computer program accessible from any computer-readable device or storage media. In additional embodiments, one or more processing devices for performing the operations of the above described embodiments are disclosed. Additionally, in embodiments of the disclosure, a non-transitory computer-readable storage medium stores instructions for performing the operations of the described embodiments. Also in other embodiments, systems for performing the operations of the described embodiments are also disclosed. 
It is to be understood that the above description is intended to be illustrative, and not restrictive. Other embodiments will be apparent to those of skill in the art upon reading and understanding the above description. The scope of the disclosure may, therefore, be determined with reference to the appended claims, along with the full scope of equivalents to which such claims are entitled.
DETAILED DESCRIPTION The description that follows includes systems, methods, techniques, instruction sequences, and computing machine program products that embody illustrative embodiments of the disclosure. In the following description, for the purposes of explanation, numerous specific details are set forth in order to provide an understanding of various embodiments of the inventive subject matter. It will be evident, however, to those skilled in the art, that embodiments of the inventive subject matter may be practiced without these specific details. In general, well-known instruction instances, protocols, structures, and techniques are not necessarily shown in detail. Among other things, embodiments of the present disclosure improve the functionality of electronic messaging software and systems by allowing senders to transmit messages and content using a messaging system, and recipients to access such messages and content, even if the recipients do not have access to the messaging system. FIG.1is a block diagram showing an example of a messaging system100for exchanging data (e.g., messages and associated content) over a network. The messaging system100includes multiple client devices102, each of which hosts a number of applications including a messaging client application104. Each messaging client application104is communicatively coupled to other instances of the messaging client application104and a messaging server system108via a network106(e.g., the Internet). As used herein, the term “client device” may refer to any machine that interfaces to a communications network (such as network106) to obtain resources from one or more server systems or other client devices. A client device may be, but is not limited to, a mobile phone, desktop computer, laptop, portable digital assistants (PDAs), smart phones, tablets, ultra books, netbooks, laptops, multi-processor systems, microprocessor-based or programmable consumer electronics, game consoles, set-top boxes, or any other communication device that a user may use to access a network. In the example shown inFIG.1, each messaging client application104is able to communicate and exchange data with another messaging client application104and with the messaging server system108via the network106. The data exchanged between messaging client applications104, and between a messaging client application104and the messaging server system108, includes functions (e.g., commands to invoke functions) as well as payload data (e.g., text, audio, video or other multimedia data). The network106may include, or operate in conjunction with, an ad hoc network, an intranet, an extranet, a virtual private network (VPN), a local area network (LAN), a wireless LAN (WLAN), a wide area network (WAN), a wireless WAN (WWAN), a metropolitan area network (MAN), the Internet, a portion of the Internet, a portion of the Public Switched Telephone Network (PSTN), a plain old telephone service (POTS) network, a cellular telephone network, a wireless network, a Wi-Fi® network, another type of network, or a combination of two or more such networks. For example, a network or a portion of a network may include a wireless or cellular network and the coupling may be a Code Division Multiple Access (CDMA) connection, a Global System for Mobile communications (GSM) connection, or other type of cellular or wireless coupling.
In this example, the coupling may implement any of a variety of types of data transfer technology, such as Single Carrier Radio Transmission Technology (1×RTT), Evolution-Data Optimized (EVDO) technology, General Packet Radio Service (GPRS) technology, Enhanced Data rates for GSM Evolution (EDGE) technology, third Generation Partnership Project (3GPP) including 3G, fourth generation wireless (4G) networks, Universal Mobile Telecommunications System (UMTS), High Speed Packet Access (HSPA), Worldwide Interoperability for Microwave Access (WiMAX), Long Term Evolution (LTE) standard, others defined by various standard setting organizations, other long range protocols, or other data transfer technology. The messaging server system108provides server-side functionality via the network106to a particular messaging client application104. While certain functions of the messaging system100are described herein as being performed by either a messaging client application104or by the messaging server system108, it will be appreciated that the location of certain functionality either within the messaging client application104or the messaging server system108is a design choice. For example, it may be technically preferable to initially deploy certain technology and functionality within the messaging server system108, but to later migrate this technology and functionality to the messaging client application104where a client device102has a sufficient processing capacity. The messaging server system108supports various services and operations that are provided to the messaging client application104. Such operations include transmitting data to, receiving data from, and processing data generated by the messaging client application104. This data may include, message content, client device information, geolocation information, media annotation and overlays, message content persistence conditions, social network information, and live event information, as examples. Data exchanges within the messaging system100are invoked and controlled through functions available via user interfaces (UIs) of the messaging client application104. Turning now specifically to the messaging server system108, an Application Program Interface (API) server110is coupled to, and provides a programmatic interface to, an application server112. The application server112is communicatively coupled to a database server118, which facilitates access to a database120in which is stored data associated with messages processed by the application server112. Dealing specifically with the Application Program Interface (API) server110, this server receives and transmits message data (e.g., commands and message payloads) between the client device102and the application server112. Specifically, the Application Program Interface (API) server110provides a set of interfaces (e.g., routines and protocols) that can be called or queried by the messaging client application104in order to invoke functionality of the application server112. 
The Application Program Interface (API) server110exposes various functions supported by the application server112, including account registration, login functionality, the sending of messages, via the application server112, from a particular messaging client application104to another messaging client application104, the sending of electronic media files (e.g., electronic images or video) from a messaging client application104to the messaging server application114, and for possible access by another messaging client application104, the setting of a collection of media data (e.g., a story), the retrieval of a list of friends of a user of a client device102, the retrieval of such collections, the retrieval of messages and content, the adding and deleting of friends to and from a social graph, the location of friends within a social graph, and the opening of an application event (e.g., relating to the messaging client application104). The application server112hosts a number of applications and subsystems, including a messaging server application114, an image processing system116, and a social network system122. The messaging server application114implements a number of message processing technologies and functions, particularly related to the aggregation and other processing of content (e.g., textual and multimedia content including images and video clips) included in messages received from multiple instances of the messaging client application104. As will be described in further detail, the text and media content from multiple sources may be aggregated into collections of content (e.g., called stories or galleries). These collections are then made available, by the messaging server application114, to the messaging client application104. Other processor- and memory-intensive processing of data may also be performed server-side by the messaging server application114, in view of the hardware requirements for such processing. The application server112also includes an image processing system116that is dedicated to performing various image processing operations, typically with respect to electronic images or video received within the payload of a message at the messaging server application114. The social network system122supports various social networking functions and services, and makes these functions and services available to the messaging server application114. To this end, the social network system122maintains and accesses an entity graph304within the database120. Examples of functions and services supported by the social network system122include the identification of other users of the messaging system100with which a particular user has relationships or whom the user is "following", and also the identification of other entities and interests of a particular user. The application server112is communicatively coupled to a database server118, which facilitates access to a database120in which is stored data associated with messages processed by the messaging server application114. Some embodiments may include one or more wearable devices, such as a pendant with an integrated camera that is integrated with, in communication with, or coupled to, a client device102. Any desired wearable device may be used in conjunction with the embodiments of the present disclosure, such as a watch, eyeglasses, goggles, a headset, a wristband, earbuds, clothing (such as a hat or jacket with integrated electronics), a clip-on electronic device, or any other wearable device. 
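The programmatic interface described above can be pictured as a registry of named functions that the messaging client invokes over the network. The Python sketch below is illustrative only; the dispatcher, the function name "send_message", and the payload fields are assumptions made for exposition, not the actual interface of the API server110.

```python
# Illustrative sketch only: a minimal named-function dispatcher loosely mirroring
# the kinds of functions the API server is said to expose. All names are hypothetical.
from dataclasses import dataclass, field
from typing import Callable, Dict

@dataclass
class ApiServer:
    routes: Dict[str, Callable] = field(default_factory=dict)

    def register(self, name: str, handler: Callable) -> None:
        self.routes[name] = handler

    def call(self, name: str, **payload):
        # A messaging client would invoke server functionality by name.
        if name not in self.routes:
            raise KeyError(f"unknown API function: {name}")
        return self.routes[name](**payload)

def send_message(sender_id: str, recipient_id: str, body: str) -> dict:
    # In the described system this would hand the payload to the application server.
    return {"from": sender_id, "to": recipient_id, "body": body, "status": "queued"}

api = ApiServer()
api.register("send_message", send_message)
print(api.call("send_message", sender_id="u1", recipient_id="u2", body="hi"))
```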
FIG.2is a block diagram illustrating further details regarding the messaging system100, according to exemplary embodiments. Specifically, the messaging system100is shown to comprise the messaging client application104and the application server112, which in turn embody a number of subsystems, namely an ephemeral timer system202, a collection management system204, and an annotation system206. The ephemeral timer system202is responsible for enforcing the temporary access to content permitted by the messaging client application104and the messaging server application114. To this end, the ephemeral timer system202incorporates a number of timers that, based on duration and display parameters associated with a message, or collection of messages (e.g., a SNAPCHAT® story), selectively display and enable access to messages and associated content via the messaging client application104. The collection management system204is responsible for managing collections of media (e.g., collections of text, image, video and audio data). In some examples, a collection of content (e.g., messages, including images, video, text, and audio) may be organized into an "event gallery" or an "event story." Such a collection may be made available for a specified time period, such as the duration of an event to which the content relates. For example, content relating to a music concert may be made available as a "story" for the duration of that music concert. The collection management system204may also be responsible for publishing an icon that provides notification of the existence of a particular collection to the user interface of the messaging client application104. The collection management system204furthermore includes a curation interface208that allows a collection manager to manage and curate a particular collection of content. For example, the curation interface208enables an event organizer to curate a collection of content relating to a specific event (e.g., delete inappropriate content or redundant messages). Additionally, the collection management system204employs machine vision (or image recognition technology) and content rules to automatically curate a content collection. In certain embodiments, compensation may be paid to a user for inclusion of user-generated content into a collection. In such cases, the curation interface208operates to automatically make payments to such users for the use of their content. The annotation system206provides various functions that enable a user to annotate or otherwise modify or edit media content associated with a message. For example, the annotation system206provides functions related to the generation and publishing of media overlays for messages processed by the messaging system100. The annotation system206operatively supplies a media overlay (e.g., a SNAPCHAT® filter) to the messaging client application104based on a geolocation of the client device102. In another example, the annotation system206operatively supplies a media overlay to the messaging client application104based on other information, such as social network information of the user of the client device102. A media overlay may include audio and visual content and visual effects. Examples of audio and visual content include pictures, texts, logos, animations, and sound effects. An example of a visual effect includes color overlaying. The audio and visual content or the visual effects can be applied to a media content item (e.g., an image or video) at the client device102. 
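Because the ephemeral timer system202selectively enables access based on duration and display parameters, its core check can be imagined as comparing the age of a message against its permitted display window. The sketch below is an assumption-laden illustration in Python; the field names and the simple time-to-live policy are not taken from the disclosure.

```python
# Illustrative sketch: a per-message timer gating access to ephemeral content.
# The field names and the time-to-live policy are assumptions for exposition.
import time
from dataclasses import dataclass
from typing import Optional

@dataclass
class EphemeralMessage:
    content: str
    post_time: float          # when the message became available (epoch seconds)
    display_duration: float   # how long access remains enabled, in seconds

def is_accessible(msg: EphemeralMessage, now: Optional[float] = None) -> bool:
    """Return True while the message's display window is still open."""
    now = time.time() if now is None else now
    return (now - msg.post_time) < msg.display_duration

msg = EphemeralMessage("ephemeral snap", post_time=time.time(), display_duration=10.0)
print(is_accessible(msg))                          # True within the first 10 seconds
print(is_accessible(msg, now=msg.post_time + 60))  # False once the window has closed
```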
For example, the media overlay may include text that can be overlaid on top of a photograph/electronic image generated by the client device102. In another example, the media overlay includes an identification of a location overlay (e.g., Venice Beach), a name of a live event, or a name of a merchant overlay (e.g., Beach Coffee House). In another example, the annotation system206uses the geolocation of the client device102to identify a media overlay that includes the name of a merchant at the geolocation of the client device102. The media overlay may include other indicia associated with the merchant. The media overlays may be stored in the database120and accessed through the database server118. In some exemplary embodiments, as discussed in more detail below, embodiments of the present disclosure may generate, display, distribute, and apply media overlays to media content items. For example, embodiments may utilize media content items generated by a client device102(e.g., an image or video captured using a digital camera coupled to the client device102) to generate media overlays that can be applied to other media content items. FIG.3is a schematic diagram300illustrating data300that is stored in the database120of the messaging server system108, according to certain exemplary embodiments. While the content of the database120is shown to comprise a number of tables, the data could be stored in other types of data structures (e.g., as an object-oriented database). The database120includes message data stored within a message table314. The entity table302stores entity data, including an entity graph304. Entities for which records are maintained within the entity table302may include individuals, corporate entities, organizations, objects, places, events, etc. Regardless of type, any entity regarding which the messaging server system108stores data may be a recognized entity. Each entity is provided with a unique identifier, as well as an entity type identifier (not shown). The entity graph304furthermore stores information regarding relationships and associations between entities. Such relationships may be social, professional (e.g., work at a common corporation or organization), interest-based, or activity-based, merely for example. The database120also stores annotation data, in the example form of filters, in an annotation table312. Filters for which data is stored within the annotation table312are associated with and applied to videos (for which data is stored in a video table310) or images (for which data is stored in an image table308). Filters, in one example, are overlays that are displayed as overlaid on an image or video during presentation to a recipient user. Filters may be of various types, including user-selected filters from a gallery of filters presented to a sending user by the messaging client application104when the sending user is composing a message. Other types of filters include geolocation filters (also known as Geofilters), which may be presented to a sending user based on geographic location. For example, geolocation filters specific to a neighborhood or special location may be presented within a user interface by the messaging client application104, based on geolocation information determined by a GPS unit of the client device102. Another type of filter is a data filter, which may be selectively presented to a sending user by the messaging client application104, based on other inputs or information gathered by the client device102during the message creation process. 
Examples of data filters include the current temperature at a specific location, a current speed at which a sending user is traveling, battery life for a client device102, or the current time. Other annotation data that may be stored within the image table308is so-called "Lens" data. A "Lens" may be a real-time special effect and sound that may be added to an image or a video. As mentioned above, the video table310stores video data which, in one embodiment, is associated with messages for which records are maintained within the message table314. Similarly, the image table308stores image data associated with messages for which message data is stored in the entity table302. The entity table302may associate various annotations from the annotation table312with various images and videos stored in the image table308and the video table310. A story table306stores data regarding collections of messages and associated image, video or audio data, which are compiled into a collection (e.g., a SNAPCHAT® story or a gallery). The creation of a particular collection may be initiated by a particular user (e.g., each user for which a record is maintained in the entity table302). A user may create a "personal story" in the form of a collection of content that has been created and sent/broadcast by that user. To this end, the user interface of the messaging client application104may include an icon that is user selectable to enable a sending user to add specific content to his or her personal story. A collection may also constitute a "live story," which is a collection of content from multiple users that is created manually, automatically, or using a combination of manual and automatic techniques. For example, a "live story" may constitute a curated stream of user-submitted content from various locations and events. Users whose client devices have location services enabled and who are at a common location event at a particular time may, for example, be presented with an option, via a user interface of the messaging client application104, to contribute content to a particular live story. The live story may be identified to the user by the messaging client application104, based on his or her location. The end result is a "live story" told from a community perspective. A further type of content collection is known as a "location story," which enables a user whose client device102is located within a specific geographic location (e.g., on a college or university campus) to contribute to a particular collection. In some embodiments, a contribution to a location story may require a second degree of authentication to verify that the end user belongs to a specific organization or other entity (e.g., is a student on the university campus). Embodiments of the present disclosure may generate and present customized images for use within electronic messages/communications such as short message service (SMS) or multimedia message service (MMS) texts and emails. The customized images may also be utilized in conjunction with the SNAPCHAT stories, SNAPCHAT filters, and ephemeral messaging functionality discussed herein. Embodiments of the present disclosure may transmit and receive electronic communications containing media content items, media overlays, and/or other content (or links to such content as described below) using any form of electronic communication, such as SMS texts, MMS texts, emails, and other communications. FIG.4depicts an exemplary process according to various aspects of the present disclosure. 
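Before turning to the process of FIG.4, the relational structure described above (message, entity, annotation, image, video, and story tables) can be pictured with a small sketch. The disclosure does not specify a schema, so every column name below is an illustrative assumption.

```python
# Illustrative sketch: modeling a few of the described tables with SQLite.
# Column names and relationships are assumptions; no schema is given in the disclosure.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE entity       (entity_id TEXT PRIMARY KEY, entity_type TEXT);
CREATE TABLE entity_graph (from_id TEXT, to_id TEXT, relation TEXT);
CREATE TABLE message      (message_id TEXT PRIMARY KEY, sender_id TEXT, image_id TEXT);
CREATE TABLE image        (image_id TEXT PRIMARY KEY, data BLOB);
CREATE TABLE video        (video_id TEXT PRIMARY KEY, data BLOB);
CREATE TABLE annotation   (annotation_id TEXT PRIMARY KEY, image_id TEXT, filter_name TEXT);
CREATE TABLE story        (story_id TEXT PRIMARY KEY, owner_id TEXT, kind TEXT);  -- personal / live / location
""")
conn.execute("INSERT INTO entity VALUES ('u1', 'individual')")
conn.execute("INSERT INTO story VALUES ('s1', 'u1', 'personal')")
print(conn.execute("SELECT * FROM story").fetchall())
```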
In this example, method400includes receiving content adapted for display by a software application from a sender (405) and addressed to a recipient, and determining the software application is not installed on a computing device of the recipient (410). The method further includes, in response to determining the software application is not installed on the computing device of the recipient: identifying an electronic communication format supported by the recipient's computing device (415), generating an electronic communication in the supported format containing a link to the content (420), and transmitting the electronic communication to the recipient's computing device (425). Method400further includes receiving a selection of the link to the content from the recipient's computing device (430), displaying the content on the recipient's computing device (435), and displaying on the recipient's computing device one or more of: a message, a notification, and an offer to install the software application (440). The steps of method400may be performed in whole or in part, may be performed in conjunction with each other as well as with some or all of the steps in other methods, and may be performed by any number of different systems, such as the systems described inFIGS.1and7. Embodiments of the present disclosure may receive content (405) adapted for use with a variety of different software applications, such as messaging applications. For example, a first user may generate content (such as an image, video, audio, etc.) using a messaging application installed on the first user's computing device, such as "Snapchat" by Snap, Inc. The messaging software application may provide a variety of custom features that allow the content to be displayed on the user's computing device, such as by applying various filters as described above.FIG.5Adepicts an example of content generated by a first user's computing device (a smartphone in this example), namely an image captured using the camera of the smartphone with several media overlays applied (two smiley face "stickers" and text providing an invitation to a movie in the lower-left corner). The content may be addressed to, or otherwise designated for distribution to, any number of different users or computing devices. For example, the first user may generate content (e.g., an image containing a filter) using a messaging software application installed on the user's computing device (e.g., the first user's smartphone or other mobile device) and address the content to a group of other users from a list of contacts stored on the first user's device. The system may determine (410) whether or not the software adapted to display the content (the messaging software application in this example) is installed on the computing devices associated with the recipient users. Continuing the example from above, the system may determine that out of three recipient users, the computing device of one user (referred to hereafter as the "second user") does not have the software installed, whereas the software is installed on the devices of the other two users. In this case, the system may transmit the content normally to the devices of the two users having the software application. The system may determine (410) whether software is installed on a user's computing device using a variety of different methods and information. In some embodiments, the system transmits a request to the recipient's computing device to identify whether the software is installed. 
Additionally or alternatively, the system may scan the registry of a computing device to determine if the software application is installed. The system may also transmit a communication to the device designated for the software application and use the response from the software application (or lack thereof if the application is not present) to determine if the software is installed. The system may also attempt to identify the recipient user within a registry of users of the software application maintained by the system (e.g., by determining if the user has an account associated with the software). For a device the system determines does not have the software installed (e.g., the second user's computing device from the example above), the system identifies (415) a communication format that is currently supported by the device. The system may identify a supported communication format in a variety of different ways, including by detecting various messaging applications operating on the device as described in the preceding paragraph. For example, the system may identify an alternate messaging application present on the user's device, such as an email application or a text message application capable of supporting Short Message Service (SMS) and Multimedia Message Service (MMS) text messages. The system generates (420) a communication in the supported format that contains a link to the content generated by the first user on the first computing device, and transmits (425) the electronic communication to the device. In some embodiments, the system may store the content (e.g., in a database in communication with the system) and direct the link to the stored content. In the exemplary method400shown inFIG.4, the system receives a selection (430) of the link in the communication (e.g., by the second user selecting the link via a user interface of the second user's computing device) and displays the content (435) on the display screen of the second user's computing device. In one example, the system displays the content via a web-based interface that displays the content when the second user selects the link in Hyper Text Markup Language (HTML) format. The system may display content such as the image illustrated inFIG.5Aon the second user's computing device. The system may display (440) a variety of messages, notifications, offers, and other content to the sending user (e.g., the first user in the example above) as well as the recipient user (e.g., the second user in the example above). InFIG.5B, for example, the system displays (440) a message on the display screen of the first user indicating that one of the recipients of the user's content (fromFIG.5A) does not have the messaging application (Snapchat) installed to view the content, and indicates the content will instead be provided to the recipient via a link transmitted by SMS text.FIG.5Cdepicts the SMS text message displayed on the screen of the second user's computing device. In this example, the text message contains information about the content and the link to the content. Among other things, embodiments of the present disclosure may help enforce viewing restrictions on the content. In some embodiments, for example, the system may cause the content to be displayed on the display screen of a user's computing device for a predetermined period of time and/or a predetermined number of times. 
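Taken together, steps 410-425 amount to a per-recipient branch: deliver in-app when the application is detected, otherwise store the content, generate a link to it, and send the link over whatever format the device supports. The Python sketch below illustrates that branch under stated assumptions; the helper names, the user registry, the placeholder URL, and the always-SMS fallback are illustrative inventions, not part of the disclosure.

```python
# Illustrative sketch of the per-recipient branch in steps 410-425.
# All helper names, the registry, and the placeholder URL are assumptions.
from typing import List

APP_USERS = {"alice", "bob"}     # assumed registry of users with the messaging app
CONTENT_STORE = {}               # assumed server-side store that the link points to

def has_app_installed(user: str) -> bool:
    # Step 410: e.g., look the recipient up in the registry of application users.
    return user in APP_USERS

def identify_supported_format(user: str) -> str:
    # Step 415: assume every device can at least receive SMS in this sketch.
    return "sms"

def make_content_link(content_id: str) -> str:
    # Step 420: placeholder URL pointing at the stored content.
    return f"https://example.invalid/view/{content_id}"

def deliver(content_id: str, content: bytes, recipients: List[str]) -> List[str]:
    CONTENT_STORE[content_id] = content
    results = []
    for recipient in recipients:
        if has_app_installed(recipient):
            results.append(f"{recipient}: delivered in-app")
        else:
            fmt = identify_supported_format(recipient)
            link = make_content_link(content_id)
            results.append(f"{recipient}: sent {fmt} message with link {link}")  # step 425
    return results

print(deliver("snap-1", b"image-bytes", ["alice", "carol"]))
```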
InFIG.5D, for example, the system displays a message (440) on the display screen of the second user's computing device notifying the second user that the content ("Snaps" in this example) can only be replayed once, and providing an option for the second user to replay/redisplay the content. Additionally, the example inFIG.5Ddisplays an offer to install the messaging software application (Snapchat) used to create the content. In this manner, embodiments of the present disclosure not only provide access to content for users without a particular software application, but also help expand the user base of the software application in conjunction with exposing non-users to the content generated by the application. The system may cause a user's system to display a message (440) that the content is no longer accessible once a restriction requirement is satisfied. Referring now toFIG.5E, for example, a message is displayed to the user after the content has been displayed a predetermined number of times. A similar message could be displayed after a predetermined time limit to view the content has expired. The system may also remove content (e.g., delete it from storage in a database to which the link is directed) and display a general message that the content cannot be found after such a deletion, as shown inFIG.5F. In some embodiments, the system may allow a predetermined number of views of content without installing the software application by a user, but pre-empt further such uses. For example, the first user may create a plurality of content items and address them for delivery to the second user via the system. The system may generate and transmit a respective communication containing a respective link for each content item in a first subset of the content items to the second user, but also display (e.g., on the display screen of the second user's computing device) a notification that viewing content in a second subset of the content items requires installation of the software application on the second user's computing device. In a particular example, consider that the first user generates five content items (e.g., images with stickers as inFIG.5A) and addresses them to the second user. The system may provide SMS text messages with links to two of the messages, but also a notification that the second user must download the messaging software application to view the remaining three messages. FIG.5Gillustrates another exemplary process according to various aspects of the present disclosure. In step 1 of this example, a first user of the Snapchat messaging application creates content (a "Snap") and addresses the content to a second user via the system at "app.snapchat.com." The system creates a record in a database associated with the content for later retrieval and monitoring (e.g., for adherence to ephemeral access and other restrictions on the content's read/write life cycle). In step 2, the system determines the second user's computing device does not have the Snapchat messaging application and loads the web-based "Snap Player Web App" to display the content. The content is loaded (step 3) and displayed (step 4). Subsequent attempts to view the Snap may be blocked (step 5) in response to expiration of a predetermined time period and/or display of the content a predetermined number of times. An example of a life cycle of a link to such content is similarly described in the chart depicted inFIG.5H. 
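The replay and expiry restrictions described above reduce, at their simplest, to a counter and a clock attached to each piece of linked content. Below is a small, hedged Python sketch of such a gate; the one-replay limit, the 24-hour expiry, and the field names are assumptions used only for illustration.

```python
# Illustrative sketch: gating access to linked content by view count and age.
# The limits and field names are assumptions, not values taken from the disclosure.
import time
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class LinkedContent:
    payload: bytes
    max_views: int = 2                      # e.g., one initial view plus one replay
    expires_at: float = field(default_factory=lambda: time.time() + 24 * 3600)
    views: int = 0

def view(item: LinkedContent) -> Optional[bytes]:
    """Return the content while it is still viewable; None means show a 'gone' message."""
    if item.views >= item.max_views or time.time() > item.expires_at:
        return None
    item.views += 1
    return item.payload

snap = LinkedContent(b"image-bytes")
print(view(snap) is not None)   # True  (first view)
print(view(snap) is not None)   # True  (single replay)
print(view(snap) is not None)   # False (blocked afterwards)
```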
In this example, events are listed in the X and Y axes of the chart, the boxes indicate intersections between the events, and the arrows between the boxes indicate state changes. Software Architecture FIG.6is a block diagram illustrating an exemplary software architecture606, which may be used in conjunction with various hardware architectures herein described.FIG.6is a non-limiting example of a software architecture and it will be appreciated that many other architectures may be implemented to facilitate the functionality described herein. The software architecture606may execute on hardware such as machine700ofFIG.7that includes, among other things, processors704, memory714, and I/O components718. A representative hardware layer652is illustrated and can represent, for example, the machine700ofFIG.7. The representative hardware layer652includes a processing unit654having associated executable instructions604. Executable instructions604represent the executable instructions of the software architecture606, including implementation of the methods, components and so forth described herein. The hardware layer652also includes memory or storage modules memory/storage656, which also have executable instructions604. The hardware layer652may also comprise other hardware658. As used herein, the term “component” may refer to a device, physical entity or logic having boundaries defined by function or subroutine calls, branch points, application program interfaces (APIs), or other technologies that provide for the partitioning or modularization of particular processing or control functions. Components may be combined via their interfaces with other components to carry out a machine process. A component may be a packaged functional hardware unit designed for use with other components and a part of a program that usually performs a particular function of related functions. Components may constitute either software components (e.g., code embodied on a machine-readable medium) or hardware components. A “hardware component” is a tangible unit capable of performing certain operations and may be configured or arranged in a certain physical manner. In various exemplary embodiments, one or more computer systems (e.g., a standalone computer system, a client computer system, or a server computer system) or one or more hardware components of a computer system (e.g., a processor or a group of processors) may be configured by software (e.g., an application or application portion) as a hardware component that operates to perform certain operations as described herein. A hardware component may also be implemented mechanically, electronically, or any suitable combination thereof. For example, a hardware component may include dedicated circuitry or logic that is permanently configured to perform certain operations. A hardware component may be a special-purpose processor, such as a Field-Programmable Gate Array (FPGA) or an Application Specific Integrated Circuit (ASIC). A hardware component may also include programmable logic or circuitry that is temporarily configured by software to perform certain operations. For example, a hardware component may include software executed by a general-purpose processor or other programmable processor. Once configured by such software, hardware components become specific machines (or specific components of a machine) uniquely tailored to perform the configured functions and are no longer general-purpose processors. 
It will be appreciated that the decision to implement a hardware component mechanically, in dedicated and permanently configured circuitry, or in temporarily configured circuitry (e.g., configured by software) may be driven by cost and time considerations. A processor may be, or in include, any circuit or virtual circuit (a physical circuit emulated by logic executing on an actual processor) that manipulates data values according to control signals (e.g., “commands”, “op codes”, “machine code”, etc.) and which produces corresponding output signals that are applied to operate a machine. A processor may, for example, be a Central Processing Unit (CPU), a Reduced Instruction Set Computing (RISC) processor, a Complex Instruction Set Computing (CISC) processor, a Graphics Processing Unit (GPU), a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Radio-Frequency Integrated Circuit (RFIC) or any combination thereof. A processor may further be a multi-core processor having two or more independent processors (sometimes referred to as “cores”) that may execute instructions contemporaneously. Accordingly, the phrase “hardware component” (or “hardware-implemented component”) should be understood to encompass a tangible entity, be that an entity that is physically constructed, permanently configured (e.g., hardwired), or temporarily configured (e.g., programmed) to operate in a certain manner or to perform certain operations described herein. Considering embodiments in which hardware components are temporarily configured (e.g., programmed), each of the hardware components need not be configured or instantiated at any one instance in time. For example, where a hardware component comprises a general-purpose processor configured by software to become a special-purpose processor, the general-purpose processor may be configured as respectively different special-purpose processors (e.g., comprising different hardware components) at different times. Software accordingly configures a particular processor or processors, for example, to constitute a particular hardware component at one instance of time and to constitute a different hardware component at a different instance of time. Hardware components can provide information to, and receive information from, other hardware components. Accordingly, the described hardware components may be regarded as being communicatively coupled. Where multiple hardware components exist contemporaneously, communications may be achieved through signal transmission (e.g., over appropriate circuits and buses) between or among two or more of the hardware components. In embodiments in which multiple hardware components are configured or instantiated at different times, communications between such hardware components may be achieved, for example, through the storage and retrieval of information in memory structures to which the multiple hardware components have access. For example, one hardware component may perform an operation and store the output of that operation in a memory device to which it is communicatively coupled. A further hardware component may then, at a later time, access the memory device to retrieve and process the stored output. Hardware components may also initiate communications with input or output devices, and can operate on a resource (e.g., a collection of information). 
The various operations of example methods described herein may be performed, at least partially, by one or more processors that are temporarily configured (e.g., by software) or permanently configured to perform the relevant operations. Whether temporarily or permanently configured, such processors may constitute processor-implemented components that operate to perform one or more operations or functions described herein. As used herein, “processor-implemented component” refers to a hardware component implemented using one or more processors. Similarly, the methods described herein may be at least partially processor-implemented, with a particular processor or processors being an example of hardware. For example, at least some of the operations of a method may be performed by one or more processors or processor-implemented components. Moreover, the one or more processors may also operate to support performance of the relevant operations in a “cloud computing” environment or as a “software as a service” (SaaS). For example, at least some of the operations may be performed by a group of computers (as examples of machines including processors), with these operations being accessible via a network (e.g., the Internet) and via one or more appropriate interfaces (e.g., an Application Program Interface (API)). The performance of certain of the operations may be distributed among the processors, not only residing within a single machine, but deployed across a number of machines. In some exemplary embodiments, the processors or processor-implemented components may be located in a single geographic location (e.g., within a home environment, an office environment, or a server farm). In other exemplary embodiments, the processors or processor-implemented components may be distributed across a number of geographic locations. In the exemplary architecture ofFIG.6, the software architecture606may be conceptualized as a stack of layers where each layer provides particular functionality. For example, the software architecture606may include layers such as an operating system602, libraries620, applications616and a presentation layer614. Operationally, the applications616or other components within the layers may invoke application programming interface (API) API calls608through the software stack and receive messages612in response to the API calls608. The layers illustrated are representative in nature and not all software architectures have all layers. For example, some mobile or special purpose operating systems may not provide a frameworks/middleware618, while others may provide such a layer. Other software architectures may include additional or different layers. The operating system602may manage hardware resources and provide common services. The operating system602may include, for example, a kernel622, services624and drivers626. The kernel622may act as an abstraction layer between the hardware and the other software layers. For example, the kernel622may be responsible for memory management, processor management (e.g., scheduling), component management, networking, security settings, and so on. The services624may provide other common services for the other software layers. The drivers626are responsible for controlling or interfacing with the underlying hardware. 
For instance, the drivers626include display drivers, camera drivers, Bluetooth® drivers, flash memory drivers, serial communication drivers (e.g., Universal Serial Bus (USB) drivers), Wi-Fi® drivers, audio drivers, power management drivers, and so forth depending on the hardware configuration. The libraries620provide a common infrastructure that is used by the applications616or other components or layers. The libraries620provide functionality that allows other software components to perform tasks in an easier fashion than interfacing directly with the underlying operating system602functionality (e.g., kernel622, services624or drivers626). The libraries620may include system libraries644(e.g., C standard library) that may provide functions such as memory allocation functions, string manipulation functions, mathematical functions, and the like. In addition, the libraries620may include API libraries646such as media libraries (e.g., libraries to support presentation and manipulation of various media formats such as MPEG-4, H.264, MP3, AAC, AMR, JPG, PNG), graphics libraries (e.g., an OpenGL framework that may be used to render 2D and 3D graphic content on a display), database libraries (e.g., SQLite that may provide various relational database functions), web libraries (e.g., WebKit that may provide web browsing functionality), and the like. The libraries620may also include a wide variety of other libraries648to provide many other APIs to the applications616and other software components/modules. The frameworks/middleware618(also sometimes referred to as middleware) provide a higher-level common infrastructure that may be used by the applications616or other software components/modules. For example, the frameworks/middleware618may provide various graphical user interface (GUI) functions, high-level resource management, high-level location services, and so forth. The frameworks/middleware618may provide a broad spectrum of other APIs that may be utilized by the applications616or other software components/modules, some of which may be specific to a particular operating system602or platform. The applications616include built-in applications638or third-party applications640. Examples of representative built-in applications638may include, but are not limited to, a contacts application, a browser application, a book reader application, a location application, a media application, a messaging application, or a game application. Third-party applications640may include an application developed using the ANDROID™ or IOS™ software development kit (SDK) by an entity other than the vendor of the particular platform, and may be mobile software running on a mobile operating system such as IOS™, ANDROID™, WINDOWS® Phone, or other mobile operating systems. The third-party applications640may invoke the API calls608provided by the mobile operating system (such as operating system602) to facilitate functionality described herein. The applications616may use built-in operating system functions (e.g., kernel622, services624or drivers626), libraries620, and frameworks/middleware618to create user interfaces to interact with users of the system. Alternatively, or additionally, in some systems interactions with a user may occur through a presentation layer, such as presentation layer614. In these systems, the application/component "logic" can be separated from the aspects of the application/component that interact with a user. 
FIG.7is a block diagram illustrating components (also referred to herein as "modules") of a machine700, according to some exemplary embodiments, able to read instructions from a machine-readable medium (e.g., a machine-readable storage medium) and perform any one or more of the methodologies discussed herein. Specifically,FIG.7shows a diagrammatic representation of the machine700in the example form of a computer system, within which instructions710(e.g., software, a program, an application, an applet, an app, or other executable code) for causing the machine700to perform any one or more of the methodologies discussed herein may be executed. As such, the instructions710may be used to implement modules or components described herein. The instructions710transform the general, non-programmed machine700into a particular machine700programmed to carry out the described and illustrated functions in the manner described. In alternative embodiments, the machine700operates as a standalone device or may be coupled (e.g., networked) to other machines. In a networked deployment, the machine700may operate in the capacity of a server machine or a client machine in a server-client network environment, or as a peer machine in a peer-to-peer (or distributed) network environment. The machine700may comprise, but not be limited to, a server computer, a client computer, a personal computer (PC), a tablet computer, a laptop computer, a netbook, a set-top box (STB), a personal digital assistant (PDA), an entertainment media system, a cellular telephone, a smart phone, a mobile device, a wearable device (e.g., a smart watch), a smart home device (e.g., a smart appliance), other smart devices, a web appliance, a network router, a network switch, a network bridge, or any machine capable of executing the instructions710, sequentially or otherwise, that specify actions to be taken by the machine700. Further, while only a single machine700is illustrated, the term "machine" shall also be taken to include a collection of machines that individually or jointly execute the instructions710to perform any one or more of the methodologies discussed herein. The machine700may include processors704, memory/storage706, and I/O components718, which may be configured to communicate with each other such as via a bus702. The memory/storage706may include a memory714, such as a main memory, or other memory storage, and a storage unit716, both accessible to the processors704such as via the bus702. The storage unit716and memory714store the instructions710embodying any one or more of the methodologies or functions described herein. The instructions710may also reside, completely or partially, within the memory714, within the storage unit716, within at least one of the processors704(e.g., within the processor's cache memory), or any suitable combination thereof, during execution thereof by the machine700. Accordingly, the memory714, the storage unit716, and the memory of processors704are examples of machine-readable media. As used herein, the term "machine-readable medium," "computer-readable medium," or the like may refer to any component, device or other tangible media able to store instructions and data temporarily or permanently. Examples of such media may include, but are not limited to, random-access memory (RAM), read-only memory (ROM), buffer memory, flash memory, optical media, magnetic media, cache memory, other types of storage (e.g., Electrically Erasable Programmable Read-Only Memory (EEPROM)), or any suitable combination thereof. 
The term “machine-readable medium” should be taken to include a single medium or multiple media (e.g., a centralized or distributed database, or associated caches and servers) able to store instructions. The term “machine-readable medium” may also be taken to include any medium, or combination of multiple media, that is capable of storing instructions (e.g., code) for execution by a machine, such that the instructions, when executed by one or more processors of the machine, cause the machine to perform any one or more of the methodologies described herein. Accordingly, a “machine-readable medium” may refer to a single storage apparatus or device, as well as “cloud-based” storage systems or storage networks that include multiple storage apparatus or devices. The term “machine-readable medium” excludes signals per se. The I/O components718may include a wide variety of components to provide a user interface for receiving input, providing output, producing output, transmitting information, exchanging information, capturing measurements, and so on. The specific I/O components718that are included in the user interface of a particular machine700will depend on the type of machine. For example, portable machines such as mobile phones will likely include a touch input device or other such input mechanisms, while a headless server machine will likely not include such a touch input device. It will be appreciated that the I/O components718may include many other components that are not shown inFIG.7. The I/O components718are grouped according to functionality merely for simplifying the following discussion and the grouping is in no way limiting. In various exemplary embodiments, the I/O components718may include output components726and input components728. The output components726may include visual components (e.g., a display such as a plasma display panel (PDP), a light emitting diode (LED) display, a liquid crystal display (LCD), a projector, or a cathode ray tube (CRT)), acoustic components (e.g., speakers), haptic components (e.g., a vibratory motor, resistance mechanisms), other signal generators, and so forth. The input components728may include alphanumeric input components (e.g., a keyboard, a touch screen configured to receive alphanumeric input, a photo-optical keyboard, or other alphanumeric input components), point based input components (e.g., a mouse, a touchpad, a trackball, a joystick, a motion sensor, or other pointing instrument), tactile input components (e.g., a physical button, a touch screen that provides location or force of touches or touch gestures, or other tactile input components), audio input components (e.g., a microphone), and the like. The input components728may also include one or more image-capturing devices, such as a digital camera for generating digital images or video. In further exemplary embodiments, the I/O components718may include biometric components730, motion components734, environmental environment components736, or position components738, as well as a wide array of other components. One or more of such components (or portions thereof) may collectively be referred to herein as a “sensor component” or “sensor” for collecting various data related to the machine700, the environment of the machine700, a user of the machine700, or a combinations thereof. 
For example, the biometric components730may include components to detect expressions (e.g., hand expressions, facial expressions, vocal expressions, body gestures, or eye tracking), measure biosignals (e.g., blood pressure, heart rate, body temperature, perspiration, or brain waves), identify a person (e.g., voice identification, retinal identification, facial identification, fingerprint identification, or electroencephalogram based identification), and the like. The motion components734may include acceleration sensor components (e.g., accelerometer), gravitation sensor components, velocity sensor components (e.g., speedometer), rotation sensor components (e.g., gyroscope), and so forth. The environment components736may include, for example, illumination sensor components (e.g., photometer), temperature sensor components (e.g., one or more thermometer that detect ambient temperature), humidity sensor components, pressure sensor components (e.g., barometer), acoustic sensor components (e.g., one or more microphones that detect background noise), proximity sensor components (e.g., infrared sensors that detect nearby objects), gas sensors (e.g., gas detection sensors to detection concentrations of hazardous gases for safety or to measure pollutants in the atmosphere), or other components that may provide indications, measurements, or signals corresponding to a surrounding physical environment. The position components738may include location sensor components (e.g., a Global Position system (GPS) receiver component), altitude sensor components (e.g., altimeters or barometers that detect air pressure from which altitude may be derived), orientation sensor components (e.g., magnetometers), and the like. For example, the location sensor component may provide location information associated with the system700, such as the system's700GPS coordinates or information regarding a location the system700is at currently (e.g., the name of a restaurant or other business). Communication may be implemented using a wide variety of technologies. The I/O components718may include communication components740operable to couple the machine700to a network732or devices720via coupling722and coupling724respectively. For example, the communication components740may include a network interface component or other suitable device to interface with the network732. In further examples, communication components740may include wired communication components, wireless communication components, cellular communication components, Near Field Communication (NFC) components, Bluetooth® components (e.g., Bluetooth® Low Energy), Wi-Fi® components, and other communication components to provide communication via other modalities. The devices720may be another machine or any of a wide variety of peripheral devices (e.g., a peripheral device coupled via a Universal Serial Bus (USB)). Moreover, the communication components740may detect identifiers or include components operable to detect identifiers. For example, the communication components740may include Radio Frequency Identification (RFID) tag reader components, NFC smart tag detection components, optical reader components (e.g., an optical sensor to detect one-dimensional bar codes such as Universal Product Code (UPC) bar code, multi-dimensional bar codes such as Quick Response (QR) code, Aztec code, Data Matrix, Dataglyph, MaxiCode, PDF417, Ultra Code, UCC RSS-2D bar code, and other optical codes), or acoustic detection components (e.g., microphones to identify tagged audio signals). 
In addition, a variety of information may be derived via the communication components740, such as, location via Internet Protocol (IP) geo-location, location via Wi-Fi® signal triangulation, location via detecting a NFC beacon signal that may indicate a particular location, and so forth. Where a phrase similar to “at least one of A, B, or C,” “at least one of A, B, and C,” “one or more A, B, or C,” or “one or more of A, B, and C” is used, it is intended that the phrase be interpreted to mean that A alone may be present in an embodiment, B alone may be present in an embodiment, C alone may be present in an embodiment, or that any combination of the elements A, B and C may be present in a single embodiment; for example, A and B, A and C, B and C, or A and B and C. As used herein, the term “or” may be construed in either an inclusive or exclusive sense. Moreover, plural instances may be provided for resources, operations, or structures described herein as a single instance. Additionally, boundaries between various resources, operations, modules, engines, and data stores are somewhat arbitrary, and particular operations are illustrated in a context of specific illustrative configurations. Other allocations of functionality are envisioned and may fall within a scope of various embodiments of the present disclosure. In general, structures and functionality presented as separate resources in the example configurations may be implemented as a combined structure or resource. Similarly, structures and functionality presented as a single resource may be implemented as separate resources. These and other variations, modifications, additions, and improvements fall within a scope of embodiments of the present disclosure as represented by the appended claims. The specification and drawings are, accordingly, to be regarded in an illustrative rather than a restrictive sense. | 56,858 |
11943184 | DETAILED DESCRIPTION Throughout the disclosure, the expression “at least one of a, b or c” indicates only a, only b, only c, both a and b, both a and c, both b and c, all of a, b, and c, or variations thereof. Hereinafter, embodiments of the disclosure will be described in detail with reference to the accompanying drawings for one of skill in the art to be able to perform the disclosure without any difficulty. The disclosure may, however, be embodied in many different forms and should not be construed as being limited to the embodiments of the disclosure set forth herein. In order to clearly describe the disclosure, portions that are not relevant to the description of the disclosure are omitted, and similar reference numerals are assigned to similar elements throughout the present specification. Throughout the present specification, when a part is referred to as being “connected to” another part, it may be “directly connected to” the other part or be “electrically connected to” the other part through an intervening element. In addition, when an element is referred to as “including” a component, the element may additionally include other components rather than excluding other components as long as there is no particular opposing recitation. Hereinafter, the disclosure will be described in detail with reference to the accompanying drawings. FIG.1is a diagram illustrating an example in which a device1000provides a notification message about image content, according to an embodiment of the disclosure. Referring toFIG.1, the device1000may receive image content from an external source (e.g., another device), generate a notification message describing the received image content, and display the notification message on a screen of the device1000. The device1000may generate the notification message by analyzing the image content, and may use a plurality of artificial intelligence models to analyze the image content. The device1000may recognize an action of at least one object in the image content, identify an object in a target image selected from the image content, and generate the notification message for describing the image content based on the recognized action and the identified object. The device1000may accurately identify an object in the image content by using information about a user of the device1000, and generate and provide the notification message by using a name of the object to which the information about the user is reflected. According to an embodiment of the disclosure, the device1000and another device may be, but are not limited to, a smart phone, a tablet personal computer (PC), a PC, a smart television (TV), a mobile phone, a personal digital assistant (PDA), a laptop computer, a media player, a microserver, a global positioning system (GPS) device, an electronic book terminal, a digital broadcasting terminal, a navigation device, a kiosk, an MP3 player, a digital camera, a home appliance, a closed-circuit TV (CCTV), and other mobile or non-mobile computing devices. The device1000may be a wearable device, such as a watch, glasses, a hair band, or a ring, which has a communication function and a data processing function. However, the device1000is not limited thereto, and may include any type of device capable of transmitting and receiving image content to and from another device and a server. 
A network communicatively connected to the device1000, the other device, and the server may be implemented as a wired network such as a local area network (LAN), a wide area network (WAN), or a value-added network (VAN), or any type of wireless network such as a mobile radio communication network or a satellite communication network. The network may include a combination of at least two of a LAN, a WAN, a VAN, a mobile radio communication network, or a satellite communication network, and is a data communication network having a comprehensive meaning for allowing each network constituent to communicate smoothly with each other, and includes a wired Internet, a wireless Internet, and a mobile wireless communication network. Examples of wireless communication may include, but are not limited to, a wireless LAN (e.g., Wi-Fi), Bluetooth, Bluetooth Low Energy, Zigbee, Wi-Fi Direct (WFD), ultra wideband (UWB), Infrared Data Association (IrDA), and near-field communication (NFC). FIG.2is a block diagram of the device1000according to an embodiment of the disclosure. Referring toFIG.2, the device1000may include a user input unit1100, a display1200, a communication interface1300, a camera1400, a storage1500, and a processor1600. The storage1500may include an action recognition module1510, a target image determination module1515, an object recognition module1520, a notification message generation module1525, a user object identification module1530, a training module1535, a domain identification module1540, a model selection module1545, a thumbnail generation module1570, artificial intelligence models1550, and user databases (DBs)1560. The user input unit1100refers to an interface via which a user inputs data for controlling the device1000. For example, the user input unit1100may include, but is not limited to, at least one of a key pad, a dome switch, a touch pad (e.g., a touch-type capacitive touch pad, a pressure-type resistive overlay touch pad, an infrared sensor-type touch pad, a surface acoustic wave conduction touch pad, an integration-type tension measurement touch pad, a piezoelectric effect-type touch pad), a jog wheel, or a jog switch. The display1200displays information processed by the device1000. For example, the display1200may display a communication application executed by the device1000, or may display a graphical user interface (GUI) for generating a notification message about image content received from the other device or displaying the image content. The communication application may include, for example, a chat application and a messaging application. When the display1200and a touch pad form a layer structure and thus constitute a touch screen, the display1200may also be used as an input device in addition to being used as an output device. The display1200may include at least one of a liquid-crystal display, a thin-film-transistor liquid-crystal display, an organic light-emitting diode, a flexible display, a three-dimensional (3D) display, or an electrophoretic display. The device1000may include two or more displays1200according to an implementation of the device1000. The communication interface1300may include one or more components for communicating with the other device and the server. For example, the communication interface1300may include a short-range wireless communication unit, a mobile communication unit, and a broadcast receiver. 
The short-range wireless communication unit may include, but is not limited to, a Bluetooth communication unit, a Bluetooth Low Energy (BLE) communication unit, an NFC unit, a Wi-Fi communication unit, a Zigbee communication unit, an IrDA communication unit, a WFD communication unit, a UWB communication unit, an Ant+ communication unit, and the like. The mobile communication unit transmits and receives a wireless signal to and from at least one of a base station, an external terminal, or a server, on a mobile communication network. Here, the wireless signal may include various types of data according to transmission and reception of voice call signals, video call signals, or text/multimedia messages. The broadcast receiver1receives a broadcast signal and/or broadcast-related information from the outside via a broadcast channel. The broadcast channels may include satellite channels and terrestrial channels. In addition, the communication interface1300may transmit and receive information required to generate a notification message about image content to and from the other device and the server. The camera1400may capture an image of the surroundings of the device1000. When a program that requires an image capture function is executed, the camera1400may obtain an image frame such as a still image or a moving image, by using an image sensor. For example, the camera1400may capture an image of the surroundings of the device1000while the communication application is executed. An image frame processed by the camera1400may be stored in the storage1500or transmitted to the outside through the communication interface1300. Two or more cameras1400may be provided according to the configuration of the device1000. The storage1500may store a program to be executed by the processor1600, which will be described below, and may store data that is input to the device1000or output from the device1000. The storage1500may include at least one of a flash memory-type storage medium, a hard disk-type storage medium, a multimedia card micro-type storage medium, a card-type memory (e.g., SD or XD memory), random-access memory (RAM), static RAM (SRAM), read-only memory (ROM), electrically erasable programmable ROM (EEPROM), programmable ROM (PROM), magnetic memory, a magnetic disk, or an optical disc. Programs stored in the storage1500may be classified into a plurality of modules according to their functions, and may include, for example, the action recognition module1510, the target image determination module1515, the object recognition module1520, the notification message generation module1525, the user object identification module1530, the training module1535, the domain identification module1540, the model selection module1545, the artificial intelligence models1550, and the user DBs1560. The processor1600controls the overall operation of the device1000. For example, the processor1600may generally control the user input unit1100, the display1200, the communication interface1300, the camera1400, and the storage1500, by executing the programs stored in the storage1500. The processor1600may receive image content transmitted from the other device by controlling the communication interface1300. The device1000may receive the image content from the other device via a communication application installed in the device1000. The image content may include still image content and moving image content. For example, the image content may be video content or a set of images. 
When the image content is video content, a plurality of images in the image content may be a plurality of frame images in the video content. Also, the processor1600may call image content stored in the storage1500to provide the image content to the other device. In addition, for example, the device1000may receive image content from the other device including a camera. The device1000may receive, from another device in a home, such as a CCTV, a TV including a camera, or a cleaning robot including a camera, image content generated by the other device. In this case, when a certain motion of a subject is detected in a preset area in the home, the other device may generate image content in which the subject is photographed. Alternatively, the other device may generate image content in which the subject is photographed at preset intervals. In addition, the other device may be, for example, a device for a home care service for recognizing and managing a situation in a house, and a device for a pet management service for monitoring and managing a condition of a pet, but is not limited thereto. The processor1600may identify an action related to one or more objects in the image content by executing the action recognition module1510stored in the storage1500. The action recognition module1510may recognize an action related to one or more objects in the image content, by applying the image content to an action recognition model1551. The action recognition model1551may be an artificial intelligence model trained to recognize an action related to an object in image content. For example, when the image content is video content, the device1000may apply all frame images of the video content to the action recognition model1551, to identify actions of objects that are determined based on the frame images of the video content. Alternatively, for example, the device1000may apply frame images related to a particular scene in the video content to the action recognition model1551, to identify actions of objects that are determined based on frames in the video content. Alternatively, for example, when the image content is a set of images, the action recognition module1510may apply the set of images to the action recognition model1551, to identify actions of objects that are determined based on the set of images. In this case, at least some of the plurality of images in the set may be selected and input to the action recognition model1551. Examples of actions related to an object in image content may include “riding”, “cleaning”, and “birthday party”, but are not limited thereto. For example, the action recognition model1551may recognize actions related to an object from first to n-th images included in the image content, and output identification values indicating the actions related to the image content, based on the recognized actions. An identification value indicating an action may include at least one word for identifying the action of an object in the image content, and may indicate an action or situation of at least one object in the image content, for example, “riding”, “running”, “birthday party”, or “shouting”, but is not limited thereto. The processor1600may determine one or more target images to be used to identify one or more objects in the image content, by executing the target image determination module1515stored in the storage1500. The target image may be an image used to identify an object in the image content.
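The action-recognition step described in the preceding paragraphs can be illustrated with a minimal sketch (target-image selection is sketched separately further below). The ActionRecognitionModel interface and the majority-vote aggregation are illustrative assumptions, not the disclosed behavior of the action recognition model1551.

```python
from collections import Counter
from typing import List, Sequence

class ActionRecognitionModel:
    """Hypothetical stand-in for the trained action recognition model; a real
    model would map image data to action labels."""
    def predict_frame(self, frame) -> str:
        """Return an action label recognized in a single frame image."""
        raise NotImplementedError

def identify_action(model: ActionRecognitionModel, frames: Sequence) -> str:
    """Apply the frame images of the video content (or the images of a set of
    images) to the model and aggregate the per-frame predictions into one
    action identification value such as "riding". Majority voting is an
    assumed aggregation rule."""
    per_frame_labels: List[str] = [model.predict_frame(frame) for frame in frames]
    if not per_frame_labels:
        return ""
    label, _count = Counter(per_frame_labels).most_common(1)[0]
    return label
```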
When the image content is video content, the target image determination module1515may extract a target image from frame images in the video content. For example, the target image determination module1515may extract target images from the frame images in the video content at preset intervals. Alternatively, for example, when a scene in the video content is changed or an object in the video content is changed, the target image determination module1515may extract a frame image corresponding to the changed scene or a frame image including the changed object, as a target image. In this case, whether a scene in the video content is changed and whether an object in the video content is changed may be determined based on an output value output from the action recognition model1551. Alternatively, when the image content is a set of images, the device1000may determine at least some of the plurality of images as target images. The processor1600may obtain identification information of one or more objects in the image content by executing the object recognition module1520stored in the storage1500. The identification information of an object is information for identifying the object in the image content, and may include, for example, at least one of a visual feature of the object or an identification value of the object. The visual feature of an object may be a feature of a region where the object is located in the target image. In addition, identification values of an object may include, but are not limited to, text indicating a type of the object, such as “human face”, “background”, or “pet”, and an identification value indicating a name of the object, such as “person”, “dog”, “cat”, or “lawn”. Also, an identification value of an object included in a target image may be labeled with a visual feature of the object. The object recognition module1520may obtain identification information of one or more objects in the image content by applying a target image to an object recognition model1552. The object recognition model1552may be an artificial intelligence model trained to identify an object in an image. The object recognition module1520may input a plurality of target images to the object recognition model1552, and obtain, from the object recognition model1552, identification values of objects included in the plurality of target images and/or visual features of the objects. For example, the object recognition module1520may obtain identification values and/or visual features of a person, an object, and a background in the target image. Also, for example, the object recognition model1552may include an object extraction model for obtaining a visual feature of an object in a target image, and an object identification model for obtaining an identification value of an object in a target image. In this case, the object recognition module1520may apply the target image to the object extraction model to obtain a visual feature of an object output from the object extraction model. In addition, the object recognition module1520may apply the visual feature of the object to the object identification model to obtain an identification value of the object output from the object identification model. Also, the object recognition module1520may match the identification value of the object with the visual feature of the object. 
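The two-stage arrangement just described, in which an object extraction model yields a visual feature per object region and an object identification model yields an identification value that is then labeled with that feature, can be sketched as follows. The data types and function signatures are illustrative assumptions rather than the interfaces of the object recognition model1552.

```python
from dataclasses import dataclass
from typing import List, Sequence

@dataclass
class RecognizedObject:
    """Identification information of one object: the visual feature of the
    region where the object is located, labeled with its identification value."""
    visual_feature: Sequence[float]
    identification_value: str  # e.g. "man", "skateboard", "street"

def extract_visual_features(target_image) -> List[Sequence[float]]:
    """Stand-in for the object extraction model: one visual feature per object
    region found in the target image (signature is an assumption)."""
    raise NotImplementedError

def identify_object(visual_feature: Sequence[float]) -> str:
    """Stand-in for the object identification model: an identification value
    for a single visual feature (signature is an assumption)."""
    raise NotImplementedError

def recognize_objects(target_image) -> List[RecognizedObject]:
    """Two-stage recognition: extract visual features, identify each one, and
    label each identification value with the feature it came from."""
    features = extract_visual_features(target_image)
    return [RecognizedObject(feature, identify_object(feature)) for feature in features]
```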
For example, the object recognition module1520may match the identification value of the object with the visual feature of the object by labeling the identification value of the object with the visual feature of the object. Alternatively, for example, a single object recognition model1552may be provided, and in this case, the object recognition module1520may apply the target image to the object recognition model1552to obtain an identification value and a visual feature of an object output from the object recognition model1552. Also, the object recognition module1520may match the identification value of the object with the visual feature of the object. For example, the object recognition module1520may match the identification value of the object with the visual feature of the object by labeling the identification value of the object with the visual feature of the object. Also, the object recognition model1552may output visual features with respect to all of the target images, which may include visual features of objects in the target images. The processor1600may obtain a notification message about the image content by executing the notification message generation module1525stored in the storage1500. The notification message generation module1525may apply, to a notification message generation model1553, an identification value indicating an action and identification information of one or more objects to obtain the notification message about the image content. The notification message generation model1553may be an artificial intelligence model trained to generate a notification message about image content. For example, the notification message generation module1525may input, to the notification message generation model1553, an identification value indicating an action and visual features of one or more objects. Alternatively, for example, the notification message generation module1525may input, to the notification message generation model1553, an identification value indicating an action, visual features of one or more objects, and identification values of the one or more objects. In this case, the identification values of the one or more objects may be labeled with the visual features of the one or more objects, and the visual features of the one or more objects labeled with the identification values of the one or more objects may be input to the notification message generation model1553. In addition, for example, the notification message generation module1525may input visual features of all of target images to the notification message generation model1553, such that visual features of objects in the target images or the visual features of the objects labeled with identification values of the objects are input to the notification message generation model1553. In this case, the visual features of all of the target images may include the visual features of the objects or the visual features of the objects labeled with the identification values of the objects. The notification message generation module1525may obtain notification candidate messages by using the notification message generation model1553, and generate a notification message by using the notification candidate messages. The notification message generation module1525may obtain a notification candidate message for each of the target images. 
The notification message generation module1525may input, to the notification message generation model1553, an identification value indicating an action output from the action recognition model1551and identification information of an object output from the object recognition model1552, and obtain a notification candidate message output from the notification message generation model1553. The notification message generation model1553may output a notification candidate message for each target image. Also, the notification message generation module1525may generate a notification message to be provided to the user, by using a plurality of notification candidate messages corresponding to a plurality of target images, respectively. For example, the notification message generation module1525may obtain first to n-th notification candidate messages for generating a notification message. The notification message generation module1525may input, to the notification message generation model1553, a first target image, an action identification value output from the action recognition model1551, and identification information of objects in the first target image output from the object recognition model1552, and obtain a first notification candidate message output from the notification message generation model1553. In addition, the notification message generation module1525may input, to the notification message generation model1553, a second target image, an action identification value output from the action recognition model1551, and identification information of objects in the second target image output from the object recognition model1552, and obtain a second notification candidate message output from the notification message generation model1553. Also, the notification message generation module1525may input, to the notification message generation model1553, an n-th target image, an action identification value output from the action recognition model1551, and identification information of objects in the n-th target image output from the object recognition model1552, and obtain an n-th notification candidate message output from the notification message generation model1553. In this case, the identification information of the objects in the target images may include identification values of the objects, which may be metadata of the target images, but are not limited thereto. In addition, the notification message generation module1525may generate a notification message by using the first notification candidate message to the n-th notification candidate message. The notification message generation module1525may compare the first notification candidate message to the n-th notification candidate message with each other, and generate the notification message based on a comparison result. For example, the notification message generation module1525may determine a word or phrase representing an object based on the frequencies of words or phrases representing the object in the first notification candidate message to the n-th notification candidate message, and generate the notification message based on the determined word or phrase. Alternatively, the notification message generation module1525may input the first notification candidate message to the n-th notification candidate message to an artificial intelligence model trained to generate a notification message from notification candidate messages, and obtain a notification message output from the artificial intelligence model. 
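One simple reading of the frequency-based combination described above is to favor the candidate message whose words are best supported across all of the candidate messages. The sketch below assumes plain word tokenization and is only an illustration of that idea; the disclosed comparison may group words or phrases of the same meaning differently.

```python
import re
from collections import Counter
from typing import List

def choose_notification_message(candidates: List[str]) -> str:
    """Return the candidate message whose words are best supported by the
    other candidate messages (a simple majority-style comparison)."""
    token_sets = [set(re.findall(r"[a-z']+", c.lower())) for c in candidates]
    # How many candidate messages each word appears in.
    counts = Counter(word for tokens in token_sets for word in tokens)
    def support(tokens: set) -> float:
        return sum(counts[word] for word in tokens) / max(len(tokens), 1)
    best = max(range(len(candidates)), key=lambda i: support(token_sets[i]))
    return candidates[best]

# With the three candidate messages of the FIG. 9 example described later,
# the second candidate is the best supported and is returned:
# choose_notification_message([
#     "A couple of men riding skateboards down a street.",
#     "Two young men riding skateboards down a street.",
#     "A young man riding skateboard on a street.",
# ])  # -> "Two young men riding skateboards down a street."
```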
Accordingly, the device1000may accurately recognize, by using the image content, an action of an object in the image content, and may effectively identify, by using a target image selected from the image content, an object in the target image. Also, the device1000may efficiently generate the notification message about the image content by using the action of the object and the identification information of the object, and may efficiently use a plurality of artificial intelligence models to generate the notification message about the image content. The processor1600may determine a name of an object in a target image by executing the user object identification module1530stored in the storage1500. The user object identification module1530may compare objects in the target image with images stored in the user DBs1560. In the images stored in the user DBs1560, at least one of the user of the device1000or acquaintances of the user may be photographed. For example, the images stored in the user DBs1560may include, but are not limited to, images stored in contact information in the device1000, images captured by the device1000, and images received by the device1000from another device. For example, the user object identification module1530may compare objects in a first target image with objects in the images stored in the user DBs1560, compare objects in a second target image with the objects in the images stored in the user DBs1560, and compare objects in a third target image with the objects in the images stored in the user DBs1560. For example, the user object identification module1530may search the images stored in the user DBs1560for an image including an object corresponding to an object in a target image, by using at least one of identification values or visual features of objects obtained from the object recognition model1552. The user object identification module1530may determine names of objects in target images. The user object identification module1530may identify, from images stored in the device1000, the same object as an object in a target image, and determine a name corresponding to the identified object. For example, the name of the object may be determined based on an address book, or may be determined based on metadata of an image corresponding to the object. When the device1000receives information about a name of the object in the image content from another device, the user object identification module1530may determine the name of the object based on the information received from the other device. The other device may determine the name of the object in the image content based on user information of the other device, and provide the device1000with the information about the name of the object while providing the image content to the device1000. For example, the name of the object in the image content may be included in metadata of the image content, and the other device may transmit the metadata including the name of the object to the device1000together with the image content. Accordingly, the device1000may use the name of the object received from the other device to generate the notification message describing the image content. In this case, the notification message generation module1525may generate the notification message related to the image content by inputting, to the notification message generation model1553, an identification value indicating an action in the image content, and identification information and names of one or more objects in the image content.
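Where the user object identification module1530compares an object in a target image with images stored in the user DBs1560to determine a name, one possible sketch assumes both sides have been reduced to feature vectors and are compared by cosine similarity; the vector representation and the matching threshold are assumptions made for illustration only.

```python
import math
from typing import Dict, Optional, Sequence

def cosine_similarity(a: Sequence[float], b: Sequence[float]) -> float:
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def resolve_object_name(object_feature: Sequence[float],
                        user_db: Dict[str, Sequence[float]],
                        threshold: float = 0.8) -> Optional[str]:
    """Compare the visual feature of an object in a target image with features
    of images stored in the user DBs (for example contact photos keyed by the
    contact's name) and return the stored name when the best match is close
    enough. The 0.8 threshold is an illustrative assumption."""
    if not user_db:
        return None
    name, score = max(((n, cosine_similarity(object_feature, feature))
                       for n, feature in user_db.items()),
                      key=lambda item: item[1])
    return name if score >= threshold else None
```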
The notification message generation module1525may input, to the notification message generation model1553, a target image, a name of an object, an identification value indicating an action, and identification information of the object, and obtain a notification candidate message output from the notification message generation model1553. Also, the notification message generation module1525may generate a notification message to be provided to the user, by using a plurality of notification candidate messages corresponding to a plurality of target images, respectively. The thumbnail generation module1570may generate a thumbnail image related to the notification message related to the image content. The thumbnail generation module1570may generate the thumbnail image related to the notification message by cropping the target image to extract a partial image based on the notification message. For example, when the notification message is “A man is riding a skateboard”, the thumbnail generation module1570may generate the thumbnail image by cropping the target image including the man and the skateboard to obtain a partial image including the man and the skateboard. An example of generating a thumbnail image from a target image by using a notification message will be described in more detail with reference toFIG.12. When the device1000provides image content to another device, the user object identification module1530may determine a name of an object in a target image determined from the image content to be provided to the other device, and the device1000may provide information about the determined name to the other device together with the image content. Accordingly, the device1000may effectively reflect information related to the user in generating a notification message, and may efficiently use a plurality of artificial intelligence models to generate the notification message related to the image content. The processor1600may train the notification message generation model1553by executing the training module1535stored in the storage1500. The training module1535may obtain training images and messages describing the training images, respectively, to be used for training the notification message generation model1553. In addition, the training module1535may extract at least one word or phrase of a predefined part of speech from the messages describing the training images, and use data including the training images and the extracted word or phrase as training data to train the notification message generation model1553. For example, the training module1535may extract verbs from the messages describing the training images, and use data including the training images and the extracted verbs, as training data. For example, the training module1535may input, to the notification message generation model1553, the training data including the training images and the extracted verbs, and train the notification message generation model1553to output messages including the extracted verbs. Although it is described above that the notification message generation model1553is trained based on the extracted verbs, the disclosure is not limited thereto. The notification message generation model1553may be trained based on various parts of speech. Although it is described above that the notification message generation model1553is trained by the device1000, the disclosure is not limited thereto.
The server may train the notification message generation model1553in the same manner as described above, and the device1000may receive, from the server, the notification message generation model1553trained by the server. In addition, the device1000may receive, from the server, the notification message generation model1553refined by the server. The training module1535may retrain the notification message generation model1553received from the server to refine it. In this case, the training module1535may retrain the notification message generation model1553by using data stored in the user DBs1560of the device1000. The training module1535may train the action recognition model1551and the object recognition model1552, or may receive, from the server, the action recognition model1551and the object recognition model1552, which are trained by the server. In addition, the training module1535may retrain the action recognition model1551and the object recognition model1552, or may receive, from the server, the action recognition model1551and the object recognition model1552, which are refined by the server. According to an embodiment of the disclosure, a plurality of action recognition models1551may be used by the device1000, and may correspond to a plurality of domains, respectively. The domain indicates a field to which image content is related, and may be preset according to, for example, the type of another device that transmitted the image content, the type of a service that uses a notification message related to the image content, the type of the image content, the category of the image content, and the like. In addition, the action recognition models1551may be trained for respective domains. In this case, the action recognition models1551may be models trained by using target images extracted from image content related to the respective domains, and ground truth data corresponding to the target images. Also, according to an embodiment of the disclosure, a plurality of object recognition models1552may be used by the device1000, and may correspond to a plurality of domains, respectively. The domain indicates a field to which image content is related, and may be preset according to, for example, the type of another device that transmitted the image content, the type of a service that uses a notification message related to the image content, the type of the image content, the category of the image content, and the like. In addition, the object recognition models1552may be trained for respective domains. In this case, the object recognition models1552may be models trained by using target images extracted from image content related to the respective domains, and ground truth data corresponding to the target images. In addition, according to an embodiment of the disclosure, a plurality of notification message generation models1553may be used by the device1000, and may correspond to a plurality of domains, respectively. The domain indicates a field to which image content is related, and may be preset according to, for example, the type of another device that transmitted the image content, the type of a service that uses a notification message related to the image content, the type of the image content, the category of the image content, and the like. In addition, the notification message generation models1553may be trained for respective domains.
In this case, the notification message generation models1553may be models trained by using target images extracted from image content related to the respective domains, and ground truth data corresponding to the target images. The processor1600may identify a domain corresponding to image content by executing the domain identification module1540stored in the storage1500. For example, based on an identification value of another device that provided the image content, the domain identification module1540of the device1000may identify the domain corresponding to the other device as the domain of the image content. In this case, the domain corresponding to the other device may be preset. In addition, for example, the domain identification module1540may identify the domain of the image content by inputting the image content or an image extracted from the image content to a separate artificial intelligence model trained to identify a domain of image content. Also, for example, when the device1000receives a text message together with the image content, the domain identification module1540may identify the domain of the image content by using the received text message. In addition, for example, the domain identification module1540may determine the domain of the image content by using an output value from the action recognition model1551. In this case, the action recognition model1551used to determine the domain may be a general-purpose artificial intelligence model, which is not specialized in a certain domain. In addition, the processor1600may select a model corresponding to the domain of the image content from among a plurality of models by executing the model selection module1545stored in the storage1500. For example, the model selection module1545may select the action recognition model1551corresponding to the domain of the image content from among the plurality of action recognition models1551. Accordingly, the action recognition module1510may identify an action of the image content by using the selected action recognition model1551. Also, for example, the model selection module1545may select the object recognition model1552corresponding to the domain of the image content from among the plurality of object recognition models1552. The object recognition module1520may recognize an object in a target image by using the object recognition model1552selected by the model selection module1545. Also, for example, the model selection module1545may select the notification message generation model1553corresponding to the domain of the image content from among the plurality of notification message generation models1553. Accordingly, the notification message generation module1525may generate a notification message describing the image content by using the selected notification message generation model1553. The user DBs1560may include a contact information DB1561and an image DB1562. The contact information DB1561may store information about contacts of acquaintances of the user. For example, the contact information DB1561may store names and images of the acquaintances of the user. In addition, the image DB1562may store images in which at least one of the user or the acquaintances of the user is photographed. For example, the images stored in the image DB1562may include images stored in contact information in the device1000, images captured by the device1000, and images received by the device1000from another device or a server.
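The domain identification and model selection described above can be sketched as a simple lookup from an identification value of the sending device to a preset domain, and from the domain to its action recognition, object recognition, and notification message generation models. The registry shape and the domain and device names below are assumptions for illustration.

```python
from dataclasses import dataclass
from typing import Dict, Optional

@dataclass
class ModelBundle:
    action_recognition_model: Optional[object] = None       # domain-specific model 1551
    object_recognition_model: Optional[object] = None       # domain-specific model 1552
    notification_generation_model: Optional[object] = None  # domain-specific model 1553

# Hypothetical presets: which domain a sending device type maps to, and which
# models are registered for each domain. The keys are illustrative only.
DEVICE_TO_DOMAIN: Dict[str, str] = {"cctv": "home_care", "pet_camera": "pet_care"}
DOMAIN_TO_MODELS: Dict[str, ModelBundle] = {
    "general": ModelBundle(),    # general-purpose fallback models
    "home_care": ModelBundle(),
    "pet_care": ModelBundle(),
}

def select_models(sender_device_type: str) -> ModelBundle:
    """Identify the domain of the image content from the device that sent it
    and select the models registered for that domain, falling back to the
    general-purpose models when no domain-specific models are registered."""
    domain = DEVICE_TO_DOMAIN.get(sender_device_type, "general")
    return DOMAIN_TO_MODELS.get(domain, DOMAIN_TO_MODELS["general"])
```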
For example, the image stored in the image DB1562may include metadata about a name of an object in the image. Although it is described above that the device1000receives the image content from the other device and generates the notification message, the disclosure is not limited thereto. The other device may provide the image content to a server, and the server may generate a notification message related to the image content and transmit the notification message to the device1000. In this case, the device1000may set a condition for receiving a notification message. For example, the device1000may set a condition for receiving, from the server, a notification message related to image content generated by the server when a certain object is included in the image content, when an object in the image content is performing a certain action, and when the image content is related to a certain domain. In this case, the server may include modules and models that perform the same functions as those of the modules and models stored in the storage1500of the device1000illustrated inFIG.2. Alternatively, for example, the role of at least some of the action recognition module1510, the target image determination module1515, the object recognition module1520, the notification message generation module1525, the user object identification module1530, the training module1535, the domain identification module1540, the model selection module1545, the thumbnail generation module1570, and the artificial intelligence models1550may be performed by the server. In this case, the device1000may transmit and receive, to and from the server, information necessary for the server to perform the function of at least some of the action recognition module1510, the target image determination module1515, the object recognition module1520, the notification message generation module1525, the user object identification module1530, the training module1535, the domain identification module1540, the model selection module1545, the thumbnail generation module1570, and the artificial intelligence models1550. FIG.3is a diagram illustrating image content and target images, according to an embodiment of the disclosure. Referring toFIG.3, the image content may be video content30including a plurality of frame images, and the video content30may include frame images showing two men riding skateboards. In addition, target images32may be selected from among the frame images in the video content30. The target images32may be selected according to a predefined criterion. For example, target images34,35, and36may be extracted from the video content30at preset frame intervals. Alternatively, for example, when objects in the video content30are changed, the target images32including the changed object may be extracted from the video content30. FIG.4is a diagram illustrating an example in which an action of an object in image content is identified by the action recognition model1551, according to an embodiment of the disclosure. Referring toFIG.4, the image content may be input to the action recognition model1551. When the image content is video content, the video content may be input to the action recognition model1551. For example, all frame images in the video content may be input to the action recognition model1551. Alternatively, when the image content is a set of images, the set of images may be input to the action recognition model1551. 
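The selection criteria of FIG.3, extraction at preset frame intervals or whenever the objects in the video content change, can be sketched as follows. The interval value and the precomputed per-frame object sets are assumptions made for illustration.

```python
from typing import List, Optional, Sequence, Set

def extract_target_images(frames: Sequence, interval: int = 30) -> List:
    """Extract target images from the frame images of video content at a
    preset frame interval (30 frames is an illustrative assumption)."""
    return [frame for index, frame in enumerate(frames) if index % interval == 0]

def extract_on_object_change(frames: Sequence,
                             objects_per_frame: Sequence[Set[str]]) -> List:
    """Alternative criterion: keep a frame as a target image whenever the set
    of objects recognized in it differs from that of the previous frame. The
    per-frame object sets are assumed to be available, for example from the
    output of the recognition step."""
    targets: List = []
    previous: Optional[Set[str]] = None
    for frame, objects in zip(frames, objects_per_frame):
        if objects != previous:
            targets.append(frame)
        previous = objects
    return targets
```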
For example, by inputting, to the action recognition model1551, a file of the image content or feature data obtained from the image content, the image content may be input to the action recognition model1551. In addition, the action recognition model1551may output actions of objects in the image content. When the image content is video content, the action recognition model1551may output, based on actions of objects recognized from respective frame images in the video content, action identification values indicating the actions of the objects in the video content. When the image content is a set of images, the action recognition model1551may output, based on actions of objects recognized from the respective images, action identification values indicating the actions of the objects in the set of images. An action identification value indicating an action of an object may be text data. FIG.5is a diagram illustrating an example in which actions of people in the video content30are identified, according to an embodiment of the disclosure. Referring toFIG.5, the video content30may be input to the action recognition model1551. The video content30may include frame images showing two men riding skateboards. In this case, the action recognition model1551may output “riding”, which is an action identification value indicating an action of the men in the video content30. FIG.6Ais a diagram illustrating an example in which an object in a target image is identified by the object recognition model1552, according to an embodiment of the disclosure. Referring toFIG.6A, the target image may be input to the object recognition model1552, and identification information of an object may be output from the object recognition model1552. For example, the identification information of the object may include at least one of an identification value of the object or a visual feature of the object. The object recognition model1552may be an artificial intelligence model trained to identify an object in an image. The device1000may input a plurality of target images to the object recognition model1552, and obtain, from the object recognition model1552, identification information of objects included in the plurality of target images. For example, the object recognition model1552may output identification values and/or visual features of a person, an object, and a background in the target image. In addition, for example, by inputting, to the object recognition model1552, a file of the target image or feature data obtained from the target image, the target image may be input to the object recognition model1552. FIG.6Bis a diagram illustrating an example in which an object in a target image is identified by the object recognition model1552including a plurality of artificial intelligence models, according to an embodiment of the disclosure. Referring toFIG.6B, the target image may be input to an object extraction model1552-1in the object recognition model1552, and a visual feature of an object in the target image may be output from the object extraction model1552-1. The object extraction model1552-1may be an artificial intelligence model trained to extract an object in a target image. In addition, the visual feature of the object output from the object extraction model1552-1may be input to an object identification model1552-2, and an identification value of the object may be output from the object identification model1552-2. 
The object identification model1552-2may be an artificial intelligence model trained to identify an identification value of an object from a visual feature of the object. Accordingly, the visual feature and the identification value of the object may be output from the object recognition model1552, and the identification value of the object may be labeled with the visual feature of the object. FIG.7is a diagram illustrating an example in which identification information of objects is obtained from the target images34,35, and36, according to an embodiment of the disclosure. Referring toFIG.7, the target images34,35, and36may be input to the object recognition model1552. Then, “man”, “man”, “skateboard”, “skateboard”, and “street”, which are identification values of objects in the target image34, may be output from the object recognition model1552, “boy”, “boy”, “skateboard”, “skateboard”, and “street”, which are identification values of objects in the target image35, may be output from the object recognition model1552, and “boy”, “skateboard”, and “street”, which are identification values of objects in the target image36, may be output from the object recognition model1552. In addition, visual features of the objects in the target image34, visual features of the objects in the target image35, and visual features of the objects in the target image36may be output from the object recognition model1552, respectively. In addition, the identification values of the objects may be labeled with the visual features of the objects, respectively. FIG.8is a diagram illustrating an example in which a notification message is generated by the notification message generation module1525, according to an embodiment of the disclosure. Referring toFIG.8, an identification value indicating an action of an object output from the action recognition model1551and identification information of the object output from the object recognition model1552may be input to the notification message generation model1553. For example, “riding” indicating an action output from the action recognition model1551and the visual features of the objects in the target image34may be input to the notification message generation model1553. Alternatively, for example, in addition to “riding” indicating the action output from the action recognition model1551, and the visual features of the objects in the target image34, “man”, “man”, “skateboard”, “skateboard”, “street”, which are identification values of the objects may be additionally input to the notification message generation model1553. Also, for example, “riding” indicating the action output from the action recognition model1551and the visual features of the objects in the target image35may be input to the notification message generation model1553. Alternatively, for example, in addition to “riding” indicating the action output from the action recognition model1551, and the visual features of the objects in the target image35, “boy”, “boy”, “skateboard”, “skateboard”, “street”, which are identification values of the objects may be additionally input to the notification message generation model1553. Also, for example, “riding” indicating the action output from the action recognition model1551and the visual features of the objects in the target image36may be input to the notification message generation model1553.
Alternatively, for example, in addition to “riding” indicating the action output from the action recognition model1551, and the visual features of the objects in the target image36, “boy”, “skateboard”, and “street”, which are identification values of the objects may be additionally input to the notification message generation model1553. In addition, a notification candidate message corresponding to the target image34, a notification candidate message corresponding to the target image35, and a notification candidate message corresponding to the target image36may be output from the notification message generation model1553. AlthoughFIG.8illustrates that a plurality of target images are input to the notification message generation model1553, when only one target image is used to generate a notification message, a notification candidate message generated by the notification message generation model1553from the target image may be determined as a notification message. FIG.9is a diagram illustrating an example in which a notification message is generated from a plurality of notification candidate messages, according to an embodiment of the disclosure. Referring toFIG.9, an action identification value corresponding to the target image34may be “riding”, and identification values of objects in identification information of the objects in the target image34may be “man”, “man”, “skateboard”, “skateboard”, and “street”. When the action identification value corresponding to the target image34and the identification information of the objects in the target image34are input to the notification message generation model1553, “A couple of men riding skateboards down a street.”, which is a first notification candidate message, may be obtained. In addition, an action identification value corresponding to the target image35may be “riding”, and identification values of objects in identification information of the objects in the target image35may be “boy”, “boy”, “skateboard”, “skateboard”, and “street”. When the action identification value corresponding to the target image35and the identification information of the objects in the target image35are input to the notification message generation model1553, “Two young men riding skateboards down a street.”, which is a second notification candidate message, may be obtained. In addition, an action identification value corresponding to the target image36may be “riding”, and identification values of objects in identification information of the objects in the target image36may be “boy”, “skateboard”, and “street”. When the action identification value corresponding to the target image36and the identification information of the objects in the target image36are input to the notification message generation model1553, “A young man riding skateboard on a street.”, which is a third notification candidate message, may be obtained. The notification candidate messages generated from the respective target images may be compared with each other, and a notification message may be generated based on a comparison result. 
For example, words in “A couple of men riding skateboards down a street.”, which is the first notification candidate message, words in “Two young men riding skateboards down a street.”, which is the second notification candidate message, and words in “A young man riding skateboard on a street.”, which is the third notification candidate message, may be compared with each other, and words to be included in the notification message may be determined based on the frequencies of words of the same meanings, respectively. Based on the comparison result, a notification message “Two young men riding skateboards down a street.” may be generated. Alternatively, for example, the notification message generation module1525may generate a notification message by using an artificial intelligence model trained to generate a notification message from notification candidate messages. In this case, the first notification candidate message “A couple of men riding skateboards down a street.”, the second notification candidate message “Two young men riding skateboards down a street.”, and the third notification candidate message “A young man riding skateboard on a street.” may be input to the artificial intelligence model, and a notification message “Two young men riding skateboards down a street.” may be output from the artificial intelligence model. FIG.10Ais a diagram illustrating an example in which a notification message is generated considering user information, according to an embodiment of the disclosure. Referring toFIG.10A, the device1000may activate or deactivate a function of generating a notification message considering user information. When the function of generating a notification message considering user information is activated, the device1000may input, to the notification message generation model1553, identification information102of objects in a target image100, an identification value104of an action related to the target image100, and a name106of an object in the target image100, which is generated based on the user information. Accordingly, a message “Mother cleans the floor in front of a television.” may be output from the notification message generation model1553. In addition, when the function of generating a notification message considering user information is deactivated, the device1000may input, to the notification message generation model1553, the identification information102of the objects in the target image100, and the identification value104of the action related to the target image100. In this case, the device1000may not input, to the notification message generation model1553, the name106of the object in the target image100, which is generated based on the user information. Accordingly, a message “A woman cleans the floor in front of a television.” may be output from the notification message generation model1553. FIG.10Bis a diagram illustrating a GUI for setting preferences related to generation of a notification message, according to an embodiment of the disclosure. Referring toFIG.10B, a user may use the GUI for setting the preferences related to generation of a notification message to set which information to use to generate a notification message. For example, the GUI for setting the preferences related to generation of a notification message may include objects for setting whether to use identification information of an object, setting whether to recognize an action of an object, and setting whether to use user information for generating a notification message.
For example, the objects for setting whether to use identification information of an object may include a button for selecting whether to use identification information of an object and a slider for adjusting the degree of use of identification information of an object. In addition, for example, the objects for setting whether to recognize an action of an object may include a button for selecting whether to recognize an action of an object and a slider for adjusting the degree of recognition of an action of an object. In addition, for example, the objects for setting whether to use user information for generating a notification message may include a button for selecting whether to use user information for generating a notification message and a slider for adjusting the degree of use of user information. FIG.11is a diagram illustrating an example in which the notification message generation model1553is trained, according to an embodiment of the disclosure. Referring toFIG.11, the device1000may obtain a training image110and a message112describing the training image110to be used for training the notification message generation model1553. In addition, the device1000may extract, from the message112describing the training image110, a word or phrase114representing an action of an object in the training image110. The device1000may extract, from the message112describing the training image110, the word or phrase114representing the action of the object in the training image110by using, for example, a rule-based syntax analyzer. For example, the word or phrase114representing the action of the object in the training image110may be a verb or a verb phrase, but is not limited thereto. The device1000may train the notification message generation model1553by using the training image110and the extracted word or phrase114, as training data116. For example, the device1000may input, to the notification message generation model1553, the training data116including the training image110and the extracted word or phrase114, and train the notification message generation model1553to output a message118including the extracted word or phrase114. For example, the notification message generation model1553may be trained to output a notification message in which the word or phrase114extracted from the message112is arranged at a position of a verb or a verb phrase. Although it is described above that the notification message generation model1553is trained by using the training data116including the training image110, the disclosure is not limited thereto. For example, instead of the training image110, identification information of objects in the training image110may be included in the training data116. In this case, the word or phrase114extracted from the message112and the identification information of the objects in the training image110may be input, as the training data116, to the notification message generation model1553. Then, the notification message generation model1553may be trained to output a notification message in which the word or phrase114extracted from the message112is arranged at a position of a verb or a verb phrase. The identification information of the objects in the training image110for use as the training data116may be obtained by inputting the training image110to the object recognition model1552. Alternatively, the device1000may obtain the identification information of the objects in the training image110from the server.
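A sketch of the training-data preparation of FIG.11: a word representing an action is extracted from each message describing a training image and paired with that image. A small fixed verb list stands in for the rule-based syntax analyzer, and the tuple layout is an assumption made for illustration.

```python
import re
from typing import List, Optional, Tuple

# A tiny stand-in for the rule-based syntax analyzer mentioned in the
# description (the word list is an illustrative assumption).
KNOWN_VERBS = {"cleans", "riding", "rides", "running", "plays", "eating"}

def extract_action_word(message: str) -> Optional[str]:
    """Extract the first word representing an action from a message that
    describes a training image, e.g. "cleans" from
    "Mother cleans the floor in front of a television."."""
    for word in re.findall(r"[A-Za-z']+", message):
        if word.lower() in KNOWN_VERBS:
            return word.lower()
    return None

def build_training_data(images_and_messages: List[Tuple[object, str]]) -> List[Tuple[object, str, str]]:
    """Pair each training image with the extracted action word and the message
    the notification message generation model should learn to output."""
    training_data = []
    for image, message in images_and_messages:
        action = extract_action_word(message)
        if action is not None:
            training_data.append((image, action, message))
    return training_data
```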
FIG.12is a diagram illustrating an example in which the device1000generates a thumbnail image related to image content, according to an embodiment of the disclosure. Referring toFIG.12, when a notification message is generated from a target image, the device1000may crop the target image to obtain a partial image including main objects by using words in the notification message and identification information of objects obtained from the target image. For example, when a notification message122is generated from a target image120, the device1000may identify words “mother” and “television” in the notification message122, and identify a region123corresponding to “mother” and a region124corresponding to “television” from the target image120, based on identification information of objects obtained from the target image120by using the object recognition model1552. For example, the device1000may identify “mother” and “television”, which are nouns indicating objects in the notification message122, identify the objects corresponding to “mother” and “television” from the target image120, and crop the target image120to obtain a partial image in which a region including the identified objects is photographed. For example, the region123corresponding to “mother” and the region124corresponding to “television” may be identified by using the identification information obtained from the target image120and user information for determining names of objects, but the disclosure is not limited thereto. In addition, the device1000may obtain a thumbnail image126related to the notification message by cropping the target image120to obtain a partial image in which a region125including the region123corresponding to “mother” and the region124corresponding to “television” is photographed. FIG.13is a diagram illustrating an example in which the device1000shares image content with another device, according to an embodiment of the disclosure. Referring toFIG.13, the device1000may provide a notification message for sharing, with another device, a group of images related to the same action or similar actions among a plurality of images stored in the device1000. The device1000may select, from among the plurality of images stored in the device1000, images in which a particular person performing a particular action is photographed, and generate a notification message for providing the selected images to the particular person. The device1000may obtain notification candidate messages related to the selected images by inputting, to the notification message generation model1553, actions related to objects in the respective selected images and identification information of the objects in the respective selected images, and obtain a notification message from the notification candidate messages. For example, the device1000may select, from among images130,132,134, and136stored in the device1000, the images130,132, and134in which “mother” who is cleaning is photographed, and generate a notification message describing the selected images. For example, the device1000may generate a message “Mother cleans the floor in front of a television”. In addition, the device1000may provide the user of the device1000with a GUI for sharing the selected images with a person photographed in the selected images. For example, the GUI for sharing the selected images may include, but is not limited to, a region where the selected images are displayed, a region where the recipient is displayed, and a region where a message is displayed.
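Returning to the cropping of FIG.12, and assuming each recognized object carries a bounding box, the thumbnail region can be taken as the smallest box containing every object named in the notification message. The box representation and the word matching below are assumptions made for illustration.

```python
from typing import Dict, Optional, Tuple

Box = Tuple[int, int, int, int]  # left, top, right, bottom, in pixels

def union_box(a: Box, b: Box) -> Box:
    """Smallest box containing both boxes."""
    return (min(a[0], b[0]), min(a[1], b[1]), max(a[2], b[2]), max(a[3], b[3]))

def thumbnail_region(notification_message: str,
                     object_boxes: Dict[str, Box]) -> Optional[Box]:
    """Find the objects (e.g. "mother", "television") that are named in the
    notification message and return the smallest region containing all of
    them; the target image is then cropped to this region."""
    words = {word.strip(".,").lower() for word in notification_message.split()}
    region: Optional[Box] = None
    for name, box in object_boxes.items():
        if name.lower() in words:
            region = box if region is None else union_box(region, box)
    return region

# Example (the box values are assumptions):
# thumbnail_region("Mother cleans the floor in front of a television.",
#                  {"mother": (40, 60, 220, 400), "television": (300, 80, 520, 300)})
# -> (40, 60, 520, 400)
```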
FIG.14is a flowchart of a method, performed by the device1000, of generating a notification message, according to an embodiment of the disclosure. In operation S1400, the device1000may obtain image content including a plurality of images. The device1000may receive the image content transmitted from another device. For example, the image content may be video content or a set of images. When the image content is video content, a plurality of images in the image content may be a plurality of frame images in the video content. The device1000may receive the image content from the other device via a communication application installed in the device1000. The communication application may include, for example, a chat application and a messaging application. In operation S1410, the device1000may identify an action related to one or more objects in the image content by applying the image content to the action recognition model1551. The action recognition model1551may be an artificial intelligence model trained to identify an action related to an object in image content. When the image content is video content, the device1000may apply, to the action recognition model1551, the entire video content or frames related to a particular scene in the video content, to identify actions of objects that are determined based on the frames in the video content. When the image content is a set of images, the device1000may apply the set of images to the action recognition model1551, to identify actions of objects that are determined based on the set of images. Examples of actions related to an object in image content may include “riding”, “cleaning”, and “birthday party”, but are not limited thereto. In operation S1420, the device1000may determine one or more target images to be used to identify one or more objects in the image content. When the image content is video content, the device1000may extract a target image from frame images in the video content. For example, the device1000may extract target images from the frame images in the video content at preset intervals. Alternatively, for example, when a scene in the video content is changed or an object in the video content is changed, the device1000may extract a frame image corresponding to the changed scene or a frame image including the changed object, as a target image. In this case, whether a scene in the video content is changed and whether an object in the video content is changed may be determined based on an output value output from the action recognition model1551. Alternatively, when the image content is a set of images, the device1000may determine at least some of the plurality of images as target images. In operation S1430, the device1000may obtain identification information of one or more objects in the image content by applying the target images to the object recognition model1552. The object recognition model1552may be an artificial intelligence model trained to identify an object in an image. The device1000may input a plurality of target images to the object recognition model1552, and obtain, from the object recognition model1552, identification information of objects included in the plurality of target images. For example, the device1000may obtain identification information of a person, an object, and a background in the target image. The device1000may obtain at least one of identification values or visual features of one or more objects in the image content by applying the target image to the object recognition model1552.
The visual feature of an object may be a feature of a region where the object is located in the target image. Also, an identification value of an object included in a target image may be labeled with a visual feature of the object. In operation S1440, the device1000may obtain a notification message related to the image content, by applying, to the notification message generation model1553, an identification value indicating an action and identification information of one or more objects. The notification message generation model1553may be an artificial intelligence model trained to generate a notification message related to image content. For example, the device1000may input, to the notification message generation model1553, an identification value indicating an action output from the action recognition model1551, identification information of an object output from the object recognition model1552, and the target image, and obtain a notification candidate message output from the notification message generation model1553. The notification message generation model1553may output a notification candidate message for each target image. In addition, the device1000may generate a notification message to be provided to the user, by using a plurality of notification candidate messages corresponding to a plurality of target images, respectively. In operation S1450, the device1000may output the notification message. When the image content is received through the communication application, a notification message describing the received image content may be displayed on the screen of the device1000. For example, when the device1000is locked, the notification message describing the image content may be displayed on a lock screen of the device1000. In addition, for example, when an execution screen of the communication application having received the image content is not activated on the screen of the device1000, the notification message describing the image content may be displayed through a tray window or a pop-up window. FIG.15is a flowchart of a method, performed by the device1000, of generating a notification message, according to an embodiment of the disclosure. In operation S1500, the device1000may obtain a first notification candidate message generated from a first target image. The device1000may input, to the notification message generation model1553, an identification value of an action output from the action recognition model1551and identification information of objects in the first target image output from the object recognition model1552, and obtain a first notification candidate message output from the notification message generation model1553. In operation S1510, the device1000may obtain a second notification candidate message generated from a second target image. The device1000may input, to the notification message generation model1553, an identification value of an action output from the action recognition model1551and identification information of objects in the second target image output from the object recognition model1552, and obtain a second notification candidate message output from the notification message generation model1553. In operation S1520, the device1000may obtain an n-th notification candidate message generated from an n-th target image. 
The device1000may input, to the notification message generation model1553, an identification value of an action output from the action recognition model1551and identification information of objects in the n-th target image output from the object recognition model1552, and obtain an n-th notification candidate message output from the notification message generation model1553. In operation S1530, the device1000may generate a notification message by using the first notification candidate message to the n-th notification candidate message. The device1000may compare the first notification candidate message to the n-th notification candidate message with each other, and generate the notification message based on a comparison result. For example, the device1000may determine a word or phrase representing an object based on the frequencies of words or phrases representing the object in the first notification candidate message to the n-th notification candidate message. Then, the device1000may generate the notification message based on the determined word or phrase. Alternatively, the device1000may input the first notification candidate message to the n-th notification candidate message to an artificial intelligence model trained to generate a notification message, and obtain a notification message output from the artificial intelligence model. FIG.16is a flowchart of a method, performed by the device1000, of generating a notification message considering user information, according to an embodiment of the disclosure. Operations S1600to S1630correspond to operations S1400to S1430, respectively, and thus a description of operations S1600to S1630will be omitted for convenience of description. In operation S1640, the device1000may compare objects in the target images with images stored in the device1000. In the images stored in the device1000, at least one of the user of the device1000or acquaintances of the user may be photographed. For example, the images stored in the device1000may include, but are not limited to, images stored in contact information in the device1000, images captured by the device1000, and images received by the device1000from another device. For example, the device1000may compare objects in a first target image with objects in the images stored in the device1000, compare objects in a second target image with the objects in the images stored in the device1000, and compare objects in a third target image with the objects in the images stored in the device1000. In operation S1650, the device1000may determine names of the objects in the target images. The device1000may identify, from the images stored in the device1000, the same object as an object in a target image, and identify a name corresponding to the identified object. For example, the name of the object may be determined based on an address book, or may be determined based on metadata of an image corresponding to the object. When the device1000receives information about a name of the object in the image content from another device, the device1000may determine the name of the object based on the information received from the other device. The other device may determine the name of the object in the image content based on user information of the other device, and provide the device1000with the information about the name of the object while providing the image content to the device1000.
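Referring back to operation S1530, one possible, non-limiting reading of the frequency-based selection among the first to n-th notification candidate messages is sketched below; the function name and the word-level scoring are hypothetical and only illustrate the idea of favoring the wording that appears most often across the candidates.

from collections import Counter

def merge_candidate_messages(candidates):
    # Count how often each word occurs across the first to n-th notification
    # candidate messages, then keep the candidate whose wording agrees most,
    # on average, with the other candidates.
    word_counts = Counter(word for message in candidates for word in message.lower().split())
    def agreement(message):
        words = message.lower().split()
        return sum(word_counts[word] for word in words) / len(words)
    return max(candidates, key=agreement)

candidates = [
    "Mother cleans the floor in front of a television",
    "A woman cleans the floor",
    "Mother cleans the floor near the television",
]
notification_message = merge_candidate_messages(candidates)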
For example, the name of the object in the image content may be included in metadata of the image content, and the other device may transmit the metadata including the name of the object to the device1000together with the image content. Accordingly, the device1000may use the name of the object received from the other device to generate the notification message describing the image content. In operation S1660, the device1000may generate a notification message related to the image content by inputting, to the notification message generation model1553, an identification value indicating an action in the image content, and identification information and names of one or more objects in the image content. The device1000may input, to the notification message generation model1553, an identification value indicating an action output from the action recognition model1551, identification information of an object output from the object recognition model1552, and the names of the objects, and obtain a notification candidate message output from the notification message generation model1553. The notification message generation model1553may output a notification candidate message for each target image. In addition, the device1000may generate a notification message to be provided to the user, by using a plurality of notification candidate messages corresponding to a plurality of target images, respectively. In operation S1670, the device1000may output the notification message. When the image content is received through the communication application, a notification message describing the received image content may be displayed on the screen of the device1000. FIG.17is a flowchart of a method, performed by the device1000, of providing another device with a name of an object in image content that is to be also transmitted to the other device, according to an embodiment of the disclosure. In operation S1700, the device1000may obtain the image content to be transmitted to the other device. The device1000may obtain the image content including a plurality of images. For example, the image content may be video content or a set of images. The device1000may generate image content or extract image content stored in the device1000in order to provide the image content to the other device via the communication application installed in the device1000. In operation S1710, the device1000may determine one or more target images to be used to identify one or more objects in the image content. When the image content is video content, the device1000may extract a target image from frame images in the video content. For example, the device1000may extract target images from the frame images in the video content at preset intervals. Alternatively, for example, when a scene in the video content is changed or an object in the video content is changed, the device1000may extract a frame image corresponding to the changed scene or a frame image including the changed object, as a target image. Alternatively, when the image content is a set of images, the device1000may determine at least some of the plurality of images as target images. In operation S1720, the device1000may obtain identification information of one or more objects in the image content by applying the target images to the object recognition model1552. The object recognition model1552may be an artificial intelligence model trained to identify an object in an image. 
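Referring back to the name determination of operations S1640 and S1650 (and, below, operations S1730 and S1740), matching an object in a target image against images stored in contact information might be realized as in the following sketch. The feature vectors, the cosine-similarity comparison, and the threshold value are assumptions for illustration; how such vectors are computed is not specified here.

import numpy as np

def determine_object_name(object_vector, contact_gallery, threshold=0.6):
    # Compare an object from a target image (represented here as a feature
    # vector) with vectors computed from images stored in contact information,
    # and return the contact name of the closest match, if any.
    best_name, best_score = None, threshold
    for name, stored_vector in contact_gallery:
        score = float(np.dot(object_vector, stored_vector) /
                      (np.linalg.norm(object_vector) * np.linalg.norm(stored_vector)))
        if score > best_score:
            best_name, best_score = name, score
    return best_name  # None when no stored image is similar enough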
The device1000may input a plurality of target images to the object recognition model1552, and obtain, from the object recognition model1552, identification information of objects included in the plurality of target images. For example, the device1000may obtain identification information of a person, an object, and a background in the target image. In operation S1730, the device1000may compare objects in the target images with the images stored in the device1000. In the images stored in the device1000, at least one of the user of the device1000or acquaintances of the user may be photographed. For example, the images stored in the device1000may include, but are not limited to, images stored in contact information in the device1000, images captured by the device1000, and image received by the device1000from another device. For example, the device1000may compare objects in a first target image with objects in the images stored in the device1000, compare objects in a second target image with the objects in the images stored in the device1000, and compare objects in a third target image with the objects in the images stored in the device1000. In operation S1740, the device1000may determine names of the objects in the target images. The device1000may identify, from the images stored in the device1000, the same object as an object in a target image, and identify a name corresponding to the identified object. For example, the name of the identified object may be determined based on an address book, or may be determined based on metadata of an image corresponding to the identified object. In operation S1750, the device1000may transmit the image content and the names of the objects to the other device. The names of the objects may be included in metadata of the image content, and the device1000may transmit, to the other device, the metadata including the names of the objects together with the image content. The names of the objects transmitted to the other device may be used by the other device to generate a notification message describing the image content. Functions related to artificial intelligence according to the disclosure are operated by a processor and a memory. The processor may include one or more processors. In this case, the one or more processors may be a general-purpose processor such as a central processing unit (CPU), an application processor (AP), or a digital signal processor (DSP), a dedicated graphics processor such as a graphics processing unit (GPU) or a vision processing unit (VPU), or a dedicated artificial intelligence processor such as a neural processing unit (NPU). The one or more processors may perform control to process input data according to predefined operation rules or an artificial intelligence model stored in the memory. Alternatively, in a case where the one or more processors are dedicated artificial intelligence processors, the dedicated artificial intelligence processor may be designed with a hardware structure specialized for processing a particular artificial intelligence model. The predefined operation rules or artificial intelligence model is generated via a training process. Here, being generated via a training process may mean that the predefined operation rules or artificial intelligence model set to perform desired characteristics (or purposes), is generated by training a basic artificial intelligence model by using a learning algorithm that utilizes a large amount of training data. 
The training process may be performed by a device itself on which artificial intelligence according to the disclosure is performed, or by a separate server and/or system. Examples of the learning algorithm may include, but are not limited to, supervised learning, unsupervised learning, semi-supervised learning, or reinforcement learning. The artificial intelligence model may include a plurality of neural network layers. Each of the plurality of neural network layers has a plurality of weight values, and performs a neural network operation through an operation between an operation result of a previous layer and the plurality of weight values. The plurality of weight values in each of the plurality of neural network layers may be optimized by a result of training the artificial intelligence model. For example, the plurality of weight values may be refined to reduce or minimize a loss or cost obtained by the artificial intelligence model during the training process. An artificial neural network may include a deep neural network (DNN), and may be, for example, a convolutional neural network (CNN), a DNN, a recurrent neural network (RNN), a restricted Boltzmann machine (RBM), a deep belief network (DBN), a bidirectional recurrent DNN (BRDNN), or a deep Q-network, but is not limited thereto. In the method, performed by the device1000, of generating a notification message according to the disclosure, for recognizing an action of an object from image content and identifying the object in a target image, the image content may be used as input data of the action recognition model1551and the target image may be used as input data of the object recognition model1552. In addition, the action recognition model1551may recognize an action of an object in the image content and output data, and the object recognition model1552may identify the object in the target image and output data. The artificial intelligence model may be generated via a training process. Here, being generated via a training process may mean that the predefined operation rules or artificial intelligence model set to perform desired characteristics (or purposes), is generated by training a basic artificial intelligence model by using a learning algorithm that utilizes a large amount of training data. The artificial intelligence model may include a plurality of neural network layers. Each of the plurality of neural network layers has a plurality of weight values, and performs a neural network operation through an operation between an operation result of a previous layer and the plurality of weight values. Visual understanding is a technology for recognizing and processing objects as in human vision and includes object recognition, object tracking, image retrieval, human recognition, scene recognition, three-dimensional (3D) reconstruction/localization, image enhancement, etc. An embodiment of the disclosure may be implemented as a recording medium including computer-executable instructions such as a computer-executable program module. A computer-readable medium may be any available medium which is accessible by a computer, and may include a volatile or non-volatile medium and a removable or non-removable medium. Also, the computer-readable media may include computer storage media and communication media. 
The computer storage media include both volatile and non-volatile, removable and non-removable media implemented in any method or technique for storing information such as computer readable instructions, data structures, program modules or other data. The communication medium may typically include computer-readable instructions, data structures, or other data of a modulated data signal such as program modules. A computer-readable storage medium may be provided in the form of a non-transitory storage medium. Here, the term ‘non-transitory storage medium’ refers to a tangible device and does not include a signal (e.g., an electromagnetic wave), and the term ‘non-transitory storage medium’ does not distinguish between a case where data is stored in a storage medium semi-permanently and a case where data is stored temporarily. For example, the non-transitory storage medium may include a buffer in which data is temporarily stored. According to an embodiment, the method according to various embodiments disclosed herein may be included in a computer program product and provided. The computer program product may be traded between a seller and a purchaser as a commodity. The computer program product may be distributed in the form of a machine-readable storage medium (e.g., compact disk read only memory (CD-ROM)), or may be distributed online (e.g., downloaded or uploaded) through an application store (e.g., Play Store™) or directly between two user devices (e.g., smart phones). In the case of online distribution, at least a portion of the computer program product (e.g., a downloadable app) may be temporarily stored in a machine-readable storage medium such as a manufacturer's server, an application store's server, or a memory of a relay server. In addition, in the present specification, the term “unit” may be a hardware component such as a processor or a circuit, and/or a software component executed by a hardware component such as a processor. The above-described description of the disclosure is provided only for illustrative purposes, and those of skill in the art will understand that the disclosure may be easily modified into other detailed configurations without modifying technical aspects and essential features of the disclosure. Therefore, it should be understood that the above-described embodiments are exemplary in all respects and are not limited. For example, the elements described as single entities may be distributed in implementation, and similarly, the elements described as distributed may be combined in implementation. The scope of the disclosure is not defined by the detailed description of the disclosure but by the following claims, and all modifications or alternatives derived from the scope and spirit of the claims and equivalents thereof fall within the scope of the disclosure. | 85,126 |
11943185 | DETAILED DESCRIPTION Systems and methods described herein relate to dynamic media overlays with one or more smart widgets that allow for dynamic content to be displayed in a media overlay for a message, based on context data associated with a computing device and/or a video or image. A smart widget is a display element associated with dynamic content. For example, a creator of a media overlay may be an artist or designer associated with a messaging network, a company, a service, or other entity, or the creator may be an individual user not associated with any particular entity. The creator may wish to create a media overlay to be made available to users that can display a location of the computing device associated with a user, information about audio playing on the computing device, event information associated with the location, user information, time or weather information, and so forth. For example, the creator may want to create a “Cinco de Mayo” media overlay for the upcoming holiday on May 5th. Instead of creating a separate media overlay for each location, or each potential event, weather, or what not, example embodiments allow the creator to add a smart widget to his media overlay image that will later fill in with a user's location or other information when the media overlay is displayed. This provides a scalable way of adding a location or other element to any media overlay, which significantly increases the media overlay's relevance and provides a more efficient system for providing media overlays. A user may be creating a message comprising a photograph or video and text. The user may be located in Venice and it may be May 5th. The user may then be able to access the “Cinco de Mayo” media overlay to augment his message comprising the photograph or video. The media overlay would be rendered on the user's device to display “Cinco de Mayo Venice” as shown in the example202inFIG.2. If the user was instead located in Manhattan, the example204would be displayed on the user's device. Example embodiments address a number of technical challenges and provide a number of advantages. For example, typically a creator of a media overlay would need to manually create a separate media overlay for each location where the creator wanted the media overlay to be available. Using the “Cinco de Mayo” media overlay example above, the creator would have to manually create a separate design for each city (e.g., San Francisco, Los Angeles, Albuquerque, Miami, etc.). This is not scalable to a single state or region, let alone an entire country or worldwide. This is especially true for current events or trends where a creator would want to create and release a media overlay quickly. Accordingly, example embodiments provide for a scalable solution that allows for a faster creation and release process. Instead of manually creating separate media overlays, a media overlay platform is provided that allows a creator to set up media overlays that can dynamically display airports, train stations, parks, and other location based data, as one example of dynamic data. FIG.1is a block diagram illustrating a networked system100, according to some example embodiments. The system100may include one or more client devices such as client device110. 
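By way of illustration and not limitation, resolving a smart location widget in a single media overlay per requesting device might look like the following sketch; the placeholder syntax and the render_smart_overlay name are hypothetical and only illustrate how one overlay design can yield “Cinco de Mayo Venice” for one device and “Cinco de Mayo Manhattan” for another.

def render_smart_overlay(template_text, context):
    # Resolve smart-widget placeholders with context data associated with the
    # requesting computing device, such as its current city.
    return template_text.format(**context)

overlay_caption = "Cinco de Mayo {city}"
render_smart_overlay(overlay_caption, {"city": "Venice"})     # "Cinco de Mayo Venice"
render_smart_overlay(overlay_caption, {"city": "Manhattan"})  # "Cinco de Mayo Manhattan"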
The client device110may comprise, but is not limited to, a mobile phone, desktop computer, laptop, portable digital assistant (PDA), smart phone, tablet, Ultrabook, netbook, laptop, multi-processor system, microprocessor-based or programmable consumer electronic, game console, set-top box, computer in a vehicle, or any other communication device that a user may utilize to access the networked system100. In some embodiments, the client device110may comprise a display module (not shown) to display information (e.g., in the form of user interfaces). In further embodiments, the client device110may comprise one or more of touch screens, accelerometers, gyroscopes, cameras, microphones, global positioning system (GPS) devices, and so forth. The client device110may be a device of a user that is used to create or generate messages comprising images (e.g., photographs), video, and/or text. The client device110may be a device of a user that is used to create and edit media overlays. One or more users106may be a person, a machine, or other means of interacting with the client device110. In example embodiments, the user106may not be part of the system100, but may interact with the system100via the client device110or other means. For instance, the user106may provide input (e.g., touch screen input or alphanumeric input) to the client device110, and the input may be communicated to other entities in the system100(e.g., third party servers130, server system102, etc.) via a network104. In this instance, the other entities in the system100, in response to receiving the input from the user106, may communicate information to the client device110via the network104to be presented to the user106. In this way, the user106may interact with the various entities in the system100using the client device110. The system100may further include a network104. One or more portions of network104may be an ad hoc network, an intranet, an extranet, a virtual private network (VPN), a local area network (LAN), a wireless LAN (WLAN), a wide area network (WAN), a wireless WAN (WWAN), a metropolitan area network (MAN), a portion of the Internet, a portion of the public switched telephone network (PSTN), a cellular telephone network, a wireless network, a WiFi network, a WiMax network, another type of network, or a combination of two or more such networks. The client device110may access the various data and applications provided by other entities in the system100via web client112(e.g., a browser, such as the Internet Explorer® browser developed by Microsoft® Corporation of Redmond, Washington State) or one or more client applications114. The client device110may include one or more client applications114(also referred to as “apps”) such as, but not limited to, a web browser, messaging application, electronic mail (email) application, an e-commerce site application, a mapping or location application, media overlay application, and the like. In some embodiments, one or more client applications114may be included in a given one of the client devices110and configured to locally provide the user interface and at least some of the functionalities, with the client application114configured to communicate with other entities in the system100(e.g., third party servers130, server system102, etc.), on an as needed basis, for data and/or processing capabilities not locally available (e.g., to process user queries, to authenticate a user106, to verify a method of payment, etc.). 
Conversely, one or more applications114may not be included in the client device110, and then the client device110may use its web browser to access the one or more applications hosted on other entities in the system100(e.g., third party servers130, server system102, etc.). A server system102may provide server-side functionality via the network104(e.g., the Internet or wide area network (WAN)) to one or more third party servers130and/or one or more client devices110. The server system102may include an application program interface (API) server120, a web server122, and a media overlay platform server124, which may be communicatively coupled with one or more databases126. The one or more databases126may be storage devices that store media overlays, smart widgets, messaging data, user data, computing device context data, media content data (e.g., data associated with video and images), and other data. The one or more databases126may further store information related to third party servers130, third party applications132, client devices110, client applications114, users106, and so forth. The one or more databases126may include cloud-based storage. The server system102may be a cloud computing environment, according to some example embodiments. The server system102, and any servers associated with the server system102, may be associated with a cloud-based application, in one example embodiment. The media overlay platform server124may provide back-end support for third-party applications132and client applications114, which may include cloud-based applications. In one embodiment, the media overlay platform server124may receive requests from third party servers or client devices for one or more media overlays, process the requests, provide one or more media overlays, and so forth.FIG.2shows example media overlays202and204that may be created and then stored by the media overlay platform server124and which may be accessed and analyzed by the media overlay platform server124for delivery to a computing device. The system100may further include one or more third party servers130. The one or more third party servers130may include one or more third party application(s)132. The one or more third party application(s)132, executing on third party server(s)130, may interact with the server system102via API server120via a programmatic interface provided by the API server120. For example, one or more of the third party applications132may request and utilize information from the server system102via the API server120to support one or more features or functions on a website hosted by the third party or an application hosted by the third party. The third party website or application132, for example, may provide functionality that is supported by relevant functionality and data in the server system102. FIG.3is a block diagram illustrating a networked system300(e.g., a messaging system) for exchanging data (e.g., messages and associated content) over a network. The networked system300includes multiple client devices110, each of which hosts a number of client applications114. Each client application114is communicatively coupled to other instances of the client application114and a server system308via a network104. The client device110, client application114, and network104, are described above with respect toFIG.1.
The client device110may be a device of a user that is used to create media content items such as video, images (e.g., photographs), and audio, and send and receive messages containing such media content items to and from other users. In one example, a client application114may be a messaging application that allows a user to take a photograph or video, add a caption, or otherwise edit the photograph or video, and then send the photograph or video to another user. In one example, the message may be ephemeral and be removed from a receiving user device after viewing or after a predetermined amount of time (e.g., 10 seconds, 24 hours, etc.). An ephemeral message refers to a message that is accessible for a time-limited duration. An ephemeral message may be a text, an image, a video, and other such content that may be stitched together in accordance with embodiments described herein. The access time for the ephemeral message may be set by the message sender. Alternatively, the access time may be a default setting or a setting specified by the recipient. Regardless of the setting technique, the message is transitory. The messaging application may further allow a user to create a gallery or message collection. A gallery may be a collection of photos and videos which may be viewed by other users “following” the user's gallery (e.g., subscribed to view and receive updates in the user's gallery). In one example, the gallery may also be ephemeral (e.g., lasting 24 hours, lasting for a duration of an event (e.g., during a music concert, sporting event, etc.), or other predetermined time). An ephemeral message may be associated with a message duration parameter, the value of which determines an amount of time that the ephemeral message will be displayed to a receiving user of the ephemeral message by the client application114. The ephemeral message may be further associated with a message receiver identifier and a message timer. The message timer may be responsible for determining the amount of time the ephemeral message is shown to a particular receiving user identified by the message receiver identifier. For example, the ephemeral message may only be shown to the relevant receiving user for a time period determined by the value of the message duration parameter. In another example, the messaging application may allow a user to store photographs and videos and create a gallery that is not ephemeral and that can be sent to other users. For example, a user may assemble photographs and videos from a recent vacation to share with friends and family. A server system308may provide server-side functionality via the network104(e.g., the Internet or wide area network (WAN)) to one or more client device110. The server system308may include an application programming interface (API) server310, an application server312, a messaging application server314, a media content processing system316, and a social network system322, which may each be communicatively coupled with each other and with one or more data storage(s), such as database(s)320. The server system308may also comprise the server system102ofFIG.1or at least the media overlay platform server124ofFIG.1. The server system308may be a cloud computing environment, according to some example embodiments. The server system308, and any servers associated with the server system308, may be associated with a cloud-based application, in one example embodiment. 
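As a non-limiting sketch of how a message duration parameter and a message timer could gate display of an ephemeral message, the check below assumes a per-view duration (e.g., the 10-second example above) and an overall lifetime (e.g., the 24-hour example above); the function name and parameter names are hypothetical.

import time

def is_ephemeral_message_visible(sent_at, view_duration_seconds,
                                 first_viewed_at=None,
                                 lifetime_seconds=24 * 60 * 60):
    # A message stops being shown once its overall lifetime has elapsed, or
    # once the message duration parameter has elapsed after the receiving
    # user first viewed it.
    now = time.time()
    if now - sent_at > lifetime_seconds:
        return False
    if first_viewed_at is not None and now - first_viewed_at > view_duration_seconds:
        return False
    return True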
The one or more database(s)320may be storage devices that store information such as untreated media content, original media content from users (e.g., high-quality media content), processed media content (e.g., media content that is formatted for sharing with client devices110and viewing on client devices110), context data related to a media content item, context data related to a user device (e.g., computing or client device110), media overlays, media overlay smart widgets or smart elements, user information, user device information, and so forth. The one or more database(s)320may include cloud-based storage external to the server system308(e.g., hosted by one or more third party entities external to the server system308). While the storage devices are shown as database(s)320, it is understood that the system100may access and store data in storage devices such as databases320, blob storages, and other types of storage methods. Accordingly, each client application114is able to communicate and exchange data with other client applications114and with the server system308via the network104. The data exchanged between client applications114, and between a client application114and the server system308, includes functions (e.g., commands to invoke functions) as well as payload data (e.g., text, audio, video or other multimedia data). The server system308provides server-side functionality via the network104to a particular client application114. While certain functions of the system300are described herein as being performed by either a client application114or by the server system308, it will be appreciated that the location of certain functionality either within the client application114or the server system308is a design choice. For example, it may be technically preferable to initially deploy certain technology and functionality within the server system308, but to later migrate this technology and functionality to the client application114where a client device110has a sufficient processing capacity. The server system308supports various services and operations that are provided to the client application114. Such operations include transmitting data to, receiving data from, and processing data generated by the client application114. This data may include message content, client device information, geolocation information, media annotation and overlays, message content persistence conditions, social network information, live event information, and date and time stamps, as examples. Data exchanges within the networked system300are invoked and controlled through functions available via user interfaces (UIs) of the client application114. In the server system308, an application program interface (API) server310is coupled to, and provides a programmatic interface to, an application server312. The application server312is communicatively coupled to a database server318, which facilitates access to one or more database(s)320in which is stored data associated with messages processed by the application server312. The API server310receives and transmits message data (e.g., commands and message payloads) between the client device110and the application server312. Specifically, the API server310provides a set of interfaces (e.g., routines and protocols) that can be called or queried by the client application114in order to invoke functionality of the application server312.
The API server310exposes various functions supported by the application server312, including account registration; login functionality; the sending of messages, via the application server312, from a particular client application114to another client application114; the sending of media files (e.g., images or video) from a client application114to the messaging application server314, and for possible access by another client application114; the setting of a collection of media data (e.g., a gallery, story, message collection, or media collection); the retrieval of such collections; the retrieval of a list of friends of a user of a client device110; the retrieval of messages and content; the adding and deletion of friends to a social graph; the location of friends within a social graph; opening an application event (e.g., relating to the client application114); and so forth. The application server312hosts a number of applications and subsystems, including a messaging application server314, a media content processing system316, and a social network system322. The messaging application server314implements a number of message processing technologies and functions, particularly related to the aggregation and other processing of content (e.g., textual and multimedia content) included in messages received from multiple instances of the messaging client application114. The text and media content from multiple sources may be aggregated into collections of content (e.g., called stories or galleries). These collections are then made available, by the messaging application server314, to the client application114. Other processor- and memory-intensive processing of data may also be performed server-side by the messaging application server314, in view of the hardware requirements for such processing. The application server312also includes a media content processing system316that is dedicated to performing various media content processing operations, typically with respect to images or video received within the payload of a message at the messaging application server314. The media content processing system316may access one or more data storages (e.g., database(s)320) to retrieve stored data to use in processing media content and to store results of processed media content. The social network system322supports various social networking functions and services, and makes these functions and services available to the messaging application server314. To this end, the social network system322maintains and accesses an entity graph504(depicted inFIG.5) within the database320. Examples of functions and services supported by the social network system322include the identification of other users of the networked system300with which a particular user has relationships or is “following,” and also the identification of other entities and interests of a particular user. The messaging application server314may be responsible for generation and delivery of messages between users of client devices110. The messaging application server314may utilize any one of a number of message delivery networks and platforms to deliver messages to users. For example, the messaging application server314may deliver messages using electronic mail (e-mail), instant message (IM), Short Message Service (SMS), text, facsimile, or voice (e.g., Voice over IP (VoIP)) messages via wired networks (e.g., the Internet), plain old telephone service (POTS), or wireless networks (e.g., mobile, cellular, WiFi, Long Term Evolution (LTE), Bluetooth). 
FIG.4is block diagram400illustrating further details regarding the system300, according to example embodiments. Specifically, the diagram400is shown to comprise the messaging client application114and the application server312, which in turn embody a number of subsystems, namely an ephemeral timer system402, a collection management system404, and an annotation system406. The ephemeral timer system402is responsible for enforcing the temporary access to content permitted by the messaging client application114and the messaging application server314. To this end, the ephemeral timer system402incorporates a number of timers that, based on duration and display parameters associated with a message, or collection of messages (e.g., otherwise referred to herein as media collections, galleries, message collections, stories, and the like), selectively display and enable access to messages and associated content via the messaging client application114. The collection management system404is responsible for managing collections of media (e.g., collections of text, image, video, and audio data). In some examples, a collection of content (e.g., messages, including images, video, text, and audio) may be organized into an “event gallery” or an “event story.” Such a collection may be made available for a specified time period, such as the duration of an event to which the content relates. For example, content relating to a music concert may be made available as a “Story” for the duration of that music concert. The collection management system404may also be responsible for publishing an icon that provides notification of the existence of a particular collection to the user interface of the messaging client application114. The collection management system404furthermore includes a curation interface408that allows a collection manager to manage and curate a particular collection of content. For example, the curation interface408enables an event organizer to curate a collection of content relating to a specific event (e.g., delete inappropriate content or redundant messages). Additionally, the collection management system404employs machine vision (or image recognition technology) and content rules to automatically curate a content collection. In certain embodiments, compensation (e.g., money, non-money credits or points associated with the communication system or a third party reward system, travel miles, access to artwork or specialized lenses, etc.) may be paid to a user for inclusion of user-generated content into a collection. In such cases, the curation interface408operates to automatically make payments to such users for the use of their content. The annotation system406provides various functions that enable a user to annotate or otherwise modify or edit media content associated with a message. For example, the annotation system406provides functions related to the generation and publishing of media overlays for messages processed by the networked system300. In one example, the annotation system406operatively supplies a media overlay (e.g., a filter or media augmentation) to the messaging client application114based on a geolocation of the client device110. In another example, the annotation system406operatively supplies a media overlay to the messaging client application114based on other information, such as social network information of the user of the client device110. A media overlay may include audio and visual content and visual effects. 
Examples of audio and visual content include pictures, texts, logos, animations, and sound effects. An example of a visual effect includes color overlaying. The audio and visual content or the visual effects can be applied to a media content item (e.g., a photo) at the client device110. For example, the media overlay includes text that can be overlaid on top of a photograph taken by the client device110. In another example, the media overlay includes an identification of a location overlay (e.g., Venice beach), a name of a live event, or a name of a merchant overlay (e.g., Beach Coffee House). In another example, the annotation system406uses the geolocation of the client device110to identify a media overlay that includes the name of a merchant at the geolocation of the client device110. The media overlay may include other indicia associated with the merchant. The media overlays may be stored in the database320and accessed through the database server318. In one example embodiment, the annotation system406provides a user-based publication platform that enables users to select a geolocation on a map and upload content associated with the selected geolocation. The user may also specify circumstances under which a particular media overlay is to be offered to other users. The annotation system406generates a media overlay that includes the uploaded content and associates the uploaded content with the selected geolocation. In another example embodiment, the annotation system406provides a merchant-based publication platform that enables merchants to select a particular media overlay associated with a geolocation via a bidding process. For example, the annotation system406associates the media overlay of a highest bidding merchant with a corresponding geolocation for a predefined amount of time. In another example embodiment, the annotation system406provides one or more smart widgets, comprising one or more dynamic elements, that may be included with a media overlay to allow for a dynamic media overlay to be presented to a user based on various context data associated with the user or user device, such as location, event, and so forth. Functionality related to smart widgets are described in further detail below. FIG.5is a schematic diagram500illustrating data which may be stored in the database(s)320of the server system308, according to certain example embodiments. While the content of the database320is shown to comprise a number of tables, it will be appreciated that the data could be stored in other types of data structures (e.g., as an object-oriented database). The database320includes message data stored within a message table514. The entity table502stores entity data, including an entity graph504. Entities for which records are maintained within the entity table502may include individuals, corporate entities, organizations, objects, places, events, and the like. Regardless of type, any entity regarding which the server system308stores data may be a recognized entity. Each entity is provided with a unique identifier, as well as an entity type identifier (not shown). The entity graph504furthermore stores information regarding relationships and associations between entities. Such relationships may be social, professional (e.g., work at a common corporation or organization), interest-based, or activity-based, merely for example. The database320also stores annotation data, in the example form of media overlays or filters, in an annotation table512. 
Annotation data may also be referred to herein as “creative tools.” Media overlays or filters, for which data is stored within the annotation table512, are associated with and applied to videos (for which data is stored in a video table510) and/or images (for which data is stored in an image table508). Filters, in one example, are overlays that are displayed as overlaid on an image or video during presentation to a recipient user. Filters may be of various types, including user-selected filters from a gallery of filters presented to a sending user by the messaging client application114when the sending user is composing a message. Other types of filters include geolocation filters (also known as geo-filters) which may be presented to a sending user based on geographic location. For example, geolocation filters specific to a neighborhood or special location may be presented within a user interface by the messaging client application114, based on geolocation information determined by a GPS unit of the client device110. Another type of filter is a data filter, which may be selectively presented to a sending user by the messaging client application114, based on other inputs or information gathered by the client device110during the message creation process. Example of data filters include current temperature at a specific location, a current speed at which a sending user is traveling, battery life for a client device110, or the current time. Other annotation data that may be stored within the image table508is so-called “lens” data. A “lens” may be a real-time special effect and sound that may be added to an image or a video. As mentioned above, the video table510stores video data which, in one embodiment, is associated with messages for which records are maintained within the message table514. Similarly, the image table508stores image data associated with messages for which message data is stored in the entity table502. The entity table502may associate various annotations from the annotation table512with various images and videos stored in the image table508and the video table510. A story table506stores data regarding collections of messages and associated image, video, or audio data, which are compiled into a collection (e.g., a story, gallery, or media collection). The creation of a particular collection may be initiated by a particular user (e.g., each user for which a record is maintained in the entity table502). A user may create a “personal story” in the form of a collection of content that has been created and sent/broadcast by that user. To this end, the user interface of the messaging client application114may include an icon that is user-selectable to enable a sending user to add specific content to his or her personal story. A media or message collection may also constitute a “live story,” which is a collection of content from multiple users that is created manually, automatically, or using a combination of manual and automatic techniques. For example, a “live story” may constitute a curated stream of user-submitted content from various locations and events. Users, whose client devices110have location services enabled and are at a common location or event at a particular time, may, for example, be presented with an option, via a user interface of the messaging client application114, to contribute content to a particular live story. The live story may be identified to the user by the messaging client application114, based on his or her location. 
The end result is a “live story” told from a community perspective. A further type of content collection is known as a “location story,” which enables a user whose client device110is located within a specific geographic location (e.g., on a college or university campus) to contribute to a particular collection. In some embodiments, a contribution to a location story may require a second degree of authentication to verify that the end user belongs to a specific organization or other entity (e.g., is a student on the university campus). FIG.6is a schematic diagram illustrating a structure of a message600, according to some embodiments, generated by a client application114for communication to a further client application114or the messaging application server314. The content of a particular message600is used to populate the message table514stored within the database320, accessible by the messaging application server314. Similarly, the content of a message600is stored in memory as “in-transit” or “in-flight” data of the client device110or the application server312. The message600is shown to include the following components:

A message identifier602: a unique identifier that identifies the message600.
A message text payload604: text, to be generated by a user via a user interface of the client device110and that is included in the message600.
A message image payload606: image data, captured by a camera component of a client device110or retrieved from memory of a client device110, and that is included in the message600.
A message video payload608: video data, captured by a camera component or retrieved from a memory component of the client device110and that is included in the message600.
A message audio payload610: audio data, captured by a microphone or retrieved from the memory component of the client device110, and that is included in the message600.
A message annotations612: annotation data (e.g., media overlays such as filters, stickers, or other enhancements) that represents annotations to be applied to message image payload606, message video payload608, or message audio payload610of the message600.
A message duration parameter614: parameter value indicating, in seconds, the amount of time for which content of the message600(e.g., the message image payload606, message video payload608, message audio payload610) is to be presented or made accessible to a user via the messaging client application114.
A message geolocation parameter616: geolocation data (e.g., latitudinal and longitudinal coordinates) associated with the content payload of the message600. Multiple message geolocation parameter616values may be included in the payload, with each of these parameter values being associated with respect to content items included in the content (e.g., a specific image within the message image payload606or a specific video in the message video payload608).
A message story identifier618: identifier values identifying one or more content collections (e.g., “stories”) with which a particular content item in the message image payload606of the message600is associated. For example, multiple images within the message image payload606may each be associated with multiple content collections using identifier values.
A message tag620: each message600may be tagged with multiple tags, each of which is indicative of the subject matter of content included in the message payload. For example, where a particular image included in the message image payload606depicts an animal (e.g., a lion), a tag value may be included within the message tag620that is indicative of the relevant animal. Tag values may be generated manually, based on user input, or may be automatically generated using, for example, image recognition.
A message sender identifier622: an identifier (e.g., a messaging system identifier, email address, or device identifier) indicative of a user of the client device110on which the message600was generated and from which the message600was sent.
A message receiver identifier624: an identifier (e.g., a messaging system identifier, email address, or device identifier) indicative of a user of the client device110to which the message600is addressed.

The contents (e.g., values) of the various components of message600may be pointers to locations in tables within which content data values are stored. For example, an image value in the message image payload606may be a pointer to (or address of) a location within an image table508. Similarly, values within the message video payload608may point to data stored within a video table510, values stored within the message annotations612may point to data stored in an annotation table512, values stored within the message story identifier618may point to data stored in a story table506, and values stored within the message sender identifier622and the message receiver identifier624may point to user records stored within an entity table502. FIG.7is a flow chart illustrating aspects of a method, according to some example embodiments. For illustrative purposes, method700is described with respect to the networked system100ofFIG.1. It is to be understood that method700may be practiced with other system configurations in other embodiments. In operation702, a computing system (e.g., server system102or media overlay platform server124) receives a background image for a media overlay to be applied to a message comprising an image or video. For example, a graphical user interface (GUI) may be provided to a creator of a media overlay (e.g., an artist or designer associated with a messaging network, a company, a service, or other entity, or the creator may be an individual user not associated with any particular entity) via a computing device (e.g., client device110) to create the media overlay.FIG.8shows an example GUI800for creating a media overlay. In the example ofFIG.8, the creator has uploaded a background image802(e.g., a .png file) and has selected the “Smart Location” smart widget806from a selection of a plurality of smart widgets804. The list of smart widgets in the example GUI800is an example of possible smart widgets. In other embodiments, there may be more of a selection of smart widgets, less of a selection of smart widgets, or a different selection of smart widgets than what is shown inFIG.8. The creator may select the smart location by interacting with the GUI (e.g., touching a display of the computing device, using one or more buttons on the computing device, using a mouse or other device to interact with the display, etc.). In response to the selection of the smart widget, a box or other visual element may appear on the background for the media overlay. This box may be moved to the location desired on the media overlay and resized to the size at which the creator wishes the smart widget to be displayed. The box may also be rotated. The GUI further provides tools808for providing attributes to be associated with the smart widget.
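Returning briefly to the message600structure ofFIG.6, its components and their pointer-style relationship to the tables ofFIG.5may be summarized, purely as an illustrative sketch, by the following data structure; the class name and field names are hypothetical and do not define the message format of the disclosure.

from dataclasses import dataclass, field
from typing import List, Optional, Tuple

@dataclass
class Message:
    message_id: str                                              # message identifier602
    text: Optional[str] = None                                   # message text payload604
    image_ref: Optional[str] = None                              # 606: pointer into the image table508
    video_ref: Optional[str] = None                              # 608: pointer into the video table510
    audio_ref: Optional[str] = None                              # message audio payload610
    annotation_refs: List[str] = field(default_factory=list)     # 612: pointers into the annotation table512
    duration_seconds: Optional[int] = None                       # message duration parameter614
    geolocations: List[Tuple[float, float]] = field(default_factory=list)  # 616
    story_ids: List[str] = field(default_factory=list)           # 618: pointers into the story table506
    tags: List[str] = field(default_factory=list)                # message tag620
    sender_id: Optional[str] = None                              # message sender identifier622
    receiver_id: Optional[str] = None                            # message receiver identifier624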
For example, the creator of the media overlay may select a font type, capitalization (e.g., all caps, Camel Case, first word capitalized, all lower case, etc.), shadowing (color, radius, x/y offsets, with some presets, etc.), alignment of text (e.g., left, center, right), color, and a preview of how text for the smart widget may appear.FIG.10shows example interface elements1002-1012for each of these attribute options. These attributes are just example attributes. More, less, or different attributes may be provided in the GUI. Other examples of attributes may be underlining, italics, font size, transparency, bold, and so forth. The GUI may further provide an option to choose the granularity for the smart widget, as shown inFIG.9. In the GUI900ofFIG.9, a granularity element902is displayed for the creator to input the level of granularity for the location. For example, the creator may choose city, state, neighborhood, city and state initials, zip code, country, county, school, college, airport, train station, venue, or other location. For example, a city selection may display “Seattle,” a city and state initials may display “Seattle, WA,” a state may display “New York,” a state initials may display “NY,” a zip code may display “94706,” neighborhood may display “Santa Monica,” a high school may display “Los Angeles High School,” a college may display “California State University,” an airport may display “LAX,” and so forth. The GUI may further provide an option to input default text for when location information may be unavailable (e.g., the computing system cannot get GPS or other location data from a computing device). The GUI800or900may further provide an option810to save the media overlay. Returning toFIG.7, after the computing system receives the background image (e.g., uploaded .png file) for the media overlay, it receives at least one smart widget selection to be associated with the media overlay in operation704. For example, the computing system may receive a selection of a smart location widget or one or more other smart widgets via the GUI800. The computing system may further receive attributes for each selected smart widget. As explained above, an attribute may include a location of the widget on the media overlay, a font for text associated with the smart widget, a color for text associated with the smart widget, a bounding box for text associated with the smart widget, a maximum font size for text associated with the smart widget, a minimum font size for text associated with the smart widget, an alignment for text associated with the smart widget, a shadow for text associated with the smart widget, a default spelling for text associated with the smart widget, a transparency value for text associated with the smart widget, and so forth. In operation706, the computing system stores the media overlay comprising the background image and the one or more smart widget selections in one or more databases (e.g., database126or320). The media overlay may be stored separate from the one or more smart widgets. For example, the media overlay may be stored as a .png image and the one or more smart widgets may be stored separately and associated with one or more elements (e.g., font type, alignment, etc.). The stored media overlay may be provided to a computing device to be applied to a message comprising a video or an image. 
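By way of non-limiting illustration, the overlay creation and storage flow of operations 702-706 may be sketched in Python. The class names, field names, and storage keys below are assumptions made purely for illustration and do not represent an actual schema or implementation.

    # Illustrative sketch only: a possible in-memory representation of a media
    # overlay with one "smart location" widget, and separate storage of the
    # background image and the widget configuration (operation 706). All class
    # names, field names, and storage keys are assumptions, not an actual schema.
    from dataclasses import dataclass, field
    from typing import Optional

    @dataclass
    class SmartWidget:
        widget_type: str                  # e.g., "smart_location"
        x: int                            # bounding box position on the overlay
        y: int
        width: int
        height: int
        rotation_degrees: float = 0.0
        font: str = "Helvetica"
        capitalization: str = "all_caps"  # all_caps, camel_case, lower_case, ...
        alignment: str = "center"         # left, center, right
        color: str = "#FFFFFF"
        shadow: Optional[dict] = None     # e.g., {"color": "#000000", "radius": 2}
        granularity: str = "city"         # city, state, zip code, venue, airport, ...
        default_text: str = ""            # shown when location data is unavailable

    @dataclass
    class MediaOverlay:
        overlay_id: str
        background_image: str             # path or URL of the uploaded .png file
        widgets: list = field(default_factory=list)

    def store_media_overlay(db: dict, overlay: MediaOverlay) -> None:
        # Store the background image and the smart widget configuration
        # separately, as described for operation 706 (db is a stand-in store).
        db[f"overlay:{overlay.overlay_id}:background"] = overlay.background_image
        db[f"overlay:{overlay.overlay_id}:widgets"] = [w.__dict__ for w in overlay.widgets]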
In operation708, the computing system receives, from a computing device (e.g., client device110), a request for a media overlay to be applied to a message comprising a video or image. The request may comprise context data (e.g., a location of the computing device, audio playing on the computing device (e.g., song, speech, etc.), weather at the location of computing device, time of day, day of the week, name of a user associated with the computing device, etc.) associated with the computing device. For example, a user associated with the computing device may use the computing device to capture the video or image using a camera of the computing device. The user may wish to augment the video or image with text, a media overlay, a lens, or other creative tool. The computing device may detect that the user is capturing an image or video and send a request to the computing system to request media overlays to provide to the user. The request may comprise context data associated with the computing device, such as GPS or other location information, user information, information associated with the image or video, a copy or portion of the image or video, weather data, time data, date data, or other information. In operation710, the computing system may analyze the request to determine whether one or more media overlays are relevant to one or more aspects (e.g., elements) of the context data. In one example, media overlays may be associated with one or more triggers for which they are relevant to be sent to a computing device. Example triggers may be a geolocation (e.g., city, state, venue, restaurant, location of interest, school, etc.), time of day (e.g., breakfast time, sunrise, commute time, 3:30 pm, etc.), date (e.g., Wednesday, a holiday, the date of an event, etc.), audio detected by the computing device (e.g., background audio, audio associated with a video being captured, etc.), and so forth. For example, the media overlay inFIG.2may be triggered if the date is May 5th. Another media overlay may be triggered if the computing device is located in a particular venue (e.g., concert venue, theater, etc.). Another media overlay may be triggered based on a time of day, such as morning, or sunset. And yet another media overlay may be triggered based on a song being detected. For example, the computing system may analyze the context data to determine whether data in the context data triggers one or more media overlays of a plurality of media overlays. In one example, the computing system may determine that the context data comprises geolocation data. The computing system compares the geolocation data of the context data to a geolocation trigger for one or more of the plurality of media overlays to determine a match between the geolocation data of the context data and a geolocation trigger for one or more media overlays. In another example, the computing system determines that the context data comprises geolocation data and determines that the geolocation data is associated with an event. For example, the computing system may use map data (e.g., stored in one or more databases320or via third party sources) to determine that there is a concert venue associated with the geolocation and use event scheduling data (e.g., stored in one or more databases320or via third party sources) for the concert venue to determine that a particular concert is currently occurring at the concert venue. 
The computing system compares the event (e.g., the particular concert) to an event trigger (e.g., a trigger for that particular concert) for one or more media overlays to determine a match between the event associated with the geolocation data of the context data and an event trigger for one or more media overlays. In another example, the computing system determines that the context data comprises a date for an image or video to be included in a message. The computing system compares the date for the image or video to a date trigger for one or more media overlays to determine a match between the date for the image or video and a date trigger for one or more media overlays. In yet another example, the computing system determines that the context data includes a portion of an audio stream or an audio footprint. The computing system may determine that the audio is for a particular song or speech, associated with a particular artist, or the like. The computing system compares the song name, speech, artist name, or the like, to an audio trigger for one or more media overlays to determine a match between the audio in the context data and the audio trigger for one or more media overlays. In this way, the computing system determines that one or more media overlays is relevant to provide to the computing device. In another example, the computing device may request one or more specific media overlays. In operation712, for each relevant media overlay (or for each specifically requested media overlay), the computing system accesses the media overlay and determines whether the media overlay comprises one or more smart widgets. For example, the computing system may access the media overlay in the one or more databases and also access information associated with the media overlay that indicates that one or more smart widgets are associated with the media overlay. In operation714, the computing system determines data associated with the one or more smart widgets. For example, the computing system may use the context data received from the computing device, other data (e.g., data associated with a user of the computing device, date or time information, etc.), data derived from the context data (e.g., venue or place of interest from geolocation information in the context data), and so forth, to determine the data for the one or more smart widgets. Using the example media overlay inFIG.2, the computing system may determine that the computing device is located in Venice and thus, Venice is the data that is associated with the location smart widget in this example (e.g., the data string “Venice”). In operation716, the computing system transmits the media overlay and data associated with the at least one smart widget to the computing device. The data associated with the at least one smart widget may also comprise attributes associated with the smart widget. After receiving the media overlay and data associated with the at least one smart widget for the media overlay, the computing device renders the content for the at least one media overlay. For example, the computing device would render the text “Cinco de Mayo Venice” using data for the city name and in the font, color, and so forth indicated by the attributes for the smart widget. The computing device would then apply the media overlay “Cinco de Mayo Venice” to the user's video or image.
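As a further non-limiting illustration, the server-side selection flow of operations 708-716 (matching context data against per-overlay triggers and resolving the display text for each smart widget) might be sketched as follows. The trigger names, helper functions, and dictionary layout are assumptions for illustration only.

    # Illustrative sketch only: match request context data against per-overlay
    # triggers (operation 710) and resolve the display text for each smart widget
    # (operation 714). Trigger names, helpers, and dictionary layout are assumed.
    def overlay_is_relevant(overlay: dict, context: dict) -> bool:
        triggers = overlay.get("triggers", {})
        if "geofence" in triggers and context.get("location") in triggers["geofence"]:
            return True
        if "date" in triggers and context.get("date") == triggers["date"]:
            return True
        if "audio" in triggers and context.get("song_name") == triggers["audio"]:
            return True
        return False

    def resolve_widget_data(widget: dict, context: dict) -> str:
        # Fall back to the creator-supplied default text when no data is available.
        if widget["widget_type"] == "smart_location":
            return context.get("city", widget.get("default_text", ""))
        return widget.get("default_text", "")

    def select_overlays(all_overlays: list, context: dict) -> list:
        responses = []
        for overlay in all_overlays:
            if overlay_is_relevant(overlay, context):
                widget_data = {w["widget_type"]: resolve_widget_data(w, context)
                               for w in overlay.get("widgets", [])}
                responses.append({"overlay": overlay, "widget_data": widget_data})
        return responses

    # Example: a "Cinco de Mayo" overlay triggered on May 5th with a location widget.
    overlays = [{"triggers": {"date": "05-05"},
                 "widgets": [{"widget_type": "smart_location", "default_text": ""}]}]
    print(select_overlays(overlays, {"date": "05-05", "city": "Venice"}))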
FIG.11is a flow chart illustrating aspects of a method, according to some example embodiments, for rendering a media overlay. For illustrative purposes, method1100is described with respect to the networked system100ofFIG.1. It is to be understood that method1100may be practiced with other system configurations in other embodiments. In operation1102, the computing device sends a request for one or more media overlays, as described above (e.g., a request for one or more specific media overlays or for relevant media overlays) to the computing system. The computing device may generate the context data for the request based on GPS or other location data detected by the computing device, data from the image or video (e.g., object recognition of elements within the image or video, audio, etc.), speed detected by an accelerometer or other means, altitude detected by the computing device, time of day or local time zone, day of the week, weather, temperature, a Quick Response (QR) Code or bar code, and so forth. The computing device receives the requested one or more media overlays from the computing system in operation1104. In operation1106, the computing device determines that the media overlay comprises one or more smart widgets based on the data received with the media overlay from the computing system. For example, the data may specify the one or more smart widgets, or the fact that there are one or more smart widgets may be inferred from the amount and type of data received by the computing device. In operation1108, the computing device renders the content for the smart widget and applies it to the media overlay. For example, the computing device would render the text “Cinco de Mayo Venice” using data for the city name and in the font, color, and so forth indicated by the attributes for the smart widget. For example, the computing device may take the text “Venice” and fit it into a bounding box based on the attributes, such as font type and alignment, and resize intelligently if necessary to fit it within the bounding box. The computing device applies the color, shadow, and other attributes. The computing device applies the rendered content on the media overlay to generate the media overlay to display to the user. The user may then select the media overlay to be applied to a message comprising an image or video. The computing device receives the selection and, in operation1110, applies the media overlay with the rendered content for the smart widget to the message (e.g., the media overlay is overlaid on the image or video in the message), and displays the media overlay with the rendered content for the smart widget on a display of the computing device, in operation1112. For example, the computing device would then apply the media overlay “Cinco de Mayo Venice” to the user's video or image. The user may wish to send the message to one or more other users. The computing device may receive a request from the user to send the message, and send the message comprising the media overlay and the rendered content for the smart widget, in operation1114. For example, the computing device may send the message via the computing system to the one or more users. The computing system receives the message comprising the background image and the rendered content for the smart widget (this could be separate or in one file), and then the computing system sends the message to the one or more users. The message with the media overlay and rendered smart widget is then displayed on a computing device for the one or more users. Example embodiments describe certain processes or actions performed by a computing system and/or a computing device.
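The fitting step described for operation 1108 (shrinking the font until the widget text fits the creator-defined bounding box) can be illustrated with the following sketch. The width heuristic of roughly 0.6 times the font size per character is a crude stand-in for real glyph measurement and, like the function and parameter names, is an assumption for illustration only.

    # Illustrative sketch only: shrink the font until the widget text fits the
    # creator-defined bounding box (part of operation 1108). The width estimate of
    # roughly 0.6 * font size per character is a crude stand-in for real glyph
    # measurement by a text-rendering engine.
    def fit_text_to_box(text: str, box_width: int, max_font: int, min_font: int) -> int:
        size = max_font
        while size > min_font:
            estimated_width = len(text) * size * 0.6   # rough average glyph width
            if estimated_width <= box_width:
                break
            size -= 1
        return size

    # "Venice" fitted into a 300-pixel-wide box, between 24 and 96 point text.
    print(fit_text_to_box("Venice", box_width=300, max_font=96, min_font=24))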
It is understood that in other embodiments the computing system and/or computing device may perform all or a different subset of the processes or actions described. FIG.12is a block diagram1200illustrating software architecture1202, which can be installed on any one or more of the devices described above. For example, in various embodiments, client devices110and server systems102,120,122,124,130,308,310,312,314,316,322may be implemented using some or all of the elements of software architecture1202.FIG.12is merely a non-limiting example of a software architecture, and it will be appreciated that many other architectures can be implemented to facilitate the functionality described herein. In various embodiments, the software architecture1202is implemented by hardware such as machine1300ofFIG.13that includes processors1310, memory1330, and input/output (I/O) components1350. In this example, the software architecture1202can be conceptualized as a stack of layers where each layer may provide a particular functionality. For example, the software architecture1202includes layers such as an operating system1204, libraries1206, frameworks1208, and applications1210. Operationally, the applications1210invoke API calls1212through the software stack and receive messages1214in response to the API calls1212, consistent with some embodiments. In various implementations, the operating system1204manages hardware resources and provides common services. The operating system1204includes, for example, a kernel1220, services1222, and drivers1224. The kernel1220acts as an abstraction layer between the hardware and the other software layers, consistent with some embodiments. For example, the kernel1220provides memory management, processor management (e.g., scheduling), component management, networking, and security settings, among other functionality. The services1222can provide other common services for the other software layers. The drivers1224are responsible for controlling or interfacing with the underlying hardware, according to some embodiments. For instance, the drivers1224can include display drivers, camera drivers, BLUETOOTH® or BLUETOOTH® Low Energy drivers, flash memory drivers, serial communication drivers (e.g., Universal Serial Bus (USB) drivers), WI-FI® drivers, audio drivers, power management drivers, and so forth. In some embodiments, the libraries1206provide a low-level common infrastructure utilized by the applications1210. The libraries1206can include system libraries1230(e.g., C standard library) that can provide functions such as memory allocation functions, string manipulation functions, mathematic functions, and the like. In addition, the libraries1206can include API libraries1232such as media libraries (e.g., libraries to support presentation and manipulation of various media formats such as Moving Picture Experts Group-4 (MPEG4), Advanced Video Coding (H.264 or AVC), Moving Picture Experts Group Layer-3 (MP3), Advanced Audio Coding (AAC), Adaptive Multi-Rate (AMR) audio codec, Joint Photographic Experts Group (JPEG or JPG), or Portable Network Graphics (PNG)), graphics libraries (e.g., an OpenGL framework used to render in two dimensions (2D) and in three dimensions (3D) graphic content on a display), database libraries (e.g., SQLite to provide various relational database functions), web libraries (e.g., WebKit to provide web browsing functionality), and the like. The libraries1206can also include a wide variety of other libraries1234to provide many other APIs to the applications1210. 
The frameworks1208provide a high-level common infrastructure that can be utilized by the applications1210, according to some embodiments. For example, the frameworks1208provide various graphical user interface (GUI) functions, high-level resource management, high-level location services, and so forth. The frameworks1208can provide a broad spectrum of other APIs that can be utilized by the applications1210, some of which may be specific to a particular operating system1204or platform. In an example embodiment, the applications1210include a home application1250, a contacts application1252, a browser application1254, a book reader application1256, a location application1258, a media application1260, a messaging application1262, a game application1264, and a broad assortment of other applications such as third party applications1266. According to some embodiments, the applications1210are programs that execute functions defined in the programs. Various programming languages can be employed to create one or more of the applications1210, structured in a variety of manners, such as object-oriented programming languages (e.g., Objective-C, Java, or C++) or procedural programming languages (e.g., C or assembly language). In a specific example, the third party application1266(e.g., an application developed using the ANDROID™ or IOS™ software development kit (SDK) by an entity other than the vendor of the particular platform) may be mobile software running on a mobile operating system such as IOS™, ANDROID™, WINDOWS® Phone, or another mobile operating system. In this example, the third party application1266can invoke the API calls1212provided by the operating system1204to facilitate functionality described herein. Some embodiments may particularly include a media overlay application1267. In certain embodiments, this may be a stand-alone application that operates to manage communications with a server system such as third party servers130or server systems102or708. In other embodiments, this functionality may be integrated with another application (e.g., messaging application1262). The media overlay application1267may request and display various data related to messaging, media content, media collections, media overlays, and so forth, and may provide the capability for a user106to input data related to the system via a touch interface, keyboard, or using a camera device of machine1300, communication with a server system via I/O components1350, and receipt and storage of object data in memory1330. Presentation of information and user inputs associated with the information may be managed by media overlay application1267using different frameworks1208, library1206elements, or operating system1204elements operating on a machine1300. FIG.13is a block diagram illustrating components of a machine1300, according to some embodiments, able to read instructions from a machine-readable medium (e.g., a machine-readable storage medium) and perform any one or more of the methodologies discussed herein. Specifically,FIG.13shows a diagrammatic representation of the machine1300in the example form of a computer system, within which instructions1316(e.g., software, a program, an application, an applet, an app, or other executable code) for causing the machine1300to perform any one or more of the methodologies discussed herein can be executed. In alternative embodiments, the machine1300operates as a standalone device or can be coupled (e.g., networked) to other machines.
In a networked deployment, the machine1300may operate in the capacity of a server machine or system102,120,122,124,130,308,310,312,314,316,322, and the like, or a client device110in a server-client network environment, or as a peer machine in a peer-to-peer (or distributed) network environment. The machine1300can comprise, but not be limited to, a server computer, a client computer, a personal computer (PC), a tablet computer, a laptop computer, a netbook, a personal digital assistant (PDA), an entertainment media system, a cellular telephone, a smart phone, a mobile device, a wearable device (e.g., a smart watch), a smart home device (e.g., a smart appliance), other smart devices, a web appliance, a network router, a network switch, a network bridge, or any machine capable of executing the instructions1316, sequentially or otherwise, that specify actions to be taken by the machine1300. Further, while only a single machine1300is illustrated, the term “machine” shall also be taken to include a collection of machines1300that individually or jointly execute the instructions1316to perform any one or more of the methodologies discussed herein. In various embodiments, the machine1300comprises processors1310, memory1330, and I/O components1350, which can be configured to communicate with each other via a bus1302. In an example embodiment, the processors1310(e.g., a central processing unit (CPU), a reduced instruction set computing (RISC) processor, a complex instruction set computing (CISC) processor, a graphics processing unit (GPU), a digital signal processor (DSP), an application specific integrated circuit (ASIC), a radio-frequency integrated circuit (RFIC), another processor, or any suitable combination thereof) include, for example, a processor1312and a processor1314that may execute the instructions1316. The term “processor” is intended to include multi-core processors1310that may comprise two or more independent processors1312,1314(also referred to as “cores”) that can execute instructions1316contemporaneously. AlthoughFIG.13shows multiple processors1310, the machine1300may include a single processor1310with a single core, a single processor1310with multiple cores (e.g., a multi-core processor1310), multiple processors1312,1314with a single core, multiple processors1312,1314with multiple cores, or any combination thereof. The memory1330comprises a main memory1332, a static memory1334, and a storage unit1336accessible to the processors1310via the bus1302, according to some embodiments. The storage unit1336can include a machine-readable medium1338on which are stored the instructions1316embodying any one or more of the methodologies or functions described herein. The instructions1316can also reside, completely or at least partially, within the main memory1332, within the static memory1334, within at least one of the processors1310(e.g., within the processor's cache memory), or any suitable combination thereof, during execution thereof by the machine1300. Accordingly, in various embodiments, the main memory1332, the static memory1334, and the processors1310are considered machine-readable media1338. As used herein, the term “memory” refers to a machine-readable medium1338able to store data temporarily or permanently and may be taken to include, but not be limited to, random-access memory (RAM), read-only memory (ROM), buffer memory, flash memory, and cache memory.
While the machine-readable medium1338is shown, in an example embodiment, to be a single medium, the term “machine-readable medium” should be taken to include a single medium or multiple media (e.g., a centralized or distributed database, or associated caches and servers) able to store the instructions1316. The term “machine-readable medium” shall also be taken to include any medium, or combination of multiple media, that is capable of storing instructions (e.g., instructions1316) for execution by a machine (e.g., machine1300), such that the instructions1316, when executed by one or more processors of the machine1300(e.g., processors1310), cause the machine1300to perform any one or more of the methodologies described herein. Accordingly, a “machine-readable medium” refers to a single storage apparatus or device, as well as “cloud-based” storage systems or storage networks that include multiple storage apparatus or devices. The term “machine-readable medium” shall accordingly be taken to include, but not be limited to, one or more data repositories in the form of a solid-state memory (e.g., flash memory), an optical medium, a magnetic medium, other non-volatile memory (e.g., erasable programmable read-only memory (EPROM)), or any suitable combination thereof. The term “machine-readable medium” specifically excludes non-statutory signals per se. The I/O components1350include a wide variety of components to receive input, provide output, produce output, transmit information, exchange information, capture measurements, and so on. In general, it will be appreciated that the I/O components1350can include many other components that are not shown inFIG.13. The I/O components1350are grouped according to functionality merely for simplifying the following discussion, and the grouping is in no way limiting. In various example embodiments, the I/O components1350include output components1352and input components1354. The output components1352include visual components (e.g., a display such as a plasma display panel (PDP), a light emitting diode (LED) display, a liquid crystal display (LCD), a projector, or a cathode ray tube (CRT)), acoustic components (e.g., speakers), haptic components (e.g., a vibratory motor), other signal generators, and so forth. The input components1354include alphanumeric input components (e.g., a keyboard, a touch screen configured to receive alphanumeric input, a photo-optical keyboard, or other alphanumeric input components), point-based input components (e.g., a mouse, a touchpad, a trackball, a joystick, a motion sensor, or other pointing instruments), tactile input components (e.g., a physical button, a touch screen that provides location and force of touches or touch gestures, or other tactile input components), audio input components (e.g., a microphone), and the like. In some further example embodiments, the I/O components1350include biometric components1356, motion components1358, environmental components1360, or position components1362, among a wide array of other components. For example, the biometric components1356include components to detect expressions (e.g., hand expressions, facial expressions, vocal expressions, body gestures, or eye tracking), measure biosignals (e.g., blood pressure, heart rate, body temperature, perspiration, or brain waves), identify a person (e.g., voice identification, retinal identification, facial identification, fingerprint identification, or electroencephalogram based identification), and the like. 
The motion components1358include acceleration sensor components (e.g., accelerometer), gravitation sensor components, rotation sensor components (e.g., gyroscope), and so forth. The environmental components1360include, for example, illumination sensor components (e.g., photometer), temperature sensor components (e.g., one or more thermometers that detect ambient temperature), humidity sensor components, pressure sensor components (e.g., barometer), acoustic sensor components (e.g., one or more microphones that detect background noise), proximity sensor components (e.g., infrared sensors that detect nearby objects), gas sensor components (e.g., machine olfaction detection sensors, gas detection sensors to detect concentrations of hazardous gases for safety or to measure pollutants in the atmosphere), or other components that may provide indications, measurements, or signals corresponding to a surrounding physical environment. The position components1362include location sensor components (e.g., a Global Positioning System (GPS) receiver component), altitude sensor components (e.g., altimeters or barometers that detect air pressure from which altitude may be derived), orientation sensor components (e.g., magnetometers), and the like. Communication can be implemented using a wide variety of technologies. The I/O components1350may include communication components1364operable to couple the machine1300to a network1380or devices1370via a coupling1382and a coupling1372, respectively. For example, the communication components1364include a network interface component or another suitable device to interface with the network1380. In further examples, communication components1364include wired communication components, wireless communication components, cellular communication components, near field communication (NFC) components, BLUETOOTH® components (e.g., BLUETOOTH® Low Energy), WI-FI® components, and other communication components to provide communication via other modalities. The devices1370may be another machine1300or any of a wide variety of peripheral devices (e.g., a peripheral device coupled via a Universal Serial Bus (USB)). Moreover, in some embodiments, the communication components1364detect identifiers or include components operable to detect identifiers. For example, the communication components1364include radio frequency identification (RFID) tag reader components, NFC smart tag detection components, optical reader components (e.g., an optical sensor to detect one-dimensional bar codes such as a Universal Product Code (UPC) bar code, multi-dimensional bar codes such as a Quick Response (QR) code, Aztec Code, Data Matrix, Dataglyph, MaxiCode, PDF417, Ultra Code, Uniform Commercial Code Reduced Space Symbology (UCC RSS)-2D bar codes, and other optical codes), acoustic detection components (e.g., microphones to identify tagged audio signals), or any suitable combination thereof. In addition, a variety of information can be derived via the communication components1364, such as location via Internet Protocol (IP) geo-location, location via WI-FI® signal triangulation, location via detecting a BLUETOOTH® or NFC beacon signal that may indicate a particular location, and so forth.
In various example embodiments, one or more portions of the network1380can be an ad hoc network, an intranet, an extranet, a virtual private network (VPN), a local area network (LAN), a wireless LAN (WLAN), a wide area network (WAN), a wireless WAN (WWAN), a metropolitan area network (MAN), the Internet, a portion of the Internet, a portion of the public switched telephone network (PSTN), a plain old telephone service (POTS) network, a cellular telephone network, a wireless network, a WI-FI® network, another type of network, or a combination of two or more such networks. For example, the network1380or a portion of the network1380may include a wireless or cellular network, and the coupling1382may be a Code Division Multiple Access (CDMA) connection, a Global System for Mobile communications (GSM) connection, or another type of cellular or wireless coupling. In this example, the coupling1382can implement any of a variety of types of data transfer technology, such as Single Carrier Radio Transmission Technology (1×RTT), Evolution-Data Optimized (EVDO) technology, General Packet Radio Service (GPRS) technology, Enhanced Data rates for GSM Evolution (EDGE) technology, third Generation Partnership Project (3GPP) including 3G, fourth generation wireless (4G) networks, Universal Mobile Telecommunications System (UMTS), High Speed Packet Access (HSPA), Worldwide Interoperability for Microwave Access (WiMAX), Long Term Evolution (LTE) standard, others defined by various standard-setting organizations, other long range protocols, or other data transfer technology. In example embodiments, the instructions1316are transmitted or received over the network1380using a transmission medium via a network interface device (e.g., a network interface component included in the communication components1364) and utilizing any one of a number of well-known transfer protocols (e.g., Hypertext Transfer Protocol (HTTP)). Similarly, in other example embodiments, the instructions1316are transmitted or received using a transmission medium via the coupling1372(e.g., a peer-to-peer coupling) to the devices1370. The term “transmission medium” shall be taken to include any intangible medium that is capable of storing, encoding, or carrying the instructions1316for execution by the machine1300, and includes digital or analog communications signals or other intangible media to facilitate communication of such software. Furthermore, the machine-readable medium1338is non-transitory (in other words, not having any transitory signals) in that it does not embody a propagating signal. However, labeling the machine-readable medium1338“non-transitory” should not be construed to mean that the medium is incapable of movement; the medium1338should be considered as being transportable from one physical location to another. Additionally, since the machine-readable medium1338is tangible, the medium1338may be considered to be a machine-readable device. Throughout this specification, plural instances may implement components, operations, or structures described as a single instance. Although individual operations of one or more methods are illustrated and described as separate operations, one or more of the individual operations may be performed concurrently, and nothing requires that the operations be performed in the order illustrated. Structures and functionality presented as separate components in example configurations may be implemented as a combined structure or component. 
Similarly, structures and functionality presented as a single component may be implemented as separate components. These and other variations, modifications, additions, and improvements fall within the scope of the subject matter herein. Although an overview of the inventive subject matter has been described with reference to specific example embodiments, various modifications and changes may be made to these embodiments without departing from the broader scope of embodiments of the present disclosure. The embodiments illustrated herein are described in sufficient detail to enable those skilled in the art to practice the teachings disclosed. Other embodiments may be used and derived therefrom, such that structural and logical substitutions and changes may be made without departing from the scope of this disclosure. The Detailed Description, therefore, is not to be taken in a limiting sense, and the scope of various embodiments is defined only by the appended claims, along with the full range of equivalents to which such claims are entitled. As used herein, the term “or” may be construed in either an inclusive or exclusive sense. Moreover, plural instances may be provided for resources, operations, or structures described herein as a single instance. Additionally, boundaries between various resources, operations, modules, engines, and data stores are somewhat arbitrary, and particular operations are illustrated in a context of specific illustrative configurations. Other allocations of functionality are envisioned and may fall within a scope of various embodiments of the present disclosure. In general, structures and functionality presented as separate resources in the example configurations may be implemented as a combined structure or resource. Similarly, structures and functionality presented as a single resource may be implemented as separate resources. These and other variations, modifications, additions, and improvements fall within a scope of embodiments of the present disclosure as represented by the appended claims. The specification and drawings are, accordingly, to be regarded in an illustrative rather than a restrictive sense. | 74,798
11943186 | Throughout the Figures, like reference numbers refer to like elements. The above-listed figures are illustrative and are provided as merely examples of embodiments for implementing the various principles and features of the present invention. It should be understood that the features and principles of the present invention may be implemented in a variety of other embodiments and the specific embodiments as illustrated in the Figures should in no way be construed as limiting the scope of the invention. DETAILED DESCRIPTION OF SPECIFIC EMBODIMENTS The invention will now be described in detail with reference to various embodiments thereof as illustrated in the accompanying drawings. In the following description, specific details are set forth in order to provide a thorough understanding of the invention. It will be apparent, however, to one skilled in the art, that the invention may be practiced without using some of the implementation details set forth herein. It should also be understood that well known operations have not been described in detail in order to not unnecessarily obscure the invention. I. The Use of the Email and DNS Infrastructure to Define the Routing for the Delivery of Messages Containing Time-Based Media Using a Near Real-Time Communication Protocol for the Actual Delivery of the Media Referring toFIG.1, a diagram of a representative network system capable of (i) supporting “live” or near real-time communication of time-based media and (ii) routing using the infrastructure of email and DNS according to one possible embodiment of the invention is shown. The system10includes a network12with users A, B, C and D using communication devices14A,14B,14C and14D and Servers16A,16B,16C and16D located on the network12. The network12further includes a DNS server18. In various embodiments, the network12may include the Internet, an intranet, a mobile IP network, or any other type of network that relies on the Internet Protocol and/or DNS, or any combination thereof. Users A, B and C are each addressed by the servers16A through16D by their respective globally addressable email addresses “UserA@Domain A”, “UserB@Domain B”, and “UserC@Domain C”. User D is intentionally not identified on the network12by a globally addressable email address for reasons mentioned below. The Servers16A,16B,16C and16D are each configured to provide one or more services to Users A, B, C and D respectively. In this example, Server A defines Domain A and provides User A with the standard email delivery service using SMTP (or a similar proprietary or non-proprietary service) and MX DNS records, hereafter referred to as “MX”. Server A further provides User A with a real-time communication service, hereafter referred to as “RVX”. Server16B defines Domain B and provides User B with the real-time communication service RVX, but not the email service MX. Server16C defines Domain C and provides User C with the email service MX, but not the real-time communication service RVX. Server16D provides user D with neither the real-time communication service RVX nor the email service MX, but possibly other services that are not identified because they are not relevant. In one embodiment, the real-time service RVX may rely on any communication protocol that allows users to communicate time-based media in near real-time, but does not require the recipient to review the time-based media in a near real-time mode.
Known protocols with these properties include the Cooperative Transmission Protocol (CTP) described in detail in the U.S. application Ser. No. 12/028,400 and Ser. No. 12/192,890 or the near real-time synchronization protocol of voice or other time-based media as described in U.S. application Ser. Nos. 12/253,816, 12/253,833 and 12/253,842. The above-listed U.S. applications are assigned to the assignee of the present invention and are incorporated herein by reference for all purposes. In alternate embodiments, the RVX service may rely on other communications protocols, individually or in combination, that provide near real-time communication, such as SIP, RTP, Skype, VoIP, etc. The communication devices14A through14D may each be any type of communication device, such as land-line telephones, VoIP telephones, cellular radios, satellite radios, military or first responder radios, mobile Internet devices, or just about any other type of communication device. In addition, a given user might have multiple communication devices14. For example, a user may have one or more of the following: a home computer, a work computer, a Push to Talk radio, a mobile phone or a personal digital assistant (PDA). Regardless of the number of communication devices14each user A, B, C and D has, each will operate essentially the same and receive the services provided by the servers16A,16B,16C and16D as described herein respectively. It should be noted that the system10as illustrated has been greatly simplified compared to what would typically be implemented in actual embodiments. For the sake of illustration, the RVX and MX services as (or not) provided to Users A, B, C and D as listed above have been purposely selected to highlight and describe certain features and principles of the invention. In actual embodiments, however, there would likely be a significantly larger number of users, each with one or more communication devices14and associated servers on the network12, providing a variety of services to each user. In addition, any combination ranging from a single server to a suite of servers16may be included on the network12to provide the RVX and/or MX for one to multiple users respectively. The communication devices14A,14B and14C and the servers16A,16B and16C may also communicate with one another in a manner similar to that described above using DNS, SMTP, or other proprietary or non-proprietary email protocols for route discovery across one or more hops on the network12. A message to a recipient in the same domain is typically delivered to an inbox on the same server16or an associated server in the same domain. A message sent to a recipient in another domain will typically be sent to the email server of the recipient via one or more hops across the network12. With each hop, the media is transmitted using the real-time protocol as soon as the delivery path to the next hop is discovered. If multiple hops are required, then media is typically being transmitted between hops using the real-time protocol before the complete delivery route to the recipient is known (i.e., the path through subsequent hops). This differs significantly from conventional emails, where the body of the email is typically first received in full and stored at each hop and forwarded to the next hop only after the route to the next hop is discovered. Referring toFIG.2, a diagram of a communication device14according to one embodiment of the present invention is shown.
In this embodiment, the communication device14is a mobile device20capable of wirelessly communicating with the network12, such as a mobile phone or PTT radio. The mobile device20may optionally include one or more of the following: a keypad22, a display24, speaker26, microphone28, volume control30, camera32capable of generating still photos and/or video, a display control element34, a start function element36and an end function element38. In various embodiments, the device20(i) is IP based, meaning it is designed to communicate over the network12using the Internet Protocol and (ii) runs one or more RVX protocols, including any of those listed above or any other near real-time communication protocol. In addition, the mobile device20may optionally also locally run an email client, access an email client located on one of the servers16located on the network12, or be capable of both running and accessing an email client on the network. Referring toFIG.3, a diagram of a communication device according to another embodiment of the present invention is shown. In this embodiment, the communication device14is a computer40connected to the network12, either through a wired or wireless connection (not shown). The computer40optionally includes one or more of the following: a keyboard42, a display44, speakers46, a microphone48, a camera50capable of generating still photos or video, a mouse52, a start function element54and an end function element56. The computer40is capable of running an email client, accessing an email client located on the network12, or both. In various embodiments, the computer40(i) is IP based, meaning it is designed to communicate over the network12using the Internet Protocol and (ii) runs one or more RVX protocols, including any of those listed above or any other near real-time communication protocol. The computer40could be a portable computer, such as a laptop or personal digital assistant, and is not limited to the desktop computer as shown. In addition, the computer40may optionally also locally run an email client, access an email client located on one of the servers16located on the network12, or be capable of both running and accessing an email client on the network. The start function elements36/54and the end function elements38/56of the mobile device20and computer40are meant to be symbolic of their respective functions. It is not necessary for mobile device20, computer40, or any other type of communication device14, to physically include start and end buttons per se. Rather, it should be understood that each of these functions might be implemented in a variety of ways, for example, by entering a voice command, a predefined keystroke or command using a touch screen or other input device such as a mouse, stylus or pointer, etc. In one specific embodiment, the start and/or end functions may be implemented by default. In other words, the start function may automatically be implemented by the creation of media after the email address of a recipient is defined. For example, a sender may select a recipient from their contacts list, and then begin talking or creating other time-based media. By virtue of defining the recipient and the creation of media, the “start” function36/54may automatically be implemented. Similarly, the end function may be implemented by default. After the sender stops creating media, the end function may automatically be implemented after a predetermined period of time.
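The start-by-default and end-by-default behavior described above may be illustrated with the following sketch, in which the creation of media implicitly starts a message and an idle timeout implicitly ends it. The class name and the two-second timeout are illustrative assumptions only.

    # Illustrative sketch only: creation of media implicitly "starts" a message
    # once a recipient is defined, and an idle timeout implicitly "ends" it. The
    # class name and the two-second timeout are assumptions for illustration.
    import time

    class MessageSession:
        def __init__(self, recipient: str, idle_timeout_s: float = 2.0):
            self.recipient = recipient
            self.idle_timeout_s = idle_timeout_s
            self.started = False
            self.last_media_time = None

        def on_media_created(self, chunk: bytes) -> None:
            if not self.started:
                self.started = True        # implicit start on first media
            self.last_media_time = time.monotonic()
            # ... progressively transmit `chunk` toward the recipient here ...

        def should_end(self) -> bool:
            # Implicit end after a predetermined period with no new media.
            return (self.started and self.last_media_time is not None
                    and time.monotonic() - self.last_media_time > self.idle_timeout_s)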
In one non-exclusive embodiment, the network12uses the existing email infrastructure, including the globally recognizable email addresses of the recipient users and DNS for route discovery, while using a near real-time RVX protocol for the actual transport of messages containing time-based media to the addressed recipient once the route is discovered. Like conventional emails, each message relies on a header that defines, among other things, a globally addressable email address of one or more recipients for routing purposes. Unlike conventional store and forward emails, however, the time-based media of the message is transmitted using a near real-time RVX protocol. As a result, time-based media may be simultaneously and progressively transmitted across the network12, as the sender creates the media. In addition, the recipient may optionally simultaneously and progressively render the time-based media as it is received over the network. When two or more parties are conversing (e.g., generating and reviewing time-based media) at approximately the same time, the network12is supporting full-duplex, near real-time communication, using one or more RVX protocol(s) for media delivery, while using the existing email infrastructure and DNS for routing. With full duplex real-time communication, the user experience is very similar to a conventional telephone conversation, except the hassles of dialing a telephone number and waiting and listening to the phone ring while a circuit connection is established is avoided. Alternatively, if the recipient does not reply at approximately the same time, then the user experience is similar to an asynchronous messaging system, such as voice mail, but again without the hassles of dialing the telephone number of the recipient, listening to the phone ring while the establishment of a circuit connection is attempted, and then the eventual rolling-over into the voice mail system of the recipient. On the contrary, the sending party simply has to select or otherwise define the email address of the recipient and then start generating media. The media is routed to the recipient automatically without waiting for a circuit connection to be established. Referring toFIG.4A, a flow diagram illustrating one possible sequence for creating and transmitting time-based media associated with a message on a communication device14in accordance with the principles of the present invention is shown. If the user of a communication device14wishes to communicate with a particular recipient, the user will either select the recipient from their list of contacts or reply to an already received message from the intended recipient. Alternatively, the globally addressable email address of the recipient is manually entered into the device14. As soon as the email address of the recipient is defined, two operations are performed. A message header is created (step62) and the defined email address is included in a header field (i.e., the “To”, CC, and/or “BCC” field). In addition, the route for delivering the media associated with the message to the globally addressed recipient is immediately discovered using a DNS lookup result. The result can be either an actual DNS lookup or a cached result from a previous lookup. Thereafter, the start function36/54is initiated, either manually or by default, and the user may begin creating time-based media (step64), for example by speaking into the microphone, generating video, or both. 
The time-based media is then progressively and simultaneously encoded (step66), transmitted (step68) over the network12using an RVX protocol using the discovered delivery route, and optionally persistently stored on the device14(step70). It should be noted that although these steps62through70are illustrated in the diagram in a sequence, for all practical purposes, they occur at substantially the same time. As the media is created, the RVX protocol progressively and simultaneously transmits the media across the network12to the recipient, as the route is typically discovered without any perceptible delay to the sending user. The time-based media of outgoing messages may optionally be persistently stored on the sending communication device14for a number of reasons. For example, if time-based media of a message is created before the delivery route is discovered, then the time-based media may be transmitted from storage when the delivery route at least to the next hop is discovered. If time-based media is still being created after the route is discovered, then the time-based media is transmitted progressively and simultaneously as the media is being created. Alternatively with the storage of time-based media, the sender may review stored outgoing messages at an arbitrary later time. A message may also be created and stored when the communication device14is not connected to the network12, where connected is defined as the ability to send messages over the network and not connected is defined as the inability to send messages over the network. When the device14later connects, the message may be transmitted to the intended recipient from storage, using either an RVX protocol or as an attachment to an email. Referring toFIG.4B, a flow diagram100illustrating one possible sequence for creating a message header (step62inFIG.4A) in accordance with the principles of the invention is shown. In the step62a, the globally addressable email address of the sender is provided in the “From” field of the message header. In step62b, the globally addressable email address of the recipient is entered into the “To” field of the message header. If there are multiple recipients, the email address of each is entered into the “To” field. In additional embodiments, a “CC” or “BCC” field may be used for one or all recipients. In step62c, a globally unique message ID or number is assigned to the message. In step62d, other information, such as a conversation name, or the subject of the message, is provided in the header. In step62e, the start date/time the message was created and possibly the end date/time of the message may be included in the header. In one embodiment, the steps62athrough62egenerally all occur at substantially the same time, with the possible exception of defining the end date/time. In other embodiments, the steps62athrough62emay occur in any order. The start and end date/times ordinarily coincide with the implementation of the start function36/54and end function38/56on the sending device14respectively. In certain embodiments, the steps62athrough62emay be performed on a sending communication device14. In other embodiments, the sending communication device may send some or all of the message header information to a server16, where the steps62athrough62eare performed. The time-based media of the message may also be optionally stored on a server16for later review by the sending user or transmission to the recipient. 
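By way of non-limiting illustration, the header construction of steps 62a through 62e might be sketched as follows. The dictionary layout and the use of a UUID for the globally unique message ID are assumptions for illustration; any globally unique identifier scheme could be used.

    # Illustrative sketch only: building a message header per steps 62a-62e. The
    # dictionary layout and the use of a UUID for the globally unique message ID
    # are assumptions; any globally unique identifier scheme could be used.
    import uuid
    from datetime import datetime, timezone

    def create_message_header(sender: str, recipients: list, conversation: str = "") -> dict:
        return {
            "From": sender,                                   # step 62a
            "To": recipients,                                 # step 62b (CC/BCC also possible)
            "Message-ID": str(uuid.uuid4()),                  # step 62c
            "Conversation": conversation,                     # step 62d
            "Start": datetime.now(timezone.utc).isoformat(),  # step 62e
            "End": None,                                      # set when the end function fires
        }

    header = create_message_header("UserA@DomainA", ["UserB@DomainB"], "Lunch plans")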
In the embodiments described above, a message header with various fields including a To, From, Message ID number, Conversation Name, and message Start and End time is provided. It should be understood that not all of these fields are necessary, and other fields may be included. The only required information is at least one recipient specified in one of the To, CC, or BCC fields, which defines the globally addressable email address of a recipient. The other fields are all optional. The format of the message header is also variable. In one embodiment, the structure of the message header may be similar to that used with conventional emails or the envelope used with emails. In other embodiments, the structure of the message header may take any form that is suitable for transmitting the globally addressable email address of the recipient(s), along with possibly other header information, across the network12. While specific email header fields are discussed for specifying recipients, the actual header field containing the recipient address information may not necessarily include the globally addressable email address of the recipient per se. As is well known in the art, an “envelope recipient” may be used to specify the email address of the recipient, even though the envelope recipient may differ from the recipients listed in the email headers. Thus as used herein, the term message header should be broadly construed to include both envelope information and conventional message or email headers including any number of fields, such as but not limited to those specified in RFC 822 or 5322. In addition, the usage of the terms “addressing” or “globally addressable email address” should be broadly construed to include any addressing method, including usage in conventional message or email headers or in a message envelope. The network12, under certain circumstances, may deliver messages containing time-based media that can (i) be simultaneously and progressively transmitted to a recipient over the network12and (ii) reviewed in near real-time by the addressed recipient as the time-based media is being created and sent by the sending user. Under other circumstances, the messages cannot be delivered in real-time. Both the near real-time and non real-time scenarios are discussed below with regard toFIGS.5A through5Crespectively. Referring toFIG.5A, a flow diagram80illustrating one possible sequence for conducting near real-time communication with messages containing time-based media in accordance with the principles of the present invention is shown. The sequence is described in the context of user A sending a message to user B using any near real-time RVX protocol. As noted above, server16B provides user B with an RVX service, but not the MX service. In this example, the steps62through70as described above with regard toFIGS.4A and4Bmay occur either on the communication device14A of the sender or the server16A. In the initial step82, server16A receives the message header (or the header information allowing the server to perform some or all of the steps62a-62e). As soon as user B's globally addressable email address (userB@DomainB) is received, server16A requests, using standard DNS protocols, that DNS server18perform a DNS lookup of domain B, or accesses a previously cached lookup for the RVX of domain B (step84). Regardless of how obtained, the result is positive (decision86) since the RVX exists for domain B.
Typically at substantially the same time, the server16A receives the time-based media of the message. As soon as the delivery path to server16B is at least partially known, the media is progressively and simultaneously sent using the RVX protocol from the server16A to server16B. The time-based media may be transmitted across one or more hops between the two servers16A and16B. At each hop, a DNS lookup result is used to discover the delivery route to the next hop, while the RVX protocol is used to deliver the time-based media to each next hop. In one embodiment, the media is simultaneously and progressively transmitted to the communication device14B of the recipient when the time-based media arrives at server16B. The recipient is notified of the incoming message, and in response, the recipient may elect to simultaneously review the media in the near real-time mode as the media of the message is progressively received. In an alternative embodiment, the media of the message is also optionally placed in an inbox and persistently stored on the recipient device14B. With the persistent storage of the message, the recipient has the option of reviewing the media in the near real-time mode as the media is received or at an arbitrary later time from storage. In yet another embodiment, the message may also be stored in an inbox located at the server16B associated with the user B. In this manner, the user of device14B may access the message in either real-time or at an arbitrary later time. As noted above, user B is not provided the MX service and therefore cannot receive emails. But in situations where recipient can receive emails, the message can be encapsulated into a file and the file attached to an email that is forwarded to the inbox of the recipient. In yet other embodiments, the media of the message may be stored in an out-box of the sending user, either located on the user's sending communication device14A, or on the server16A associated with the sender. Referring toFIG.5B, a flow diagram80illustrating one possible example of the communication sequence between user A and user C in accordance with the principles of the invention is shown. As previously noted, server16C provides user C with the MX service, but not a real-time RVX service. When user A wishes to communicate with user C, the initial sequence is essentially the same as that described above. Server16A initially receives a message header (or the header information necessary to optionally perform steps62a-62e) with the globally addressable email address of user C (userC@domainC) and the progressive and simultaneous transmission of time-based media by user A (step82). Since the RVX lookup result (decision86) is negative, server16A performs a DNS lookup or uses a previously cached MX lookup for domain C (step90). With a positive result (decision92), server16A sends a conventional email with the time-based media encapsulated as an attachment (step96) to server16C. At the server16C, the email is placed in the recipient's inbox. The email may also be forwarded to an inbox on communication device14C. Thus, when the recipient does not have the RVX service, the time-based media of the message is sent across the network12by Server16A to server16C, and possibly communication device14C, using the store and forward procedure of SMTP or a similar proprietary or non-proprietary email protocol. 
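As a rough sketch of the routing decision described with regard toFIG.5A(together with the MX fallback and error cases discussed below with regard toFIGS.5B and5C), the server-side logic might resemble the following Python fragment. The lookup and transmission callables are placeholders, since the "RVX" record type is specific to this system and is not a standard DNS record.

```python
def deliver_message(header, media_chunks, lookup_rvx, lookup_mx,
                    send_rvx, send_email, notify_error):
    """Illustrative routing decision: try the RVX route first, fall back to
    email via MX, and report an error when neither service exists."""
    domain = header["To"][0].split("@", 1)[1]

    rvx_host = lookup_rvx(domain)          # actual or previously cached lookup (decision 86)
    if rvx_host:
        # Progressively and simultaneously transmit the media as it is created.
        send_rvx(rvx_host, header, media_chunks)
        return "delivered-rvx"

    mx_host = lookup_mx(domain)            # fallback MX lookup (decision 92)
    if mx_host:
        # Encapsulate the completed media in a file and attach it to an email.
        send_email(mx_host, header, attachment=b"".join(media_chunks))
        return "delivered-email"

    notify_error(header, "recipient has neither RVX nor MX service")
    return "undeliverable"

# Example usage with stubbed-out lookups and transports.
result = deliver_message(
    {"To": ["userB@domainB"]},
    [b"voice-chunk-1", b"voice-chunk-2"],
    lookup_rvx=lambda domain: "rvx.domainb.example",   # pretend the RVX record exists
    lookup_mx=lambda domain: None,
    send_rvx=lambda host, hdr, media: None,
    send_email=lambda host, hdr, attachment: None,
    notify_error=lambda hdr, reason: None,
)
```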
Referring toFIG.5C, a flow diagram80illustrating one possible example of the communication sequence between user A and user D in accordance with the principles of the invention is shown. As previously noted, user D is not provided with either the email MX service or a near real-time RVX service. When user A wishes to communicate with user D, the initial sequence is essentially the same as that described above. Server16A receives a message header with the globally addressable email address of user D (userD@domainD) and the progressive transmission of time-based media by user A (step82). Since the RVX lookup (decision86) and the MX lookup for domain D (diamond92) are both negative, an error message is generated (step94) and the message cannot be delivered (step96). In various embodiments, the time-based media of the message may be stored at either the sending communication device14A, the server16A, or both. The message may later be sent when the RVX and/or MX service is provided to user D. The scenario described with regard toFIG.5Ctypically occurs if an incorrect email is provided for a recipient. When the sender attempts to send a message using an invalid email address, the error message (step94) results. If the correct email address is provided, the message can then be forwarded using either an RVX protocol or as an attachment to an email using the MX service, depending on the services provided to user D. In an alternative embodiment, the communication devices14A through14C may be arranged in a peer-to-peer configuration. With this arrangement, at least the sending communication devices14are capable of performing the RVX and/or MX lookups on DNS server18directly and caching the results, without the aid of an intervening server16to perform the these functions. The communication devices14may also be capable of progressively transmitting the media of the messages directly to other communication devices. Depending on whether the recipient is a member or not of the RVX and/or MX domains, the sending communication device14A will either (i) progressively transmit the time-based media of a message to the recipient over the network12as the media is created; (ii) encapsulate the time-based media of the message into a file and transmit an email including the file as an attachment to the recipient using SMTP or a similar proprietary protocol; (iii) or receive an error message if an invalid email address was used. Referring toFIG.5D, a flow diagram illustrating one possible example of peer-to-peer communication in accordance with the principles of the invention is shown. In the initial step101, a sending communication device14indicates that it would like to communicate with a receiving communication device14. In decision diamond102, the communication device14of the sender relies on either an actual or cached DNS lookup result of the recipient's globally addressable email address to determine if the peer recipient receives the RVX service. If the result is positive, then the time-based media created (step103) using the sending communication device14is progressively transmitted (step104) to the recipient as it is created using the delivery route defined by the RVX lookup. In decision diamond105, it is determined if real-time communication is established. If yes, then the transmitted media is progressively rendered at the communication device14of the recipient as the media is received (box106). 
If real-time communication is not established, then the media of the message is placed in the inbox of the recipient (box107), either on the device14of the recipient, a server16associated with the recipient, or possible both. Real-time communication may not take place with the recipient for a number of reasons, such as the recipient is not available, out of network range, or has indicated a desire to not review the message in the near real-time mode. In another alternative embodiment, the message may always be placed in the inbox of the recipient, regardless if it is reviewed in real-time. On the other hand, if the recipient does not receive the RVX service (decision102), then the media of the message is delivered by email, provided the recipient receives the MX domain service. The time-based media is encapsulated into a file and attached to an email (step108). When the message is complete, the email is transmitted using the route defined by the MX lookup result (step109) to the inbox of the recipient. In various embodiments, the inbox may be located on the device14of the recipient, a server16associated with the recipient, or both. In situations where both peers are running an email client, media may be sent in the form of an attachment to an email from the sending communication device14to the receiving communication device14. This differs from known telephone messaging systems, where a server, as opposed to a sending peer, emails a voice message to the recipient. In certain embodiments, an attachment may be substituted or augmented by a link to a web page containing the time-based media, as described in more detail below. It should be noted that the discussion above with regard toFIGS.4A,4B and5A through5Chas been simplified to illustrate certain aspects of the invention. It should be understood that actual implementations could be modified in several ways. For example, each time the server16A received an email address, the server16A would first determine if the domain of the recipient (i.e., domain A, domain B or domain C), is within one or more local domains of the server16A. If not, then the procedures described above with regard toFIGS.5A,5B and5Care performed respectively. On the other hand if the domain of the recipient is within a local domain of the server16A, then the server16A may deliver the message directly to the recipient either (i) in real-time if the recipient receives a real-time communication service or (ii) as an attachment to an email if the recipient receives the MX service, but not a real-time service. In addition, it may not be necessary for the Server16A to perform a DNS lookup in each instance. As is well known, previous DNS lookup results may be cached and used rather than performing a new DNS lookup each time an email address of a recipient is received. Referring toFIG.6, a flow diagram110illustrating one possible sequence for sending time-based media encapsulated in an email attachment in accordance with the principles of the invention is shown. When the time-based media of a message is to be sent in the form of an email (e.g., box98inFIG.5Bor box107inFIG.5D), the time-based media generated by user A is first encapsulated in a file (step112). The file is then attached to the email (step114) when the message is complete. When the time-based media of the message is complete, the email with the attachment is then transmitted (step116) to the MX lookup result of the recipient in a manner similar to a conventional email. 
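One possible sketch of steps112through116ofFIG.6, using the standard Python email and smtplib libraries, is shown below; the SMTP host, file name, and subject line are illustrative assumptions rather than part of the described system.

```python
from email.message import EmailMessage
import smtplib

def send_media_as_email_attachment(sender, recipient, media_bytes,
                                   smtp_host, filename="message.media"):
    """Steps 112-116 of FIG. 6: encapsulate the completed time-based media in a
    file, attach it to an email, and transmit it toward the recipient's MX host."""
    msg = EmailMessage()
    msg["From"] = sender
    msg["To"] = recipient
    msg["Subject"] = "Time-based media message"
    msg.set_content("The time-based media of this message is attached.")
    # Steps 112/114: package the media as a generic binary attachment.
    msg.add_attachment(media_bytes, maintype="application",
                       subtype="octet-stream", filename=filename)
    # Step 116: hand the email to an SMTP server for store-and-forward delivery.
    with smtplib.SMTP(smtp_host) as smtp:
        smtp.send_message(msg)
```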
With either the server or peer-to-peer models described above, the RVX lookup result is initially used to deliver the time-based media. If the RVX attempt fails, then the MX result is used as a backup. With this arrangement, a conventional email with the time-based media included in an attachment and/or web link is used to deliver the media in circumstances where the recipient is not provided RVX service. The email may be created either on a server or on the sending device. II. Delivery Options Referring toFIG.7, a diagram illustrating another embodiment for the delivery of time-based media over the network12in accordance with the principles of the invention is shown. With this embodiment, the network12is essentially the same as that described above with regard toFIG.1, with at least one exception. One or more of the servers16A-16C are configured as web servers, in addition to providing the RVX and/or MX services as described above. With this embodiment, users receive an email from their respective server16containing a URL link when a message is sent to them. When the user selects the link through a web browser running on their communication device14, the appropriate web server16serves up web pages allowing the recipient to access and review the message. The served web pages may also provide a variety of rendering options, such as review the media of the message in either the real-time or time-shifted modes, catch up to live, pause a live conversation, jump to the head of a conversation, jump to a previous point in time of the conversation, render faster, render slower, jump between different conversations, etc. In the figure, the web server functionality is provided as one of the services provided by servers16A,16B and/or16C. In an alternative embodiment, the web server functionality can be implemented using one or more other dedicated web servers (not illustrated) on the network12besides16A,16B or16C. III. Email Protocol Modifications and Progressive Emails The messages as described above are routed using globally addressable email address and the DNS infrastructure for defining a delivery route, while using an RVX protocol for the actual delivery of the time-based media in near real-time. Although the SMTP and other proprietary and non-proprietary email protocols as currently defined and used are essentially store and forward protocols, with certain modifications, these protocols can be used as an RVX messaging protocol for the near real-time delivery of time-based media as contemplated herein. With conventional emails, the media content must be composed in full and packaged before the email can be sent. On the receiving end, the email must be received in full before the recipient can review it. As described in detail below, SMTP, Microsoft Exchange or any other proprietary email protocol may be used for creating “progressive” emails, where media may be sent in real-time. The existing email infrastructure can be used to support the real-time transmission of time-based media by modifying the way the SMTP, Microsoft Exchange or other proprietary and non-proprietary email protocols (hereafter generically referred to as an email protocol or protocols) are used on the sending side and modifying the way that emails are retrieved from the server on the receiving side. Current email protocols do not strictly require that the entire message be available for sending before delivery is started, although this is typically how email protocols are used. 
Time-based media can therefore be delivered progressively, as it is being created, using standard email protocols. Email is typically delivered to a recipient through an access protocol like POP or IMAP. These protocols do not support the progressive delivery of messages as they are arriving. However, by making modifications to these access protocols, a message may be progressively delivered to a recipient as the media of the message is arriving over the network. Such modifications include the removal of the current requirement that the email server know the full size of the email message before the message can be downloaded to the client. By removing this restriction, a client may begin downloading the time-based media of an email message as the time-based media of the email message is received at the server over the network. Referring toFIG.8, the structure of a conventional email120according to the prior art is illustrated. The email120includes a header122and a body124. The header includes a "To" field (and possibly CC and/or BCC fields), a "From" field, a unique global ID number, a subject field, optional attachments, and a date/time stamp. The body124of the email includes the media to be transmitted, which typically includes a typed message and possibly attached files (e.g., documents or photos). When complete, the email is transmitted by implementing a "send" function or command. A DNS lookup of the email address of the recipient is then performed and the email is routed to the recipient. Conventional emails are "static", meaning the body of the email, including attachments, must be created before transmission may begin. Once transmission starts, the content defined in the body is fixed, and cannot be dynamically altered or updated. As a result, there is no way to progressively transmit time-based media with conventional emails as the media is being created. Prior art emails120are therefore incapable of supporting near real-time communication. Referring toFIG.9, one possible embodiment of a "progressive" email130according to the principles of the invention is shown. The email message130, which is capable of supporting real-time communication, includes a header132including a "To" field (and possibly CC and/or BCC fields) and a body134. The structure of email130differs from a conventional prior art email120in at least two regards. First, the header132includes an email Start date/time and an End date/time. By associating a start and end time with an email130, as opposed to just a date/time stamp when an email120is sent, the second difference may be realized. As soon as the email address of the recipient is defined, the delivery path to the next hop or hops is immediately ascertained, using a DNS lookup result of the defined email address. Again, the lookup result can be either an actual result or a previous result that is cached. As the delivery route from hop to hop is discovered, time-based media may be progressively transmitted as it is created, using the streaming nature of SMTP, Microsoft Exchange or any other type of email protocol. The body134of email130is therefore "progressive". As the time-based media associated with an email message130is dynamically created, the time-based media is progressively transmitted to the email server of the recipient. If an email130is sent to multiple recipients, regardless of whether they are identified in the To, CC or BCC fields, the above process is repeated for each.
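A minimal sketch of such a progressive email object, assuming a hypothetical transmit_chunk callable that forwards each fragment of media toward the next hop, might look as follows.

```python
from datetime import datetime, timezone

class ProgressiveEmail:
    """Sketch of the email 130 of FIG. 9: a header with Start/End date-times and a
    body that can grow while delivery is already underway."""

    def __init__(self, sender, recipients, transmit_chunk):
        self.header = {
            "From": sender,
            "To": list(recipients),
            "Start": datetime.now(timezone.utc).isoformat(),
            "End": None,                       # open-ended until the end function runs
        }
        self._transmit_chunk = transmit_chunk  # delivers one chunk toward the next hop
        self.body_chunks = []

    def append_media(self, chunk):
        """Progressively transmit each chunk of time-based media as it is created."""
        self.body_chunks.append(chunk)
        self._transmit_chunk(self.header, chunk)

    def end(self):
        """Define the End date/time when the sender implements the end function."""
        self.header["End"] = datetime.now(timezone.utc).isoformat()

# Example: chunks are transmitted as soon as they are produced.
email130 = ProgressiveEmail("userA@domainA", ["userB@domainB"],
                            transmit_chunk=lambda hdr, chunk: None)
email130.append_media(b"\x00\x01")  # first fragment of voice media
email130.end()
```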
With progressive emails130, an email protocol session is established with the email server associated with the sender as soon as the email address of the recipient is defined. This differs from conventional emails120, where the email protocol session is typically initiated only after the email has been composed in full and the sender implements the "send" function. As a result, the delivery route can be discovered either before or concurrently with the progressive transmission of time-based media as it is being created. In situations where the time-based media may be created before the session is established, the time-based media may be temporarily and/or persistently stored as the media is created. The stored media may then be progressively transmitted from storage once the protocol session with the email server is established. The End date/time of email130may be either defined or open-ended. When the sender actively implements the end function38/56on the communication device14, then the end time of the email130is defined. If the end function38/56is never implemented, then the duration of the email130is "open-ended" and does not necessarily have a defined end date/time. Open-ended emails130are therefore typically terminated by default after a predetermined period of time in which no media is created. In summary, progressive emails130can be sent using SMTP, Microsoft Exchange or any other proprietary or non-proprietary email protocol by implementing the above-described modifications. Similarly, recipients may simultaneously and progressively review the time-based media of progressive emails130by modifying access protocols such as POP, IMAP and the like. Together, these modifications enable the use of email addressing, email protocols, DNS and DNS protocols, and the existing email infrastructure to support real-time communication of time-based media. IV. Late Binding of Recipient Addresses for Real-time Voice and Other Time-Based Media With the messages (as described with regard toFIGS.4A,4B and5A-5D) or progressive emails130described above, a user addresses a recipient using their globally addressable email address and then immediately begins talking or generating time-based media. With each embodiment, the delivery route is immediately discovered as soon as the email address of the recipient is defined. Time-based media is progressively transmitted along the delivery route as it is discovered and as the media is created. Consequently, the discovery of an active delivery route and the progressive creation, transmission and delivery of the time-based media may occur in real-time. In the event the actual delivery route is discovered after the creation of time-based media has started, then the media may be temporarily and/or persistently stored and then transmitted from storage once the active delivery route is defined. No network connection or circuit needs to be established before the sender may start talking or creating other media. The ability to progressively and simultaneously transmit the time-based media using DNS and the infrastructure of email therefore enables the late binding of recipient addresses for voice and other time-based media in a manner that previously was not possible. V. Conversations The messaging methods and systems as described (with regard toFIGS.1-3,4A-4B,5A-5DorFIG.9) are each conducive to supporting conversations between sending and receiving users.
When two or more parties are conversing back and forth using any of the above-listed RVX protocols or progressive emails130, then the conversation may take place (i) in the near real-time mode; (ii) in the time-shifted mode; or (iii) with seamless transitions between the two modes. When two or more participants are conversing in the real-time mode, the user experience is similar to a conventional full duplex telephone conversation. In the time-shifted mode, the user experience is similar to an asynchronous messaging system. As described in more detail in the above-mentioned U.S. applications, the media may be rendered using a number of different rendering options, such as play, catch up to live, pause a live conversation, jump to the head of a conversation, jump to a previous point in time of the conversation, render faster, render slower, jump between different conversations, etc. By using certain rendering options, a user may seamlessly transition a conversation from the time-shifted mode to the real-time mode and vice versa. Regardless of the embodiment, the "reply" function may be implemented in a variety of ways. For example, the recipient may enter an explicit reply command into their communication device14, such as by using a predefined voice or keystroke command, or entering a command through a touch screen. Alternatively, a reply message or email may be generated automatically when the recipient begins speaking or generating other time-based media in response to a message or email130. When a reply message is automatically created, the email address of the original sender is used for addressing the reply message. In yet other embodiments, the RVX protocols used for sending and receiving the messages of a conversation between participants in the real-time mode do not necessarily have to be the same. For example, one participant may send messages using one of the CTP, synchronization, progressive emails130, VoIP, SIP, RTP, or Skype protocols, whereas other participants may use a different one of the listed protocols, provided some type of a common conversation identifier is used. Any messages, regardless of the protocol used for transmission, are linked or threaded together using the unique conversation identifier. In various further embodiments, conversations can be defined using a variety of criteria. For example, conversations may be defined by the name of a person (e.g., mom, spouse, boss, etc.) or a common group of people (e.g., basketball team, sales team, poker buddies, etc.). Conversations may also be defined by topic, such as a fantasy football league, the ACME corporate account, or a "skunk works" project. Regardless of the contextual attribute used to define a conversation, the ability to link or organize the messages of a particular conversation together creates the notion of a persistent or ongoing conversation. With a conventional telephone call, the conversation typically ends when the parties hang up. There is no way to contextually link, organize and possibly store the spoken words of multiple telephone exchanges between the same parties. On the contrary, conversations, as defined herein, are a set of messages linked together by a common attribute. So long as messages are added to the conversation, the conversation is continuous or ongoing. This attribute makes it possible for a participant to contribute to a conversation at any arbitrary time.
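The threading of messages into persistent conversations by a common conversation identifier, and the ability to contribute to a conversation at an arbitrary later time, can be sketched as follows; the dictionary fields are illustrative assumptions rather than a defined message format.

```python
from collections import defaultdict

def thread_messages(messages):
    """Group messages into conversations using a shared conversation identifier,
    regardless of which protocol carried each message."""
    conversations = defaultdict(list)
    for msg in messages:
        conversations[msg["conversation_id"]].append(msg)
    # Within a conversation, order messages by their start date/time.
    for msg_list in conversations.values():
        msg_list.sort(key=lambda m: m["start"])
    return dict(conversations)

# Example: messages sent over different protocols, and at widely different times,
# all join the same ongoing conversation.
threads = thread_messages([
    {"conversation_id": "sales-team", "start": "2010-01-01T10:00:00", "protocol": "CTP"},
    {"conversation_id": "sales-team", "start": "2010-01-01T10:05:00", "protocol": "SIP"},
    {"conversation_id": "sales-team", "start": "2010-02-15T09:00:00", "protocol": "SMTP"},
])
```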
For example, a user may select a conversation among a list of conversations and contribute a message to the selected conversation at anytime. The message is then sent to all the conversation participants. Messages are therefore not necessarily sent when either a conversation is first created or in reply to an incoming message. VI. Implementation Embodiments The messaging methods as described with regard toFIGS.1-3,4A-4B and5A-5Dand progressive emails130may be implemented in a variety of ways. For example, cell phone and other mobile communication service providers may provide users with peer-to-peer mobile communication devices that operate using either messages and/or progressive emails130. In addition, these service providers may also maintain a network12of servers16for conveying messages between users as described herein using one or more RVX protocols. In yet another embodiment, the messaging and progressive email130methods may be embedded in a software application that is intended to be loaded into and executed on conventional telephones, mobile or cellular telephones and radios, mobile, desktop and laptop computers. In each of these cases, the application enables the device to send, receive and process messages and progressive emails130as described herein. In yet other implementations, conventional email clients can be modified to create, receive and process progressive emails130. The modified email client may alternatively reside on a server on the Internet or other proprietary or non-proprietary network, on sending or receiving devices, or both. Although the above-described systems and methods were generally described in the context of a single sender and a single recipient (as discussed with regard toFIGS.4A-4B and5A-5D) or emails130to a single recipient, it should be understood the messages and/or emails130might be simultaneously sent to multiple parties. Each recipient will either receive or not receive the message or email, depending on their status, as described above. Also although the above-described email methods were generally described in the context of “globally” unique identifiers, such as an email address, it is necessary to note that such identifiers do not necessarily have to be a global. In alternative embodiments, the identifier may uniquely identify a user within a defined non-global community of users. For example, a community, such a social networking website, may issue each user a unique identifier within the community. Users within the community can then communicate with one another, as described herein with regard toFIGS.1through10. The unique identifier assigned to each user is used to not only authenticate each user, but also for routing messages and media between users. Accordingly, the term “identifier” as used in this application is intended to be broadly construed and mean both globally and non-globally unique identifiers. It also should be noted that the system and methods as described herein are not intended for use with only “live” real-time transmission. The aforementioned systems and methods as described with respect toFIGS.1through3,4A-4B,5A-5D and9may also be used with the real-time transmission of previously created and stored time-based media. As the media is retrieved from storage, it is progressively transmitted as the delivery route to the recipient is discovered, as described in detail above. The time-based media exchanged by the messages and/or emails is not limited to just voice or video. 
In addition, the time-based media may be delivered to a recipient in a different form than it was created. For example, a voice message may be transcribed into a text file or a message in English may be translated into another language before being delivered to the recipient. Any media that varies over time, such as sensor data, GPS or positional information, may also be transmitted. While the invention has been particularly shown and described with reference to specific embodiments thereof, it will be understood by those skilled in the art that changes in the form and details of the disclosed embodiments may be made without departing from the spirit or scope of the invention. It is therefore intended that the invention be interpreted to include all variations and equivalents that fall within the true spirit and scope of the invention, as provided in the attached claims. | 49,390 |
11943187 | DETAILED DESCRIPTION Embodiments are directed to a social media platform with reaction sharing capability. The platform can include a social contract that messages and videos exchanged between users spread or create a positive emotion, such as happiness. Any message and/or video that a recipient found to generate a negative emotion can be found to be in violation of the social contract, and the communication platform can restrict senders of such messages and/or videos from spreading negative emotion, e.g., by suspending their writing privileges. Thus, the communication platform can encourage users to spread positive emotions. When a recipient receives a message and/or video, the recipient indicates to the communication platform a type of emotion, e.g., a positive emotion (which is in accordance with the social contract) or a negative emotion (which is in violation of the social contract), the message and/or video generated in the recipient. The communication platform tracks various metrics regarding users of the communication platform, such as a number of times a specified user has violated the social contract, a percentage of messages and/or videos sent by the specified user that is found in violation of the social contract, a number of times the specified user is reported to the communication platform, or a frequency of the violation. The communication platform can suspend the writing privileges of the specified user when one or more metrics exceed their thresholds, which prevents the user from sending messages and/or videos and therefore, stop spreading of negative emotions by the specified user. After the suspension, the specified user may continue to receive messages and/or videos from other users but would not be able to send messages and/or videos to other users. The communication platform encourages spreading of positive emotions. For example, when a recipient of a message and/or video indicates to the communication platform that the message and/or video generated a positive emotion, a sender of the message and/or video is made aware of that, which encourages the sender to send messages and/or videos that generate a positive emotion. In another example, the communication platform enables the recipient to send a reaction of the recipient in reading the message and/or viewing the video to the sender, which can further encourage the sender to continue sending messages and/or videos that generate such positive emotion. The reaction can be a video recording of the recipient reading the message and/or viewing the video and his/her reaction to the message and/or video. Such exchanges between a pair of users in which (a) a message and/or video generates a positive emotion in the recipient, (b) the recipient sends a video recording of the reaction to the sender, and (c) the sender experiences a positive emotion upon viewing the video recording of the reaction of the recipient, not only promotes generating positive emotion between the pair of users, but also aids in strengthening the relationship between the pair of users. In another example, the communication platform enables the sender of a video to select one or more portion(s) of the video for which the reaction of the recipient should be recorded. The communication platform can inform the recipient that the sender has requested a reaction video for a portion of the video and request permission to record during that portion. 
This can allow a sender to better analyze which parts of a video made the recipient happy and which parts of the video made them upset. For example, a stand-up comedian can use this feature to gauge which jokes in the routine get the best response from a recipient. Turning now toFIG.1,FIG.1is a block diagram of an environment100in which the communication platform can be implemented. The environment100includes a server105, which implements at least a portion of the communication platform and facilitates exchanging of messages between users125of the communication platform. The communication platform also includes a client-side portion that enables a user to send or receive messages, among other functionalities of the communication platform. The client-side portion can be implemented as an app, e.g., a mobile app, which can be installed and executed on client devices136-138associated with users125. The client-side portion of the communication platform can also be implemented as a browser-based application, which can be accessed using a web browser application on the client devices136-138. An executable file for generating the app can be stored at the server105, storage system110associated with the server105, or at a different location that is accessible by the client devices136-138. Users125can install the app in their respective client devices136-138by downloading the app from any of the above locations. A client device can be any of a desktop, laptop, tablet PC, smartphone, wearable device or any computing device that is capable of accessing the server105over a communication network150and is capable recording videos, sending and/or receiving multimedia content from a user. In the following paragraphs the client-side portion of the communication platform is implemented as an app (also referred to as “messaging app”). Each of the users125can install the messaging app on their respective client devices. For example, the first user126can install the messaging app (“messaging app146”) on the client device136, the second user127can install the messaging app (“messaging app147”) on the client device138, and the third user128can install the messaging app (“messaging app148”) on the client device138. The messaging apps installed on the communication platform encourages the users125to exchange messages between them in accordance with a social contract, e.g., promote a positive emotion among the users125, and restricts those of the users125who send messages that are in violation of the social contract, e.g., messages that generate negative emotions in a recipient, by suspending writing privileges of those users. While the social contract is described as spreading a positive emotion, such as happiness, it is not restricted to a positive emotion and include other factors. Users125are required to accept the social contract before they can send or receive messages. For example, when a first user126uses the messaging app146for the first time, the messaging app146displays the social contract, such as “I intend to make people happy with my messages” and requires the user to accept the contract before the first user126can send or receive any messages from other users. This message can remind the first user126that the communication platform is about spreading positive emotions.FIG.2shows example screenshots of the messaging app displaying the social contract, consistent with various embodiments. In some embodiments, the GUIs ofFIG.2are part of the messaging app146. 
The GUI205displays a brief description of the messaging app and the GUI210displays the social contract. A user can send messages to other users only upon accepting the social contract, e.g., selecting the "I Promise" option in the GUI210. If the user does not accept the social contract, the messaging app does not let the user send messages to other users (though the user can still receive messages from other users). Some of the functionalities supported by the messaging app include sending a message to a user, receiving a message from a user, posting a message that can be viewed by multiple users, recording videos of reactions to reading a message, recording "catch-up" videos having information about a particular user for consumption by a category of users, and recording videos of users having descriptive content, all of which are described in the following paragraphs. In some embodiments, the messaging app can act as a content sharing app. Some additional functionalities can include sharing audio, video, images, GIFs, URL links, coupons, location, and any other shareable content. In some embodiments, the app can facilitate screen sharing. For example, user A can be messaging user B regarding private information such as a bank statement or health records. However, to get another opinion, user A may want to share the information with user B. To do so, user A can then choose to share screens with user B to display the information. With respect to sending a message, in some embodiments, the messaging app lets the users125send messages to contacts in their address book on the client device. For example, the messaging app146enables the first user126to send messages to the contacts in the address book on the client device136. That is, the first user126will be able to send a message130to a second user127if the contact information of the second user127is stored in the address book on the client device136.FIG.3shows example screenshots of the messaging app displaying contacts from an address book on the client device, consistent with various embodiments. In some embodiments, the GUIs ofFIG.3are part of the messaging app146. The GUI305displays contacts from an address book of the first user126stored on the client device136. The first user126may select one of the contacts, e.g., "Brian C.," from the address book, as illustrated in the GUI310. The first user126can then compose the message130and send it to the second user127, as illustrated in the GUI315. The message130can include text or multimedia content. However, in some embodiments, the message130is a text message. In some embodiments, the first user126can also choose to send the message130anonymously. For example, the GUI315shows an anonymous indicator316, which when checked shares the user identifiable information (UII) of the first user126with the recipient along with the message130, and when unchecked removes the UII from the message130, thereby sending the message130anonymously. Further, in some embodiments, if a recipient finds a message to be offensive, the messaging app may show the UII to the recipient even if the message was sent anonymously. For example, if the first user126sends the message130to the second user127anonymously, and if the second user127finds the message130to be offensive, the messaging app147can reveal the UII of the first user126to the second user127in the message. In some embodiments, this can act as a deterrent for sending offensive messages.
The UII can include any information that can be used to identify or derive the identity of the sender, such as a username, the name of the user, a telephone number, and an email ID. The GUI320shows a note, which indicates that the identity of the sender will be revealed if the recipient finds the message to be offensive. In some embodiments, the note is shown only the first time the user sends an anonymous message. While the messaging app146lets the first user126send messages to the contacts in the address book on the client device136, in some embodiments, the messaging app146lets the first user126send a message to a contact that is not in the address book. The first user126may type in the contact information, such as a telephone number or email ID of the recipient, rather than selecting from the address book. Further, regardless of whether the first user126can send messages to contacts that are not in the address book, the first user126may receive messages from contacts that are not in the address book of the first user126. With respect to receiving messages, the messaging app provides an option for the user to record a reaction of the user to reading the message. For example, when the second user127receives the message130, the messaging app147can provide an option to the second user127to record a reaction135of the second user127to reading the message130. The messaging app147provides this option prior to displaying the message130to the second user127. If the second user127chooses to record the reaction135, the messaging app147instructs a camera of the client device137to start a video recording of the second user127and then displays the message130. The recording can happen in the background while the message130is displayed on a screen of the client device137. The messaging app147records the video for a specified duration from the time the message is displayed, e.g., 30 seconds, 45 seconds, or 1 minute. Even after recording, the second user127can choose whether or not to send the reaction135to the first user126. Further, the messaging app asks the second user127to indicate the type of emotion the message130generated for the second user127. The type of emotion can be a positive emotion, such as happiness, laughter, a smile, or joy, or a negative emotion, such as feeling sad, disappointed, creeped out, grossed out, or angry. The messaging app147can provide an indicator to indicate the type of emotion. For example, the positive emotion indicator can be an icon, text, an image, a symbol or another representation of positive emotion, such as a "like" image, a thumbs up image, a smiley icon, or a smile symbol, and the negative emotion indicator can be an icon, text, an image, a symbol or another representation of negative emotion, such as a "dislike" image, a thumbs down image, a frown face icon, or a frown face symbol. By selecting one of these two indicators, the second user127can indicate the type of emotion generated by the message130. For the sake of brevity, an indication of positive emotion is referred to as a "like," and an indication of a negative emotion is referred to as a "dislike." In some embodiments, if the second user127indicates that the message130generated a negative emotion, the messaging app147provides an option for the second user127to report the sender of the message130, e.g., the first user126, to the communication platform in the server105.
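Before turning to the handling of reports, the receive-side flow just described (optional background recording for a specified duration, display of the message, and selection of a like or dislike) might be sketched as follows; the camera, display, and prompt objects are placeholders for platform-specific functionality, and the default duration is illustrative.

```python
import time

def receive_message(message, camera, display, ask_yes_no, ask_emotion,
                    reaction_duration_seconds=30):
    """Sketch of the receive flow: optionally record the recipient's reaction
    while the message is displayed, then capture a like or dislike."""
    reaction_video = None
    if ask_yes_no("Record your reaction while you read this message?"):
        camera.start()                           # recording runs in the background
        display(message["text"])
        time.sleep(reaction_duration_seconds)    # record for the specified duration
        reaction_video = camera.stop()
    else:
        display(message["text"])

    emotion = ask_emotion()                      # "like" (positive) or "dislike" (negative)
    report_sender = emotion == "dislike" and ask_yes_no("Report the sender?")
    send_reaction = reaction_video is not None and ask_yes_no("Send your reaction to the sender?")
    return {
        "emotion": emotion,
        "reaction": reaction_video if send_reaction else None,
        "report_sender": report_sender,
    }
```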
Upon receiving a report against the first user126, the server105stores the report in the storage system110, which can be used in determining whether to suspend the writing privileges of the first user126. FIG.4shows example screenshots of the messaging app displaying a received message, consistent with various embodiments. In some embodiments, the GUIs ofFIG.4are part of the messaging app147. The GUI405displays a notification of a new message. The GUI410displays a prompt asking the second user127to confirm whether the second user127wants to record the reaction to reading the message130. If the second user127confirms the recording of the reaction, the messaging app147instructs the camera of the client device137to start the video recording and then displays the GUI415, which displays the message130, otherwise the messaging app just displays the GUI415. The GUI415also provides emotion indicators such as a positive emotion indicator416and a negative emotion indicator417. The second user127can indicate the type of emotion generated by the message130by selecting one of the two emotion indicators416and417. The GUI420displays a prompt asking the second user127to confirm whether the second user127wants to send the recording of the reaction135to the sender of the message130, e.g., the first user126. If the second user127confirms sending of the reaction, the reaction135is sent to the first user126. The messaging app147transmits the reaction135(if the second user127provided consent to sending) and the type of emotion to the server105, which is then transmitted to the first user126. The first user126can view the reaction135of the second user127and the type of emotion felt by the second user127in reading the message130in the messaging app146. In some embodiments, the messaging app includes an inbox folder, which includes all messages received by a user, and an outbox folder which includes all messages sent by the user. For example, an inbox folder in the messaging app146associated with the first user126, that can include all messages received by the first user126, and an outbox folder that includes all messages sent by the first user126. If any of the messages in these folders have an associated reaction, then those messages would also include the associated reaction which the first user126can playback anytime. For example, if the any of the messages in the inbox folder of the first user126has reactions of the first user126in reading those messages, those videos would be tagged to the message. A thumbnail or any other video indicator that is indicative of a video is displayed in association with the message, and the first user126can playback the video by selecting the video indicator. Similarly, if any message in the outbox folder has a reaction of a recipient of the message, the message is tagged with the video, and the first user126can playback the video by selecting the associated video indicator. FIG.5shows example screenshots of a message in an outbox folder of a user, consistent with various embodiments. In some embodiments, the GUIs ofFIG.5are part of the messaging app146. The GUI505shows a message in the outbox folder of a user, e.g., the first user126, that is sent to another user “Stephen P.” The message is also associated with a reaction of the recipient, which is indicated by the video thumbnail506. The first user126can select the video thumbnail506to play the video. 
The GUI510shows the positive emotion indicator511, which is indicative of the type of emotion, e.g., a positive emotion, felt by the recipient in reading the message from the first user126. The communication platform facilitates a user in strengthening existing relationships. The communication platform categorizes the contacts in the address book of a user into multiple categories, each of which is representative of a relationship type of the user with the contacts in those categories. In some embodiments, the communication platform categorizes the contacts based on a degree of interaction between the users in the communication platform. FIG.6shows an example screenshot of categorization of address book contacts of a user, consistent with various embodiments. In some embodiments, the GUIs ofFIG.6are part of the messaging app146. The messaging app146categorizes the contacts in the address book of the first user126into a "know," "trust" and "love" category as illustrated in the GUI605. In some embodiments, the "know" category includes all contacts from the address book of the first user126. In some embodiments, the "trust" category includes those contacts from the address book to whom the first user126has sent a message. In some embodiments, the "love" category includes those contacts from the address book with whom the first user126has exchanged messages (e.g., sent messages to and received messages from). While the messaging app146can automatically categorize the contacts, the first user126can also assign one of the above categories to a specified contact. Further, the specified contact can move from one category to another if the interaction of the first user126with the specified contact changes. For example, the specified contact may initially be in the "know" category but may move to the "trust" category when the first user126sends a message to the specified contact, and may further move to the "love" category when the first user126and the specified user have exchanged messages. In some embodiments, the messaging app146can transmit the categorization information of the contacts to the server105, which can store the categorization information in the storage system110, e.g., in an encrypted format. In some embodiments, the criteria to assign a specified contact to a specified category can be user-defined. For example, the degree of interaction with the specified contact, e.g., the number of messages that the first user126has to send to the specified contact, for the specified contact to be categorized into the "trust" category can be user-defined. Similarly, the number of messages to be exchanged between the first user126and the specified contact for the specified contact to be categorized into the "love" category can be user-defined. Such categorization can encourage the user to strengthen a specified relationship. For example, the first user126can look at the categorization and see that a specified contact, "Kevin", is in the "trust" category, may feel that they haven't communicated with each other in a while, and therefore may be encouraged to exchange messages with him. Upon exchanging messages with "Kevin," "Kevin" may be moved to the "love" category. The messaging app146also allows the first user126to share "catch-up" videos with his/her contacts. In some embodiments, a "catch-up" video is a video recording having some information about the first user126.
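Before describing the catch-up videos in more detail, the category assignment based on the degree of interaction might be sketched as follows; the threshold values are illustrative of the user-definable criteria mentioned above and are not fixed by the described system.

```python
def categorize_contact(messages_sent, messages_received,
                       sent_threshold=1, exchange_threshold=1):
    """Assign a contact to the "know", "trust", or "love" category from the degree
    of interaction; the thresholds are user-definable and purely illustrative."""
    if messages_sent >= sent_threshold and messages_received >= exchange_threshold:
        return "love"    # messages have been exchanged in both directions
    if messages_sent >= sent_threshold:
        return "trust"   # the user has sent at least one message to the contact
    return "know"        # contact is in the address book but no messages sent yet

def categorize_address_book(interactions):
    """interactions maps a contact name to (messages_sent, messages_received)."""
    return {name: categorize_contact(sent, received)
            for name, (sent, received) in interactions.items()}

print(categorize_address_book({"Kevin": (3, 0), "Brian C.": (2, 5), "Stephen P.": (0, 0)}))
```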
For example, the catch-up video of the first user126can be a video of the first user126providing some information about what's happening with the first user126, which helps his/her contacts catch up with the first user126. The first user126can generate different catch-up videos for different categories, e.g., having varying degrees of personal information, in the messaging app146as illustrated in the GUI605by catch-up video section606. For example, the first user126can create a first catch-up video and assign it to the "know" category, create a second catch-up video and assign it to the "trust" category, and create a third catch-up video and assign it to the "love" category. In some embodiments, the catch-up video generated for the "love" category can have more personal information about the first user126than the catch-up video generated for the "trust" category, as the first user126has a closer and stronger relationship with contacts in the "love" category than in the "trust" category. Also, the allotted duration of catch-up video recording for different categories can be different. For example, the allotted duration for the catch-up video can be the highest for the "love" category and the lowest for the "know" category. In the catch-up video section606, the first user126has generated a catch-up video only for the "know" category. When a specified contact of the first user126requests to view a catch-up video of the first user126, the messaging app determines the category to which the specified contact belongs and provides access to the catch-up video of the first user126that is assigned to the determined category. The specified contact may not have access to catch-up videos of the first user126assigned to categories other than the one to which the specified contact belongs. In some embodiments, the messaging app installed on a client device associated with the specified contact may interact with the server105to find the categorization of the specified contact in the messaging app146of the first user126. Similarly, when the first user126requests to view a catch-up video of a contact, such as "Kevin," e.g., by tapping on the thumbnail of the contact in the "know" category, the messaging app146determines the category to which the first user126belongs in Kevin's messaging app and provides access to the catch-up video of Kevin that is assigned to the category of the first user126. In some embodiments, the messaging app146may indicate the number of catch-up videos viewed by the first user126, e.g., as a percentage of the total number of catch-up videos of the contacts accessible by the first user126, such as "60% of catch-up videos viewed." The communication platform also lets users post or publish messages ("public post" or "public message") that can be viewed by all users of the communication platform. Users can also tag or include videos in their public posts. A video tagged to a public post can be a video recording of what's on a user's mind or what the user feels about another public post. In some embodiments, the public posts are displayed anonymously, that is, the messaging app removes the UII of the user who posted the public post or commented on the public post.
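Returning to the catch-up videos, the category-based access control described above can be sketched as follows, assuming a simple mapping from category to video reference; the file names are hypothetical.

```python
def accessible_catch_up_video(catch_up_videos, requester_category):
    """Return the catch-up video assigned to the requester's category, if any.
    catch_up_videos maps a category ("know", "trust", "love") to a video reference."""
    return catch_up_videos.get(requester_category)

# Example: the first user has recorded a catch-up video only for the "know" category,
# as in catch-up video section 606.
videos = {"know": "catchup_know.mp4"}
print(accessible_catch_up_video(videos, "know"))   # catchup_know.mp4
print(accessible_catch_up_video(videos, "love"))   # None: no video recorded for that category
```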
The first user126can choose to comment on one of the public posts. The GUI715allows the first user126to comment on one of the public posts displayed in the GUI710. The first user126may also choose to add a video to the comment, as illustrated in the GUI710. Referring back toFIG.1and as described above, the communication platform not only promotes spreading positive emotion between users, but also aids in strengthening an existing relationship between users. The communication platform restricts users found to be in violation of the social contract from sending messages by suspending their writing privileges. The server105stores various types of data regarding the users125in the storage system110, such as user profile information of the users (e.g., name, telephone number, email ID, profile picture), the messages exchanged between the users, a like or dislike received for each of the messages, any reports received against the users, reaction videos of the users, and at least some information regarding contact categories of each of the users125. The server105tracks various metrics for each of the users125in determining whether to suspend the writing privileges for a user. For example, a metric can include one or more of a number of times a specified user has received dislikes from a particular user, a number of times a specified user has received dislikes from one or more of the users125, a percentage of messages sent by the specified user that received dislikes, a total number of messages sent by the specified user, a period for which the specified user has been active on the communication platform, a number of times the specified user is reported to the communication platform, a frequency at which a dislike is received (e.g., how many dislikes per week/month, per 100 messages), a frequency at which the specified user is reported, and such. The server105can define threshold limits for each metric or a combination of the metrics. The server105can suspend the writing privileges of the specified user when one or more metrics exceed their thresholds. Suspending the writing privileges can prevent the specified user from sending messages and therefore, can stop spreading of negative emotions by the specified user. After the suspension, the specified user may continue to receive messages from other users, but would not be able to send messages to other users. In some embodiments, the suspended user can appeal to the communication platform to regain writing privileges. FIG.8shows the prompts that a user would see when their messages are being reported for having negative content. Sample prompts800includes reminder801, warning802, and suspension803. Furthermore, althoughFIG.8uses the term “messages”, a person of skill in the art would understand that the prompts inFIG.8can be applied to shared videos, files, and other shareable content. A user can receive reminder801when there has been one report against the user's messages. The user can then acknowledge receipt of reminder801and continue to use the communication platform. Warning802is displayed when the user has received two reports. In this case, the communication platform again notifies the user of the two reports and warns that an additional report will result in suspension of writing privileges. Again, the user can acknowledge receipt of warning802and continue to use the communication platform. If the user receives a third report, the user will receive a notice of suspension803. 
Suspension803informs the user that their writing privileges are suspended. In some embodiments, for a user to progress from reminder801to warning802to suspension803, the user must be reported by different users. For example, user A can send three negative messages to user B. User B can then report all three messages to the communication platform. In that case, user A is presented with reminder801because the reports were all made by the same user. Conversely, if user A sent one video to user B, a message to user C, and a video to user D, and all three recipients report user A, then user A will receive suspension803. In some embodiments, prior to suspending a user's writing privileges, the reported messages may be analyzed through sentiment analysis. Sentiment analysis can include methods known in the art such as natural language processing, text analysis, computational linguistics, and biometrics to identify whether the user's writing privileges should be suspended. For example, the communication platform may use IBM Watson technology to perform sentiment analysis. In another example, sentiment analysis can include a scoring technique wherein a score for each word, phrase, expression, or facial feature that indicates a negative message is summed to arrive at a negative-ness score. If the negative-ness score exceeds a pre-determined threshold, then the user's writing privileges can be suspended. In some embodiments, upon receiving suspension803, the user is given the option to apologize to the reporters. If the user decides to apologize, the communication platform may display identifiable information of the three reporters. The user can then select which reporter to apologize to. In some embodiments, the reporters may be anonymous. The communication platform may only display generic prompts such as "reporter 1", "user 1", or the like. FIG.9shows screenshots of the prompts the reporting user may see when the reported user chooses to apologize. Apologize prompts900includes apology preview901and apology902. In some embodiments, apology preview901shows the name of the user that was reported and indicates that they want to apologize. Moreover, apology preview901gives the recipient the option to view the apology or to ignore the apology. In some embodiments, the reported user may receive a notification of the recipient's selection. If the recipient chooses to view the apology by selecting, for example, "HEAR HER OUT," then apology902is displayed. In some embodiments, apology902is a written message with the option to accept or decline the apology. In some embodiments, apology902can be a video, music, or other shareable content. Moreover, the accept or decline prompts can be shown in various ways such as by emoticons, text, icons, or the like. In some embodiments, if the recipient accepts the apology, the report that was made by the recipient against the reported user can be cancelled. For example, user A can be reported once by each of user B, user C, and user D. Thus, user A can have their writing privileges suspended. However, user A can apologize to user B, and user B can accept the apology. Once accepted, user A's writing privileges can be reinstated because user A only has two valid reports against their name. FIG.10shows a prompt a user may receive after recording a video or writing a message. Thank you prompt1000includes record prompt1001. In some embodiments, the communication platform displays record prompt1001after a user has recorded a video or written a message.
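Before describing the thank-you prompt further, the scoring technique mentioned above might be sketched as follows; the word list, weights, and threshold are illustrative assumptions, and the natural language processing and biometric methods mentioned above are not implemented in this sketch.

```python
NEGATIVE_TERMS = {"hate": 2, "stupid": 2, "ugly": 1, "loser": 2, "awful": 1}  # illustrative weights

def negativeness_score(message_text):
    """Sum a weight for each negative word found in the message, as one simple
    instance of the scoring technique described above."""
    words = message_text.lower().split()
    return sum(NEGATIVE_TERMS.get(word, 0) for word in words)

def should_suspend_for_content(message_text, threshold=3):
    """Suspend writing privileges only when the negative-ness score exceeds the
    pre-determined threshold."""
    return negativeness_score(message_text) > threshold

print(negativeness_score("you are an awful loser"))          # 3
print(should_suspend_for_content("you are an awful loser"))  # False: score does not exceed 3
```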
Record prompt1001can request the user to record a thank you video or write a thank you message, which can be played or displayed when a recipient indicates that the video or message made them feel positive (e.g., smile or laugh). For example, user A can write a joke and proceed through prompts on the communication platform to send the joke to user B. User B may then indicate that the joke made them smile. Once indicated, user A may receive record prompt1001to record a thank you video or write a thank you message. In some embodiments, record prompt1001may be displayed prior to sending a video or message to a recipient. FIG.11is a block diagram of a communication platform, consistent with various embodiments. The communication platform1100includes a messaging component1105, a video component1110, an emotion management component1115, a metric determination component1120, and a monitoring component1125. The messaging component1105facilitates exchanging messages between users125, e.g., sending a message from one user to another user. The video component1110facilitates recording videos, such as reactions and catch-up videos. The emotion management component1115determines the type of emotion generated by the messages exchanged in the communication platform1100. The metric determination component1120can determine various metrics associated with the users125, e.g., for monitoring the adherence of the users125to the social contract of the communication platform1100. The monitoring component1125can monitor the adherence of the users125to the social contract of the communication platform1100and suspend writing privileges of the users found to be violating the social contract. The communication platform1100can be implemented in a distributed architecture. That is, the components of the communication platform1100can be distributed across multiple entities. For example, some components can be implemented on the server105and some in the client-side portion, e.g., in the messaging app. In another example, all the components can be implemented in both the server105and the client-side portion. Additional details with respect to the components of the communication platform1100are described at least with reference toFIGS.9and10below. Note that the communication platform1100illustrated inFIG.11is not restricted to having the above described components. The communication platform1100can include a lesser number of components, e.g., functionalities of two components can be combined into one component, or can include a greater number of components, e.g., components that perform other functionalities. In some embodiments, the functionalities of one or more of the above components can be split into two or more components. FIG.12is a flow diagram of a process1200for managing users in a communication platform, consistent with various embodiments. In some embodiments, the process1200can be implemented in the environment100and using the communication platform1100ofFIG.11. At block1201, the emotion management component1115determines a type of emotion generated by a set of messages sent by a user, e.g., the first user126. In some embodiments, the type of emotion generated by a particular message from the first user126is indicated by a recipient of the particular message, e.g., as described at least with reference toFIGS.1and4. At block1202, the metric determination component1120determines one or more metrics associated with the first user126. In some embodiments, the metrics can be based on the type of emotion.
As described at least with reference toFIG.1, a metric can include one or more of a number of times a specified user has received dislikes from a particular user, a number of times a specified user has received dislikes from one or more of the users125, a percentage of messages sent by the specified user that received dislikes, a total number of messages sent by the specified user, a period for which the specified user has been active on the communication platform, a number of times the specified user is reported to the communication platform, a frequency at which a dislike is received (e.g., how many dislikes per week/month, per 100 messages), a frequency at which the specified user is reported, and such. At block1203, the monitoring component1125determines if any of the metrics satisfies the criterion for violation. The monitoring component1125can define threshold limits for each metric or a combination of the metrics. In some embodiments, the criterion can be that one or more metrics exceed one or more thresholds. For example, one criterion can be that a first metric exceeds a first threshold and a second metric exceeds a second threshold. In another example, the criterion can be that at least one of the first metric and the second metric exceeds a corresponding threshold. If the monitoring component1125determines that none of the metrics satisfy the violation criterion, the emotion management component1115continues to monitor the type of emotion received for the messages sent by the first user126. On the other hand, if the monitoring component1125determines that one or more of the metrics satisfy the violation criterion, at block1204, the monitoring component1125determines that the first user126violated the social contract of the communication platform1100. At block1205, the monitoring component1125suspends the writing privileges of the first user126. Suspending the writing privileges can prevent the first user126from sending messages and therefore, can stop spreading of negative emotions by the first user126. While the above process1200is described with respect to a single user, e.g., the first user126, in some embodiments, the process1200is executed for each of the users125. FIG.13is a flow diagram of a process1300for displaying a message to a user, consistent with various embodiments. In some embodiments, the process1300can be implemented in the environment100ofFIG.1and using the communication platform1100ofFIG.11. At block1301, the messaging component1105receives a message at a client device associated with a user. For example, the messaging component1105in the client device137associated with the second user127receives the message130from the first user126. At block1302, the video component1110determines if the second user127is interested in recording a reaction to reading the message130. For example, the video component1110can display a prompt that asks if the second user127is interested in recording the reaction. At determination block1303, if the second user127indicated interest in recording the reaction, the process1300proceeds to block1304, where the video component1110starts a video recording using the camera, e.g., front-facing camera of the client device137, to record the reaction of the second user127, as described at least with reference toFIG.4. At block1305, the messaging component1105displays the message130to the second user127. 
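Blocks1301through1305of process1300 (receive the message, ask whether to record a reaction, start the camera, and display the message) could be sketched on the client side as follows; the callback names are hypothetical placeholders chosen for illustration, not actual platform APIs.

# Hypothetical sketch of blocks 1301-1305; prompt_user, start_front_camera, and
# show_message stand in for client-side facilities that are not defined here.
def handle_incoming_message(message, prompt_user, start_front_camera, show_message):
    recording = None
    # Block 1302: ask whether the recipient wants to record a reaction.
    wants_reaction = prompt_user("Record your reaction while you read this message?")
    if wants_reaction:
        # Block 1304: start recording before the message is shown so the
        # candid reaction is captured automatically.
        recording = start_front_camera()
    # Block 1305 (or block 1311 when no reaction is recorded): display the message.
    show_message(message)
    return recording

# Minimal demo with stub callbacks.
rec = handle_incoming_message(
    "Hello!",
    prompt_user=lambda question: True,
    start_front_camera=lambda: "recording-handle",
    show_message=print,
)
print(rec)  # recording-handle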
The second user127may react to the message130by exhibiting some emotion while reading the message130, e.g., a smile, a grin, a frown, surprise, or confusion, expressed through facial expressions or other body language. The emotion is captured in the video recording as the reaction135. At block1306, the video component1110stops the video recording. In some embodiments, the video component1110continues to record the video for a specified duration after the message130is displayed. In some embodiments, the starting and stopping of the recording is done automatically by the video component1110. That is, the second user127may not have to manually start or stop the recording, and the recording can happen in the background while the second user127is reading the message130displayed on a display screen of the client device137. This way, the reaction135can be a candid video of the reaction of the second user127. At block1307, the emotion management component1115generates a prompt on the client device137asking the second user127to identify the type of emotion the message130generated for the second user127. In some embodiments, the emotion management component1115can display a positive emotion indicator416and a negative emotion indicator417, which the second user127can use to indicate the type of emotion, as described at least with reference toFIG.4. At block1308, the emotion management component1115receives a user selection of the type of emotion. At block1309, the video component1110confirms that the second user127is still interested in sending the reaction135to the first user126. For example, the video component1110can display a prompt asking the second user127to confirm if the second user127wants to send the reaction135to the first user126. At block1310, the messaging component1105transmits the reaction135and the type of emotion to the first user126upon receiving the confirmation from the second user127. In an event the second user127does not confirm sending of the reaction, then the messaging component1105transmits the type of emotion but not the reaction135. Referring back to determination block1303, if the second user127is not interested in recording the reaction, the process proceeds to block1311, where the messaging component1105displays the message130to the second user127. At block1312, the emotion management component1115generates a prompt on the client device137asking the second user127to identify the type of emotion the message130generated for the second user127, e.g., as described with reference to block1307. At block1313, the emotion management component1115receives a user selection of the type of emotion. At block1314, the messaging component1105transmits the type of emotion to the first user126. FIG.14is a flow diagram of a process1400for displaying a video to a user and receiving a reaction video, consistent with various embodiments. In some embodiments, the process1400can be implemented in the environment100and using the communication platform1100ofFIG.11. At block1401, the messaging component1405can receive a video at a client device associated with a user. The messaging component1405can be used to send text, videos, pictures, etc. For example, the messaging component1405in the client device137associated with the second user127receives a video from the first user126. In some embodiments, at block1401, a user may select the video they want to view. For example, the messaging app147can include a library of content that users have made public, rather than sending it to a particular individual or individuals.
Thus, a user can select the video they want to view. In some embodiments, the videos can be organized based on characteristics of the videos. For example, a video can be categorized based on the emotion that it is intended to induce, the content, the geographic area where it was made, and other criteria. For example, user A may want to view videos to make them motivated to finish a workout. Thus, user A may filter the videos to "motivational." In some embodiments, the library can include categorized messages. For example, user B may want to read a joke. Thus, user B can filter the messages to only show "funny" messages. At block1402, the video component1410can determine if the first user126indicated interest in recording a reaction video of second user127. For example, the video component1410can display a prompt that asks if the second user127grants permission to record the reaction video. Additionally, first user126may be asked prior to sending the video to the second user127, whether a reaction video should be requested. First user126may have the option of asking for a reaction video while watching the video, after watching the video, or while watching only a portion of the video. For example, first user126may record a three-minute video to send to second user127. Within the three-minute video, the first user126may indicate that a reaction video should be recorded only during the last thirty seconds of the video. Additionally, first user126may request reaction videos for multiple portions of the video. For example, the first thirty seconds and last thirty seconds of a three-minute video. At determination block1403, if the first user126indicated interest in recording the reaction video, the process1400proceeds to block1404, wherein the video component1410starts a recording using the camera. The camera can be facing the second user127and can be a front-facing camera or a rear-facing camera. It can record the reaction of second user127, as described at least with reference toFIG.4. In some embodiments, prior to block1404, the second user may be given the option to grant or decline permission to start recording the reaction video. In some embodiments, block1405may be executed prior to block1404, or vice versa, depending on when the first user126wanted to record the reaction video. For example, if the first user126wanted to record the reaction video for only the last thirty seconds of a three-minute video, then block1405will precede block1404. At block1405, the messaging component1405displays the video to second user127. Second user127may react to the video by exhibiting one or more emotion(s) while viewing the video, e.g., smile, laugh, frown, glare, or other body language. The emotion(s) are captured in the reaction video as reaction135. At block1406, the video component1410stops recording. In some embodiments, video component1410continues to record the video for a specified duration after the video is completed. In some embodiments, the starting point and ending point of the recording are dictated by the first user126, as mentioned above. This way, the reaction135can be a candid video of the second user127. At block1407, the messaging component1405generates a prompt to confirm that the second user127wants to send the reaction video to the first user126. In some embodiments, block1407can be optional. For example, first user126can request a reaction video but give second user127discretion to decide whether or not to send the video.
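The portion-based reaction requests described above, e.g., recording only during the last thirty seconds of a three-minute video, could be represented with a small structure such as the following sketch; the class and field names are assumptions used only for illustration.

# Illustrative, assumed representation of a sender's request for reaction
# recording over specific portions of a video (times in seconds).
from dataclasses import dataclass

@dataclass
class ReactionRequest:
    video_duration: float
    segments: list[tuple[float, float]]  # (start, end) portions to record

    def should_record(self, playback_position: float) -> bool:
        """True while playback is inside any requested segment."""
        return any(start <= playback_position < end for start, end in self.segments)

# The sender requests reactions for the first and last thirty seconds of a
# three-minute video.
request = ReactionRequest(video_duration=180.0, segments=[(0.0, 30.0), (150.0, 180.0)])
print(request.should_record(15.0))   # True
print(request.should_record(90.0))   # False
print(request.should_record(170.0))  # True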
In some embodiments, sending the reaction video can be a requirement for viewing the video. Blocks1408,1409, and1410function similarly to blocks1308,1309, and1310, respectively, ofFIG.13. Referring back to determination block1403, if the first user126is not interested in recording the reaction, the process proceeds to block1412. Blocks1412,1413,1414, and1415function similarly to blocks1311,1312,1313, and1314, respectively, ofFIG.13. FIG.15is a flow diagram of the process1500for receiving a reaction video for a portion of a video, consistent with various embodiments. In some embodiments, the process1500can be implemented in the environment100and using the communication platform1100ofFIG.11. A first user1501can record a video1504using a personal device, e.g., phone, tablet, computer, camera, or the like. First user1501can record the video while within the environment of the communication platform1100ofFIG.11or upload the video onto the communication platform1100ofFIG.11from a different source, e.g., local storage, USB, WiFi, etc. At block1505, first user1501can select the one or more portion(s) of the video for which a reaction video should be recorded. Block1505functions similarly to block1302ofFIG.13. For example, first user1501can select the last thirty seconds of a three-minute video to record a reaction video. In another example, first user1501can choose to record a reaction video for the entirety of the video. At block1506, the messaging component1105generates a prompt on client device137confirming that first user1501wants to send the video with instructions for recording a reaction video to the second user1503. Once confirmed, server1502sends the video to the second user1503at block1507. At block1508, the second user1503receives the video with reaction recording instructions. In some embodiments, the communication platform1100, on which process1500may be performed, may perform block1509. At block1509, server1502can execute instructions to request permission to record using an onboard camera. Subsequently, second user1503can grant permission to record, as per block1510. At block1511, server1502executes instructions to play the video and record a reaction video for the requested portion. After the video has finished playing, or when the recording has finished, second user1503can confirm that the reaction video should be sent to first user1501, as per block1512. Subsequently, server1502sends the reaction video to the first user, as per block1513. At block1514, first user1501receives the reaction video. In some embodiments, the second user1503can request a reaction video from the first user1501for the reaction video sent by the second user1503. For example, first user1501can send second user1503a five-minute video with instructions to record a reaction video for the first two minutes. After agreeing to record the reaction video, the second user1503may be prompted by the communication platform1100to indicate whether the first user1501should record a reaction video to the reaction video sent by the second user1503. FIG.16is a block diagram of a computer system as may be used to implement features of the disclosed embodiments. The computing system1600may be used to implement any of the entities, components or services depicted in the examples of the foregoing figures (and any other components described in this specification).
The computing system1600may include one or more central processing units ("processors")1601, memory1602, input/output devices1604(e.g., keyboard and pointing devices, display devices), storage devices1603(e.g., disk drives), and network adapters1605(e.g., network interfaces) that are connected to an interconnect1606. The interconnect1606is illustrated as an abstraction that represents any one or more separate physical buses, point-to-point connections, or both, connected by appropriate bridges, adapters, or controllers. The interconnect1606, therefore, may include, for example, a system bus, a Peripheral Component Interconnect (PCI) bus or PCI-Express bus, a HyperTransport or industry standard architecture (ISA) bus, a small computer system interface (SCSI) bus, a universal serial bus (USB), IIC (I2C) bus, or an Institute of Electrical and Electronics Engineers (IEEE) standard 1394 bus, also called "Firewire". The memory1602and storage devices1603are computer-readable storage media that may store instructions that implement at least portions of the described embodiments. In addition, the data structures and message structures may be stored or transmitted via a data transmission medium, such as a signal on a communications link. Various communications links may be used, such as the Internet, a local area network, a wide area network, or a point-to-point dial-up connection. Thus, computer readable media can include computer-readable storage media (e.g., "non-transitory" media) and computer-readable transmission media. The instructions stored in memory1602can be implemented as software and/or firmware to program the processor(s)1601to carry out actions described above. In some embodiments, such software or firmware may be initially provided to the computing system1600by downloading it from a remote system (e.g., via network adapter1605). The embodiments introduced herein can be implemented by, for example, programmable circuitry (e.g., one or more microprocessors) programmed with software and/or firmware, or entirely in special-purpose hardwired (non-programmable) circuitry, or in a combination of such forms. Special-purpose hardwired circuitry may be in the form of, for example, one or more ASICs, PLDs, FPGAs, etc. REMARKS The above description and drawings are illustrative and are not to be construed as limiting. Numerous specific details are described to provide a thorough understanding of the disclosure. However, in some instances, well-known details are not described in order to avoid obscuring the description. Further, various modifications may be made without deviating from the scope of the embodiments. Accordingly, the embodiments are not limited except as by the appended claims. Reference in this specification to "one embodiment" or "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the disclosure. The appearances of the phrase "in one embodiment" in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. Moreover, various features are described which may be exhibited by some embodiments and not by others. Similarly, various requirements are described which may be requirements for some embodiments but not for other embodiments.
The terms used in this specification generally have their ordinary meanings in the art, within the context of the disclosure, and in the specific context where each term is used. Terms that are used to describe the disclosure are discussed below, or elsewhere in the specification, to provide additional guidance to the practitioner regarding the description of the disclosure. For convenience, some terms may be highlighted, for example using italics and/or quotation marks. The use of highlighting has no influence on the scope and meaning of a term; the scope and meaning of a term is the same, in the same context, whether or not it is highlighted. It will be appreciated that the same thing can be said in more than one way. One will recognize that "memory" is one form of a "storage" and that the terms may on occasion be used interchangeably. Consequently, alternative language and synonyms may be used for any one or more of the terms discussed herein, and no special significance is to be placed upon whether or not a term is elaborated or discussed herein. Synonyms for some terms are provided. A recital of one or more synonyms does not exclude the use of other synonyms. The use of examples anywhere in this specification, including examples of any term discussed herein, is illustrative only, and is not intended to further limit the scope and meaning of the disclosure or of any exemplified term. Likewise, the disclosure is not limited to various embodiments given in this specification. Those skilled in the art will appreciate that the logic illustrated in each of the flow diagrams discussed above may be altered in various ways. For example, the order of the logic may be rearranged, substeps may be performed in parallel, illustrated logic may be omitted, other logic may be included, etc. Without intent to further limit the scope of the disclosure, examples of instruments, apparatus, methods and their related results according to the embodiments of the present disclosure are given below. Note that titles or subtitles may be used in the examples for convenience of a reader, which in no way should limit the scope of the disclosure. Unless otherwise defined, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this disclosure pertains. In the case of conflict, the present document, including definitions, will control.
11943188 | DETAILED DESCRIPTION In the following detailed description, numerous specific details are set forth by way of examples in order to provide a thorough understanding of the relevant teachings. However, it should be apparent that the present teachings may be practiced without such details. In other instances, well known methods, procedures, components, and/or circuitry have been described at a relatively high-level, without detail, in order to avoid unnecessarily obscuring aspects of the present teachings. Techniques for restricting which notifications and/or conversations are presented on a user device are provided. These techniques address the technical problems associated with configuring which notifications to present on the user devices by providing users with fine-grained control over the notifications to be presented. These techniques permit the user to associate each of their user devices with a device category. The user further defines time category information that associates each time period of a plurality of time periods with permitted device category information and permitted message category information. The user is able to configure which categories of user devices are permitted to present notifications based on message category and the time period in which the message is received. A technical benefit of this approach is that the messaging platform is able to automatically inhibit the presentation of unnecessary and/or undesirable notifications on specified categories of user devices at specified times. Another technical benefit is that these techniques can automatically and temporarily hide entire conversations on specified categories of user devices during selected time periods. Consequently, the user workflow is not interrupted by the presentation of notifications and/or conversations that the user does not wish to see at that time, and user experience is significantly improved. These and other technical benefits of the techniques disclosed herein will be evident from the discussion of the example implementations that follow. FIG.1is a diagram showing an example computing environment100in which the techniques disclosed herein for restricting notifications presented to a user may be implemented. The computing environment100may include a communication platform110. The example computing environment100may also include user devices105a,105b,105c, and105d(collectively referred to as user devices105). The user devices105a,105b,105c, and105dmay communicate with the communication platform110via the network120. The network120may be a combination of one or more public and/or private networks and may be implemented at least in part by the Internet. In the example shown inFIG.1, the user devices105a-105dare associated with the same user. The user device105ais a laptop computer that is associated with a device category that indicates that the laptop is primarily used for work. The user device105bis a desktop computer that is also associated with a device category that indicates that the desktop computer is primarily used for work. The user device105cis a mobile phone that is a hybrid device that is used for both work and personal usage. The user device105dis a tablet computer that is associated with a personal device category that indicates that the tablet is primarily used for personal usage unrelated to work. The example shown inFIG.1includes only four user devices105a-105d, and the user devices105are only associated with three device categories.
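The device-category assignments in this example could be captured with a simple mapping such as the sketch below; the identifiers and category labels are assumptions used only for illustration and merely mirror the example ofFIG.1.

# Assumed device-category assignments mirroring the example of FIG. 1.
DEVICE_CATEGORIES = {
    "105a": "work",      # laptop used primarily for work
    "105b": "work",      # desktop used primarily for work
    "105c": "hybrid",    # mobile phone used for work and personal tasks
    "105d": "personal",  # tablet used for personal tasks
}

def devices_in_category(category: str) -> list[str]:
    """Return the user's devices that belong to the given device category."""
    return [device for device, cat in DEVICE_CATEGORIES.items() if cat == category]

print(devices_in_category("work"))  # ['105a', '105b']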
In other implementations, a user may be associated with a different number of user devices105and/or the user devices105may be associated with a different number of device categories. In the example shown inFIG.1, the communication platform110is implemented as a cloud-based service or set of services. The communication platform110is configured to facilitate communications among users of the communication platform110. The communication platform110supports one or more of email, text messaging, chat messaging, and/or other types of messaging. In some implementations, the communication platform110may also provide other services that are accessible to users via their respective user devices105, such as but not limited to a collaboration platform which enables users to create and share electronic content. The term "electronic document" as used herein can be representative of any document or component in electronic form that can be created by a computing device, stored in a machine-readable storage medium, and/or transferred among computing devices over a network connection or via a machine-readable storage medium. Examples of such electronic documents include but are not limited to word processing documents, presentations, web sites (e.g., Microsoft SharePoint® sites), digital drawings, media files, components thereof, and the like. The user devices105a,105b,105c, and105dare each a computing device that may be implemented as a portable electronic device, such as a mobile phone, a tablet computer, a laptop computer, a portable digital assistant device, a portable game console, and/or other such devices. The user devices105a,105b, and105cmay also be implemented in computing devices having other form factors, such as a desktop computer, vehicle onboard computing system, a kiosk, a point-of-sale system, a video game console, and/or other types of computing devices. While the example implementation shown inFIG.1includes four user devices, other implementations may include a different number of user devices105that may utilize the services provided by the communication platform110. Furthermore, in some implementations, the application functionality provided by the communication platform110may be implemented, in part, by a native application installed on the user devices105a,105b,105c, and105d, and the user devices105a,105b,105c, and105dmay communicate directly with the communication platform110over a network connection. FIGS.2A and2Bare diagrams showing additional features of the communication platform110and the user device105. In the example implementation shown inFIG.2A, the communication platform110is configured to implement the techniques provided herein for managing which notifications and/or conversations may be presented on the user device105in response to receiving a message. In contrast, in the implementation shown inFIG.2B, the user device105is configured to implement the techniques provided herein for managing which notifications and/or conversations may be presented on the user device in response to receiving a message. FIG.2Ashows additional features of the communication platform110and the user device105. The communication platform110includes a message processing unit205, a notification configuration unit210, a configuration datastore215, a content classification model220, an authentication unit225, a user interface unit230, a message queue235, and a conversation visibility unit240. The message processing unit205receives messages for a user of the user device105.
The messages may be received from a user device of another user (not shown). The message processing unit205is configured to determine the intended recipient of the message based on a phone number, email address, user alias, or other identifier associated with the intended recipient of the message. The message processing unit205is configured to obtain user device information for the intended recipient from the configuration datastore215. As discussed in the preceding examples, the user may be associated with multiple user devices, and the user may receive messages on all or a subset of these user devices. The messages may also be associated with a conversation, which refers to a group of related messages in a message thread. The user device information also includes information associating each user device with a device category. The user associated with the devices may assign a device category from among a set of predefined device categories to each device. In other implementations, the communication platform110provides a user interface that enables the user to define additional categories of user devices. The message processing unit205also accesses time category information from the configuration datastore215. The time category information associates each time period of a plurality of time periods with permitted device category information and permitted message category information. The permitted device category information indicates which categories of user devices are permitted to provide notifications of received messages during a specified time period, and the permitted message category information indicates the categories of messages, from a plurality of message categories, for which notifications may be generated during the time period. The message processing unit205determines a message category for the messages received and determines whether the message may be delivered to each of the user devices based on the user device information retrieved from the configuration datastore215. In some implementations, the message processing unit205determines the message category based on user input. In other implementations, the message processing unit205uses the content classification model220to analyze the message to obtain a prediction of the message category. The message processing unit205sends the message to each of the user devices for which notifications are permitted during the time period in which the message is received. The message processing unit205delays the delivery of the message to the other user devices until a time period is reached in which the message may be delivered to these other devices. In some implementations, the message processing unit205inserts the delayed messages into the message queue235, which is a data structure used to store information indicating which messages have not yet been delivered and the user devices to which the message has not yet been delivered. In some implementations, the message queue235may be implemented as separate queues for each type of message that may be processed by the communication platform110. In a non-limiting example, email messages are inserted into a first queue, text messages are inserted into a second queue, chat messages are inserted into a third queue, and so forth for each type of message which the communication platform110is configured to process.
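As a simplified, non-authoritative sketch, the deliver-or-delay decision described above, driven by the time category information, might be expressed as follows; the time periods, category labels, and table contents are assumptions chosen for illustration.

# Non-authoritative sketch of the deliver-or-delay decision described above.
# The time-category table and category names are illustrative assumptions.
TIME_CATEGORIES = {
    # time period: (device categories permitted to notify, message categories permitted)
    "work_hours":  ({"work", "hybrid"}, {"work_related", "work_urgent", "personal_urgent"}),
    "after_hours": ({"work", "hybrid", "personal"}, {"work_related", "personal", "personal_urgent"}),
}

def route_message(msg_category: str, device_category: str, time_period: str) -> str:
    permitted_devices, permitted_messages = TIME_CATEGORIES[time_period]
    if device_category in permitted_devices and msg_category in permitted_messages:
        return "deliver_now"   # send to the device; a notification may be presented
    return "queue"             # hold in the message queue 235 until a later time period

print(route_message("personal", "work", "work_hours"))        # queue
print(route_message("work_related", "hybrid", "work_hours"))  # deliver_now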
The message processing unit205is configured to periodically check the message queue235to determine whether the pending messages may be delivered based on the message category information and the time category information. The notification configuration unit210provides a web application that can be accessed by a browser application or browser-enabled native application implemented on the user devices105a-105din some implementations. The web application provides a user interface that enables the user to associate user devices with the user. The web application also provides a user interface that enables the user to define time category information for the user and to associate permitted device category information and permitted message category information with the time category information. These categories are applied to notifications associated with individual messages and/or entire conversations in some implementations. The notification configuration unit210stores the configuration information obtained from the user in the configuration datastore215. In some implementations, the configuration datastore215updates the data structures shown inFIGS.5A-5Dwith the information obtained from the user. The configuration datastore215is a persistent datastore that stores configuration information used by the communication platform110. Each user may be associated with custom device category information and time category information that suits the needs of the user. Users may have different combinations of user devices and different usage habits, which may be satisfied by permitting the user to customize which notifications may be presented on which devices during specific time periods. The content classification model220may be implemented using various machine learning architectures such as deep neural networks (DNNs), recurrent neural networks (RNNs), convolutional neural networks (CNNs), and/or other types of neural networks. The content classification model220is trained to receive a textual message as an input and to output a prediction of the message category. The input may be various types of messages, including but not limited to an email, text message, or chat message. The content classification model220is trained to predict the message category from among a set of predetermined message categories on which the model has been trained. Examples of such message categories include but are not limited to work-related messages, urgent work-related messages, personal messages, and urgent personal messages. In some implementations, a default message category may also be provided, and the default message category may be initially assigned to a message and/or conversation until a message category is determined for that message or conversation. Furthermore, once a message category has been assigned to a message in a conversation, subsequent messages in the thread may be assigned the same message category without each message in the thread being analyzed by the content classification model220to obtain a message category. In other implementations, each message in a conversation is analyzed by the content classification model220to ensure that the message category has not changed. In a non-limiting example, a first message in a conversation is categorized as being a personal message and the second message in the conversation is categorized as being a work-related message. This change in message category can cause the communication platform110to handle the notifications associated with these two messages differently.
In a non-limiting example, the change in message category from personal to work-related causes the communication platform110to permit notifications associated with the message thread to be presented to the user during work hours and/or display the conversation to the user in the browser application250and/or the one or more native applications255. Prior to the conversation being reclassified from personal to work-related, the communication platform110would have delayed the notifications associated with messages associated with the conversation during work hours and/or provided silent notifications for messages associated with the conversation during work hours. Furthermore, the communication platform110would have hidden the conversation in the browser application250and/or the one or more native applications255during work hours in some implementations to avoid distracting the user with personal messages. In some implementations, the performance of the content classification model220is fine-tuned based on feedback received from the user indicating whether the communication platform110correctly classified messages. The notification configuration unit210provides a user interface that permits the user to provide this feedback in some implementations. Fine-tuning the content classification model220can lead to the model providing better predictions of the message category in the future. The authentication unit225provides functionality for verifying whether users are permitted to access the services provided by the communication platform110. In some implementations, the authentication unit225provides functionality for receiving authentication credentials for the users from their respective user devices105. The authentication unit225may be configured to verify that the authentication credentials are valid and permit the users to access the services provided by the communication platform110responsive to the authentication credentials being valid. The communication platform110includes a conversation visibility unit240in some implementations. In such implementations, the communication platform110is configured to cause conversations associated with certain categories of messages to be selectively presented or hidden on the user device105based on the message category associated with the conversation, the time category information associated with the user, and user device category information. The selective presentation and/or hiding of conversations can be implemented in addition to or instead of the selective delaying of notifications and/or presentation of silent notifications. Furthermore, in some implementations, the selective presentation and/or hiding of conversations may be performed on certain categories of user devices while the selective delaying of notifications and/or presentation of silent notifications is performed on other categories of user devices. In some implementations, the user accesses message content via one or more native applications255on each user device105. In such implementations, the communication platform110is configured to maintain information indicating which conversations are associated with the user of the user device105and the message category associated with each of the conversations. The message category associated with each conversation is reevaluated and updated each time a message associated with the conversation is received in some implementations.
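A minimal sketch of that per-message re-evaluation is shown below; the keyword-based classifier is only a trivial stand-in for the trained content classification model220, and all names and terms are assumptions used for illustration.

# Illustrative sketch of re-evaluating a conversation's category each time a
# message arrives. The keyword "classifier" is a trivial stand-in for the
# trained content classification model 220.
def classify_message(text: str) -> str:
    work_terms = ("deadline", "meeting", "report", "project")
    return "work_related" if any(term in text.lower() for term in work_terms) else "personal"

class Conversation:
    def __init__(self):
        self.messages: list[str] = []
        self.category = "default"   # default category until a prediction is made

    def add_message(self, text: str) -> str:
        self.messages.append(text)
        self.category = classify_message(text)  # re-evaluate on every new message
        return self.category

conv = Conversation()
print(conv.add_message("Want to grab lunch later?"))         # personal
print(conv.add_message("Can you send the project report?"))  # work_related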
The conversation visibility unit240is scheduled to periodically determine whether the conversations associated with the user may be presented on each category of user device associated with the user in some implementations. In some implementations, the determination is performed by the conversation visibility unit240at the start of the time period associated with each time category, and upon receipt of a message associated with each conversation, to determine whether each conversation should be presented to the user or hidden from the user on each category of user device. The communication platform110sends a signal to the one or more native applications255to selectively present or hide conversations. Examples of selectively hiding conversations are shown inFIGS.4G-4K, which are discussed in detail in the examples which follow. In other implementations, such as that shown inFIG.2B, the user device105determines whether to display each conversation based on the device category of that client device205, the time category information, and the message category associated with each conversation. In some implementations, the communication platform110implements a web application that provides access to the various services of the communication platform110. The message processing unit205provides a web application to create, receive, and reply to messages. The notification configuration unit210provides a web application for configuring various parameters used to control the types of notifications that may be presented on the user devices105. In some implementations, the conversation visibility unit240sends a signal to the one or more native applications255to selectively present or hide conversations based on the message category, the time category information, and the device category information. The user device105may include one or more native applications255and/or a browser application250. The communication platform110provides a web application in some implementations that provides access to the various services of the communication platform110. In some implementations, the one or more native applications255includes a native application configured to communicate with the communication platform110. In such implementations, the native application implements a user interface for sending and receiving messages and for configuring the various parameters used to control the notifications which may be presented on the user devices105. The browser application250is an application for accessing and viewing web-based content, which may be provided by the communication platform110. In some implementations, the notification configuration unit210of the communication platform110provides a web application that enables the user to utilize the services of the communication platform110in a similar manner as the native application described above. The communication platform110may support both the one or more web-enabled native applications255and one or more web applications, and the users may choose which approach best suits their needs. The communication platform110may also provide support for the one or more native applications255, the browser application250, or both to provide functionality for a user of the user device105to obtain the services provided by the communication platform110.
In some implementations, the user device105includes a message queue265. In such implementations, the message processing unit205sends messages to the user device105that would have otherwise been stored in the message queue235of the communication platform110with an indication that the message should be stored in the message queue265until a time period in which a notification may be presented on the user device105. In some implementations, the indication may include time period information that indicates when the user device105may remove the delayed messages from the message queue265for processing by a native application255and a notification of the message to be presented on the user device105. In other implementations, the message processing unit205sends an indication to the user device105that a message may be moved from the message queue265for processing by a native application255and a notification of the message to be presented on the user device105. In the implementation shown inFIG.2B, the user device105rather than the communication platform110is configured to manage which notifications are presented on the user device105. The user device105includes a message processing unit270, a notification configuration unit275, a configuration datastore280, and a content classification model285. In such implementations, the message processing unit205sends messages received for a user to each of the user devices105associated with the user. The message processing unit205is configured to obtain user device information for the intended recipient from the configuration datastore215and device information for the user devices105of the intended recipient. The message processing unit205then sends the message to each of the user devices105for processing by the message processing unit270implemented on the user device105. The message processing unit270obtains user device information and time category information from the configuration datastore280. The user device information and the time category information are provided by the user using the native application255in some implementations, and the native application255updates the configuration datastore280on the user device105with this information. The user device105propagates this information to the communication platform110in some implementations to update the configuration datastore215maintained by the communication platform110. The updates to this information may also be propagated by the communication platform110to each of the user devices105associated with the user so that the user does not have to manually configure the settings of each user device105individually. The message processing unit270analyzes messages received from the communication platform110and determines a category for the messages. The message processing unit270determines a message category for the messages received and determines whether a notification of the message may be presented on the user device105based on the user device information and time category information retrieved from the configuration datastore215. In some implementations, the message processing unit270determines the message category based on user input. In other implementations, the message processing unit270utilizes the content classification model285to classify messages. The content classification model285is similar to the content classification model220shown inFIG.2A. The message processing unit270places messages for which notifications must be delayed into the message queue265in some implementations.
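A sketch of such a device-side queue, assuming the platform supplies a release time with each delayed message, might look like the following; the class and method names are illustrative assumptions rather than an actual device API.

# Assumed sketch of a device-side delayed-message queue (message queue 265).
# Each queued entry carries the earliest time at which its notification may be
# presented; names and structures are illustrative only.
import time

class DeviceMessageQueue:
    def __init__(self):
        self._pending = []   # list of (release_timestamp, message)

    def hold(self, message: str, release_timestamp: float) -> None:
        """Store a message until the platform-indicated release time."""
        self._pending.append((release_timestamp, message))

    def release_due(self, now: float) -> list[str]:
        """Remove and return messages whose notifications may now be presented."""
        due = [msg for ts, msg in self._pending if ts <= now]
        self._pending = [(ts, msg) for ts, msg in self._pending if ts > now]
        return due

queue = DeviceMessageQueue()
queue.hold("Dinner tonight?", release_timestamp=time.time() - 1)     # already releasable
queue.hold("Weekend plans?", release_timestamp=time.time() + 3600)   # held for later
print(queue.release_due(time.time()))   # ['Dinner tonight?']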
The message processing unit270periodically checks the messages in the message queue265to determine whether these messages can be removed from the queue and a notification presented to the user. In yet other implementations, the message processing unit270is configured to provide incoming messages to the native application255for processing rather than placing the message in the message queue265, and the message processing unit270generates a silent notification on the user device105. The silent notification provides no haptic or audible indication on the user device105. The silent notification may include a graphical element that is unobtrusive on a display of the user device, such as a small icon or other indicator that a message has been received. Such a graphical indication does not obstruct any content or application windows that are open on the user device, which could interrupt the user workflow. The client device205includes a conversation visibility unit290in some implementations. In such implementations, the conversation visibility unit290is configured to cause conversations associated with certain categories of messages presented by the one or more native applications255to be selectively presented or hidden based on the message category associated with the conversation, the time category information associated with the user, and device category information. FIGS.3A-3Dshow examples of interactions between the user devices105a-105dand the communication platform110shown in the preceding figures. The user devices105aand105bare a laptop and desktop computer, respectively, which are associated with the work-related device category. The user device105cis a mobile phone associated with a hybrid device category, which indicates that the user device105cmay be used for both work-related and personal tasks. The user device105dis a tablet computer that is associated with a personal device category, which indicates that the user device105dis used for personal tasks unrelated to work. These example device categories are intended to demonstrate the concepts herein and do not limit these techniques to those specific device categories or combinations of device categories. In the example implementation shown inFIG.3A, a work-related email message is received at the communication platform110. The communication platform110determines that the intended recipient of the email message is associated with the user devices105a-105din this example implementation. The communication platform110accesses the user device information to determine the category of each of the user devices105a-105d. The communication platform110also determines the message category associated with the work-related email message. The communication platform110then utilizes the time category information, including the permitted device category information and the permitted message category information, to determine which categories of devices are permitted to display a notification that the email message has been received. In this example, the work-related email message is permitted to be presented on devices in the work-related device category and the hybrid device category for the time period in which the work-related email is received. The communication platform110sends the email to the user devices105a,105b, and105c, and the user devices105a,105b, and105cpresent a notification that the work-related email has been received.
The communication platform110does not send the work-related email to the user device105d, which is a personal user device, and the user has not authorized notifications of messages of the work-related message category to be presented to the user on the user device105d. In the example implementation shown inFIG.3B, a personal text message is received at the communication platform110. In this example implementation, a notification that the personal text has been received may be presented on all user devices105a-105d. However, the communication platform110determines that a notification of the personal text message cannot be presented on the user devices105aand105b, which are both user devices that fall into the work-related device category. The user in this implementation has defined time category information that precludes the presentation of notifications of the personal message category during work hours. Consequently, the communication platform110inserts a copy of the personal text message into the message queue235. The communication platform110periodically checks whether the messages in the message queue can be sent to the user devices105aand105b. In this example implementation, the user has defined time category information which indicates that notifications for messages of any type which fall into the personal message category may be presented on the work-related user devices105aand105bafter work hours. The user has defined time category information that indicates that personal messages may be sent to devices in the hybrid device category and the personal device category at any time. Therefore, the communication platform110sends the personal text message to the user devices105cand105dupon receipt of the personal text message by the communication platform110. The personal text message is forwarded to the user devices105cand105d, and the user devices105cand105dpresent a graphical, auditory, or haptic notification of the personal text message. FIG.3Cis a diagram showing another implementation similar to that shown inFIG.3Bin which a personal text message is received at the communication platform110. In this implementation, the communication platform110does not hold the personal text message in a message queue on the communication platform110. Instead, each of the user devices105a-105dis configured to manage the notifications presented on that user device. The communication platform110sends the personal text message to the user devices105a-105d. In some implementations, the communication platform110may include a silent notification indicator with the personal text message sent to the user devices105aand105b, which indicates that the user devices105aand105bshould not present a haptic or audible notification to the user. However, the user devices105aand105bmay present an unobtrusive graphical notification on a display of the user device. As will be discussed in greater detail in the examples which follow, silent notifications may also be used for implementations in which conversations are hidden. FIG.3Dis a diagram showing another implementation similar to that shown inFIGS.3B and3C. The communication platform110does not hold the personal text message in a message queue on the communication platform110, and the user devices105a-105dare responsible for managing the notifications presented on these devices.
The user devices105aand105bstore the personal text message in the message queue265implemented on each user device, and the personal text message remains in the message queue265until a time period is reached in which a user device of the work-related device category is permitted to present a notification of the personal message category. FIGS.4A-4Fare diagrams of example user interfaces showing examples of notifications being presented as described in the preceding figures.FIG.4Ais an example of a user interface402of a text messaging application in which a first user, a project manager, drafts a text message404to a second user requesting an update on a project. The message is sent to the communication platform110, which determines whether notifications of the receipt of the message should be presented on the user devices105of the second user.FIG.4Bshows an example user interface406representing a desktop of the laptop computer of the second user. The laptop computer is associated with the work-related device category, and the message is classified as being associated with the work-related message category. In this example, the communication platform110determines that the notification should be presented on the laptop of the second user, and the notification408is displayed on the user interface406. FIG.4Cis an example of a user interface410of a text messaging application in which a first user drafts a text message412to a second user asking if the second user would still like to meet for lunch. As in the preceding examples, the message412is received by the communication platform110, and the communication platform110determines whether the user devices105associated with the second user may display a notification that the message412has been received.FIG.4Dshows an example of a user interface414of a mobile phone of the second user. The mobile phone is associated with the personal device category, and the communication platform110classifies the message as a personal message. The communication platform110determines that the notification416should be presented on the user interface414of the mobile phone. Suppose, however, that the second user has a second mobile phone that is primarily used for work and is classified in the work-related device category. The communication platform110determines, based on the time category information associated with the user, that notifications for messages that are not classified as the work-related message category should not be displayed on the second mobile device during work hours. In the implementation shown inFIG.4D, no audible or haptic notification is presented to the user. Only a non-obtrusive graphical notification420is presented on user interface418of the second mobile phone.FIG.4Eshows another example of a user interface422representing a laptop computer associated with the second user that is classified in the work-related device category. A non-obtrusive graphical notification424is presented on user interface422of the laptop. The notifications420and424represent examples of silent notifications that may be presented on a user device. Other types of silent notifications can be used in other implementations. FIGS.4G-4Kare diagrams of an example user interface430of a messaging application that show how conversations may be classified and selectively presented or hidden on a native application255or the browser application250on a client device205.
The conversations are selectively presented or hidden based on the device category associated with the client device105, the time category information, and the message category associated with the conversations. InFIG.4G, the message interface430shows four conversations432,434,436, and438. In some implementations, such as the example implementation shown inFIG.2A, the communication platform110is configured to categorize each of the conversations and to cause the conversations to be selectively presented or hidden. In other implementations, such as the example implementation shown inFIG.2B, the user device105is configured to categorize each of the conversations and to cause the conversations to be selectively presented or hidden. In this example, the conversations432and436are work-related conversations, and the conversations434and438are personal conversations that are not work related. FIG.4Hshows an example of these conversations having been categorized and category indicators442,444,446, and448being presented next to each of the conversations. In some implementations, such indicators may be presented next to each of the conversations to inform the user of the category that has been associated with the conversation. In some implementations, the user may click on or otherwise actuate the category indicator to cause a conversation category configuration pane470to be displayed as shown inFIG.4I. The conversation category configuration pane470includes a dropdown from which the user may select a category from among the message categories that may be associated with the conversation. The communication platform110and/or the client device105updates the category associated with the conversation in response to the user changing the category. In some implementations, the models used to categorize the messages may be provided with feedback to improve their predictions in response to the user changing a message category that was automatically assigned to a conversation by the communication platform110or the client device105. FIG.4Jshows an example of the user interface430displayed during the user's work hours on a work-related device, such as the user devices105aand105b, and/or on the hybrid user device105c, which is used for both work-related and personal tasks. In the example shown inFIG.4J, the communication platform110or the client device105has presented the work-related conversations432and436and hidden the personal conversations434and438based on the device category information, the time category information, and the message category information as discussed in the preceding examples. FIG.4Kshows another example of the user interface430in which the work-related conversations are presented, and the other conversations have been hidden. The user interface430includes a message490that indicates that filtering is active and provides the user with links to show all conversations and/or configure the notification settings used to determine which message notifications and/or conversations should be presented to the user on each client device105for each message category and time category. FIGS.5A-5Dare diagrams of example data structures used to store configuration information for the techniques provided herein. FIG.5Ashows an example of a user device data structure that associates a user identifier of each user with the device identifiers of the user devices associated with that user. The user device data structure also includes a device category for each of the user devices.
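The recategorization flow aroundFIGS.4H-4Ican be sketched as follows; the class, the classifier object, and the override table are illustrative assumptions standing in for the content classification models described elsewhere in this disclosure, not a prescribed implementation.

class ConversationCategorizer:
    """Tracks automatically assigned conversation categories, user overrides,
    and the feedback used to improve the underlying classification model."""

    def __init__(self, classifier):
        self.classifier = classifier            # assumed to expose predict(text) -> category
        self.overrides: dict[str, str] = {}     # conversation_id -> user-chosen category
        self.feedback: list[tuple[str, str]] = []

    def category_for(self, conversation_id: str, latest_text: str) -> str:
        if conversation_id in self.overrides:
            return self.overrides[conversation_id]
        return self.classifier.predict(latest_text)

    def user_recategorized(self, conversation_id: str, latest_text: str,
                           new_category: str) -> None:
        """Called when the user changes a category via the configuration pane;
        the correction is retained as training feedback for the model."""
        self.overrides[conversation_id] = new_category
        self.feedback.append((latest_text, new_category))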
As discussed in the preceding examples, the user may assign a device category to each of their user devices via a user interface provided by the communication platform110. The user device data is stored in the configuration datastore215on the communication platform110and/or the configuration datastore280on the user device105in some implementations. FIG.5Bshows an example of a time category data structure that associates each time period among a plurality of time periods with permitted device category information and message category information. The permitted device category information indicates which categories of user devices are permitted to provide notification of received messages during the time period, and the message category information indicates the categories of messages, from among a plurality of message categories, for which notifications may be generated during the time period. FIG.5Cshows an example of a message queue data structure that may be used to store message information in the message queue235and/or the message queue265. The message queue data structure associates a message identifier, a message category, and a device identifier with each message. In some implementations, the text of the message is also stored in the message queue data structure. FIG.5Dshows an example of the time category data structure that includes a hide conversations field. The hide conversations field is used by the communication platform110and/or the client device105to determine whether to hide conversations associated with message categories other than those permitted to be presented on the client device. If the hide conversations field is set to a "yes" value, then the conversations associated with message categories other than those that are permitted are hidden on the device categories listed in the permitted device category field. The delay notifications field is used to indicate whether to delay notifications for other message categories. If the delay notifications field is set to no, then the communication platform110and/or the client device105can present silent notifications for hidden conversations. If the delay notifications field is set to yes, then the communication platform110and/or the client device105are not permitted to present silent notifications for conversations that are hidden, and the notifications are delayed until notifications for the message category associated with the conversation may be presented on that category of client device. In the non-limiting example shown inFIG.5D, only conversations associated with the work-related urgent message category are permitted to be presented during "focus" hours at work. Conversations not associated with this message category are hidden, and notifications for conversations associated with other message categories are also withheld. However, during non-focus hours at work, all work-related conversations are permitted to be presented to the user on work-related devices and on hybrid work and personal use devices. Furthermore, during non-focus hours at work, conversations associated with urgent personal messages are permitted to be presented on the work-related and hybrid user devices. The notifications are not delayed in this example for conversations associated with other message categories, so silent notifications of messages for conversations associated with those other message categories are permitted, but these conversations remain hidden on the work-related and hybrid devices.
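One possible, non-limiting representation of the configuration records shown inFIGS.5A-5Dis sketched below; the class and field names are assumptions chosen for illustration, and the disclosure does not prescribe any particular schema.

from dataclasses import dataclass

@dataclass
class UserDeviceRecord:                # cf. FIG. 5A
    user_id: str
    device_id: str
    device_category: str               # e.g. "work", "personal", or "hybrid"

@dataclass
class TimeCategoryRecord:              # cf. FIGS. 5B and 5D
    time_period: str                   # e.g. "work hours" or "focus hours"
    permitted_device_categories: list[str]
    permitted_message_categories: list[str]
    hide_conversations: bool = False   # the "hide conversations" field of FIG. 5D
    delay_notifications: bool = False  # the "delay notifications" field of FIG. 5D

@dataclass
class QueuedMessage:                   # cf. FIG. 5C (message queues 235 and 265)
    message_id: str
    message_category: str
    device_id: str
    text: str = ""                     # the message text is optionally stored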
However, the user may click on or otherwise actuate these silent notifications to cause the conversation associated with such a notification to be at least temporarily displayed. This approach allows the user to work without interruptions caused by distracting notifications but provides an option to view a message and/or conversation associated with such silent notifications as necessary. FIG.6Ais a flow chart of an example process600for managing notifications presented on user devices105associated with a user. The process600may be implemented by the message processing unit205of the communication platform110. The process600may be used in implementations in which the communication platform110is responsible for determining which notifications may be presented to the user upon receipt of a message and on which of the user devices these notifications may be presented. The process600may also be used to determine which notifications must be delayed until a time when such notifications may be presented on the user devices. The process600includes an operation602of maintaining a data structure defining a relationship between identities of user devices associated with a user, a device category associated with each user device of the user devices, and a message category. As discussed in the preceding examples, a data structure, such as the data structures shown inFIGS.5A-5C, may be used to store the data used to configure the categories of messages for which notifications may be presented on specific categories of user devices105upon receipt of the message for a particular time category. The process600includes an operation604of updating the data structure. The message processing unit205and/or other units of the communication platform110may update the data structure. Updating the data structure includes operations606-618in some implementations. The process600includes an operation606of obtaining device category information, the device category information grouping a plurality of user devices associated with a first user into a plurality of categories. The device category information includes a device identifier and a device category associated with each user device of the plurality of user devices for the user. As discussed in the preceding examples, the user may categorize their user devices into one of a number of categories. In some implementations, the devices may be categorized into work-related devices, personal devices, and hybrid devices that are used for a mix of personal and work-related uses. These categories are merely intended to demonstrate the concepts described herein, and other categories may be used in addition to or instead of one or more of these example categories in other implementations. The process600includes an operation608of obtaining time category information. The time category information associates each time period among a plurality of time periods with permitted device category information and message category information. The permitted device category information indicates which categories of user devices are permitted to provide notification of received messages during the time period, and the message category information indicates the categories of messages, from among a plurality of message categories, for which notifications may be generated during the time period. The process600includes an operation610of receiving a first message for the first user at a first time. The communication platform110receives the first message from a user device105of a sender of the message in some implementations.
The first message may be an email message, a text message, a chat message, and/or another type of message. The message processing unit205processes the messages that are received to determine whether a notification of the received messages may be presented on the user devices105of the first user. In some implementations, the message processing unit205stores the message in an incoming message queue235for processing before the message is sent to the user devices105of the first user. The first message remains in the message queue235until the first message can be sent to all of the user devices of the user. The process600includes an operation612of determining a first message category for the first message among the plurality of message categories. In some implementations, the user may manually mark a message thread with a category. In such implementations, the user may specify the message category once an initial message is received. The message processing unit205may assign a default message category to the message thread in some implementations, and the user may assign a different message category if the user does not agree with the default assignment. In other implementations, the message processing unit205relies on the content classification model220to analyze the first message to determine the first message category. The process600includes an operation614of determining a first time category associated with the first message based on the first time. The first time represents the time that the message was received by the communication platform110in some implementations. As discussed in the preceding examples, the user may define periods of time, referred to as time categories, that represent specific times when certain categories of messages may be presented on the user device. In some implementations, the time categories can be associated with a specific day or days of the week. For example, a work time category may be defined for the days of the week and the hours during which the user typically works, and a personal time category may be defined for the days of the week and times during which the user typically is not at work. Other time categories in addition to or instead of one or more of these example time categories are utilized by other implementations. The process600includes an operation616of determining, according to the data structure associated with the first user, a first subset of the plurality of user devices associated with a first device category that are permitted to provide notifications that the first message has been received. The first subset of the plurality of user devices is associated with one or more categories of user devices permitted to provide notification of received messages of the first message category associated with the first time category. The process600includes an operation618of causing the first subset of the plurality of user devices to present a first notification of the receipt of the first message and causing a remainder of the plurality of user devices not included in the first subset of the plurality of user devices to delay presentation of the first notification. The remainder of the plurality of user devices is associated with one or more second device categories which are not permitted to provide notification of received messages of the first message category during the first time period associated with the first time category.
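The selection of the first subset of user devices in operations614-618can be illustrated with a brief sketch. The TimeCategory record, the fallback period, and the assumption that the message category has already been determined in operation612are illustrative only and are not requirements of the disclosure.

from dataclasses import dataclass
from datetime import datetime, time

@dataclass
class TimeCategory:
    name: str
    start: time
    end: time
    permitted_device_categories: tuple[str, ...]
    permitted_message_categories: tuple[str, ...]

def time_category_for(received_at: datetime, categories: list) -> TimeCategory:
    for category in categories:                      # operation 614
        if category.start <= received_at.time() < category.end:
            return category
    return categories[-1]                            # assumed fallback period

def split_devices(message_category: str, received_at: datetime,
                  device_categories: dict, categories: list):
    """Operations 616-618 (sketch): return (notify_now, delayed) device id lists."""
    tc = time_category_for(received_at, categories)
    notify_now, delayed = [], []
    for device_id, device_category in device_categories.items():
        permitted = (device_category in tc.permitted_device_categories
                     and message_category in tc.permitted_message_categories)
        (notify_now if permitted else delayed).append(device_id)
    return notify_now, delayed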
In some implementations, the first message is sent to each of the user devices of the first subset of the plurality of user devices, and the user devices are configured to present the first notification to the user in response to receiving the first message. The first notification may be presented as a graphical, audible, and/or haptic notification that informs the user that the first message has been received. The type of notification to be presented to the user is configurable by the user in some implementations. FIG.6Bis a flow chart of an example process640for managing notifications presented on a user device. The process640may be implemented on a user device105by the message processing unit270. Such an implementation is shown inFIG.2B. The process640may be used in implementations in which the user devices105are responsible for determining which notifications may be presented to the user upon receipt of a message and which notifications must be delayed until a time when such notifications may be presented on the user device. The process640includes an operation642of maintaining a data structure defining a relationship between a user device associated with a user, a device category associated with the user device, and a message category. As discussed in the preceding examples, a data structure, such as the data structures shown inFIGS.5A-5B, may be used to store the data used to configure the categories of messages for which notifications may be presented on the user device105upon receipt of the message for a particular time category. The process640includes an operation644of updating the data structure. The message processing unit270and/or other units of the user device105may update the data structure. Updating the data structure includes operations648-658in some implementations. The process640includes an operation646of obtaining device category information at a user device, the device category information indicating that the user device is associated with a first category of user device among a plurality of user devices associated with a first user. As discussed in the preceding examples, the user may categorize their user devices into one of a number of categories. In some implementations, the devices may be categorized into work-related devices, personal devices, and hybrid devices that are used for a mix of personal and work-related uses. These categories are merely intended to demonstrate the concepts described herein, and other categories may be used in addition to or instead of one or more of these example categories in other implementations. The process640includes an operation648of obtaining time category information at the user device, the time category information associating each time period among a plurality of time periods with message category information. The message category information indicates the message categories, from among a plurality of message categories, for which the user device is permitted to display immediate notifications of received messages during the time period. The process640includes an operation650of receiving a first message for the first user at a first time. The user device105receives the first message from the communication platform110. The first message may be an email message, a text message, a chat message, and/or another type of message. The message processing unit270processes the messages that are received to determine whether a notification of the received messages may be presented on the user device105.
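One way the device-side hold-and-release behavior (cf. message queue265and theFIG.3Dexample) might be organized is sketched below; the class name, the single allowed_after cutoff, and the print-based presentation stand-in are assumptions made only for this illustration.

from datetime import datetime, time

class DeviceNotificationQueue:
    """Holds messages on the device (cf. message queue 265) until the device's
    category is permitted to present notifications for the message's category."""

    def __init__(self, device_category: str, allowed_after: time):
        self.device_category = device_category
        self.allowed_after = allowed_after      # e.g. the end of work hours
        self._held: list[dict] = []

    def on_message(self, message: dict, now: datetime) -> bool:
        """Present a notification immediately if permitted, otherwise hold the
        message; returns True when a notification was presented."""
        permitted = (message["category"] == self.device_category
                     or now.time() >= self.allowed_after)
        if permitted:
            self._present(message)
            return True
        self._held.append(message)
        return False

    def release_pending(self, now: datetime) -> None:
        """Called periodically; presents held notifications once the permitted
        time period has been reached."""
        if now.time() >= self.allowed_after:
            for message in self._held:
                self._present(message)
            self._held.clear()

    def _present(self, message: dict) -> None:
        print(f"notify: {message['id']} ({message['category']})")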
In some implementations, the message processing unit270also performs other types of actions on the first message, such as storing the message in an incoming message queue265for processing by a native messaging application, such as the native application255. The message processing unit270provides the first message as an input to a native application255that is configured to process the message in some implementations, and the native application255is configured to store the message in an inbox, message queue, message thread, or memory of the user device. The process640includes an operation652of determining a first message category for the first message among the plurality of message categories. In some implementations, the user may manually mark a message thread with a category. In such implementations, the user may specify the message category once an initial message is received. The message processing unit270may assign a default message category to the message thread in some implementations, and the user may assign a different message category if the user does not agree with the default assignment. In other implementations, the message processing unit270relies on the content classification model285to analyze the first message to determine the first message category. The process640includes an operation654of determining a first time category associated with the first message based on the first time. The first time represents the time that the message was received by the user device105in some implementations. As discussed in the preceding examples, the user may define periods of time, referred to as time categories, that represent specific times when certain categories of messages may be presented on the user device. In some implementations, the time categories can be associated with a specific day or days of the week. For example, a work time category may be defined for the days of the week and the hours during which the user typically works, and a personal time category may be defined for the days of the week and times during which the user typically is not at work. Other time categories in addition to or instead of one or more of these example time categories are utilized by other implementations. The process640includes an operation656of determining, according to the data structure associated with the first user, that the user device is permitted to provide a notification that the first message has been received based on the first category of the user device. The first category of the user device indicates that the user device is permitted to provide notifications of received messages of the first message category associated with the first time category. As discussed in the preceding examples, the user device is permitted to provide notifications upon receipt of certain categories of messages on certain categories of devices at specified times. The delivery of notifications for other categories of messages is delayed until a time when the user device is permitted to present such notifications. The process640includes an operation658of presenting a first notification of the receipt of the first message via a user interface of the user device. The first notification may be presented as a graphical, audible, and/or haptic notification that informs the user that the first message has been received. The type of notification to be presented to the user is configurable by the user in some implementations. FIG.6Cis a flow chart of an example process670for managing conversations presented on a user device.
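A compact sketch of the device-side check performed in operations654-658follows; the schedule table, its field names, and the default of allowing notifications outside any defined window are assumptions made only for this illustration.

from datetime import datetime, time

def should_notify_on_device(device_category: str, message_category: str,
                            received_at: datetime, schedule: dict) -> bool:
    """schedule maps a time-category name to its window, the device categories
    it applies to, and the message categories permitted during that window."""
    for entry in schedule.values():
        start, end = entry["window"]
        if start <= received_at.time() < end and device_category in entry["devices"]:
            return message_category in entry["messages"]
    return True  # assumed default: notify when no defined window applies

# Example: a work laptop receiving a personal chat message during work hours.
schedule = {
    "work_hours": {"window": (time(9, 0), time(17, 0)),
                   "devices": ("work", "hybrid"),
                   "messages": ("work",)},
}
allowed = should_notify_on_device("work", "personal",
                                  datetime(2024, 1, 8, 10, 30), schedule)
# allowed is False, so the device would hold the message (cf. queue 265) or
# present only a silent notification, depending on its configuration.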
The process670may be implemented on a user device105or the communication platform110as discussed in the preceding examples. The process670includes an operation672of maintaining a data structure defining a relationship between a user device associated with a user, a device category associated with the user device, and a message category. As discussed in the preceding examples, a data structure, such as the data structures shown inFIGS.5A-5B, may be used to store the data used to configure the categories of messages for which conversations may be presented on the user device105based on the category of message associated with the conversation. The process670includes an operation674of updating the data structure. The user device105or the communication platform110may update the data structure. Updating the data structure includes operations678-688in some implementations. The process670includes an operation676of obtaining device category information, the device category information grouping a plurality of user devices associated with a first user into a plurality of categories. The device category information includes a device identifier and a device category associated with each user device of the plurality of user devices for the user. As discussed in the preceding examples, the user may categorize their user devices into one of a number of categories. In some implementations, the devices may be categorized into work-related devices, personal devices, and hybrid devices that are used for a mix of personal and work-related uses. These categories are merely intended to demonstrate the concepts described herein, and other categories may be used in addition to or instead of one or more of these example categories in other implementations. The process670includes an operation678of obtaining time category information. The time category information associates each time period among a plurality of time periods with permitted device category information and message category information. The permitted device category information indicates which categories of user devices are permitted to provide notification of received messages during the time period, and the message category information indicates the categories of messages, from among a plurality of message categories, for which notifications may be generated during the time period. The process670includes an operation680of obtaining conversation information for a plurality of conversations associated with the user. As discussed in the preceding examples, the client device105and/or the communication platform110may group messages from a message thread into conversations. A conversation may include one or more messages and may include messages from one or more other users communicating with the user. The conversation may include a thread of email messages, text messages, chat messages, and/or other types of messages supported by the communication platform110and the client device105. In some implementations, a copy of the messages and the conversation information for the conversations associated with these messages is stored in a persistent datastore of the communication platform110. In other implementations, the messages and/or the conversation information is stored in a persistent datastore of the client device105. The process670includes an operation682of determining a message category associated with each conversation of the plurality of conversations.
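Operations680and682can be sketched as follows; grouping by a thread identifier and categorizing a conversation by its most recent message are illustrative policies only, since the disclosure states merely that messages are grouped into conversations and that each conversation is assigned a message category.

from collections import defaultdict

def group_into_conversations(messages: list) -> dict:
    """Operation 680 (sketch): group messages by thread identifier, ordered by
    timestamp, so each conversation is a chronologically sorted message list."""
    conversations = defaultdict(list)
    for message in sorted(messages, key=lambda m: m["timestamp"]):
        conversations[message["thread_id"]].append(message)
    return conversations

def conversation_category(conversation: list, classify) -> str:
    """Operation 682 (sketch): derive the conversation's category from its most
    recent message; classify is assumed to map text to a message category."""
    return classify(conversation[-1]["text"])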
As discussed in the preceding examples, each conversation is categorized based on the category of messages associated with the conversation, and the category of the conversation may change over time if the category of messages associated with the conversation changes over time. For example, a conversation with a work colleague that was initially categorized as personal may be recategorized as work-related as the subject matter of the conversation changes from topics unrelated to work to work-related topics. A conversation may be analyzed each time a message is received by the communication platform110and/or the client device105to determine the message category to associate with the message. The process670includes an operation684of determining a first time category associated with a first time and an operation686of determining, according to the data structure associated with the first user, a first subset of the plurality of conversations that are permitted to be presented on a first user device associated with the first user according to the first time category and a second subset of the plurality of conversations that are not permitted to be presented on the first user device associated with the first user according to the first time category. As discussed in the preceding examples, the time categories are used to determine which categories of messages and/or categories of conversations may be presented or hidden on a particular user device at a particular time. Each category of user device may be permitted to present specific categories of conversations at specific times. In some implementations, the communication platform110and/or the client device105are configured to periodically determine whether each of the conversations that the user may access via the browser application250and/or the one or more native applications255should be visible to the user or hidden depending upon the time of day. In a non-limiting example, the user configures the time category information to cause conversations unrelated to work to be hidden from the user in the browser application250and/or the one or more native applications255of work-related user devices105during work hours to prevent unwanted distractions. Similarly, other categories of messages may be presented to the user or hidden from the user on certain categories of user device at certain times. For example, the user may configure the communication platform110or the client device105to display personal content on certain hybrid user devices105, such as their mobile phone, after work hours and to hide work-related conversations on these devices. The techniques herein are not limited to these specific examples. Other implementations are possible that utilize different message categories, time categories, and device categories. The client device105or the communication platform110may be scheduled to periodically determine whether the conversations associated with the user may be presented on each category of user device associated with the user. In some implementations, the determinations may be performed at the start of the time period associated with each time category to determine whether each conversation should be presented to the user or hidden from the user on each category of user device and upon receipt of a message associated with each conversation.
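The periodic visibility re-evaluation described above might look like the following sketch; the function name, the representation of conversations as a mapping from identifier to category, and the timing of the call are illustrative assumptions rather than requirements of the disclosure.

def refresh_conversation_visibility(conversation_categories: dict,
                                    permitted_message_categories: set) -> dict:
    """Returns conversation_id -> visible flag for the current time category.
    conversation_categories maps a conversation identifier to its message category."""
    return {conversation_id: category in permitted_message_categories
            for conversation_id, category in conversation_categories.items()}

# Re-run, for example, at the start of each time period and whenever a new
# message is added to a conversation (cf. the scheduling described above).
visibility = refresh_conversation_visibility(
    {"432": "work", "434": "personal", "436": "work", "438": "personal"},
    permitted_message_categories={"work"})
# -> {"432": True, "434": False, "436": True, "438": False}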
The communication platform110and/or the client device105are configured to delay notifications associated with hidden conversations or to present silent notifications for hidden conversations according to the techniques disclosed herein in some implementations to prevent notifications associated with hidden conversations from distracting the user and interrupting the user's workflow. The process670includes an operation688of causing the first subset of the plurality of conversations to be presented on the first user device and an operation690of causing the second subset of the plurality of conversations to be hidden on the first user device. The conversations that are permitted to be displayed on a particular user device105associated with the user are displayed on that category of user device105, while the other conversations that are not permitted to be displayed on that category of user device are hidden. Other categories of user device associated with the user may be permitted to present a different subset of conversations and hide a different subset of the conversations. While the preceding examples describe specific time categories, device categories, and message categories, these examples are merely intended to illustrate the techniques described herein. Other implementations may utilize different time categories, device categories, and/or message categories and are not limited to the specific categories shown herein. The detailed examples of systems, devices, and techniques described in connection withFIGS.1-6Care presented herein for illustration of the disclosure and its benefits. Such examples of use should not be construed to be limitations on the logical process embodiments of the disclosure, nor should variations of user interface methods from those described herein be considered outside the scope of the present disclosure. It is understood that references to displaying or presenting an item (such as, but not limited to, presenting an image on a display device, presenting audio via one or more loudspeakers, and/or vibrating a device) include issuing instructions, commands, and/or signals causing, or reasonably expected to cause, a device or system to display or present the item. In some embodiments, various features described inFIGS.1-6Care implemented in respective modules, which may also be referred to as, and/or include, logic, components, units, and/or mechanisms. Modules may constitute either software modules (for example, code embodied on a machine-readable medium) or hardware modules. In some examples, a hardware module may be implemented mechanically, electronically, or with any suitable combination thereof. For example, a hardware module may include dedicated circuitry or logic that is configured to perform certain operations. For example, a hardware module may include a special-purpose processor, such as a field-programmable gate array (FPGA) or an Application Specific Integrated Circuit (ASIC). A hardware module may also include programmable logic or circuitry that is temporarily configured by software to perform certain operations and may include a portion of machine-readable medium data and/or instructions for such configuration. For example, a hardware module may include software encompassed within a programmable processor configured to execute a set of software instructions.
It will be appreciated that the decision to implement a hardware module mechanically, in dedicated and permanently configured circuitry, or in temporarily configured circuitry (for example, configured by software) may be driven by cost, time, support, and engineering considerations. Accordingly, the phrase “hardware module” should be understood to encompass a tangible entity capable of performing certain operations and may be configured or arranged in a certain physical manner, be that an entity that is physically constructed, permanently configured (for example, hardwired), and/or temporarily configured (for example, programmed) to operate in a certain manner or to perform certain operations described herein. As used herein, “hardware-implemented module” refers to a hardware module. Considering examples in which hardware modules are temporarily configured (for example, programmed), each of the hardware modules need not be configured or instantiated at any one instance in time. For example, where a hardware module includes a programmable processor configured by software to become a special-purpose processor, the programmable processor may be configured as respectively different special-purpose processors (for example, including different hardware modules) at different times. Software may accordingly configure a processor or processors, for example, to constitute a particular hardware module at one instance of time and to constitute a different hardware module at a different instance of time. A hardware module implemented using one or more processors may be referred to as being “processor implemented” or “computer implemented.” Hardware modules can provide information to, and receive information from, other hardware modules. Accordingly, the described hardware modules may be regarded as being communicatively coupled. Where multiple hardware modules exist contemporaneously, communications may be achieved through signal transmission (for example, over appropriate circuits and buses) between or among two or more of the hardware modules. In embodiments in which multiple hardware modules are configured or instantiated at different times, communications between such hardware modules may be achieved, for example, through the storage and retrieval of information in memory devices to which the multiple hardware modules have access. For example, one hardware module may perform an operation and store the output in a memory device, and another hardware module may then access the memory device to retrieve and process the stored output. In some examples, at least some of the operations of a method may be performed by one or more processors or processor-implemented modules. Moreover, the one or more processors may also operate to support performance of the relevant operations in a “cloud computing” environment or as a “software as a service” (SaaS). For example, at least some of the operations may be performed by, and/or among, multiple computers (as examples of machines including processors), with these operations being accessible via a network (for example, the Internet) and/or via one or more software interfaces (for example, an application program interface (API)). The performance of certain of the operations may be distributed among the processors, not only residing within a single machine, but deployed across several machines. 
Processors or processor-implemented modules may be in a single geographic location (for example, within a home or office environment, or a server farm), or may be distributed across multiple geographic locations. FIG.7is a block diagram700illustrating an example software architecture702, various portions of which may be used in conjunction with various hardware architectures herein described, which may implement any of the above-described features.FIG.7is a non-limiting example of a software architecture, and it will be appreciated that many other architectures may be implemented to facilitate the functionality described herein. The software architecture702may execute on hardware such as a machine800ofFIG.8that includes, among other things, processors810, memory830, and input/output (I/O) components850. A representative hardware layer704is illustrated and can represent, for example, the machine800ofFIG.8. The representative hardware layer704includes a processing unit706and associated executable instructions708. The executable instructions708represent executable instructions of the software architecture702, including implementation of the methods, modules and so forth described herein. The hardware layer704also includes a memory/storage710, which also includes the executable instructions708and accompanying data. The hardware layer704may also include other hardware modules712. Instructions708held by processing unit706may be portions of instructions708held by the memory/storage710. The example software architecture702may be conceptualized as layers, each providing various functionality. For example, the software architecture702may include layers and components such as an operating system (OS)714, libraries716, frameworks718, applications720, and a presentation layer744. Operationally, the applications720and/or other components within the layers may invoke API calls724to other layers and receive corresponding results726. The layers illustrated are representative in nature and other software architectures may include additional or different layers. For example, some mobile or special purpose operating systems may not provide the frameworks/middleware718. The OS714may manage hardware resources and provide common services. The OS714may include, for example, a kernel728, services730, and drivers732. The kernel728may act as an abstraction layer between the hardware layer704and other software layers. For example, the kernel728may be responsible for memory management, processor management (for example, scheduling), component management, networking, security settings, and so on. The services730may provide other common services for the other software layers. The drivers732may be responsible for controlling or interfacing with the underlying hardware layer704. For instance, the drivers732may include display drivers, camera drivers, memory/storage drivers, peripheral device drivers (for example, via Universal Serial Bus (USB)), network and/or wireless communication drivers, audio drivers, and so forth depending on the hardware and/or software configuration. The libraries716may provide a common infrastructure that may be used by the applications720and/or other components and/or layers. The libraries716typically provide functionality for use by other software modules to perform tasks, rather than rather than interacting directly with the OS714. 
The libraries716may include system libraries734(for example, C standard library) that may provide functions such as memory allocation, string manipulation, file operations. In addition, the libraries716may include API libraries736such as media libraries (for example, supporting presentation and manipulation of image, sound, and/or video data formats), graphics libraries (for example, an OpenGL library for rendering 2D and 3D graphics on a display), database libraries (for example, SQLite or other relational database functions), and web libraries (for example, WebKit that may provide web browsing functionality). The libraries716may also include a wide variety of other libraries738to provide many functions for applications720and other software modules. The frameworks718(also sometimes referred to as middleware) provide a higher-level common infrastructure that may be used by the applications720and/or other software modules. For example, the frameworks718may provide various graphic user interface (GUI) functions, high-level resource management, or high-level location services. The frameworks718may provide a broad spectrum of other APIs for applications720and/or other software modules. The applications720include built-in applications740and/or third-party applications742. Examples of built-in applications740may include, but are not limited to, a contacts application, a browser application, a location application, a media application, a messaging application, and/or a game application. Third-party applications742may include any applications developed by an entity other than the vendor of the particular platform. The applications720may use functions available via OS714, libraries716, frameworks718, and presentation layer744to create user interfaces to interact with users. Some software architectures use virtual machines, as illustrated by a virtual machine748. The virtual machine748provides an execution environment where applications/modules can execute as if they were executing on a hardware machine (such as the machine800ofFIG.8, for example). The virtual machine748may be hosted by a host OS (for example, OS714) or hypervisor, and may have a virtual machine monitor746which manages operation of the virtual machine748and interoperation with the host operating system. A software architecture, which may be different from software architecture702outside of the virtual machine, executes within the virtual machine748such as an OS750, libraries752, frameworks754, applications756, and/or a presentation layer758. FIG.8is a block diagram illustrating components of an example machine800configured to read instructions from a machine-readable medium (for example, a machine-readable storage medium) and perform any of the features described herein. The example machine800is in a form of a computer system, within which instructions816(for example, in the form of software components) for causing the machine800to perform any of the features described herein may be executed. As such, the instructions816may be used to implement modules or components described herein. The instructions816cause unprogrammed and/or unconfigured machine800to operate as a particular machine configured to carry out the described features. The machine800may be configured to operate as a standalone device or may be coupled (for example, networked) to other machines. 
In a networked deployment, the machine800may operate in the capacity of a server machine or a client machine in a server-client network environment, or as a node in a peer-to-peer or distributed network environment. Machine800may be embodied as, for example, a server computer, a client computer, a personal computer (PC), a tablet computer, a laptop computer, a netbook, a set-top box (STB), a gaming and/or entertainment system, a smart phone, a mobile device, a wearable device (for example, a smart watch), and an Internet of Things (IoT) device. Further, although only a single machine800is illustrated, the term “machine” includes a collection of machines that individually or jointly execute the instructions816. The machine800may include processors810, memory830, and I/O components850, which may be communicatively coupled via, for example, a bus802. The bus802may include multiple buses coupling various elements of machine800via various bus technologies and protocols. In an example, the processors810(including, for example, a central processing unit (CPU), a graphics processing unit (GPU), a digital signal processor (DSP), an ASIC, or a suitable combination thereof) may include one or more processors812ato812nthat may execute the instructions816and process data. In some examples, one or more processors810may execute instructions provided or identified by one or more other processors810. The term “processor” includes a multi-core processor including cores that may execute instructions contemporaneously. AlthoughFIG.8shows multiple processors, the machine800may include a single processor with a single core, a single processor with multiple cores (for example, a multi-core processor), multiple processors each with a single core, multiple processors each with multiple cores, or any combination thereof. In some examples, the machine800may include multiple processors distributed among multiple machines. The memory/storage830may include a main memory832, a static memory834, or other memory, and a storage unit836, both accessible to the processors810such as via the bus802. The storage unit836and memory832,834store instructions816embodying any one or more of the functions described herein. The memory/storage830may also store temporary, intermediate, and/or long-term data for processors810. The instructions816may also reside, completely or partially, within the memory832,834, within the storage unit836, within at least one of the processors810(for example, within a command buffer or cache memory), within memory at least one of I/O components850, or any suitable combination thereof, during execution thereof. Accordingly, the memory832,834, the storage unit836, memory in processors810, and memory in I/O components850are examples of machine-readable media. As used herein, “machine-readable medium” refers to a device able to temporarily or permanently store instructions and data that cause machine800to operate in a specific fashion, and may include, but is not limited to, random-access memory (RAM), read-only memory (ROM), buffer memory, flash memory, optical storage media, magnetic storage media and devices, cache memory, network-accessible or cloud storage, other types of storage and/or any suitable combination thereof. 
The term “machine-readable medium” applies to a single medium, or combination of multiple media, used to store instructions (for example, instructions816) for execution by a machine800such that the instructions, when executed by one or more processors810of the machine800, cause the machine800to perform and one or more of the features described herein. Accordingly, a “machine-readable medium” may refer to a single storage device, as well as “cloud-based” storage systems or storage networks that include multiple storage apparatus or devices. The term “machine-readable medium” excludes signals per se. The I/O components850may include a wide variety of hardware components adapted to receive input, provide output, produce output, transmit information, exchange information, capture measurements, and so on. The specific I/O components850included in a particular machine will depend on the type and/or function of the machine. For example, mobile devices such as mobile phones may include a touch input device, whereas a headless server or IoT device may not include such a touch input device. The particular examples of I/O components illustrated inFIG.8are in no way limiting, and other types of components may be included in machine800. The grouping of I/O components850are merely for simplifying this discussion, and the grouping is in no way limiting. In various examples, the I/O components850may include user output components852and user input components854. User output components852may include, for example, display components for displaying information (for example, a liquid crystal display (LCD) or a projector), acoustic components (for example, speakers), haptic components (for example, a vibratory motor or force-feedback device), and/or other signal generators. User input components854may include, for example, alphanumeric input components (for example, a keyboard or a touch screen), pointing components (for example, a mouse device, a touchpad, or another pointing instrument), and/or tactile input components (for example, a physical button or a touch screen that provides location and/or force of touches or touch gestures) configured for receiving various user inputs, such as user commands and/or selections. In some examples, the I/O components850may include biometric components856, motion components858, environmental components860, and/or position components862, among a wide array of other physical sensor components. The biometric components856may include, for example, components to detect body expressions (for example, facial expressions, vocal expressions, hand or body gestures, or eye tracking), measure biosignals (for example, heart rate or brain waves), and identify a person (for example, via voice-, retina-, fingerprint-, and/or facial-based identification). The motion components858may include, for example, acceleration sensors (for example, an accelerometer) and rotation sensors (for example, a gyroscope). The environmental components860may include, for example, illumination sensors, temperature sensors, humidity sensors, pressure sensors (for example, a barometer), acoustic sensors (for example, a microphone used to detect ambient noise), proximity sensors (for example, infrared sensing of nearby objects), and/or other components that may provide indications, measurements, or signals corresponding to a surrounding physical environment. 
The position components862may include, for example, location sensors (for example, a Global Position System (GPS) receiver), altitude sensors (for example, an air pressure sensor from which altitude may be derived), and/or orientation sensors (for example, magnetometers). The I/O components850may include communication components864, implementing a wide variety of technologies operable to couple the machine800to network(s)870and/or device(s)880via respective communicative couplings872and882. The communication components864may include one or more network interface components or other suitable devices to interface with the network(s)870. The communication components864may include, for example, components adapted to provide wired communication, wireless communication, cellular communication, Near Field Communication (NFC), Bluetooth communication, Wi-Fi, and/or communication via other modalities. The device(s)880may include other machines or various peripheral devices (for example, coupled via USB). In some examples, the communication components864may detect identifiers or include components adapted to detect identifiers. For example, the communication components864may include Radio Frequency Identification (RFID) tag readers, NFC detectors, optical sensors (for example, one- or multi-dimensional bar codes, or other optical codes), and/or acoustic detectors (for example, microphones to identify tagged audio signals). In some examples, location information may be determined based on information from the communication components862, such as, but not limited to, geo-location via Internet Protocol (IP) address, location via Wi-Fi, cellular, NFC, Bluetooth, or other wireless station identification and/or signal triangulation. While various embodiments have been described, the description is intended to be exemplary, rather than limiting, and it is understood that many more embodiments and implementations are possible that are within the scope of the embodiments. Although many possible combinations of features are shown in the accompanying figures and discussed in this detailed description, many other combinations of the disclosed features are possible. Any feature of any embodiment may be used in combination with or substituted for any other feature or element in any other embodiment unless specifically restricted. Therefore, it will be understood that any of the features shown and/or discussed in the present disclosure may be implemented together in any suitable combination. Accordingly, the embodiments are not to be restricted except in light of the attached claims and their equivalents. Also, various modifications and changes may be made within the scope of the attached claims. While the foregoing has described what are considered to be the best mode and/or other examples, it is understood that various modifications may be made therein and that the subject matter disclosed herein may be implemented in various forms and examples, and that the teachings may be applied in numerous applications, only some of which have been described herein. It is intended by the following claims to claim any and all applications, modifications and variations that fall within the true scope of the present teachings. Unless otherwise stated, all measurements, values, ratings, positions, magnitudes, sizes, and other specifications that are set forth in this specification, including in the claims that follow, are approximate, not exact. 
They are intended to have a reasonable range that is consistent with the functions to which they relate and with what is customary in the art to which they pertain. The scope of protection is limited solely by the claims that now follow. That scope is intended and should be interpreted to be as broad as is consistent with the ordinary meaning of the language that is used in the claims when interpreted in light of this specification and the prosecution history that follows and to encompass all structural and functional equivalents. Notwithstanding, none of the claims are intended to embrace subject matter that fails to satisfy the requirement of Sections101,102, or103of the Patent Act, nor should they be interpreted in such a way. Any unintended embracement of such subject matter is hereby disclaimed. Except as stated immediately above, nothing that has been stated or illustrated is intended or should be interpreted to cause a dedication of any component, step, feature, object, benefit, advantage, or equivalent to the public, regardless of whether it is or is not recited in the claims. It will be understood that the terms and expressions used herein have the ordinary meaning as is accorded to such terms and expressions with respect to their corresponding respective areas of inquiry and study except where specific meanings have otherwise been set forth herein. Relational terms such as first and second and the like may be used solely to distinguish one entity or action from another without necessarily requiring or implying any actual such relationship or order between such entities or actions. The terms “comprises,” “comprising,” or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. An element proceeded by “a” or “an” does not, without further constraints, preclude the existence of additional identical elements in the process, method, article, or apparatus that comprises the element. The Abstract of the Disclosure is provided to allow the reader to quickly ascertain the nature of the technical disclosure. It is submitted with the understanding that it will not be used to interpret or limit the scope or meaning of the claims. In addition, in the foregoing Detailed Description, it can be seen that various features are grouped together in various examples for the purpose of streamlining the disclosure. This method of disclosure is not to be interpreted as reflecting an intention that the claims require more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive subject matter lies in less than all features of a single disclosed example. Thus, the following claims are hereby incorporated into the Detailed Description, with each claim standing on its own as a separately claimed subject matter. | 89,160 |
11943189 | DETAILED DESCRIPTION OF THE INVENTION The disclosure is provided in order to enable a person having ordinary skill in the art to practice the invention. Exemplary embodiments herein are provided only for illustrative purposes and various modifications will be readily apparent to persons skilled in the art. The general principles defined herein may be applied to other embodiments and applications without departing from the scope of the invention. The terminology and phraseology used herein is for the purpose of describing exemplary embodiments and should not be considered limiting. Thus, the present invention is to be accorded the widest scope encompassing numerous alternatives, modifications and equivalents consistent with the principles and features disclosed herein. For purposes of clarity, details relating to technical material that is known in the technical fields related to the invention have been briefly described or omitted so as not to unnecessarily obscure the present invention. The present invention would now be discussed in context of embodiments as illustrated in the accompanying drawings. FIG.1is a detailed block diagram of a system100for creating an intelligent memory and providing contextual intelligent recommendations, in accordance with an embodiment of the present invention. Referring toFIG.1, the system100comprises a communication source unit110, an intelligent memory generation subsystem102, a second repository124and one or more information units (1-n)132. The communication source unit110, the second repository124and the information unit132are connected to the intelligent memory generation subsystem102via a communication channel (not shown). The communication channel (not shown) may include, but is not limited to, a physical transmission medium, such as, a wire, or a logical connection over a multiplexed medium, such as, a radio channel in telecommunications and computer networking. Examples of radio channel in telecommunications and computer networking may include, but are not limited to, a local area network (LAN), a metropolitan area network (MAN) and a wide area network (WAN). In an embodiment of the present invention, the system100is a self-optimization system configured to employ one or more cognitive techniques for extracting and processing data associated with electronic communications. In an exemplary embodiment of the present invention, the cognitive techniques may include, but are not limited to, artificial intelligence techniques and machine learning techniques. The system100provides an optimized interlinking of conversations data in electronic communications for creating an intelligent memory and providing contextual intelligent recommendations. In an exemplary embodiment of the present invention, the intelligent memory is representative of an institutional memory comprising one or more data types associated with a group of people in an organization. The one or more data types correspond to facts, concepts, experiences and knowledge associated with the group of people which are collected, processed and enhanced to provide inheritance of electronic communications data, contextual recommendations and insights. The system100is configured to determine intelligence from data associated with electronic communications. For example, the system100may process and extract electronic communications data present in an email mailbox of an employee of an organization for determining intelligence associated with such data. 
In an embodiment of the present invention, the subsystem102comprises an intelligent memory generation engine104(the engine104), a processor106and a memory108. The engine104includes various units, which operate in conjunction with each other for generating intelligent memory and providing contextual intelligent recommendations. The various units of the engine104are operated via the processor106specifically programmed to execute instructions stored in the memory108for executing respective functionality of the units of the engine104, in accordance with various embodiments of the present invention. In an embodiment of the present invention, the subsystem102may be implemented in a cloud computing architecture in which data, applications, services, and other resources are stored and delivered through shared data-centers. In an exemplary embodiment of the present invention, the functionalities of the subsystem102are delivered to a user as Software as a Service (SaaS) or Platform as a Service (Paas) over a communication network. In another embodiment of the present invention, the subsystem102may be implemented as a client-server architecture. In this embodiment of the present invention, a client terminal accesses a server hosting the subsystem102over a communication network. The client terminals may include but are not limited to a smart phone, a computer, a tablet, a Graphical User Interface (GUI), an Application Programming Interface (API), microcomputer or any other wired or wireless terminal. The server may be a centralized or a decentralized server. In an embodiment of the present invention, the engine104comprises a scheduler queue unit112, a parsing unit114, a Natural Language Processing (NLP) unit116, a first repository118, a visualization unit120, a record storage unit122, a recommendation unit126, a reporting database128and an analytics unit130. In an embodiment of the present invention, the first repository118comprises data associated with at least one or more past users, who are no longer associated with the organization and one or more present users.FIG.2illustrates a detailed block diagram of the first repository202(118,FIG.1). The first repository202comprises a first sub-repository204, a repository synchronization (synch) scheduler unit208, an extraction service unit210, a comparison service unit212. Further, the first sub-repository204comprises an active user list unit214, a storage unit216and an exclusion list unit218. In an exemplary embodiment of the present invention, the first repository202is configured to operate on at least one of, but is not limited to, a Not Only Structured Query Language (NoSQL) design, a Lightweight Directory Access Protocol (LDAP) or a Graph database. In an embodiment of the present invention, the extraction service unit210of the first repository202is connected to an organization directory unit206(i.e. the second repository124,FIG.1). The organization directory unit206is associated with user data in an organization and is updated based on current user data as well as data associated with users joining and leaving the organization. In an embodiment of the present invention, the extraction service unit210connects and synchronizes (synch) with the organization directory unit206at pre-determined intervals for determining any changes in the user data and subsequently extracting the user data for extracting the electronic communications data associated with the user data. 
The repository synch scheduler unit208is configured to communicate with the extraction service unit210for controlling and scheduling the user data extraction from the organization directory unit206. In an embodiment of the present invention, the repository synch scheduler unit208is configured to set a time window for extraction of count data associated with the user data from the organization directory unit206in a pre-defined time-period and frequency. Further, the count data present in the organization directory unit206is determined based on, at least, users (i.e. the employees, contractors, etc.) that have left the organization, present users in the organization, or new users that have joined the organization on a previous day. Further, based on the count data, users that are determined to be missing since the previous sync are flagged as ‘Not Active’ (NA) by the extraction service unit210. Extraction of data corresponding to such users is not carried out from that day onwards. Existing or new users are flagged as ‘active’ by the extraction service unit210. In an embodiment of the present invention, the extracted user data from the extraction service unit210is transmitted to the comparison service unit212for comparing changes in user data since the last synch. In an embodiment of the present invention, the extraction service unit210receives user data from the organization directory unit206with all data fields. The comparison service unit212makes a differential comparison of each relevant field (i.e. fields mentioned in Table 1) for determining changes. In the event a change is determined in a data field from the information received from the organization directory unit206as compared to a corresponding data field in the storage unit216(e.g. a change in manager, geo location, designation, etc.), then the comparison service unit212modifies that data field in the storage unit216to ensure consistency between the user data present in the storage unit216and the user data present in the organization directory unit206. In an embodiment of the present invention, based on an extracted list of active users, a synch mechanism of the repository synch scheduler unit208configured within the first repository118sets the appropriate time zone for each user.FIG.6is a flowchart illustrating the setting of time zone for each user in the active user list unit214(FIG.2) of the first repository202(FIG.2). As illustrated in FIG.6, the repository synch scheduler unit208performs the extraction at predetermined time intervals. This aids in updating the time zone data associated with the user data stored in the first repository202. Further, if the user has not updated his/her preferred time zone, then the repository synch scheduler unit208uses the messaging source's time zone setting as the preferred time zone for that user. In an embodiment of the present invention, the extraction service unit210is further configured to extract the geographical location of each user associated with the electronic communications data. The extraction of geographical location aids in scheduling extraction by the scheduler queue unit112for a particular user's communications data only when a previous calendar day is over in that geography for that particular user. An extraction process flow carried out by the scheduler queue unit112,302for user data selection is illustrated inFIG.3.FIG.3depicts user data selection by the scheduler queue unit112,302from the first repository202.
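The differential synchronization described above can be summarized in a short sketch. The following Python fragment is illustrative only: the field names, the in-memory dictionaries standing in for the organization directory unit206and the storage unit216, and the boolean 'active' flag are assumptions drawn from Table 1 and the surrounding description, not the actual implementation.

```python
# Illustrative sketch of the directory sync described above; field names are assumed.
RELEVANT_FIELDS = ["manager_email", "location", "designation", "time_zone"]

def sync_users(directory_users, storage_unit):
    """Differentially synchronize the first repository with the organization directory.

    directory_users: dict of user_id -> field dict pulled by the extraction service.
    storage_unit:    dict of user_id -> previously stored field dict (Table 1 layout).
    """
    # Users missing since the last sync are flagged 'Not Active'; extraction stops for them.
    for user_id, record in storage_unit.items():
        if user_id not in directory_users:
            record["active"] = False

    for user_id, fields in directory_users.items():
        stored = storage_unit.setdefault(user_id, {})
        stored["active"] = True  # existing or newly joined users stay 'active'
        # Differential comparison of each relevant field; only changed values are written back.
        for field in RELEVANT_FIELDS:
            if stored.get(field) != fields.get(field):
                stored[field] = fields.get(field)
    return storage_unit
```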
In an exemplary embodiment of the present invention, the extraction service unit210is configured to assemble the extracted user data in a format as illustrated in Table 1: TABLE 1: User ID | User Name | User email | Manager Name | Manager email | Location | Last Extraction Date | Last Extraction Month | View Rights to: | Active? Further, the user data stored in the storage unit216is classified into one or more attributes, such as, but is not limited to, e-mail attributes, time attributes, geographical attributes and user attributes. E-mail attributes may include, but are not limited to, user name, given name, surname, display name and e-mail. Time attributes include, but are not limited to, user update, created date and time, last update time and previous start time. Geographical attributes include, but are not limited to, user location, country, time zone and preferred time zone. User attributes include, but are not limited to, active user, view Access Control List privileges and manager of the user. In an exemplary embodiment of the present invention, the extraction service unit210, during each sync cycle with the organization directory unit206, is configured to verify and update manager related information for each user (i.e. employee) and captures a hierarchy between a manager and the user. The manager related information is stored in the storage unit216and used for providing authorization for viewing conversation threads associated with relevant criteria corresponding to the electronic communications of users who have left the organization or have been reassigned from the team, via the visualization unit120using a visualization portal. Further, the storage unit216comprises an Access Control List (ACL) which is configured to provide rights to defined users for providing ‘view’ authorization to view conversation data of existing users or users who have left the organization. The visualization unit120provides an administration functionality via which a manager may carry out modifications in the ‘view’ authorization of the user who has left the organization and provides the existing or the new user with ‘view’ authorization for viewing conversation threads associated with the electronic communications. In an embodiment of the present invention, the processed user data from the comparison service unit212is transmitted to the storage unit216. The storage unit216is configured to store the processed user data associated with various users. In an embodiment of the present invention, each user data entry in the storage unit216has a ‘view Access Control List (ACL)’ field value associated with it. The ‘view ACL’ field value provides for inheriting electronic communications data by a user who is not originally part of the electronic communications data. Further, the default value of the ‘view ACL’ field is the same as the user ID, so that by default a user has access only to their own electronic communications data. The storage unit216is configured to provide access for defined users (based on organization defined user hierarchy and permissions) to change the ‘view ACL’ field value for providing ‘view’ authorization to the existing or new users in the organization for viewing particular conversations associated with the electronic communications of the user who has left the organization (i.e. a former user) or has been reassigned. The ‘view’ authorization is provided to the existing or new users based on, but is not limited to, account name, territory and geographic location.
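A minimal sketch of the ‘view ACL’ behaviour described above is given below, assuming a user record shaped roughly like Table 1. The record layout, the list-valued view_acl field and the manager-only permission check are illustrative assumptions; the text only specifies that the field defaults to the user's own ID and that defined users may modify it.

```python
def new_user_record(user_id, name, email, manager_email, location):
    """Hypothetical user record following Table 1; 'view_acl' defaults to the user's own ID."""
    return {
        "user_id": user_id,
        "user_name": name,
        "user_email": email,
        "manager_email": manager_email,
        "location": location,
        "view_acl": [user_id],  # default: a user may view only their own communications
        "active": True,
    }

def grant_view_access(storage_unit, former_user_id, new_user_id, granted_by):
    """Add a new or existing user to a former user's 'view ACL', as done via the portal."""
    record = storage_unit[former_user_id]
    if granted_by != record["manager_email"]:
        raise PermissionError("only defined users may change the view ACL")
    if new_user_id not in record["view_acl"]:
        record["view_acl"].append(new_user_id)
    return record["view_acl"]
```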
Advantageously, ‘view’ authorization aids in better access of a former user's conversation associated with the electronic communication at a granular level. Further, ‘view’ authorization aids in providing the existing or new users with the entire conversation thread associated with the electronic communication, instead of just forwarding the conversation threads to the existing or new user. Yet further, ‘view’ authorization aids in providing access to attachment revision history for a particular inherited conversation thread associated with the electronic communication such that the new user or the reassigned user has rights to view only the specific electronic communications data, based on specific criteria, of the user who has left the organization and not his/her personal electronic communications data. In an embodiment of the present invention, the active user data is stored in the active user list unit214in the first repository202for determining which users are currently active and whose communication data needs to be extracted by the scheduler queue unit112,302for further processing. A user is considered active if the user is listed as currently employed with the organization or has an active entry in the organization directory unit206. Further, the exclusion list unit218is configured to maintain an exclusion list of users (as defined by the organization) whose communication data needs to be excluded from being extracted by the scheduler queue unit112,302. In an embodiment of the present invention, the scheduler queue unit112,302is configured to communicate with the exclusion list unit218, prior to scheduling the user communication data extraction from the communication source unit110, for carrying out a check and determining whether the user is in the exclusion list. In an exemplary embodiment of the present invention, referring toFIG.3, if the user data is in the exclusion list or the user data is not currently active or the user is not in the current time zone, then the scheduler queue unit112,302does not schedule extraction of the user's communication data. In another exemplary embodiment of the present invention, if the user data is not in the exclusion list and the user data is currently active and the user data is in the current time zone, then the user data is transmitted for queuing in the scheduler queue unit112,302. In another exemplary embodiment of the present invention, the scheduler queue unit112,302uses a distributed event streaming platform to ensure that all users scheduled for electronic communication data extraction are maintained in a secure and scalable queue. Referring toFIG.4, this queue aids the message parser406within the parsing unit114(FIG.1) to process only the selected users in the queue based on the order in which the users were added to the queue.FIG.4is explained in detail in the later part of the specification. Advantageously, this ensures the processing of user's electronic communication data beyond normal processing times for users in a particular time zone that may present more than average number of active users. Further, advantageously, the user notification queue404is scalable and is executed by one or more queue operations such as, but not limited to, Java Management Extensions (JMX) and Kafka. In an embodiment of the present invention, the scheduler queue unit112is configured to organize users for extraction of their electronic communications data present in the communication source unit110in a defined order. 
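The scheduling checks described above (exclusion list, active flag, time-zone window) can be captured in a short sketch. This is illustrative only: the plain Python list stands in for the distributed event streaming queue (e.g. Kafka), and the simple time-zone equality test approximates the rule that a user's data is extracted only once the previous calendar day has ended in that user's geography.

```python
def schedule_extractions(users, exclusion_list, current_time_zone, notification_queue):
    """Queue users for communication-data extraction, mirroring the checks of FIG.3."""
    for user in users:
        if user["user_id"] in exclusion_list:
            continue  # organization-defined exclusion list: never extracted
        if not user.get("active", False):
            continue  # 'Not Active' users are skipped
        if user.get("time_zone") != current_time_zone:
            continue  # wait for this user's time-zone window
        notification_queue.append(user["user_id"])  # stand-in for the Kafka/JMX queue
    return notification_queue
```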
In an embodiment of the present invention, the engine104is configured to employ authentication techniques for authenticating and authorizing the parsing unit114for connecting to the communication source unit110for accessing and extracting electronic communications data present in the communication source unit110. In an exemplary embodiment of the present invention, if a Single Sign-On (SSO) functionality is implemented on the communication source unit110, then the parsing unit114uses one or more defined SSO tokens for authentication and providing read access to the electronic communications data present in the communication source unit110. In an embodiment of the present invention, the scheduler queue unit112is configured to select active users in the first repository118in a particular time zone at pre-defined time intervals (e.g. 30-minute intervals) so as to cover all time zones and subsequently extract electronic communications data (e.g. e-mails) sent or received during the previous calendar day to the communication source unit110(e.g. data in mailboxes) in that time zone. Thereafter, the scheduler queue unit112queues the selected user data, using an event notification queue, for sending to the parsing unit114for extraction of electronic communications data for the scheduled users. FIG.4illustrates an electronic communications data extraction process from the communication source unit110(FIG.1) for using the extracted data in a parsing operation and carrying out a deduplication process. In an embodiment of the present invention, the scheduler queue unit112(FIG.1) comprises a parser scheduler402for scheduling extraction of active user data from the first repository118and extracting electronic communications data associated with the active user data from the communication source unit110.FIG.7is a flowchart illustrating electronic communications data extraction by the scheduler queue unit (112). The parser scheduler402is invoked at a pre-defined time interval (e.g. every 30 minutes) and performs a check whether users are available for that specific time zone for electronic communications data extraction. The parser scheduler402further performs a check to determine whether users have been added in the exclusion list unit218(FIG.2) or if the users have been classified as inactive since the last extraction before processing the user data for extraction. In an embodiment of the present invention, the parsing unit114is configured to extract electronic communications data associated with the users queued for extraction by the scheduler queue unit112. In an embodiment of the present invention, the parsing unit114checks for duplicate data associated with the electronic communications data by carrying out the deduplication process. The parsing unit114extracts electronic communications data for processing by selected users in a user notification queue404in the scheduler queue unit112. The parser unit114parses each individual user communication data and compares the individual electronic communications data with previously parsed communication information stored in the record storage unit122(FIG.1) based on unique identifiers (for e.g., message ID, time stamp, origin, etc.). In the event an electronic communication has already been processed by the parser unit114, then the record storage unit122returns a valid unique identifier as evidence for the processed and stored communications data. 
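A rough illustration of the deduplication check performed before parsing is shown below. The composite key of message ID, timestamp and origin follows the unique identifiers mentioned above; the dictionary standing in for the record storage unit122and the placeholder custom record are assumptions.

```python
def record_key(message):
    """Composite identifier built from the unique fields mentioned above."""
    return (message["message_id"], message["timestamp"], message["origin"])

def parse_if_new(message, record_storage):
    """Skip communications that were already parsed; otherwise store a new custom record."""
    key = record_key(message)
    if key in record_storage:
        return record_storage[key]  # already processed: return the existing record
    record = {"status": "parsed", "source": message}  # placeholder for the custom record
    record_storage[key] = record
    return record
```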
The parser unit114stops any further processing of that communications data and proceeds to the next communications data. Further, the parsing unit114is configured to process the user data based on a multithreaded implementation of message parsers406for the active users in the scheduler queue unit112for extracting the electronic communications. Advantageously, the message parsers406are scalable and distributed parsers and process large amount of user data concurrently. Further, each message parser406is multithreaded and processes the user data based on available computing capacity. Subsequent to the extraction of the electronic communication, the parsing unit114is configured to provide the extracted electronic communications data to a NLP event notification service408of the NLP unit116. In an exemplary embodiment of the present invention, the NLP event notification service408is executed by the queue operations such as, but are not limited to, the Java Management Extensions (JMX) and the Kafka. In an embodiment of the present invention, the parsing unit114is configured to continuously track date and time-period of a last extraction of the electronic communication data from the communication source unit110. Tracking of the electronic communication data aids in carrying out a phased and consistent extraction. In an embodiment of the present invention, the extracted active user data is received by the user notification queue404of the scheduler queue unit112. The message processing thread of message parser406selects a user from the user notification queue404for processing, as depicted by the flowchart illustrated inFIG.8. The message processing thread of the message parser (406) performs a check to determine whether the user data is being processed for the first time. Further, if the user data is determined to be processed for the first time, then it is checked whether that user data associated with the electronic communications data is an existing user data and whether the user data requires historical data extraction. In an embodiment of the present invention, in the initial implementation, the message parser406is configured to extract the historical electronic communications data based on a pre-configured maximum age of electronic communications data (e.g. electronic communications data from last two years). In this embodiment of the present invention, the entire electronic communication data is not extracted in one day. The electronic communications data is only extracted for a predefined number of days. Further, each day of ingestion extracts the electronic communications data for a predefined number of past days. Therefore, advantageously, defining predefined number of days for electronic communications data extraction ensures consistency of electronic data extraction for all users with accuracy. In an embodiment of the present invention, the electronic communications data for each user is parsed by the message parser406from the last parsed date to the previous day of extraction of the electronic communications data. Further, for past electronic communications data, if it is determined that the number of days of electronic communications data to be extracted is greater than user joining date or exceeding the date of electronic communications data extraction for a maximum time period (e.g. 2 years), then the last electronic communications data to be processed is taken from the date of joining of the user or from a maximum time period. 
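The historical extraction window logic can be expressed as a small helper, sketched below under assumed parameter values (a 60-day cycle and a two-year maximum age, taken from the examples in the text). The function name, the date handling and the backward-walking strategy are illustrative, not the actual implementation.

```python
from datetime import date, timedelta

def extraction_window(oldest_parsed, joining_date, today=None,
                      cycle_days=60, max_age_days=730):
    """Compute the next historical extraction window for one user.

    oldest_parsed: oldest date already extracted for this user (None on the first run).
    The window walks backwards one cycle at a time and never extends past the user's
    joining date or the configured maximum age, whichever is more recent.
    """
    today = today or date.today()
    oldest_allowed = max(joining_date, today - timedelta(days=max_age_days))
    window_end = oldest_parsed or (today - timedelta(days=1))
    window_start = max(oldest_allowed, window_end - timedelta(days=cycle_days))
    finished = window_start == oldest_allowed  # last cycle reached the defined parameter
    return window_start, window_end, finished
```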
For example, in the initial implementation, if a 60-day cycle is implemented for historical electronic communications data extraction and the last cycle reaches a defined parameter, such as, but not limited to, an employee date of joining or the organization defining maximum extraction period of say 2 years, then the extraction of electronic communications data for that user stops and the defined parameter is marked as a start date from which the electronic communications data associated with the user is available. FIG.5illustrates a pipeline diagram for executing the parsing unit114(FIG.1). In an embodiment of the present invention, the message parser502in the parsing unit114is configured to extract electronic communications data for selected users for processing from the user notification queue506provided by the parser scheduler504. The message parser502performs the electronic communications data extraction based on at least the time zone associated with the user, the time zone preferred by the user or time zone set by the organization's administrator for the user. The parser scheduler504is based on an active-passive architecture and is connected to a first repository510for accessing user data for carrying out electronic communication extraction for the particular user. The extracted list of active users is provided to a user notification queue506present in the scheduler queue unit112(FIG.1). Further, each thread of the message parser502in the parsing unit114(FIG.1) is configured to select an active user from the user notification queue506and the electronic communication data from communication source unit110associated with the selected active user is extracted. Further, subsequent to completion of processing of each electronic communications data associated with the selected active user, the processed electronic communications data is converted to a custom record format by the message parser502for storage in the record storage unit122. In an exemplary embodiment of the present invention, storage of the processed electronic communications data in the record storage unit122(FIG.1) is carried out using at least a custom JavaScript Object Notation (JSON) format, object store format, or a Graph format. The custom record format is then added to the notification NLP queue508for processing by the NLP unit116(FIG.1) and subsequently stored in a sub-record storage unit1004(FIG.10), as elaborated later in the specification. Referring toFIG.4, in an embodiment of the present invention, the message parser406analyses each electronic communications data in the communication source unit110and determines whether the electronic communications data is at least a calendar invite or a regular message.FIG.9is a flowchart illustrating a process for determining whether the electronic communications data is at least a calendar invite or a regular message. Further, if the electronic communications data is a calendar invite, then the electronic communications data is processed in the form of a calendar to determine an invite flow by identifying one or more variations associated with the calendar such as, but not limited to, details of a calendar invite (such as, sender, time, duration, attendees, communication channel, etc.), a calendar response (such as, accept, decline or tentative) and a calendar cancellation. Further, if the electronic communications data is the regular message, then the message parser406processes the body and attachment (if any) of the regular message. 
The attachment is transmitted to the record storage unit122(FIG.1) for storage after carrying out a deduplication process. In an exemplary embodiment of the present invention, the attachment deduplication process is carried out based on a SHA-256 hash or a similar technique. Further, if the attachment already exists in the record storage unit122(FIG.1), then a key of the attachment already available in the record storage unit122is retrieved and subsequently updated in the custom record. In an embodiment of the present invention, the message parser406(FIG.4) is configured to perform a check for determining duplication of the electronic communications data, i.e. whether the electronic communications data has been processed or not, by checking whether an electronic communications data ID exists in the record storage unit122(FIG.1) or not. If the electronic communications data ID exists in the record storage unit122(FIG.1), then the electronic communications data is not processed or stored in the record storage unit122(FIG.1). Therefore, the message parser406(FIG.4) is configured to process and store only unique electronic communications data in the record storage unit122(FIG.1) using a custom format. Identical extracted electronic communications data, which may be associated with multiple users, are identified so that duplicate electronic communications data are rejected from being stored in the record storage unit122(FIG.1). This significantly reduces the storage space required for the record storage unit122. In an exemplary embodiment of the present invention, the duplicate parsed header and subject, duplicate parsed body of the conversations data associated with the electronic communications data and the duplicate attached documents, referred to as duplicate data, are analyzed and removed by the parsing unit114based on a deduplication (DeDupe) check process prior to transmission to the record storage unit122. Further, if a response to the electronic communications data is identified as having references to inline electronic communications data replies, then the entire electronic communications data is captured in the custom format. Further, each processed electronic communications data in custom format is transmitted to the notification NLP queue508(FIG.5) for further processing and subsequently stored in the sub-record storage unit1004(FIG.10) of the record storage unit1002(FIG.10). In an embodiment of the present invention, the NLP unit116is configured to carry out a keyword tagging operation on the custom record formatted electronic communication data in the notification NLP queue508(FIG.5). The NLP unit116is configured to parse the conversation data associated with the electronic communications data and search for one or more relevant keywords in the conversation data associated with the electronic communications data. Further, tagging of the keywords in the electronic communications data is based on a pre-generated keywords map, referred to as a pilot dictionary, to process such keywords. The tagging of keywords aids in determining the relevance of each conversation data associated with the electronic communications data, possible transitions in a particular conversation data associated with the electronic communications, the organization specific conversation terms and the subsequent steps that may need to be taken. In an embodiment of the present invention, the NLP unit116is configured to receive the parsed data as input from the parsing unit114for extraction of the keywords.
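Returning to the attachment handling described at the start of this passage, the hash-based deduplication can be sketched as follows. The in-memory dictionary stands in for the record storage unit122; only the SHA-256 hashing step is taken from the text.

```python
import hashlib

def store_attachment(attachment_bytes, attachment_store):
    """Deduplicate an attachment by SHA-256 content hash before storing it."""
    digest = hashlib.sha256(attachment_bytes).hexdigest()
    if digest in attachment_store:
        return digest  # attachment already stored: reuse the existing key in the custom record
    attachment_store[digest] = attachment_bytes
    return digest
```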
The parsed data from the parsing unit114is provided to the NLP unit116based on the NLP event notification queue508. In an exemplary embodiment of the present invention, the NLP unit116is configured with the pilot dictionary comprising natural language words and phrases which are of relevance to the organization. The pilot dictionary is generated based on a learning process during the initial deployment phase by the NLP unit116. The NLP unit116is configured to use the pilot dictionary to parse the initial electronic communications data in order to develop a Natural Language (NL) vocabulary which is specific to the organization. Further, the developed NL vocabulary is used to analyze the natural language associated with the future electronic communications. The NL vocabulary is fine-tuned based on one or more NL datasets associated with initial electronic communications data of the organization. In an embodiment of the present invention, the development of the NL vocabulary is an iterative process performed during the initial deployment of the solution. The NLP unit116(FIG.1) is configured to carry out a first iteration for identifying electronic communications data stored within the record storage unit122(FIG.1) in order to train the pilot dictionary data set from the stored electronic communications data for developing the NL vocabulary. Further, the NLP unit116, in subsequent iterations, is configured to establish a relationship with the stored electronic communications data based on 1-nthlevel of transitivity. The iterations are customizable based on one or more development parameters associated with the NL vocabulary. In an embodiment of the present invention, subsequent to development of the NL vocabulary, the NLP unit116is configured to further parse the stored electronic communications data in the record storage unit122(FIG.1) for converting the stored electronic communications data into one or more detailed electronic communications data records. The detailed electronic communications data records are stored in the sub-record storage unit1004of the record storage unit1002. In an embodiment of the present invention, the developed NL vocabulary is employed for the incoming electronic communications data for identifying the keywords and saving in the detailed electronic communications data records. The stored keywords are used by the analytics unit1014(130,FIG.1) for further processing. Further, the identified keywords are appended to the respective detailed electronic communications data records stored in the sub-record storage unit1004of the record storage unit1002. In an embodiment of the present invention, each parsed electronic communications data is analyzed by the parsing unit114to quantify the relevance to the organization. In an exemplary embodiment of the present invention, an initial dictionary (i.e. the pilot dictionary) is used by the parsing unit114to ascertain the context and relevance of each conversation that is parsed by the parsing unit114for the organization. The context and relevance are determined based on weights, repetitions and functionality associated with the electronic communications. During initial deployment of the pilot dictionary, a sample set (for example, 500,000 communications) is split into a 4:1 ratio for dictionary training and testing. 
Further, during the initial deployment of the pilot dictionary, the output of the parser unit114for the sample set is directly stored, in a custom format, in the record storage unit122instead of being sent to the NLP event notification queue508(FIG.5). Each communication in this initial communication set is analyzed for context and relevance with the pilot dictionary. Each keyword is determined as relevant based on the presence of one or more words from the pilot dictionary and each relevant word is further added to the pilot dictionary for use in subsequent conversation analysis. This process aids in enhancing and improving the pilot dictionary for the organization as well as learning the specific terms used by the organization in the course of operation. Further, keywords corresponding to an entity associated with the electronic communications data are stored as a first tag for each electronics communications data in a custom record format in the record storage unit122. Examples of keywords include, but are not limited to, sender name, customer domain, and timestamp. Advantageously, tags aid in understanding characteristics such as, but not limited to, user behavior, business health, responsiveness and customer interaction. The tags are transmitted to the analytics unit130for analysis and to the visualization unit120for enhanced access control and search capabilities. In an embodiment of the present invention, subsequent to analysis of the initial communication set, the pilot dictionary is considered as the main dictionary for all further analysis for determining context and relevance from all subsequent electronic communications. Each electronic communications data from the initial electronic communications data set is analyzed using the pilot dictionary and each keyword corresponding to the analyzed electronic communications data is stored as a second tag in a custom record format for that electronic communication in the record storage unit122. Further, all subsequent electronic communications (beyond the initial communication set) are processed by the parser unit114and passed to the NLP event notification queue508(FIG.5) for further processing. In an embodiment of the present invention, the NLP unit116is further configured to pre-process the electronic communications data by cleaning the electronic communications data to reduce noise in the electronic communications data, tokenize the content in the electronic communications data, implement a lemmatization operation for reducing the words to their root form and selectively learn from the electronic communications data for extracting contextual and relevant data from the electronic communications data. The pre-processing process aids in refining understanding of language associated with the electronic communications data by the system100and improving the pilot dictionary. In an embodiment of the present invention, the NLP unit116receives parsed electronic communications data from the NLP notification queue508(FIG.5). The electronic communications data received is cleaned by the NLP unit116to remove standard communication artifacts such as, but not limited to, punctuations, and other content that may lack relevance. The cleaned data is further tokenized by the NLP unit116. The NLP unit116breaks down sentence structures into elements such as, but not limited to, words, characters, sub-words. The tokenization operation relates to separating the cleaned data into smaller elements, referred as tokens. 
The tokens may be associated with, but are not limited to, words, characters and sub-words. The tokenization operation may, therefore, be classified as word, character and sub-word tokenization operation. The NLP unit116further removes stop words such as, but not limited to, articles, etc. which are a list of pre-defined words that are used for stringing the conversations, associated with the electronic communications data, together which do not have relevance from the conversation perspective. Advantageously, removal of stop words aids in creating a cohesive first grouping of keywords present in the cleaned data in order to understand the context of the conversations. The remaining words, phrases are analyzed with reference to the pilot dictionary to ascertain relevance of the communication to the organization. Keywords found relevant are added to the pilot dictionary as part of the dictionary improvement process. In an embodiment of the present invention, the NLP unit116is further configured to implement the lemmatization operation for reducing the words to their root form (i.e. base word). Advantageously, lemmatization operation reduces the words associated with the tokens to their base word, thereby reducing the inflected words and ensuring that the base word belongs to the conversation. Further, in the lemmatization operation the root word is referred to as lemma. The lemma is at least a canonical form, a dictionary form or a citation form of a set of words. The processed keyword information is then further processed by the NLP unit116. In an embodiment of the present invention, the NLP unit116analyzes the nature of the entity associated with the electronic communications including context and relevance based on keyword recurrence, weights, repetitions and functionality associated with the electronic communications. The NLP unit116is configured to analyze the pre-processed parsed data in order to carry out a recognition operation between entities associated with the electronic communications data and context present in the electronic communications data associated with the parsed electronic communications data for distinguishing between the entities and the context. The recognition operation between the entities and the context in the electronic communications data is carried out based on a semantic analysis of the parsed electronic communications data. The recognition operation for context is further carried out based on assimilated learning techniques in addition to carrying out of the semantic operation. In an embodiment of the present invention, the NLP unit116is configured to generate a text and a hypothesis semantic graph, which is a structured linguistic representation comprising information related to semantic electronic communications data. The semantic graphs are generated based on typed dependency graphs, in which each node is a word and labelled edges represent grammatical relations between the words. A semantic graph for a sentence contains a node for each word of the sentence and each node is embedded with metadata generated by a toolkit of linguistic processing tools, including, but are not limited to, word lemmas, parts of speech and named entity recognition. This data is processed to improve the pilot dictionary as well as to improve the analytics capability of the analytics unit130. 
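The pre-processing and dictionary steps described above (cleaning, tokenization, stop-word removal, lemmatization and the pilot-dictionary relevance check) can be illustrated with a compact sketch. The stop-word list, the tiny lemma map and the relevance rule are placeholders; a real deployment would use a full lemmatizer and the organization-specific dictionary built during initial deployment.

```python
import re

STOP_WORDS = {"a", "an", "the", "and", "or", "for", "to", "of", "in", "is", "are"}
LEMMAS = {"proposals": "proposal", "meetings": "meeting", "sending": "send", "sent": "send"}

def preprocess(text):
    """Clean, tokenize, remove stop words and reduce tokens to a base form."""
    cleaned = re.sub(r"[^\w\s]", " ", text.lower())       # strip punctuation and artifacts
    tokens = [t for t in cleaned.split() if t not in STOP_WORDS]
    return [LEMMAS.get(t, t) for t in tokens]             # crude stand-in for lemmatization

def tag_relevance(tokens, pilot_dictionary):
    """Flag a communication as relevant when it contains pilot-dictionary words,
    and fold newly seen co-occurring terms back into the dictionary."""
    hits = [t for t in tokens if t in pilot_dictionary]
    if hits:
        pilot_dictionary.update(t for t in tokens if t.isalpha() and len(t) > 3)
    return bool(hits), hits

# Example: preprocess("Sending the proposals for the meetings!") -> ['send', 'proposal', 'meeting']
```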
In an embodiment of the present invention, the analytics unit130uses the electronic communications data to further determine and establish relationships with the other electronic communications data analyzed using machine learning algorithms such as, but not limited to, neural network techniques. In an embodiment of the present invention, the NLP unit116is configured to utilize output of the semantic operation and the assimilated learning techniques to further process the parsed electronic communications data, determine relationships between the tags associated with the parsed electronic communications data and thereafter generate a score for each relationship. Further, the NLP unit116is configured to communicate with the record storage unit122for storing the output of the semantic operation and assimilated learning techniques in the sub-record storage unit1004(FIG.10). Post analysis, the determined keywords, their corresponding lemma and the relevance based on keyword recurrence, weights, repetitions and functionality associated with the electronic communications are added to the custom record format of the corresponding electronic communications and stored in the record storage unit122. FIG.10illustrates a detailed block diagram of a record storage unit1002(122,FIG.1), in accordance with an embodiment of the present invention. The record storage unit1002comprises the sub-record storage unit1004and a recommendation tracking collection unit1006. The record storage unit1002is a central repository of all custom formatted records generated by the NLP Unit1012(116,FIG.1) associated with the organization. In an embodiment of the present invention, the sub-record storage unit1004of the record storage unit1002is configured to receive, as input, parsed electronic communications data in custom format related to one or more conversation parameters of the electronic communications data, comprising the parsed header, the subject and the parsed body of the conversations data associated with the electronic communications data, along with any attached documents, from the NLP unit1012(116,FIG.1) for storage and future retrieval. Further, the sub-record storage unit1004is configured to store the one or more conversation parameters associated with the electronic communications data. The sub-record storage unit1004is configured to store only a single copy of the data received from the parsing unit1008(114,FIG.1) as an electronic communications data object in order to avoid duplication. The record storage unit1002is configured to ensure the data is retained within the organization's computing environment and no data leaves the computing environment, thereby meeting the security and data compliance requirements of the organization. Further, no data is sent to any external organization or to a solution provider. In an embodiment of the present invention, the analytics unit130(FIG.1) is an artificial intelligence based component configured to determine behavioural patterns of a user associated with the electronic communications data stored in the sub-record storage unit1004(FIG.10) of the record storage unit122(1002,FIG.10). The analytics unit130determines behavioural patterns by carrying out analysis of the electronic communications data stored in the sub-record storage unit1004(FIG.10). The analytics unit130is further configured to communicate with the one or more information units132based on an Application Programming Interface (API) integration.
In various embodiments of the present invention, the analytics unit130is continuously trained for improved operation. Further, no data is sent to any external organization as well as a solution provider from the analytics unit130. In an embodiment of the present invention, the analytics unit130is configured to generate a multi-relational model for determining behavioural patterns of users associated with the electronic communications data stored in the sub-record storage unit1004(FIG.10). The analytics unit130processes electronic communications data based on user syntax and the electronic communications data object stored in the sub-record storage unit1004(FIG.10). The multi-relational model provides relationship of the electronic communications data with other electronic communications data stored in the sub-record storage unit1004(FIG.10). For example, a communication may be in the form of an email to a customer and a few colleagues with an attachment for a product sales proposal. Each email represents an independent entity with distinct relationships with multiple users including, but not limited to, one sender and multiple recipients, multiple attachments, one or more customers, an opportunity, etc. as well as distinct properties including, but not limited to, timestamp, subject, etc. Similarly, every single email conversation has similar relationships which contributes to the generating of a multi-relationship model. These attributes, relationships and properties are used to build a relational view for the user and similar users in the organization. In an embodiment of the present invention, the analytics unit130is configured to communicate with the NLP unit116for receiving signals representing relevance of the keywords stored as data in the sub-record storage unit1004(FIG.10) with respect to the electronic communications data. In an embodiment of the present invention, based on the relevant keywords and the keywords stored as the first and the second tag in the record storage unit (122), the analytics unit130generates a querying model. The querying model represents conversation data associated with the electronic communications data in the form of graph nodes (or similar), thereby providing the multi-relational model. The querying model is represented inFIG.11comprising various nodes and illustrating communication between various nodes of the querying model. In an example, a user node has a one-to-one relationship with an electronic communication node when the user is a sender. The electronic communication node has a one-to-many relationship with many recipient user nodes. Further, each electronic communication node has a one-to-one relationship with said electronic communications body and header nodes, but has a one-to-many relationship with any of the attachment nodes that may have been a part of that electronic communication as well as all the keyword nodes that are associated with the electronic communication. The analytics unit130determines behavioural patterns associated with the data stored in the sub-record storage unit1004(FIG.10) based on the querying model. The analytics unit130analyzes types, frequencies and strength of the electronic communications data between users within the organization and outside the organization. The analytics unit130further analyzes cross-references between electronic communications data comprising specific keywords. Further, the analytics unit130is configured to continuously determine behavioural patterns. 
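One way to picture the querying model described above is as a set of (subject, relation, object) triples mirroring the node relationships of FIG.11. The relation names and field names in the sketch below are assumptions for illustration; the text only fixes the cardinalities (one sender per communication, with one-to-many links to recipients, attachments and keywords).

```python
def querying_model_triples(email):
    """Represent one parsed communication as (subject, relation, object) triples."""
    node = f"email:{email['message_id']}"
    triples = [
        (f"user:{email['sender']}", "SENT", node),              # one-to-one with the sender
        (node, "HAS_HEADER", f"header:{email['message_id']}"),  # one-to-one with the header
        (node, "HAS_BODY", f"body:{email['message_id']}"),      # one-to-one with the body
    ]
    triples += [(node, "RECEIVED_BY", f"user:{r}") for r in email["recipients"]]
    triples += [(node, "HAS_ATTACHMENT", f"attachment:{a}") for a in email.get("attachments", [])]
    triples += [(node, "MENTIONS", f"keyword:{k}") for k in email.get("keywords", [])]
    return triples
```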
In an embodiment of the present invention, the analytics unit130is configured to use neural network techniques for continuously determining existing relationships, new relationships and undefined behaviour patterns between the extracted electronic communications data and other data stored in the sub-record storage unit1004(FIG.10). In an embodiment of the present invention, the analytics unit130is configured to employ an inductive neural network technique including, but is not limited to, a GraphSAGE technique on the querying model for continuously determining existing relationships, new relationships and undefined behaviour patterns between the extracted electronic communications data and other data stored in the sub-record storage unit1004(FIG.10). The analytics unit130computes node embeddings for unseen nodes or relationships in the querying model using multiple techniques for continuously determining new relationships and undefined behaviour patterns associated with the electronic communications data and the data stored in the sub-record storage unit1004(FIG.10). Using the multiple techniques, the analytics unit130constantly learns a function that generates node embeddings by understanding neighbouring nodes and properties associated with the nodes in order to compute node embeddings for unseen nodes or relationships in the querying model. Computing node embeddings for unseen nodes or relationships in the querying model aids in effective and time efficient determination of the existing relationships, new relationships and undefined behavioural patterns associated with the data stored in the sub-record storage unit1004(FIG.10). In an embodiment of the present invention, the analytics unit130(FIG.1) is configured to integrate with the multiple (1-n) information units132via an analytics integration module (not shown) associated with the analytics unit130. The analytics unit130integrates with the information units132based on an Application Programming Interface (API) level integration. The information units132include, but are not limited to, a Human Resource Management System (HRMS) and a Customer Relationship Management (CRM) system. In an embodiment of the present invention, the analytics unit130integrates with the information units132via the analytics integration module (not shown) for retrieving user specific data in order to update the first repository118. In another embodiment of the present invention, the analytics unit130integrates with the information units132for determining changes that are to be recommended for updating the information units132based on the output of the analytics unit130. In yet another embodiment of the present invention, the analytics unit130integrates with the information units132to ease user driven changes through direct API calls. In an exemplary embodiment of the present invention, the user may be associated with a certain set of customer accounts data within the organization's CRM and the analytics unit130integrated with the information units132via the analytics integration module (not shown) retrieves the data in order to update the first repository118. In another exemplary embodiment of the present invention, during analysis of the data stored in the sub-record storage unit1004(FIG.10), if the analytics unit130detects a state change of the stored data (e.g. 
sending out of a proposal by a user), then the analytics unit130is configured to verify the current state of the stored data prior to making the recommendation of changes for updating the information units132(e.g. the current state in the information units132is ‘qualification’). Further, the analytics unit130is configured to generate and send one or more API call based enquiries to the information units132via the analytics integration module (not shown) for recommendation of changes for updating the information units132. In the event the current state of the stored data is identical to the proposed change, the update to the information units132is ignored. Further, if the current state is incorrect (e.g. the current state is ‘qualification’ and the recommended change in state is ‘proposal’), then the analytics unit130is configured to retrieve the current state and the recommended change is transmitted to the reporting database128for storage and future retrieval. In an embodiment of the present invention, the analytics unit130is configured to communicate with the reporting database128in order to transmit and store the determined behavioural patterns associated with the electronic communications data stored in the sub-record storage unit1004(FIG.10) for each user. The storage of behavioural patterns aids in tracking past behavioural patterns and maintaining key attribute data associated with each user. In an exemplary embodiment of the present invention, the reporting database128is, in particular, a repository of user behavioural patterns associated with the data stored in the sub-record storage unit1004(FIG.10), captured daily through analysis by the analytics unit130, which are then aggregated to provide a weekly view, a monthly view, a quarterly view, an annual view as well as a custom timeline view via the visualization unit120. The reporting database128is further configured to provide visualization of the stored behavioural patterns in a time bound comparison via the visualization unit120, e.g. week-over-week or custom time windows. In another embodiment of the present invention, the analytics unit130is configured to transmit one or more user behavioural patterns associated with the computed node embeddings for unseen nodes or relationships in the querying model to the reporting database128. Further, the analytics unit130is configured to transmit one or more past users' behavioural patterns associated with the computed node embeddings for unseen nodes or relationships in the querying model to the reporting database128. In an embodiment of the present invention, one or more recommendations are provided to the users by the analytics unit130via the recommendation unit126based on the multi-relational model. The recommendations represent electronic communications related actionable suggestions, which may include, but are not limited to, a regular message or calendar invite which the user may have missed responding to for 2 days, a follow-up with a customer on an electronic communication sent by the user, sending a pointer to another user in the organization working on a similar project, solution or technology referenced in the user's electronic communications sent in a previous day, and a recommendation to create a new opportunity (if not already created) in the organization's CRM after a pre-defined period of interactions between the customer and the user.
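Returning to the state-change verification described at the beginning of this passage, a minimal sketch is shown below. The 'get_state' method on the CRM client and the list-based reporting store are hypothetical stand-ins for the API-level integration; only the compare-then-record behaviour is taken from the text.

```python
def propose_state_change(opportunity_id, detected_state, crm_client, reporting_db):
    """Verify the current CRM state before recommending a change."""
    current_state = crm_client.get_state(opportunity_id)  # e.g. 'qualification'
    if current_state == detected_state:
        return None  # identical states: the update is ignored
    change = {
        "opportunity": opportunity_id,
        "current_state": current_state,
        "recommended_state": detected_state,  # e.g. 'proposal'
    }
    reporting_db.append(change)  # stored for later delivery as a recommendation
    return change
```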
In another exemplary embodiment of the present invention, the recommendations may include to update the status of an opportunity in CRM based on progress analyzed by the analytics unit130(e.g. based on a proposal being sent or a demo having been concluded). In another exemplary embodiment of the present invention, the recommendations may include processing a previously received “deal won notification” from the organization which may be useful to the user based on the context, a reminder to setup a meeting with a customer (if not already scheduled) as mentioned in the previous day's electronic communications data, a notification of “view” rights being given to the user to a particular electronic communication data by another user (or the user's manager) with a link to view the entire electronic communications on the visualization unit120, and a notification of a change in status for an opportunity the user is either involved in or has view rights. Further, the recommendations may vary depending on the type of the organization such as, but not limited to, sales, HR, supply chain, service delivery, etc. The time period of sending recommendations depends on the inputs received by the recommendation unit126from the analytics unit130and criticality of the recommendation. In an embodiment of the present invention, the analytics engine130is configured to communicate with the recommendation unit126for sending the recommendation in the form of an electronic Recommendation Action communication (RAC) based on the multi-relational model. The recommendation sent to the user, requires the user to take a suitable action based on the sent recommendation. Further, the RAC may be a contextual or a periodic electronic communication comprising a recommended action in the form of recommendation, collaboration suggestions, data on performance and behaviour of each user in the organization. In an embodiment of the present invention, the RAC comprises two delivery time frames including, a weekly RAC and a contextual RAC. In an exemplary embodiment of the present invention, the weekly RAC is sent to the user on every Monday at 9 AM in the user's time zone which comprises at least a summary of the previous week's electronic communications, collaborations and suggestions to improve sharing of electronic communications within the organization. The weekly RAC further provides a specific area of improvement in the performance of the user and further provides the user a detailed dashboard of the user's performance via the visualization unit120along with relative performance comparisons with other users. Further, the recommendation unit126may provide a summary data for demonstrating a change of the user behaviour from a previous week to a current week in the form of the weekly RAC. The recommendation unit126provides the summary data to the user as well as their manager with respect to the time-period between the receiving of the RAC with the recommendation by the user and the action taken by the user on the recommendation. Advantageously, the summary data aids in timely update of the information units132, thereby improving efficiency and predictability of the working of the information units132. 
In another exemplary embodiment of the present invention, the contextual RAC provides data to the user, based on an analysis of the user's most recent electronic communications, including, but are not limited to, creating a new opportunity by using the information units132after analysis of an electronic communication with a prospect and ‘no correlating’ opportunity identifier in the information units132. In yet another exemplary embodiment of the present invention, the contextual RAC provides data to the user including updating the status or stage of an opportunity (e.g. user updating opportunity stage from ‘qualification’ to ‘proposal’ after the detection of proposal related electronic communications with a prospect) by a single click on the visualization unit120, which communicates with an appropriate information unit132using the analytics integration module (not shown) to facilitate the update. In another exemplary embodiment of the present invention, the contextual RAC provides data to the user including suggesting contact data of other users in the organization that are working on similar projects and opportunities in order to facilitate collaboration. In an embodiment of the present invention, the RAC is scheduled by the recommendation unit126based on communicating with the reporting database128and processing the data stored in the reporting database128. In an embodiment of the present invention, the recommendation unit126is configured to communicate with the first repository118in order to schedule the sending of the weekly RAC at a pre-determined time to all the users in a particular time zone. Further, the recommendation unit126is configured to send the contextual RAC based on availability of the analysis of the user's most recent electronic communications, thereby providing faster responses to users with better insights. In an embodiment of the present invention, the RAC comprises embedded API calls which are used for providing updates to the information units132with a single click via the analytics integration module (not shown). Further, actions on each of these recommendations associated with the RAC, when initiated by a user, is received by the respective information units132and at the same time, the recommendation tracking collection unit1006(FIG.10) in the record storage unit1002(FIG.10) is updated with action taken on the recommendation. Timeline for the action on the recommendation and the type of response is recorded in the recommendation tracking collection unit1006(FIG.10) using APIs embedded in the RAC. Advantageously, the embedded API aids in providing control to the user on any change made to the information units132using the analytics integration module (not shown). The weekly RAC and the contextual RAC may be sent to the user in the form of pictorial and textual stories in electronic communications, which can also be visualized in greater detail by the user on the visualization unit120. In an embodiment of the present invention, the recommendation unit126is configured with one or more pre-defined templates, which are employed for providing visualization of the recommendation and insights on behaviour in the form of the RAC. The pre-defined templates are reusable and are modular in nature.FIGS.12a-12cillustrate the templates used for sending the weekly RAC and the contextual RAC to the user via electronic communication sent to the communication source unit110. 
In an exemplary embodiment,FIGS.13a-13cillustrate the weekly RAC and the contextual RAC sent to the user, using the templates illustrated inFIGS.12a-12c, via electronic communication sent to the communication source unit110relating to at least, but not limited to a, 1) sales account manager, 2) human resource manager and 3) delivery manager. In an embodiment of the present invention, the recommendation unit126is configured to use a combination of the templates to generate the body of each RAC for the user and add the generated RAC to a scheduled electronic communication delivery queue for that user based on the time zone of the user. Further, the recommendation unit126is regularly updated by the engine104so that a correct template is available for each recommendation type. Further, each RAC template is enabled with a single-click functionality, such that each template is suitably used for accessing the visualization unit120for more detailed representations. In an embodiment of the present invention, the recommendation unit126receives event triggered updates from the analytics unit130in order to send the contextual RAC to the user based on analysis of a recently processed electronic communication data that requires an immediate response. Advantageously, event triggered communication from the analytics unit130aids the recommendation unit126in sending immediate recommendations to the users, which enables users to act upon the recommendations in a timely manner. Further, if a contextual RAC is processed over the weekend, then the recommendation unit126is configured to provide the contextual recommendation as part of the next weekly RAC. In another embodiment of the present invention, the recommendation unit126is configured to communicate with the reporting database128for processing the data stored in the reporting database128for generating a user specific briefing message. In an embodiment of the present invention, each user specific briefing message provides one or more intelligent recommendations to the user and further the user specific briefing message is embedded with APIs in order to provide one-click change on one or more targeted information units132using the analytics integration module (not shown). In the event the user receives the briefing message with an intelligent recommendation from the recommendation unit126, the user may choose to act upon the recommendation immediately or at a later date. The user receives the briefing message with an intelligent recommendation in the form of a Universal Resource Locator (URL) link. Further, in the event the intelligent recommendation URL link is selected, an API call is triggered to the analytics unit130by the recommendation unit126which, firstly, records a timestamp associated with the selected recommendation in the recommendation tracking collection unit1006(FIG.10) of the record storage unit1002(FIG.10). Secondly, the analytics unit130triggers a subsequent API call to the respective information units132through the analytics integration module (not shown) by using the authentication mechanism implemented by the organization. In an embodiment of the present invention, the recommendation tracking collection unit1006(FIG.10) of the record storage unit1002(FIG.10) is further configured to track when a recommendation is made via the recommendation unit126to the user and the time taken by the user to take action on that recommendation. 
Thus, the recommendation tracking collection unit1006(FIG.10) is continuously updated with the recommendations made via the recommendation unit126and the reporting database128. In an embodiment of the present invention, the visualization unit120is configured to operate in conjunction with the first repository118, the record storage unit122, the recommendation unit126and the reporting database128. The visualization unit120is configured to provide an actionable User Interface (UI) with information related to, but not limited to, historical and inherited user electronic communication data, searchable conversations based on keywords and dashboards for data visualization. The visualization unit120is configured to provide user friendly data driven outcomes including, but are not limited to, representation of custom generated depiction of electronic communications data by inheritable chronological threads, keyword tagging based on electronic communications processed by the NLP unit116and providing dashboards and recommendations to deliver insights relating to non-obvious and hidden patterns in the electronic communications data by implementing AI based analytics. FIG.14illustrates a detailed block diagram of the visualization unit1400(120,FIG.1). In an embodiment of the present invention, the visualization unit1400(120,FIG.1) comprises a visualization layer1402configured to receive inputs from the first repository1404(118,FIG.1), the record storage unit1406(122,FIG.1), the recommendation unit1410(126,FIG.1) and the reporting database1408(128,FIG.1). Further, users interact with the visualization layer of the visualization unit1400by logging-on to the actionable UI by using corporate authentication protocols. The user may access the visualization unit120by clicking on an embedded link in a RAC briefing electronic communication. In an embodiment of the present invention, the visualization layer1402of the visualization unit1400is configured to communicate with the first repository1404(118,FIG.1) for processing and mapping the stored user's data and providing access to the user for data visualization using the authentication functionality. Further, the user data stored in the first repository1404(118,FIG.1) which is processed and mapped by the visualization unit1400is in the form of an Access Control List (ACL) present within the first repository1404(118,FIG.1) for each user. In an embodiment of the present invention, the visualization layer1402is configured to communicate with the record storage unit1406(122,FIG.1) for fetching the data stored in the sub-record storage unit1004(FIG.10) based on the recommendation sent to the user via the recommendation unit1410(126,FIG.1). Further, a record of the recommendation is processed as relating to the electronic communications data based on the linkages between the record storage unit1002(FIG.10) and the recommendation tracking collection unit1006(FIG.10). Advantageously, the linkages between the record storage unit1002(FIG.10) and the recommendation tracking collection unit1006(FIG.10) aids in providing a holistic and detailed view of the entire electronic communications data (including recommendations made and timelines in which they were acted upon), when the electronic communications data is transferred to the new user or the reassigned user. 
In an embodiment of the present invention, the visualization layer1402of the visualization unit1400(120,FIG.1) is configured to provide one or more intelligent features on the actionable UI by communicating with the record storage unit1406(122,FIG.1). In an exemplary embodiment of the present invention, the intelligent features provide a designated authorized user (as defined by the organization), the ability to access an organized intelligent memory of user conversation threads relating to chronological electronic communications data of any former or current users based on pre-defined inheritance keywords. In another exemplary embodiment of the present invention, the intelligent features include an intelligent view of context-based electronic communication data history related to a particular keyword or event triggered by the user by using at least one of the briefing message or a search with one or more third tags. In another exemplary embodiment of the present invention, the intelligent features include widget based dashboards for empirically computing key areas of engagement and performance of the users based on analysis of electronic communication data, and displaying non-obvious data in the form of performance leader boards and similar electronic communications data of the organization which is processed by the analytics unit130. In yet another embodiment of the present invention, the intelligent features include marking certain electronic communications data as confidential electronic communications data, which is accessed by authorized users only and not accessible to the new users or the reassigned users. In another exemplary embodiment of the present invention, the intelligent features include providing rights to the managers in the organization for transferring the previous electronic communications data, associated with the user that has left the organization, to the new user or the reassigned user such that the new user or the reassigned user have rights to view only the specific electronic communications data, based on specific criteria, of the user that has left the organization and not his/her personal electronic communications data. FIGS.15a-15dillustrate screen shots of the actionable UI with functionality including, but is not limited to, visual representation of intelligent memory, context based email thread, user and inheritance administration, and dashboards for data visualization provided to the user on the visualization unit1400(FIG.14). In an exemplary embodiment of the present invention, the visualization layer1402(FIG.14) is configured to provide data visualization based on one or more visualization categories such as, but are not limited to, bar charts, pie charts, plotlines, comparative, composite, organizational, spatial, relational, distributive, sequential and temporal. The data for viewing is rendered based on using one or more visualization tools and techniques such as, but are not limited to, PHA, python, graphana, candela, datawrapper and tableau. FIG.16illustrates a flowchart depicting a method for creating an intelligent memory and providing contextual intelligent recommendations, in accordance with an embodiment of the present invention. At step1602, electronic communications data associated with active user data is extracted. In an embodiment of the present invention, any changes in the user data are determined at pre-determined intervals and the user data is extracted for extracting the electronic communications data associated with the user data. 
In an embodiment of the present invention, a time window is set for extraction of count data associated with the user data at a pre-defined time period and frequency. Further, the user count data is determined based on, at least, users (i.e. the employees, contractors, etc.) that have left the organization, present users in the organization, or new users that have joined the organization on a previous day. Further, users determined as missing from a previous sync are flagged as 'Not Active' (NA). Further, extraction of the user data which is flagged as 'NA' is not carried out from that day onwards. Existing or new users are flagged as 'active'. In an embodiment of the present invention, the extracted user data is processed for comparing changes in user data since a last sync. In an embodiment of the present invention, user data with all data fields is processed. A differential comparison of each relevant field (i.e. the fields mentioned in Table 1) is made for determining changes. In the event a change is determined in a data field as compared to a corresponding user data field (e.g. a change in manager, geo location, designation, etc.), then that data field is modified to ensure consistency between the user data present in a storage unit and the user data present in an organization directory unit. In an embodiment of the present invention, based on an extracted list of active users, a sync mechanism sets the appropriate time zone for each user. The sync mechanism performs the extraction at predetermined time intervals. Further, if the user has not updated his/her preferred time zone, then the sync mechanism uses the messaging source time zone setting as the preferred time zone for that user. In an embodiment of the present invention, the geographical location of each user associated with the electronic communications data is extracted. Further, the user data stored in the storage unit 216 is classified into one or more attributes, such as, but not limited to, e-mail attributes, time attributes, geographical attributes and user attributes. E-mail attributes may include, but are not limited to, user name, given name, surname, display name and e-mail. Time attributes include, but are not limited to, user update, created date and time, last update time and previous start time. Geographical attributes include, but are not limited to, user location, country, time zone and preferred time zone. User attributes include, but are not limited to, active user, view Access Control List privileges and manager of the user. In an exemplary embodiment of the present invention, during each sync cycle, manager-related information for each user (i.e. employee) is verified and updated, and a hierarchy between a manager and the user is captured. The manager-related information is stored in the storage unit 216 and used for providing authorization for viewing conversation threads associated with relevant criteria corresponding to the electronic communications of users who have left the organization or have been reassigned from the team, using a visualization portal. Further, the storage unit 216 comprises an all Access Control List (ACL) which is configured to provide rights to defined users for providing 'view' authorization to view conversation data of existing users or users who have left the organization.
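As an illustration of the periodic sync described above, the following Python sketch flags users missing from the directory as 'Not Active', flags existing or new users as 'active', and performs a differential comparison of relevant fields. The field names and the dictionary-based storage are assumptions for the example only.

```python
# Simplified sketch of the user sync; the relevant fields and storage layer are assumed.
RELEVANT_FIELDS = ("manager", "geo_location", "designation", "time_zone")  # assumed names

def sync_users(stored_users: dict, directory_users: dict) -> dict:
    """Reconcile the stored user records against the organization directory."""
    for user_id, record in stored_users.items():
        if user_id not in directory_users:
            record["status"] = "NA"              # missing since the last sync
            continue
        record["status"] = "active"
        fresh = directory_users[user_id]
        for field in RELEVANT_FIELDS:            # differential comparison per field
            if record.get(field) != fresh.get(field):
                record[field] = fresh.get(field)
    for user_id, fresh in directory_users.items():
        if user_id not in stored_users:          # user newly joined the organization
            stored_users[user_id] = {**fresh, "status": "active"}
    return stored_users
```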
An administration functionality is provided via which a manager may carry out modifications to the 'view' authorization of the user who has left the organization and provide the existing or the new user with 'view' authorization for viewing conversation threads associated with the electronic communication. In an embodiment of the present invention, each user data entry in the storage unit 216 has a 'view Access Control List' (ACL) field value associated with it. The 'view ACL' field value provides for inheriting electronic communications data by a user who is not originally part of the electronic communications data. Further, the default value of the 'view ACL' field is the same as the user ID, so that by default a user has access only to their own electronic communications data. The storage unit 216 is configured to provide access for defined users (based on organization-defined user hierarchy and permissions) to change the 'view ACL' field value for providing 'view' authorization to the existing or new users in the organization for viewing particular conversations associated with the electronic communications of the user who has left the organization (i.e. a former user) or has been reassigned. The 'view' authorization is provided to the existing or new users based on, but not limited to, account name, territory and geographic location. In an embodiment of the present invention, the active user data is stored for determining which users are currently active and whose communication data needs to be extracted for further processing. A user is considered active if, at least, the user is listed as currently employed with the organization or has an active entry in the organization directory unit. Further, an exclusion list of users (as defined by the organization), whose communication data needs to be excluded from being extracted, is maintained. In an embodiment of the present invention, prior to scheduling the user communication data extraction, a check is carried out and it is determined whether the user is in the exclusion list or not. In an exemplary embodiment of the present invention, if the user data is in the exclusion list, or the user data is not currently active, or the user is not in the current time zone, then extraction of that user's communication data is not scheduled. In another exemplary embodiment of the present invention, if the user data is not in the exclusion list, the user data is currently active, and the user data is in the current time zone, then the user data is transmitted for queuing. In another exemplary embodiment of the present invention, a distributed event streaming platform is used to ensure that all users scheduled for electronic communication data extraction are maintained in a secure and scalable queue. This queue ensures that only the selected users in the queue are processed, in the order in which the users were added to the queue. In an embodiment of the present invention, user data is organized for extraction of the associated electronic communications data in a defined order. In an embodiment of the present invention, authentication techniques are employed for authenticating and authorizing access to and extraction of electronic communications data. In an exemplary embodiment of the present invention, if a Single Sign-On (SSO) functionality is implemented, then one or more defined SSO tokens are used for authentication and for providing read access to the electronic communications data.
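A minimal sketch of the pre-extraction check described above is given below: users that are excluded, inactive, or outside the current time zone are skipped, and the remaining users are queued in insertion order. The data structures and field names are illustrative assumptions; the disclosed system uses a distributed event streaming platform rather than an in-memory queue.

```python
# Illustrative sketch only; a deque stands in for the secure, scalable queue.
from collections import deque

def queue_users_for_extraction(users, exclusion_list, current_time_zone):
    """Queue the users whose communication data should be extracted in this cycle."""
    extraction_queue = deque()
    excluded = set(exclusion_list)
    for user in users:
        if user["user_id"] in excluded:
            continue                                   # organization-defined exclusion
        if user.get("status") != "active":
            continue                                   # not currently active
        if user.get("preferred_time_zone") != current_time_zone:
            continue                                   # handled in a different cycle
        extraction_queue.append(user["user_id"])       # preserves insertion order
    return extraction_queue
```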
In an embodiment of the present invention, active user data is selected in a particular time zone at pre-defined time intervals (e.g. 30-minute intervals) so as to cover all time zones and subsequently electronic communications data (e.g. e-mails) sent or received during the previous calendar day (e.g. data in mailboxes) in that time zone are extracted. Thereafter, the selected user data is queued, using an event notification queue, for extraction of electronic communications data for the scheduled users. In an embodiment of the present invention, extraction of active user data is scheduled for extracting electronic communication data associated with the user data. A check is performed at a pre-defined time interval (e.g. every 30 minutes) for determining whether users are available for that specific time zone for electronic communications data extraction. Further, a check is performed to determine whether users have been added in the exclusion list or if the users have been classified as inactive since a last extraction before processing the user data for extraction. In an embodiment of the present invention, electronic communications data associated with the user data queued for extraction is extracted. Electronic communications data is extracted for processing by selected users in a user notification queue. Each individual user communication data is parsed and the individual user communication data is compared with previously parsed communication information stored based on unique identifiers (for e.g., message ID, time stamp, origin, etc.). In the event, a user communication has already been processed, then a valid unique identifier is returned as evidence for the processed and stored communications data. Any further processing of that user communications data is stopped and the next user communications data is processed. Further, the user data is processed based on a multithreaded implementation for the active user for extracting the electronic communications data. Subsequent to the extraction of the electronic communications data, the extracted electronic communications data is provided to a NLP event notification service408. Further, the NLP event notification service408is executed by queue operations such as, but are not limited to, the Java Management Extensions (JMX) and the Kafka. In an embodiment of the present invention, date and time-period of a last extraction of the electronic communication data is continuously tracked. In an embodiment of the present invention, the extracted active user data is received by the user notification queue and a message processing thread of message parser selects a user from the user notification queue for processing. The message processing thread of the message parser performs a check to determine whether the user data is being processed for the first time. Further, if the user data is determined to be processed for the first time, then it is checked whether that user data associated with the electronic communication is an existing user data and whether the user data requires historical data extraction. In an embodiment of the present invention, in an initial implementation, the message parser is configured to extract the historical electronic communications data based on a pre-configured maximum age of electronic communications data (e.g. electronic communications data from last two years). At step1604, the extracted electronic communications data is parsed. 
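The per-message duplicate check against previously parsed communications described above may be sketched as follows, assuming the unique identifiers are the message ID, timestamp and origin, and assuming a simple in-memory index of previously processed messages; both assumptions are for illustration only.

```python
# Minimal sketch of the duplicate check based on unique identifiers.
def process_if_new(message: dict, processed_index: dict, parse_fn):
    """Return the stored identifier if the message was already processed;
    otherwise parse it, remember it, and return the new record identifier."""
    key = (message["message_id"], message["timestamp"], message["origin"])
    if key in processed_index:
        return processed_index[key]      # evidence of prior processing; skip further work
    record = parse_fn(message)           # e.g. conversion to the custom record format
    processed_index[key] = record["record_id"]
    return record["record_id"]
```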
In an embodiment of the present invention, the electronic communications data for each user is parsed by the message parser from a last parsed date to a previous day of extraction of the electronic communications data. Further, for past electronic communications data, if it is determined that the number of days of electronic communications data to be extracted is greater than user joining date or exceeds the date of electronic communications data extraction for a maximum time period (e.g. 2 years), then the last electronic communications data to be processed is taken from the date of joining of the user or from the maximum time period. For example, in the initial implementation, if a 60-day cycle is implemented for historical electronic communications data extraction and the last cycle reaches a defined parameter, such as, but not limited to, an employee date of joining or the organization defining maximum extraction period of 2 years, then the extraction of electronic communications data for that user stops and the defined parameter is marked as a start date from which the electronic communications data associated with the user is available. In an embodiment of the present invention, the message parser extracts electronic communications data for selected users for processing from a user notification queue. The message parser performs the electronic communications data extraction based on at least the time zone associated with the user, the time zone preferred by the user or time zone set by the organization's administrator for the user. User data is accessed for carrying out electronic communications data extraction for the particular user. The extracted list of active user is provided to the user notification queue. Further, each thread of the message parser is configured to select an active user from the user notification queue and electronic communications data associated with the selected active user is extracted. Further, subsequent to completion of processing of each electronic communications data associated with the selected active user, the processed electronic communications data is converted to a custom record format by the message parser for storage. In an exemplary embodiment of the present invention, storage of the processed electronic communications data is carried out using at least a custom JavaScript Object Notation (JSON) format, object store format, or a Graph format. The custom record format is then added to the notification NLP queue for processing and subsequently stored. The message parser analyses each electronic communications data and determines whether the electronic communications data is at least a calendar invite or a regular message. Further, if the electronic communications data is a calendar invite, then the electronic communications data is processed in the form of a calendar to determine an invite flow by identifying one or more variations associated with the calendar such as, but is not limited to, details of a calendar invite (such as, sender, time, duration, attendees, communication channel, etc.), a calendar response (such as, accept, decline or tentative) and a calendar cancellation. Further, if the electronic communications data is the regular message, then the message parser processes the body and attachment (if any) of the regular message. The attachment is stored after carrying out a deduplication process. 
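The backward-looking historical extraction window described above, with 60-day cycles bounded by the user's joining date or a configured maximum age of two years, may be sketched as follows. The cycle length, maximum age and date handling are illustrative assumptions rather than fixed parameters of the disclosure.

```python
# Sketch of the historical extraction window; parameter values are assumed examples.
from datetime import date, timedelta

CYCLE_DAYS = 60            # one historical extraction cycle
MAX_AGE_DAYS = 365 * 2     # e.g. organization-defined maximum of two years

def next_extraction_window(last_parsed: date, joining_date: date, today: date):
    """Return (start, end) of the next historical window, or None when the
    defined lower bound (joining date or maximum age) has been reached."""
    lower_bound = max(joining_date, today - timedelta(days=MAX_AGE_DAYS))
    end = last_parsed - timedelta(days=1)          # continue backwards from the last parse
    if end < lower_bound:
        return None                                # history fully extracted
    start = max(lower_bound, end - timedelta(days=CYCLE_DAYS - 1))
    return start, end
```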
In an exemplary embodiment of the present invention, the deduplication process for attachments is carried out based on a SHA-256 hash or a similar technique. Further, if the attachment already exists, then a key of the attachment is retrieved and subsequently updated in the custom record. In an embodiment of the present invention, a check is performed for determining duplication of the electronic communications data, i.e. whether the electronic communications data has been processed or not, by checking whether an electronic communications data ID exists or not. If the electronic communications data ID exists, then the electronic communications data is not processed or stored. Therefore, the message parser is configured to process and store only unique electronic communications data using the custom format and to identify identical extracted electronic communications data, which may be associated with multiple users, so that duplicate electronic communications data is rejected from being stored, thereby significantly reducing the storage space required. In an exemplary embodiment of the present invention, the duplicate parsed header and subject, the duplicate parsed body of the conversation data associated with the electronic communications data, and the duplicate attached documents, referred to as duplicate data, are analyzed and removed based on a deduplication (DeDupe) check process prior to storage. Further, if a response to the electronic communications data is identified as having references to inline electronic communications data replies, then the entire electronic communications data is captured in the custom format. Further, each processed electronic communications data in the custom format is transmitted to the notification NLP queue for further processing and is subsequently stored. At step 1606, a keyword tagging operation is performed on the electronic communications data. In an embodiment of the present invention, a keyword tagging operation is carried out on the custom record formatted electronic communications data. The conversation data associated with the electronic communications data is parsed and one or more relevant keywords are searched for in the conversation data associated with the electronic communications data. Further, tagging of the keywords in the electronic communications data is based on a pre-generated keywords map, referred to as a pilot dictionary, to process such keywords. In an embodiment of the present invention, the parsed data is processed for extraction of the keywords. The parsed data is provided based on the NLP event notification queue. In an exemplary embodiment of the present invention, the pilot dictionary comprises natural language words and phrases which are of relevance to the organization. The pilot dictionary is generated based on a learning process during the initial deployment phase. The pilot dictionary is used to parse the initial electronic communications data in order to develop a Natural Language (NL) vocabulary which is specific to the organization. Further, the developed NL vocabulary is used to analyze the natural language associated with future electronic communications. The NL vocabulary is fine-tuned based on one or more NL datasets associated with the initial electronic communications data of an organization. In an embodiment of the present invention, the development of the NL vocabulary is an iterative process performed during the initial deployment of the solution.
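Returning to the attachment deduplication mentioned at the beginning of this passage, a minimal sketch of hash-keyed storage is shown below. The plain dictionary standing in for the attachment store is an assumption made for the example.

```python
# Sketch of attachment deduplication keyed by a SHA-256 digest of the content.
import hashlib

def store_attachment(content: bytes, attachment_store: dict) -> str:
    """Store an attachment once and return its key; reuse the existing key
    when an identical attachment has already been stored."""
    key = hashlib.sha256(content).hexdigest()
    if key not in attachment_store:
        attachment_store[key] = content   # first time this exact content is seen
    return key                            # the key is then recorded in the custom record
```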
A first iteration is carried out for identifying stored electronic communications data in order to train the pilot dictionary data set from the stored electronic communications data for developing the NL vocabulary. Further, in subsequent iterations, a relationship is established with the stored electronic communications data based on a 1-nth level of transitivity. The iterations are customizable based on one or more development parameters associated with the NL vocabulary. In an embodiment of the present invention, subsequent to development of the NL vocabulary, the stored electronic communications data is further parsed for converting the stored electronic communications data into one or more detailed electronic communications data records. In an embodiment of the present invention, the developed NL vocabulary is employed for identifying the keywords in incoming electronic communications data, and the identified keywords are subsequently saved in the detailed electronic communications data records. The stored keywords are used for further processing. Further, the identified keywords are appended to the respective detailed electronic communications data records. In an embodiment of the present invention, each of the parsed electronic communications data is analyzed to determine its relevance to the organization. In an exemplary embodiment of the present invention, an initial dictionary (i.e. the pilot dictionary) is used to ascertain the context and relevance of each conversation that is parsed for the organization. The context and relevance are determined based on weights, repetitions and functionality associated with the electronic communications. During initial deployment of the pilot dictionary, a sample set (for example, 500,000 communications) is split into a 4:1 ratio for dictionary training and testing. Further, during the initial deployment of the pilot dictionary, the output for the sample set is directly stored, in a custom format, instead of being sent to the NLP event notification queue. Each communication in this initial communication set is analyzed for context and relevance with the pilot dictionary. Each keyword is determined as relevant based on the presence of one or more words from the pilot dictionary, and each relevant word is further added to the pilot dictionary for use in subsequent conversation analysis. Further, keywords corresponding to an entity associated with the electronic communications data are stored as a first tag for each electronic communications data in a custom record format. Examples of such keywords include, but are not limited to, sender name, customer domain, and timestamp. The tags are processed for analysis and for providing enhanced access control and search capabilities. In an embodiment of the present invention, subsequent to analysis of the initial electronic communications data, the pilot dictionary is considered as the main dictionary for all further analysis for determining context and relevance from all subsequent electronic communications. Each electronic communications data from the initial electronic communications data is analyzed using the pilot dictionary and each keyword corresponding to the analyzed electronic communications data is stored as a second tag in the custom formatted record for that electronic communication. Further, all subsequent electronic communications (beyond the initial communication set) are processed and passed to the NLP event notification queue for further processing.
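An illustrative sketch of the keyword tagging step follows: terms from the pilot dictionary are looked up in the parsed conversation text and appended to the custom record as the second tag, alongside entity keywords stored as the first tag. The tokenization, field names and single-word dictionary entries are simplifying assumptions made for the example.

```python
# Simplified sketch; the pilot dictionary is assumed to be a set of lowercase terms.
import re

def tag_record(record: dict, pilot_dictionary: set) -> dict:
    """Append first (entity) and second (dictionary keyword) tags to a custom record."""
    tokens = re.findall(r"[a-z0-9']+", record["body"].lower())
    matched = sorted({t for t in tokens if t in pilot_dictionary})
    record["first_tag"] = {                          # entity keywords for the communication
        "sender": record["sender"],
        "customer_domain": record["sender"].split("@")[-1],   # assumed derivation
        "timestamp": record["timestamp"],
    }
    record["second_tag"] = matched                   # dictionary keywords found in the body
    return record
```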
At step1608, contextual data is extracted from the electronic communications data. In an embodiment of the present invention, the electronic communication data is pre-processed by cleaning the electronic communications data to reduce noise in the electronic communications data, tokenizing the content in the electronic communications data, implementing a lemmatization operation for reducing the words to their root form, and selectively learning from the electronic communications data for extracting contextual and relevant data from the electronic communications data. Consequently, understanding of language associated with the electronic communications data is refined, thereby improving the pilot dictionary. In particular, in embodiment of the present invention, the electronic communications data is cleaned to remove standard communication artifacts such as, but not limited to, punctuations, and other content that may lack relevance. The cleaned data is thereafter tokenized. Keywords found relevant are added to the pilot dictionary as part of the dictionary improvement process. In an embodiment of the present invention, the nature of the entity associated with the electronic communications data is analyzed. In an exemplary embodiment of the present invention, context and relevance based on keyword recurrence, weights, repetitions and functionality associated with the electronic communications is analyzed. The pre-processed parsed data is analyzed in order to carry out a recognition operation between entities associated with the electronic communications data and context present in the electronic communications data associated with the parsed electronic communications data for distinguishing between the entities and the context. The recognition operation between the entities and the context in the electronic communications data is carried out based on a semantic analysis of the parsed electronic communications data. The recognition operation for context is further carried out based on assimilated learning techniques in addition to carrying out of the semantic operation. In an embodiment of the present invention, a text and a hypothesis semantic graph is generated, which is a structured linguistic representation comprising information related to semantic electronic communications data. The semantic graphs are generated based on typed dependency graphs, in which each node is a word and labelled edges represent grammatical relations between the words. A semantic graph for a sentence contains a node for each word of the sentence, each node being embedded with metadata generated by a toolkit of linguistic processing tools, including, but are not limited to, word lemmas, parts of speech and named entity recognition. This data is processed to improve the pilot dictionary. In an embodiment of the present invention, the electronic communications data is used to further determine and establish relationships with the other electronic communications data analyzed using machine learning algorithms such as, but not limited to, neural networks technique. In an embodiment of the present invention, output of the semantic operation and the assimilated learning techniques is used to further process the parsed electronic communications data, and determine relationships between the tags associated with the parsed electronic communications data. Thereafter a score is generated for each relationship. 
Post analysis, the determined keywords, their corresponding lemmas and the relevance based on keyword recurrence, weights, repetitions and functionality associated with the electronic communications are added to the custom record format of the corresponding electronic communication. In an embodiment of the present invention, the parsed electronic communications data provided as input, including the conversation parameters of the electronic communications data comprising a parsed header, subject, a parsed body of the conversation data associated with the electronic communications data along with any attached documents present, and the analyzed conversation data, is stored in a custom format for future retrieval. Further, only a single copy of the data is stored as the electronic communication data object in order to avoid duplication. At step 1610, a multi-relational model is generated for the electronic communications data. In an embodiment of the present invention, behavioural patterns of a user associated with the electronic communications data are determined by carrying out analysis of the stored electronic communications data. A multi-relational model is generated by processing the electronic communications data based on syntax and the electronic communications data object. The multi-relational model provides the relationship of the electronic communications data with other stored electronic communications data. For example, a user communication can be in the form of an email to a customer and a few colleagues with an attachment for a product sales proposal. Each email represents an independent entity with distinct relationships to multiple entities including, but not limited to, one sender and multiple recipients, multiple attachments, one or more customers, an opportunity, etc., as well as distinct properties including, but not limited to, timestamp, subject, etc. Similarly, every single email conversation has similar relationships which contribute to the multi-relational model. These attributes, relationships and properties are used to build a relational view for the user and similar users in the organization. In an embodiment of the present invention, the relevance of the keywords, stored as data, is determined with respect to the electronic communications data. Based on the relevant keywords and the keywords stored as the first and the second tag, a querying model is generated. The querying model represents conversation data associated with the electronic communications data in the form of graph nodes (or similar), thereby providing the multi-relational model. The querying model comprises various nodes which communicate with each other. In an example, a user node has a one-to-one relationship with an electronic communication node when the user is a sender. The electronic communication node has a one-to-many relationship with many recipient user nodes. Further, each electronic communication node has a one-to-one relationship with said electronic communication's body and header nodes, and has a one-to-many relationship with any of the attachment nodes that may have been a part of that electronic communication as well as all the keyword nodes which are associated with the electronic communication. Behavioural patterns associated with the stored data are determined based on the querying model. The types, frequencies and strength of the electronic communications data are analyzed between users within the organization and outside the organization.
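As a rough sketch of the querying model described above, the snippet below adds one parsed email record to a graph with sender, recipient, attachment and keyword relationships. The use of the networkx library, the node naming scheme and the relation labels are assumptions made purely for illustration; the disclosure does not prescribe a particular graph library or schema.

```python
# Illustrative sketch of building the multi-relational (querying) model for one email.
import networkx as nx

def add_email_to_graph(graph: nx.MultiDiGraph, record: dict) -> None:
    email_node = ("email", record["record_id"])
    graph.add_node(email_node, timestamp=record["timestamp"], subject=record["subject"])
    # one sender (one-to-one) and many recipients (one-to-many)
    graph.add_edge(("user", record["sender"]), email_node, relation="SENT")
    for recipient in record["recipients"]:
        graph.add_edge(email_node, ("user", recipient), relation="RECEIVED_BY")
    # attachments and keyword tags associated with the communication
    for attachment_key in record.get("attachments", []):
        graph.add_edge(email_node, ("attachment", attachment_key), relation="HAS_ATTACHMENT")
    for keyword in record.get("second_tag", []):
        graph.add_edge(email_node, ("keyword", keyword), relation="MENTIONS")
```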
Cross-references between electronic communications data comprising specific keywords is analyzed. In an embodiment of the present invention, neural network techniques are used for continuously determining existing relationships, new relationships and undefined behaviour patterns between the stored data. In an embodiment of the present invention, an inductive neural network technique including, but is not limited to, a GraphSAGE technique is employed on the querying model for continuously determining existing relationships, new relationships and undefined behaviour patterns associated with the stored data. The node embeddings are computed for unseen nodes or relationships in the querying model using multiple techniques for continuously determining new relationships and undefined behaviour patterns associated with the stored data. The computing process involves a constant learning of a function for generating node embeddings by understanding neighbouring nodes and properties associated with the nodes. Computing node embeddings for unseen nodes or relationships in the querying model aids in effective and time efficient determination of the existing relationships, new relationships and undefined behavioural patterns associated with the stored data. In an embodiment of the present invention, the determined behavioural patterns associated with the electronic communications data are stored for each user. In an exemplary embodiment of the present invention, user behavioural patterns are aggregated to provide a weekly view, a monthly view, a quarterly view, an annual view as well as a custom timeline view. The stored behavioural patterns are visualized in a time bound comparison, e.g. a week-over-week, custom time windows, etc. At step1612, one or more recommendations are provided to users in the form of a recommendations action communication (RAC) based on the multi-relational model. In an embodiment of the present invention, the recommendations represent electronic communication related actionable suggestions including, but are not limited to, a regular message or calendar invite, which the user may have missed responding to for 2 days, a follow-up with a customer on an electronic communication sent by the user, sending a pointer to another user in the organization working on a similar project or a solution or a technology reference in the user's electronic communications sent in a previous day. In another exemplary embodiment of the present invention, the recommendations may include updating status of an opportunity in a CRM. The time period of sending recommendations depends on the inputs and criticality of the recommendation. In an embodiment of the present invention, the recommendations are sent in the form of an electronic Recommendation Action communication (RAC) based on the multi-relational model. The RAC includes embedded API calls for taking a suitable action on information units with a single click. In an exemplary embodiment of the present invention, the information units include, but are not limited to, a Human Resource Management System (HRMS) and a Customer Relationship Management (CRM) system. In another exemplary embodiment of the present invention, during analysis of the stored data, if a state change of the stored data (e.g. sending out of a proposal by a user) is detected, then a current state of the stored data is verified prior to making the recommendation of changes for updating in the information units (e.g. the current state in the information units132is ‘qualification’). 
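The state-change verification described at the end of the preceding paragraph may be sketched as follows: the current stage of the opportunity is queried from the information unit and a change is only recorded for recommendation when it differs from the stage detected in the communications. The client object, its method name and the pending-change list are hypothetical and shown only to illustrate the check.

```python
# Sketch of verifying the current CRM state before recommending an update.
def maybe_recommend_stage_change(crm_client, opportunity_id: str,
                                 detected_stage: str, pending_changes: list):
    current_stage = crm_client.get_opportunity_stage(opportunity_id)   # assumed client API
    if current_stage == detected_stage:
        return None                           # already up to date; ignore the update
    change = {
        "opportunity_id": opportunity_id,
        "current_stage": current_stage,       # e.g. "qualification"
        "recommended_stage": detected_stage,  # e.g. "proposal"
    }
    pending_changes.append(change)            # stored for the RAC / future retrieval
    return change
```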
Further, one or more API call based enquiries are generated and sent to the information units for recommendation of changes and updating the information units. In the event the current state of the stored data is identical to the proposed change, the update to the information units is ignored. Further, if the current state is incorrect (e.g. the current state is 'qualification' and the recommended change in state is 'proposal'), then the current state and the recommended change are retrieved and stored for future retrieval. In an embodiment of the present invention, the RAC may be a contextual or a periodic electronic communication comprising at least a recommended action in the form of recommendations, collaboration suggestions, and data on the performance and behaviour of each user in the organization. In an embodiment of the present invention, the RAC is scheduled based on communicating with a reporting database and processing the data stored in the reporting database. In an embodiment of the present invention, the weekly RAC is scheduled for delivery at a pre-determined time to all the users in a particular time zone. Further, the contextual RAC is sent based on availability of the analysis of the user's most recent electronic communications, thereby providing faster responses to users with better insights. Further, the timeline for the action on the recommendation and the type of response is recorded using APIs embedded in the RAC. The weekly RAC and the contextual RAC may be sent to the user in the form of pictorial and textual stories in electronic communications, which can also be visualized in greater detail by the user. In an embodiment of the present invention, one or more pre-defined templates are configured, which are employed for providing visualization of the recommendation and insights on behaviour in the form of the RAC. The pre-defined templates are reusable and are modular in nature. In an embodiment of the present invention, a combination of the templates is used to generate the body of each RAC for the user and the generated RAC is added to a scheduled electronic communication delivery queue for that user based on the time zone of the user. Further, each RAC template is enabled with a single-click functionality, such that each template is suitably used for accessing more detailed representations. At step 1614, visualization of insights associated with the electronic communications data is provided. In an embodiment of the present invention, an actionable User Interface (UI) is provided with information related to, but not limited to, historical and inherited user electronic communication data, searchable conversations based on keywords and dashboards for data visualization. User friendly data driven outcomes are provided including, but not limited to, representation of custom generated depiction of electronic communications data by user inheritable chronological threads, keyword tagging based on processed electronic communications, and providing dashboards and recommendations to deliver insights relating to non-obvious and hidden patterns in the electronic communications data by implementing AI based analytics. In an embodiment of the present invention, one or more intelligent features are provided on the actionable UI.
In an exemplary embodiment of the present invention, the intelligent features provide a designated authorized user (as defined by the organization), the ability to access an organized intelligent memory of user conversation threads relating to chronological electronic communications data of any former or current users based on pre-defined inheritance keywords. Further, the actionable UI provides functionality including, but is not limited to, visual representation of intelligent memory, context based email thread, user and inheritance administration, and dashboards for data visualization provided to the user. In an exemplary embodiment of the present invention, data visualization is provided based on one or more visualization categories such as, but are not limited to, bar charts, pie charts, plotlines, comparative, composite, organizational, spatial, relational, distributive, sequential and temporal. The data for viewing is rendered using one or more visualization tools and techniques such as, but are not limited to, PHA, python, graphana, candela, datawrapper and tableau. Advantageously, in various embodiments of the present invention, the system100provides optimized interlinking of data in an electronic communication for creating the intelligent memory by efficiently parsing electronic communications data. An organization specific vocabulary is developed and relevant conversations in electronic communications data is tagged by using natural language processing and neural network techniques. The present invention further provides for long-term storage of user electronic communication data as history. The present invention provides for conversations data history to be made available to authorized users to view conversations of former or current users based on ‘view’ access rights. Further, the present invention provides for adequate navigation and linkages between the conversation data associated with a user's electronic communications even after the user involved in the communication has left the organization. The present invention provides for parsing electronic communications data for continuous long-term analysis and for providing intelligent recommendations. The present invention provides for efficient processing and analysis of the electronic communications data. The present invention provides for automated insights on similar electronic communications by determining behavioral patterns of users. Further, the present invention provides for retention of only unique user communication data by avoiding storage of duplicate conversation data for efficient storage of conversation data and search operations. Furthermore, the present invention provides for appropriately determining correlations between similar actionable intelligent data associated with the electronic communications. Yet further, the present invention provides for determining and tracking changes made to any document shared as attachment along with the conversations data associated with the electronic communications. The present invention further provides for intelligent visualization of, but not limited to, intelligent memory conversation, conversation inheritance, context based conversations, keyword based search capabilities, user and conversation inheritance management, and dashboards via an actionable User Interface (UI). FIG.17illustrates an exemplary computer system in which various embodiments of the present invention may be implemented. 
The computer system1702comprises a processor1704(106,FIG.1) and a memory1706(108,FIG.1). The processor1704(106,FIG.1) executes program instructions and is a real processor. The computer system1702is not intended to suggest any limitation as to scope of use or functionality of described embodiments. For example, the computer system1702may include, but not limited to, a programmed microprocessor, a micro-controller, a peripheral integrated circuit element, and other devices or arrangements of devices that are capable of implementing the steps that constitute the method of the present invention. In an embodiment of the present invention, the memory1706(108,FIG.1) may store software for implementing various embodiments of the present invention. The computer system1702may have additional components. For example, the computer system1702includes one or more communication channels1708, one or more input devices1710, one or more output devices1712, and storage1714. An interconnection mechanism (not shown) such as a bus, controller, or network, interconnects the components of the computer system1702. In various embodiments of the present invention, operating system software (not shown) provides an operating environment for various softwares executing in the computer system1702, and manages different functionalities of the components of the computer system1702. The communication channel(s)1708allow communication over a communication medium to various other computing entities. The communication medium provides information such as program instructions, or other data in a communication media. The communication media includes, but not limited to, wired or wireless methodologies implemented with an electrical, optical, RF, infrared, acoustic, microwave, Bluetooth or other transmission media. The input device(s)1710may include, but not limited to, a keyboard, mouse, pen, joystick, trackball, a voice device, a scanning device, touch screen or any another device that is capable of providing input to the computer system1702. In an embodiment of the present invention, the input device(s)1710may be a sound card or similar device that accepts audio input in analog or digital form. The output device(s)1712may include, but not limited to, a user interface on CRT or LCD, printer, speaker, CD/DVD writer, or any other device that provides output from the computer system1702. The storage1714may include, but not limited to, magnetic disks, magnetic tapes, CD-ROMs, CD-RWs, DVDs, flash drives or any other medium which can be used to store information and can be accessed by the computer system1702. In various embodiments of the present invention, the storage1714contains program instructions for implementing the described embodiments. The present invention may suitably be embodied as a computer program product for use with the computer system1702. The method described herein is typically implemented as a computer program product, comprising a set of program instructions which is executed by the computer system1702or any other similar device. The set of program instructions may be a series of computer readable codes stored on a tangible medium, such as a computer readable storage medium (storage1714), for example, diskette, CD-ROM, ROM, flash drives or hard disk, or transmittable to the computer system1702, via a modem or other interface device, over either a tangible medium, including but not limited to optical or analogue communications channel(s)1708. 
The implementation of the invention as a computer program product may be in an intangible form using wireless techniques, including but not limited to microwave, infrared, Bluetooth or other transmission techniques. These instructions can be preloaded into a system or recorded on a storage medium such as a CD-ROM, or made available for downloading over a network such as the internet or a mobile telephone network. The series of computer readable instructions may embody all or part of the functionality previously described herein. The present invention may be implemented in numerous ways including as a system, a method, or a computer program product such as a computer readable storage medium or a computer network wherein programming instructions are communicated from a remote location. While the exemplary embodiments of the present invention are described and illustrated herein, it will be appreciated that they are merely illustrative. It will be understood by those skilled in the art that various modifications in form and detail may be made therein without departing from or offending the scope of the invention. | 115,149 |
11943190 | DETAILED DESCRIPTION A participant of a particular message thread (hereinafter the principal thread) may wish to obtain feedback for a message that the participant is considering posting to the principal thread from other users who may or may not be participants in the principal message thread without having that feedback visible to all participants of the principal thread. For example, an employee and a customer may be discussing a product issue in the principal thread and the employee may wish to have their boss approve the posting to the principal thread prior to posting it. Typically, the employee would forward the email to their boss and include their proposed message. The boss would then approve the post, edit the proposed post, or otherwise reply back in a separate message thread. Once the employee has approval to make the post in the principal thread, the employee would manually copy the post (e.g., the approved posting or modified posting) back to a position in the principal thread. This is time consuming and causes cluttered inboxes and other messaging interfaces. This may also have the side effect of confusing messaging systems into grouping both the principal thread and the side thread into a same conversation. This may confuse one or more of the employee or boss. Disclosed in some examples are methods, systems, devices, and machine-readable mediums which provide for sidebar communication threads forked from, or related to, a principal thread. The principal thread involves a first set of participants (including a participant that initiates a sidebar thread, called the sidebar initiator) and the sidebar communication thread is between a second set of participants including the sidebar initiator. The sidebar thread is a regular message thread forked from the principal thread that may include different participants from the principal thread and whose purpose is to discuss a potential posting to the principal thread. Messages in the sidebar communication thread may include a history of the principal thread, including one or more messages from the principal thread, and may include a proposed principal thread message that is the subject of the sidebar thread discussion. The sidebar thread may also include sidebar thread messages that carries the conversation of the sidebar thread participants. Once a termination condition is reached for the sidebar thread, the sidebar thread terminates and either the proposed principal thread message (as potentially modified by participants of the sidebar thread) becomes an accepted principal thread message and it is posted to the principal thread as if it was sent by the sidebar initiator or no message is posted (e.g., the proposed principal thread message is rejected). The communication threads may be e-mail, chat, IM, posts on a message board, or the like. The principal thread may be an existing thread and the accepted principal thread message may be posted from the sidebar thread as a reply, reply all, or the like. In other examples, the accepted principal thread message may be the first message in a brand-new thread (e.g., the sidebar initiator may wish to obtain feedback before sending out the message that starts the principal thread). The participants in the sidebar thread and the principal thread may be the same, or may be different but for the presence of a common sidebar thread initiator. 
The termination conditions may include approval of the proposed principal thread message, rejection of the proposed principal thread message, a specified number of sidebar thread messages, a time expiry, or the like. In some examples, the sidebar thread and its history are not retained in an inbox or other communication store of a sidebar participant. In other examples, the sidebar threads may be stored in a special inbox or communication store. In yet other examples, the sidebar thread history may be available to sidebar thread participants by selecting an option or link within a GUI displaying the main thread. The present disclosure thus solves the technical problem of organizing sidebar message threads in a GUI of a messaging application using the technical solution of a separately tracked thread that automatically posts an approved message to the principal thread. This reduces wasted network and computing resources by reducing the messaging and time required to maintain a sidebar thread manually and then post the result to the principal thread. This also results in improvements to a GUI by reducing visual clutter resulting from user-managed sidebar threads. FIG. 1 illustrates an example logical diagram of a sidebar thread according to some examples of the present disclosure. Principal thread 100 involving participant group A was started by an original message 105 that is the root of the principal thread. A user in group A may reply, creating first reply message 110. In some examples, such as e-mail, the first reply message may include a copy of the original message 115. A user in group A may become a sidebar initiator by selecting a sidebar user interface option of a communication application to fork the first reply message 110 to a sidebar thread. In response to the selection of the sidebar thread option, a sidebar message 160 is created. The sidebar message 160, when sent, may create a sidebar thread 150 and may be the root of the sidebar thread 150. Sidebar thread 150 may be over the same communication modality as, or a different communication modality from, principal thread 100. In some examples, the sidebar thread 150 may be a new thread (as shown in FIG. 1), but in other examples, the sidebar thread may be added to a related (and already existing) communication thread that is different from the principal thread (see FIG. 12). When initiating the sidebar thread, the sidebar initiator specifies a set of sidebar thread participants. This may be done by specifying a set of participants for the sidebar message 160. In the example of FIG. 1, this is sidebar thread participant group B. The set of participants in the sidebar thread may be or include participants in the principal thread (e.g., participant group A), may be different participants (with the exception of the user that created the sidebar thread), or may include both some common and some different participants. In some examples, the set of sidebar thread participants may change as users are added and subtracted with each message. For example, the first sidebar reply message 165 may add or remove participants from group B. Similarly, the participants in group A may change over time for the principal thread. Thus, the actual constituency of the participant groups of both communication threads may change over time. The first sidebar message 160 may include a copy of the original message 152, a copy of the first reply message 154, a proposed principal thread message 156 and a sidebar message 158. This message is delivered to the sidebar thread participants in group B.
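A minimal sketch of evaluating the configured sidebar-thread termination conditions listed above is shown below. The condition names, thresholds and thread fields are assumptions made for the example and do not reflect the disclosed schema.

```python
# Illustrative sketch; thread["expires_at"] is assumed to be a timezone-aware datetime.
from datetime import datetime, timezone

def check_termination(thread: dict, config: dict):
    """Return ('post', message) to post the proposed principal thread message,
    ('discard', None) to terminate without posting, or None to keep the thread open."""
    if thread["rejections"] >= config.get("rejections_to_discard", 1):
        return ("discard", None)                       # proposed message was rejected
    if thread["approvals"] >= config.get("approvals_to_post", 1):
        return ("post", thread["proposed_message"])    # becomes the accepted message
    if len(thread["messages"]) >= config.get("max_messages", 50):
        return ("discard", None)                       # sidebar message limit reached
    if datetime.now(timezone.utc) >= thread["expires_at"]:
        return ("discard", None)                       # time expiry
    return None
```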
Sidebar thread participants may have a number of available actions that they can take in response to receiving the first sidebar message160. The options may depend on settings chosen by the sidebar thread initiator when creating the sidebar thread. Example actions may include one or more of: approving the proposed principal thread message156, rejecting the proposed principal thread message156, replying to the first sidebar message160with additional sidebar messages (e.g., "looks good Bob, but I wouldn't mention our Texas operations just yet."), directly editing the proposed principal thread message156(which may include change tracking and notations), forwarding the sidebar message160, or the like. In some examples, the list of actions available to a sidebar thread participant may be different for each user and may depend upon a role of the participant in an organization relative to a role of the sidebar thread initiator (e.g., an initiator's boss may have more actions available than an initiator's colleague, both of whom may have more actions available than an initiator's subordinate). In the example ofFIG.1, a user in the participant group B has replied and created a first sidebar reply message165. This message may include a copy of the original message175, a copy of the first reply message173, an edited proposed principal thread message171, a copy of the sidebar message169, and a sidebar reply message167. Further messages between the sidebar thread participants may happen until a sidebar thread termination condition is encountered. In some examples, the sidebar thread may be terminated upon the occurrence of one or more configured termination events. The events that terminate a sidebar thread may be configured by the sidebar thread initiator, by an administrator of the communication system, or the like. Example termination conditions may include an approval or rejection of the proposed principal thread message by a particular sidebar thread participant; approval or rejection of the proposed principal thread message by a specified number of sidebar thread participants; expiration of a specified amount of time; a specified number of sidebar thread messages being posted to the sidebar thread; or the like. In still other examples, events from the principal thread may cause a termination event of the sidebar thread. For example, a reply after the first reply message110may terminate the sidebar thread. Based upon the termination event, the sidebar thread may, or may not, post a message to the principal thread. For example, some termination events cause the sidebar thread to terminate without posting any messages from the sidebar thread (e.g., any proposed principal thread messages) to the principal thread, such as when the proposed principal thread message was rejected by the participants in the sidebar thread. Some termination events may cause the proposed principal thread message to become an accepted principal thread message and be posted to the principal thread. In some examples, the termination events that cause no messages to be posted to the principal thread and the termination events that cause posting to the principal thread may be configured by the sidebar thread initiator, a system administrator, or the like. In some examples, the system may allow a sidebar thread recipient to make edits to the proposed principal thread message.
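As a non-limiting sketch of the role-dependent action lists described above, a communication service might filter the available actions roughly as follows. The role values (superior, peer, subordinate), the action names, and the get_role directory lookup are assumptions introduced here for illustration only and are not elements of the disclosure.

```python
from enum import Enum, auto

class Action(Enum):
    APPROVE = auto()
    REJECT = auto()
    REPLY = auto()
    REPLY_ALL = auto()
    EDIT_PROPOSED_MESSAGE = auto()
    FORWARD = auto()

# Hypothetical relative-role values returned by a directory service lookup.
SUPERIOR, PEER, SUBORDINATE = "superior", "peer", "subordinate"

def allowed_actions(participant: str, initiator: str, get_role) -> set[Action]:
    """Return the actions a sidebar participant may take, based on the
    participant's organizational role relative to the sidebar initiator."""
    relation = get_role(participant, initiator)  # e.g., queries a directory service
    base = {Action.REPLY, Action.REPLY_ALL}
    if relation == SUBORDINATE:
        return base
    if relation == PEER:
        return base | {Action.APPROVE, Action.REJECT, Action.FORWARD}
    # A superior (e.g., the initiator's boss) gets the full set, including edits.
    return base | {Action.APPROVE, Action.REJECT, Action.FORWARD,
                   Action.EDIT_PROPOSED_MESSAGE}
```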
Upon reaching a termination event of the sidebar thread, in some examples, these changes may be accepted automatically and be incorporated into the accepted principal thread message posted to the principal thread. In other examples, the edits may need to be approved by one or more sidebar thread participants (e.g., the initiator of the sidebar thread) in order to be incorporated into the accepted principal thread message posted to the principal thread. In some examples, whether the edits are accepted automatically or need additional approval may depend on a relative role of the editing participant in an organization as compared with the initiator. For example, if the original sidebar message160is sent by an employee, and the edit in the first sidebar reply message165is made by the employee's boss, then the edit may be accepted without approval. The relationship and roles may be determined by the communication system through communication with a directory service. Edits that require approval may be sent to participants of the sidebar thread and may include notations (e.g., redline) showing edits to the original. These notations may be stripped from the proposed reply message prior to posting to the principal thread. In the example ofFIG.1, once the proposed principal thread message is approved, the proposed principal thread message becomes the approved principal thread message, which may be posted directly to the principal thread100as if it came from the initiating user that created the sidebar thread. In the example ofFIG.1, the edited proposed principal thread message171is posted to the principal thread as a new message120in the principal thread. This posting may include a copy of the original message124and a copy of the first reply message122. FIG.2illustrates an example logical diagram of a sidebar thread250according to some examples of the present disclosure.FIG.2is similar toFIG.1, except that while the sidebar thread250is ongoing, a second reply message215is received in the principal thread200. The principal thread200ofFIG.2is started by an original message205. A first reply message210includes a copy of the original message. A participant in participant group A chooses to initiate a sidebar thread from the first reply message210by sending a first sidebar message260. The sidebar message260creates a sidebar thread250that forks off the principal thread at the first reply message210. Note that a sidebar thread may be launched from the original message205as well, or from any other message in the principal thread200. In some examples, a user may select one of the various messages of the thread to use as a fork point for the sidebar thread. As the sidebar thread continues, producing a sidebar reply265, but before the proposed second reply message is posted to the principal thread, a participant in the principal thread posts a second reply message215. In some examples, the participants in the sidebar thread are all notified at230of the new reply. For example, the second reply message may be copied into a displayed version of a sidebar message (e.g., as if the sidebar thread forked from the second reply message). In other examples, other notifications, such as emails, attachments, or other methods of communicating the second reply message215to participant group B may be used.
Once a termination event of the sidebar thread250is identified, the accepted principal thread message may be posted as a third reply message225as a reply to the second reply message215OR may be posted as a third reply message220to the first reply message210. The selection of where to post the accepted principal thread message may be specified when the sidebar thread is initiated (e.g., the initiator may specify to post the response directly to where the sidebar was forked, or to a latest message in the principal thread); or may be specified when the sidebar thread250terminates. In some examples, the position in the principal thread where the sidebar thread is forked may be the currently selected message. For example, in an email client, a button similar to the reply button may allow for the creation of sidebar threads forking from the currently displayed message. In other examples, the system may allow a participant to initiate a sidebar thread creation option and then select which message in a principal thread to fork from. Likewise, the initiator may choose where to post the accepted principal thread message.FIG.3illustrates an example logical diagram of a sidebar thread according to some examples of the present disclosure where the point in the principal thread where the sidebar thread is forked from as well as the point in the principal thread where the accepted principal thread message is posted may be selected or otherwise configured (e.g., settings). Principal thread300is created when original message310is sent to the participant group. A participant may reply with a first reply message315. First reply message315may include a copy of the original message317. A participant in group A may reply to the first reply message315and create a second reply message320. The second reply message may include a copy of the first reply message322and a copy of the original message324. A participant in the principal thread may then submit an input that forks a sidebar thread350. The initiator of the sidebar thread may select one of the messages (e.g.,310,315, or320—note that message330has not yet been posted when the sidebar thread is initiated) of the principal thread to serve as the fork point and initial thread of the sidebar thread. The message history of the thread from the original message310to the parent thread of the sidebar thread may be included in the first sidebar message355. For example, if the initiator of the sidebar thread selects the original message310as the fork point, the principal thread message history357,367included in the sidebar thread may only include a copy of the original message310. In contrast, if the initiator selects the second reply message320as the fork point, the principal thread message history357,367may include copies of the original message310, first reply message315, and second reply message320. As previously noted, the first sidebar message355may include a principal thread message history357with copies of the messages of the principal thread, a proposed principal thread message361, and a sidebar message363. A reply to the first sidebar message355, sidebar first reply message365, may include a sidebar reply message375and a copy of the first sidebar message373. In the example ofFIG.3, the sidebar first reply message365does not propose any edits to the proposed principal thread message361. 
A termination event of the sidebar thread350may occur and be detected by the communication service and the proposed principal thread message361,371may become the accepted principal thread message that is then posted to the principal thread300. While the sidebar thread350was ongoing, a participant of the principal thread may have left a third reply message330as a reply to the second reply message. Third reply message330may include copies of the original message332, first reply message334, and second reply message336. As a result of the termination condition, the proposed principal thread message361,371may be posted as a reply340to any message later than, and including the fork point. The selection of which may be based upon user settings, a user selection when forking the sidebar thread, a user selection when posting the proposed principal thread message, or the like. In some examples, the proposed principal thread message361,371may be posted as the accepted principal thread message as a reply to any message in the thread. As shown inFIG.3, if the user forked the sidebar thread from message324, the accepted principal thread message may be posted as a reply from message324, or third reply message330. Accepted principal thread message340may include a copy of the original message346, a copy of the reply messages 1 and 2344, and in some examples, a copy of the third reply message342(as well as the accepted principal thread message). FIG.4illustrates a diagram of a graphical user interface (GUI)400showing a message display interface with a user control for creating a sidebar thread according to some examples of the present disclosure. The GUI400shows a received message in a message window410. Address information panel415shows who the message is from (“Scott James”), who the message is to (“Newsletter List”, “James Jones”, and “Chester McCarthy”) as well as who the message was carbon copied (CC'd) to (“Daphne Jones”). A user may use a UI control in a toolbar405to take various actions on the message. For example, the user may delete the message, archive the message, reply to the sender, reply to all recipients and the sender, forward the message to other recipients, post the message to a unified communications platform (as shown MICROSOFT® TEAMS®), file the message in one of six different folders (e.g., “Quick Steps”), move the message, apply rules or other actions, or export the message to a note taking application (as shown MICROSOFT® ONENOTE®). The toolbar405may also include a sidebar fork button420. Sidebar fork button420may fork a sidebar thread from the email shown in the message window410. The button may include a drop-down menu which allows for specifying one or more parameters of the sidebar thread, such as termination conditions and allowed recipient actions (e.g., edit the proposed principal thread message, approve, reject, etc. . . . ). FIG.5illustrates a diagram of a graphical user interface (GUI)500showing a message display interface with a user control for creating a sidebar thread with a deployed dropdown menu510according to some examples of the present disclosure. GUI500is the GUI400with the dropdown menu510deployed. Dropdown menu510may include one or more options, such as a selection of termination events for the sidebar thread. 
Examples include sending the final message on approval from all recipients512, some recipients, or certain recipients; on approval, but setting a maximum time for approval (15 minutes is shown) in box514—thus if approval or disapproval is not received within 15 minutes, the final message is posted. Another termination condition may include a maximum number of sidebar emails (e.g.,4emails)516. Once the limit is reached, the current final message is sent. Additional options may be accessed by selecting the additional option item518. Once the sidebar thread option is selected, a sidebar thread message creation UI is displayed.FIG.6illustrates a diagram of a GUI600showing a sidebar thread message creation UI according to some examples of the present disclosure. In some examples, the termination conditions selected using the dropdown menu510may be configured in GUI600. A user may enter the recipients of the accepted principal thread message (if the sidebar thread results in an accepted principal thread message) in the principal thread recipients input box610. The principal thread recipients are the participants of the principal thread that receive the accepted principal thread message after the conclusion of the sidebar thread if the termination condition results in acceptance of the proposed principal thread message. While in some examples, these participants will be all of the participants of the primary thread, in other examples, these participants may be a subset of the participants of the primary thread. In still other examples, these participants may add additional participants. In instances where the user selected a sidebar thread creation option for a particular message of an existing principal messaging thread, the principal thread recipients input box610may be pre-filled in with the same participants as a reply, or reply all command from the particular message. The principal thread recipients input box610may have input elements for direct recipients612(TO), indirect recipients614(CC—carbon copy), and a subject box616. In still other examples, the principal thread recipient(s) may not already be members of the principal thread. In these examples, the principal thread recipient that is not already a member of the principal thread may be added as a participant to the principal thread upon the accepted principal thread message being posted to the principal thread. In still other examples, the recipient specified may not be a principal thread participant and may not be made a principal thread participant. While shown as email addresses, the addresses of participants in sidebar threads and/or principal threads and the like may be email addresses, usernames, IP addresses, phone numbers, or the like. A proposed principal thread message may be entered into input box618. The proposed principal thread message input box618may have both the proposed principal thread message620, and, in instances where the principal thread is an existing messaging thread, one or more messages of the existing messaging thread. For example, the particular message and parent messages may be displayed. In addition, the GUI600may include sidebar recipients input box625, including input areas for direct recipients627, indirect recipients629, and a subject box631of the sidebar thread. In addition, sidebar message text630. Sidebar message text630may not be posted to the principal messaging thread upon termination of the sidebar thread. Send button650may send the sidebar thread message and start the sidebar thread. 
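The termination options offered by dropdown menu510 (approval from all recipients512, a time limit514, and a maximum number of sidebar messages516) could be captured in a small settings object and checked against the current state of the sidebar thread, as in the following sketch. The field names, types, and default values are assumptions made for illustration; an actual communication service would use its own configuration representation.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import Optional

@dataclass
class TerminationSettings:
    require_approval_from_all: bool = True           # dropdown option 512
    approval_time_limit: Optional[timedelta] = None  # option 514 (e.g., 15 minutes)
    max_sidebar_messages: Optional[int] = None       # option 516 (e.g., 4 emails)

def termination_reached(settings: TerminationSettings,
                        approvals: set[str],
                        recipients: set[str],
                        started_at: datetime,
                        message_count: int) -> bool:
    """Return True when any configured termination condition is satisfied."""
    if settings.require_approval_from_all and recipients and recipients <= approvals:
        return True
    if settings.approval_time_limit is not None and \
            datetime.utcnow() - started_at >= settings.approval_time_limit:
        return True
    if settings.max_sidebar_messages is not None and \
            message_count >= settings.max_sidebar_messages:
        return True
    return False
```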
While GUIs500and600were described as being displayed responsive to a received message and thus the sidebar thread was forked from an existing principal communication thread, in other examples, sidebar threads may be created as a way to seek input on the creation of a new principal thread. For example, the GUI600may be activated in response to selection of a "new sidebar thread" option. In these examples, once the proposed principal thread message is approved as the accepted principal thread message, that message is sent as a first message in a new thread to the recipients listed in the principal thread recipients input box610. In addition to the GUI600, other sidebar thread creation GUIs may be used. For example,FIG.7illustrates a sidebar thread message creation user interface700which shows a sidebar thread message creation dialog. The message creation user interface700may include a sidebar thread address and subject input box705which specifies the sidebar thread participants. The sidebar message may include an automatically generated message710that informs the sidebar recipients that this is a sidebar message thread. The sidebar message thread message area712may be marked by control text delineated from other text with asterisks. Similarly, the proposed principal thread message may also be delineated with control text such as asterisks at area714. Previous principal thread history may be at area718. These delineations may be automatically created by the system when the initiator initiates the sidebar thread, may be manually placed by the initiator, or may be put in by the system at a cursor position of the user within the sidebar message based upon the sidebar start720, sidebar end722, principal message start724, and principal message end726buttons. That is, upon pressing the sidebar start720button, the system may paste the "***start sidebar message area ***" text within the message. In the example ofFIG.7, the principal thread recipients may be a reply or reply all of the message in the principal thread from which the initiator forked the sidebar thread. In other examples, the principal thread recipients may be specified using control text or may be specified later using a UI element. FIG.8illustrates another example of a sidebar thread message creation user interface700with a sidebar thread message creation dialog according to some examples of the present disclosure. Instead of the control text ofFIG.7, inFIG.8the GUI800allows users to highlight text805and then press either a keyboard shortcut key or a UI button (not shown) to set the sidebar message area.
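A minimal sketch of how the control-text delineation ofFIG.7 might be parsed into its constituent areas is shown below. Only the "***start sidebar message area***" marker is quoted above; the end marker and the principal-message markers are assumptions introduced here for illustration.

```python
import re

# Assumed marker strings; an implementation might insert these when the
# sidebar start/end and principal message start/end buttons are pressed.
SIDEBAR_START = "***start sidebar message area***"
SIDEBAR_END = "***end sidebar message area***"
PRINCIPAL_START = "***start principal message area***"
PRINCIPAL_END = "***end principal message area***"

def extract_between(text: str, start: str, end: str) -> str:
    """Return the text between two control markers, or '' if absent."""
    pattern = re.escape(start) + r"(.*?)" + re.escape(end)
    match = re.search(pattern, text, flags=re.DOTALL)
    return match.group(1).strip() if match else ""

def split_composed_message(body: str) -> dict:
    """Split a composed sidebar message into its delineated areas."""
    return {
        "sidebar_message": extract_between(body, SIDEBAR_START, SIDEBAR_END),
        "proposed_principal_message": extract_between(body, PRINCIPAL_START, PRINCIPAL_END),
    }
```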
This means that the sidebar thread may be forked from message925and the sidebar thread may include the message history of messages prior to and including message925. Marker910may mark where in the principal thread an accepted principal thread message will be posted (if any). In some examples, the user may specify where the accepted principal thread message from the sidebar thread will be posted when forking the sidebar thread. In other examples, the user may specify where the accepted principal thread message from the sidebar thread will be posted when the termination event in the sidebar thread occurs. WhileFIG.9illustrated a sidebar thread initiator selecting the return point when the sidebar thread is created, in other examples the selection of the fork and/or return point may be made using a GUI such as GUI900at different times, for example, just before posting the approved principal thread message from the sidebar thread. While the initiator may be the person to select the fork and/or return points, in other examples, other participants of the sidebar thread may be the ones to select the fork and/or return point, for example, as part of an approval process. FIG.10illustrates a GUI1000of a message screen shown to a sidebar thread participant when receiving a sidebar message of a sidebar thread according to some examples of the present disclosure. The GUI1000may include a toolbar1005, a sidebar thread message display area1015, and a principal thread proposal message display area1020. The sidebar thread message display area1015shows the sidebar thread messages and the principal thread proposal message display area1020may show the proposed principal thread message, and in some examples, any edits made to the proposal (e.g., using track changes). The principal thread proposal message display area1020may also show one or more previous messages of the principal thread, such as a message history of messages prior to the point at which the sidebar thread forked from the principal thread. Actions controls1010may provide the sidebar message recipient with one or more options for taking action on the received sidebar message. The available actions may vary based upon the identity of the sidebar message recipient. For example, the actions shown may not be valid for all users. The valid actions may depend on settings of the sidebar thread initiator, who may specify valid actions for all users, valid actions for particular users (e.g., one user may be able to edit the proposed principal thread message while another user may not based upon the settings specified by the initiator), or the like. As shown inFIG.10, the actions include a reply action and a reply-all action. The reply action replies to the sender of this sidebar thread message. In the case of a first message in the sidebar thread, the reply button would reply to the sidebar initiator. A reply-all action may reply to all participants in the sidebar thread. The reply message may show a GUI like GUI600which allows the sidebar participant to edit the proposed principal thread message, contribute a sidebar message (and the sidebar message box may include the sidebar message history), add or remove sidebar recipients and/or the principal thread participants (for when the accepted principal thread message is posted to the principal thread, if that happens), and the like. In some examples, and as already noted, the particular sidebar participant may not have authorization to edit the proposed principal thread message or the principal thread participants.
In some examples, the particular sidebar participant may not have authorization to reply or reply-all. In some examples, the actions may include approving or disapproving the currently displayed principal thread proposal. The approval or disapproval actions for participants may be available for each new message in the sidebar thread. That is, a first sidebar message may be approved, disapproved, replied, replied-all, edited, and the like. An approval signifies that the proposed principal thread message is acceptable for this participant. Disapproval signifies that the proposed principal thread message is not acceptable for this participant. Both approval and disapproval may allow the participant to propose edits to the principal thread proposal. In some examples, an edit to the proposed principal thread message may invalidate all previous approvals or disapprovals. As previously described, one possible sidebar thread termination event may include approval or disapproval of the proposed principal thread message. In some examples, a certain prespecified number or percentage of sidebar thread participants must approve for the proposed principal thread message to become an accepted principal thread message that is then posted to the principal thread. If the required number or percentage is not reached, then the sidebar thread may continue—e.g., edits may be made by one or more participants to the proposed principal thread message or conversations with sidebar thread messages may occur until either the required number or percentage is reached that approve, a required number or percentage is reached that disapprove, a time limit, or a sidebar thread message limit is reached, or some other termination event is reached. As noted, once a particular version of the proposed principal thread message achieves enough approvals, it may be posted to the principal thread as an accepted principal thread message. Additionally, in some examples, and as already noted, only certain sidebar thread participants may have approval or disapproval power. FIG.11illustrates a logical diagram of a sidebar thread that takes place on a different communication modality than the principal thread according to some examples of the present disclosure. An original message1110serves as a root message of a principal conversation thread1105. A first reply message1112is created and a communication participant in the principal thread decides to fork the first reply message1112to a sidebar thread1150. The sidebar thread commences with message1152and a reply message1154. The reply message1154terminates the sidebar thread1150and the proposed principal thread message becomes an accepted principal thread message and is posted to the principal thread1105with message1114. In some examples, the principal thread is hosted on a first communication modality and the sidebar thread is hosted on a second communication modality. For example, the principal thread may be an email thread and the sidebar thread may be a message board. In some examples, the system may suggest a sidebar thread of a different communication modality. For example, based upon a number of common participants between the sidebar thread (as entered by the initiator when forking the thread) and the second communication modality, a similarity of topics between the sidebar thread and one or more threads of the second communication modality (e.g., as determined by a Natural Language Processing algorithm, Latent Dirichlet Analysis, or the like). 
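The approval accounting described above, in which a prespecified number or percentage of approvals promotes the proposal to an accepted principal thread message and an edit invalidates earlier votes, might be sketched as follows. The field names, the fractional threshold, and the reset-on-edit behavior are assumptions consistent with, but not required by, the examples above.

```python
from dataclasses import dataclass, field

@dataclass
class ProposalState:
    text: str
    version: int = 0
    approvals: set[str] = field(default_factory=set)
    rejections: set[str] = field(default_factory=set)

    def record_edit(self, new_text: str) -> None:
        """An edit produces a new version and invalidates prior votes."""
        self.text = new_text
        self.version += 1
        self.approvals.clear()
        self.rejections.clear()

    def record_vote(self, participant: str, approve: bool) -> None:
        (self.approvals if approve else self.rejections).add(participant)
        (self.rejections if approve else self.approvals).discard(participant)

def accepted(state: ProposalState, voters: set[str], min_fraction: float = 0.5) -> bool:
    """True when the fraction of eligible voters approving meets the threshold."""
    if not voters:
        return False
    return len(state.approvals & voters) / len(voters) >= min_fraction

def rejected(state: ProposalState, voters: set[str], min_fraction: float = 0.5) -> bool:
    return bool(voters) and len(state.rejections & voters) / len(voters) >= min_fraction
```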
The suggestion may be presented to the user either before or after the user activates a control to send the sidebar message to create the sidebar thread. The system may post the entire sidebar thread to the second communication modality and then post a message from that second communication modality back to the principal thread on the first communication modality (as shown inFIG.11). In other examples, the system may post the sidebar thread to the first communication modality and a link to the sidebar thread to the second communication modality. FIG.12illustrates a logical diagram of a sidebar thread that is forked from a principal thread into an existing second communication thread according to some examples of the present disclosure. A principal thread1200is started by an original message1205among the participants in group A. A member of group A replies to create a first reply message1210. A member of group A then initiates a sidebar thread with a sidebar fork. The initiator may specify one or more sidebar thread participants. The system may scan pre-existing communication threads of the initiator to find similar threads and may prompt the initiator to ask whether the initiator wishes to post the sidebar thread within one of the existing communication threads. Similar threads may be determined based upon a number of common participants between the second communication thread and those entered for the sidebar thread. That is, a similar thread may be one where a percentage or number of common participants exceeds a threshold. In still other examples, the subject of the thread may be compared with the subject of the sidebar communication and/or the principal thread. Similarity may be judged based upon a similarity score using an NLP algorithm. Similarity scores above a threshold may be used to indicate similar threads. In yet other examples, a similarity of the contents of the principal thread, the sidebar thread, and the second communication thread may be assessed using an NLP algorithm to produce the similarity score. Similarity scores above a threshold may be used to indicate similar threads. In still other examples, two or more of common participant measurements, similarity scores of the subject, and similarity scores of the contents of the threads may be used to suggest similar threads. User selection of the similar threads may be used to refine the NLP models, or to adjust the thresholds. The second communication thread1250may be started by an original message1252and may have a first reply1254. The sidebar thread1256may then be merged into the second communication thread1250, for example, by posting the first sidebar message1258as a reply, or reply-all, to a message of the second communication thread1250, such as the first reply1254. The first sidebar message1258may include a second thread message history1260, a principal thread message history1262, the proposed principal thread message1264, and a sidebar message. A participant of the sidebar thread may reply. The first sidebar reply message1266may include a copy of the second thread message history1268, the principal thread message history1270, the sidebar message thread history1272, and the proposed principal thread message1274(either the original or as edited by the reply).
Once the termination event is detected, the sidebar thread terminates and either the proposed principal thread message (either the original or as-edited based upon the configuration and approvals received) is posted to the principal thread as accepted principal thread message1215or nothing is posted. In the example ofFIG.12, the message is posted as a reply to the first reply message from the initiator in participant group A. As previously stated, the message may be posted to the principal thread as coming from a first user (the initiator) automatically based upon an approval of a second user to the proposed principal thread message. Once the sidebar thread terminates, the second communication thread may continue with third reply1276. In some examples, the third reply1276may be a reply from the first sidebar reply message1266and may preserve the history of the sidebar thread. In other examples, it may be a reply from the first reply1254and the history of the sidebar thread1256may be removed from the second communication thread. In some examples, the records of the messages of the sidebar communication thread may be removed from the participants of the sidebar thread. For example, messages and other records from the sidebar thread may be deleted from the inboxes, outboxes, or other folders of the participants. In other examples, the messages and records of the sidebar communication thread may be saved or preserved in a special folder (e.g., an archive folder). In yet other examples, whether the sidebar thread messages and records are removed, saved, or moved to a different place may be specified by settings of the initiator, an administrator, or the like. FIG.13illustrates an example of data structures corresponding to a sidebar thread according to some examples of the present disclosure. Sidebar thread communication message1310may store information related to a single sidebar thread communication message. Sidebar header1312may identify the message as part of a sidebar conversation and may indicate addresses of recipients of the sidebar thread message. The sidebar header1312may also include the subject of the sidebar thread communication message. The sidebar message body1314may store the sidebar message body contents, including the new sidebar thread communication message and in some examples (e.g., email) past sidebar thread communications. The principal thread header1316may include the recipients of an accepted principal thread message (if the termination event results in posting the proposed principal thread message as the accepted principal thread message) which may be all the participants of the principal thread or may be a subset of the participants of the principal thread. The recipients of the principal thread may include additional, newly added participants not already participants of the principal thread. The principal thread message body1318may include principal thread history that is shown to the sidebar participants. The history may be all the messages of the principal thread, a summary of the messages of the principal thread (either entered by the initiator manually, or by a summarization algorithm), a link to thread history, or the like. The proposed principal thread message1320is the current proposed principal thread message that will be posted as the accepted principal thread message if the termination event of the sidebar thread indicates acceptance. 
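As a non-limiting sketch, the sidebar thread communication message1310and its fields1312through1320 described above might be represented as a simple record such as the following; the concrete field types (dictionaries and strings) and the example values are assumptions made for illustration.

```python
from dataclasses import dataclass

@dataclass
class SidebarThreadCommunicationMessage:   # message 1310
    sidebar_header: dict                   # 1312: sidebar flag, sidebar recipients, subject
    sidebar_body: str                      # 1314: new sidebar message plus past sidebar messages
    principal_header: dict                 # 1316: recipients of an accepted principal thread message
    principal_body: str                    # 1318: principal thread history, summary, or link
    proposed_principal_message: str        # 1320: the current proposal

# Illustrative usage with placeholder addresses and text.
msg = SidebarThreadCommunicationMessage(
    sidebar_header={"is_sidebar": True, "to": ["dave@example.com"], "subject": "Sidebar: draft reply"},
    sidebar_body="Looks good, but hold off on the Texas announcement.",
    principal_header={"to": ["newsletter-list@example.com"]},
    principal_body="(copy or summary of the principal thread)",
    proposed_principal_message="Thanks all; here is our consolidated response.",
)
```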
Sidebar thread communication message1310may be created by one or more communication clients or the communication server, stored on a communication server (e.g., in a mailbox), and then transmitted to one or more communication clients for display to sidebar thread participants. In other examples, the sidebar thread communication message1310may be converted to/from other structures as appropriate for the Application Programming Interface between the communication clients and communication server. Sidebar thread data structure1322may be stored on the server and/or clients and may be a record of the sidebar thread. Proposed principal thread message1324may store the current (as edited) proposed principal thread message1324. In some examples, the proposed principal thread message1324may store an edit history of the proposed principal thread message1324that may provide a view of each edit and which participant of the sidebar thread made the edit. Sidebar thread message list1326may be a header of a linked list of sidebar thread communication messages1310. Principal thread pointer1328may point to a message in the principal thread or to a principal thread structure that points to the messages in the principal thread. Setting structure1330includes or points to a setting structure, such as settings structure1340. In some examples (and not shown), each sidebar thread communication message may include a pointer to a next message in a list. Thus, the sidebar thread message list1326may be a linked list with the sidebar thread message list1326being a list head. Settings structure1340may include an allowed actions field1342. The allowed actions field1342may specify allowed actions for participants of the sidebar thread. The allowed actions may be for all participants or may be broken down by user such that some users have different actions they can take. Termination events1344specifies the events that terminate the sidebar thread, which of those events (and under what conditions) cause the proposed principal thread message to be accepted as the accepted principal thread message and posted to the principal thread, and which of the termination events (and under what conditions) cause the proposed principal thread message not to be accepted as the accepted principal thread message and thus not posted to the principal thread. In some examples, the sidebar thread may be serviced entirely by a same communication service. For example, all participants of a sidebar thread may obtain the communication service from a same communication service. In these examples, computing devices of the communication service may provide the sidebar thread, monitor for termination events, and post the result to the principal thread.FIG.14illustrates a logical diagram1400of a group of principal thread participants1405and a group of sidebar communication thread participants1410according to some examples of the present disclosure. Participants are represented by user accounts of the communication service. For example, the principal thread participants include user account A1415, user account B1420, and user account C1425. User accounts may be logged into and viewing communications using communication applications executing on one or more devices corresponding to, or owned by, the users that own the user accounts. User account C1425is the initiator user account. The sidebar communication thread participants1410include the initiator user account, user account C1425, user account D1435, and user account E1430.
In the example ofFIG.14, all the thread participants are serviced by a same communication service1440. In these examples, the communication service1440may provide both the sidebar thread and the principal thread. The communication service1440may handle detection of the termination event of the sidebar thread and posting the accepted principal thread message to the principal thread. For example, when a termination event is detected, such as an approval by the user accounts D and/or E of the proposed principal thread message, the communication service1440may make the proposed principal thread message the accepted principal thread message and post that to the principal thread on behalf of (but not directly sent by) the user account C1425. This post is made as if directly made by the user account C1425, but in actuality may be responsive to an action of another user account of the sidebar communication thread participants1410. FIG.15illustrates a logical diagram1500of a group of principal thread participants1505and a group of sidebar communication thread participants1510according to some examples of the present disclosure. One or more of the user accounts of either or both of the principal thread participants1505or the sidebar communication thread participants1510may be hosted by first communication service1540and/or second communication service1570. In examples in which accounts serviced by multiple communication services are involved in the sidebar thread, both communication services and/or applications may be configured to provide sidebar threads and to recognize that a message addressed to a communication recipient hosted by the communication service is a sidebar thread message. This allows the communication service and/or the applications to notify the participants that this is a sidebar thread and provide a GUI such as that shown inFIG.10. In addition, having both communication services support sidebar threads allows the communication service and/or the applications to provide the appropriate actions to the participants, to detect various sidebar thread termination events, and to send the accepted principal thread message to the principal thread when a termination event causes the proposed principal thread message to become the accepted principal thread message. Some termination events may be tracked by only one communication service. For example, a time-based termination event may be monitored by the communication service hosting the initiator user's account. In some examples, upon detection of a termination event of the sidebar thread that results in the proposed principal thread message becoming an accepted principal thread message, and if the communication service that hosts the initiator account is not the one that detected the termination event, the communication service that detects the termination event may contact the communication service that hosts the initiator account to inform that communication service that the proposed principal thread message should be sent to the principal thread on behalf of the initiator as the accepted principal thread message. Communication services may communicate with each other using one or more Application Programming Interfaces (APIs) and/or one or more standard messages, such as those specified by one or more Request for Comments (RFC) documents that may describe or standardize message exchanges and formats for implementing communications such as email. These RFCs may be updated to provide for sidebar threads.
FIG.16illustrates logical diagrams of a communication service1605and a communication application1655according to some examples of the present disclosure. One or more computing devices may be configured to implement the communication service1605and communication application1655, for example, by software. While certain components are shown inFIG.16, it will be appreciated by one of ordinary skill in the art that additional components not shown may be part of the communication service1605and the communication application1655. In addition, the logical diagrams inFIG.16are exemplary and fewer or more components may be used and functionality described for one or more components may be performed by one or more other components. Communication service1605includes components (which may not be shown for clarity inFIG.16) which may, along with the communication application1655, provide communications between one or more user accounts of one or more users of both this communication service and one or more other communication services over one or more communication modalities. Communication modalities may include email, text, chat, instant messaging, message boards, Voice over Internet Protocol (VoIP), video calling, online meetings, and the like. Communication application1655may interface with the communication service1605to provide the communication service to client computing user devices of the user accounts. Communication service1605may include a message posting component1610which may post one or more messages, including messages for a principal thread, a sidebar thread, or other threads. Messages may be posted in a variety of ways, including adding the message to a mailbox of a user or a message store for a group in the message data store1615, sending a message to a communication application1655, sending a message to another communication service, or the like. Posting, as used herein, means a method of delivering a message using a particular communication modality. For example, for email, posting means sending an email; for instant messaging, posting means sending an instant message; for message boards, posting means publishing a message to a message board; or the like. Communication service1605may include a sidebar thread management component1620that may manage sidebar threads, such as creation of sidebar threads, termination of sidebar threads, communication with other communication services with respect to sidebar threads, posting the accepted principal thread message in the principal thread, and the like. For example, when the communication service1605receives a command to create a sidebar communication thread forked from a principal communication thread, the sidebar thread creation component1625of the sidebar thread management component1620may receive (as part of the command, or as the command) the sidebar thread communication message1310ofFIG.13and in response, may create a sidebar thread data structure1322. The sidebar thread creation component1625may initialize a settings structure1330with settings as received in the creation command or as subsequently entered by a user or sent with a different command. The creation command may be a specific creation command (which may include a sidebar thread message), or a sidebar thread message (which may be identified using one or more header fields or flags) sent to sidebar thread participants. 
In addition, the proposed principal thread message may be extracted and added to the proposed principal thread message1324of the sidebar thread data structure1322. As sidebar messages are posted by sidebar thread participants, the sidebar thread management component1620may add them to the sidebar thread message list1326. In addition, the sidebar thread management component1620may fill out and track the other elements of the data structures ofFIG.13. In some examples, the principal message selection component1630manages selection of both the fork point for the sidebar thread within a principal message and a return point if a proposed principal thread message is accepted and posted to the principal thread as the accepted principal thread message. In some examples, messages of the communication service include both a thread identifier (which may be termed a conversation identifier) and a message identifier. Upon receipt of the command to create a sidebar thread, the command may include a thread identifier of the principal thread and a message identifier of which message to fork from. In some examples, this corresponds to the principal message to which the user selected a sidebar thread fork option. In other examples, this may be a different message as selected by the principal message selection component1670of the communication application1655. The principal message selection component1630of the communication service1605may determine the message that the sidebar thread is forked from using one or both of the thread identifier and the message identifier by searching the message data store1615for the message matching the thread identifier and/or message identifier. The principal message selection component1630may then determine a communication of the principal thread by extracting the message body from a message body field in the message data structure. This communication may then become part of the sidebar thread. The principal message selection component1630may then determine principal thread recipients to the sidebar communication using one or more recipient fields in the message data structure. In some examples, the same procedures may apply to the merge point where the accepted principal thread message may be posted to the principal thread. That is, the principal message selection component1630may determine the message of the principal thread that the sidebar thread is merged back to using one or both of the thread identifier and the message identifier by searching the message data store1615for the message matching the thread identifier and/or message identifier. The principal message selection component1630may then determine a communication of the principal thread by extracting the message body from a message body field in the message data structure. This communication may then become part of the message posted back to the principal thread along with the accepted principal thread message. Sidebar thread tracking component1635may track sidebar thread messages and maintain the sidebar thread data structure1322by updating the sidebar thread message list1326, track any settings changes made by the initiator or other users in the settings structure1330, track any changes made to the proposed principal thread message1324, track changes made to principal thread or sidebar thread participants, and/or monitor for termination events specified by the settings in the setting structure1330. 
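The lookup performed by the principal message selection component1630, which locates the fork (or merge) message by thread identifier and message identifier and then extracts its body and recipients, might resemble the following sketch. The in-memory store, record type, and field names are placeholders for whatever representation the message data store1615 actually uses.

```python
from dataclasses import dataclass
from typing import Iterable, Optional

@dataclass
class StoredMessage:
    thread_id: str
    message_id: str
    body: str
    recipients: list[str]

def find_fork_message(store: Iterable[StoredMessage],
                      thread_id: str,
                      message_id: str) -> Optional[StoredMessage]:
    """Search the message data store for the principal thread message
    matching both the thread identifier and the message identifier."""
    for msg in store:
        if msg.thread_id == thread_id and msg.message_id == message_id:
            return msg
    return None

def fork_context(store, thread_id, message_id):
    """Return the communication and recipients that seed the sidebar thread."""
    msg = find_fork_message(store, thread_id, message_id)
    if msg is None:
        raise LookupError("no principal thread message matched the identifiers")
    return {"principal_history": msg.body, "principal_recipients": msg.recipients}
```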
Sidebar thread communication component1640communicates with one or more other communication services in order to provide the sidebar thread as previously described with reference toFIG.15. The sidebar thread communication component1640may use one or more APIs, standard formats (such as RFCs), protocols, or the like. Communications may include messages of the sidebar thread, the principal thread, settings information, termination event information, or the like. Similarity component1645may find and recommend to the initiator one or more other similar threads to the sidebar thread or the principal thread, whether in a same communication modality or a different modality. In some examples, the similarity component may contact other communication services to find these threads involving the sidebar participants, the principal thread participants, or the like. In some examples, similarity may be determined based upon the number of common participants between the principal thread or the sidebar thread and the similar thread. Threads with a number of common participants above a threshold may be suggested for an initiator. In other examples, a similarity score may be calculated that may quantify a calculated similarity in the threads based upon a number of factors. In some examples, the factors may be weighted. In some examples, the similarity score may be calculated as a weighted sum of the factors. Factors include a number of common participants between the principal thread or the sidebar thread and the similar thread; a calculated textual similarity between subjects of the threads (e.g., in the subject line of the email, the topic description of a chat room or message board, or the like); a calculated textual similarity in the contents of the threads; a number of same attachments in the threads (e.g., as determined by file fingerprints, names of the files, sizes, and/or the like); and the like. The similarity score of text may be measured by one or more NLP algorithms as described herein. As noted, the termination events may be monitored by the sidebar thread tracking component1635. Upon detecting a termination event, the sidebar thread tracking component1635may evaluate whether the event caused the proposed principal thread message to become the accepted principal thread message. If not, then the sidebar thread tracking component1635may clean up the sidebar thread. Cleanup may include deleting or archiving one or more of the data structures ofFIG.13, hiding the sidebar thread from communication applications of participants, removing the sidebar thread from communication applications of participants, and the like. Should the event cause the principal thread message to become the accepted principal thread message, then the sidebar thread tracking component1635notifies the principal thread composition component1650. Principal thread composition component1650may create the post to the principal thread upon the sidebar thread experiencing a termination event causing the proposed principal thread message to become the accepted principal thread message. Example termination events include an acceptance of the proposed principal thread message. Such a posting may be done responsive to the approval or modification to the proposed principal thread message and in some examples, may be done automatically in response to the approval or modification to the proposed principal thread message. 
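A weighted-sum similarity score over the factors listed above for similarity component1645 might be computed roughly as follows. The specific weights, the Jaccard measure used for participant and attachment overlap, and the text_sim callable standing in for an NLP model are assumptions; the disclosure leaves the particular algorithm and weighting open.

```python
def jaccard(a: set, b: set) -> float:
    """Overlap between two sets, from 0.0 to 1.0."""
    return len(a & b) / len(a | b) if a or b else 0.0

def thread_similarity(candidate: dict, reference: dict, text_sim,
                      weights=(0.4, 0.3, 0.2, 0.1)) -> float:
    """Weighted sum of the factors described above.

    candidate and reference are dicts with 'participants' (set), 'subject' (str),
    'body' (str), and 'attachments' (set of file fingerprints). text_sim is any
    callable returning a 0-1 textual similarity (e.g., an NLP model)."""
    w_part, w_subj, w_body, w_attach = weights
    return (w_part * jaccard(candidate["participants"], reference["participants"])
            + w_subj * text_sim(candidate["subject"], reference["subject"])
            + w_body * text_sim(candidate["body"], reference["body"])
            + w_attach * jaccard(candidate["attachments"], reference["attachments"]))

def suggest_similar(threads, reference, text_sim, threshold=0.6):
    """Suggest threads whose score exceeds a configurable threshold."""
    return [t for t in threads if thread_similarity(t, reference, text_sim) >= threshold]
```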
The principal thread composition component1650may create a principal thread communication from an address of the initiator of the sidebar thread addressed to the address of one or more principal thread recipients (such as the recipients identified by the principal thread selection component or those identified in the command creating the sidebar thread). The principal thread composition component1650includes the approved principal thread message (e.g., either the initial proposed principal thread message or a modification thereto) as the message body. The principal thread composition component1650may then post this message to the principal thread recipients from the address of the initiator in the principal communication thread. In some examples, the principal thread composition component1650may not include any of the sidebar thread messages in the post to the principal thread. Communication application1655may include a message posting component1657that posts messages (including sidebar thread creation commands or messages) to one or more communication threads by sending a command or other message to the communication service1605where it is handled by the message posting component1610and/or the sidebar thread management component1620in the case that the message or command relates to a sidebar thread. GUI component1665provides for one or more GUIs, such as those inFIGS.4-10. New sidebar thread creation display component1690may provide one or more sidebar thread creation GUIs in response to the selection of a sidebar thread creation UI element. For example, in response to activation of the sidebar fork button420, the new sidebar thread creation display component1690may display one or more GUIs such asFIGS.6,7, and8. Sidebar settings component1675may display one or more GUIs allowing users (e.g., participants of the sidebar thread such as the initiator) to specify one or more settings of the sidebar thread, for example, the dropdown menu510ofFIG.5. Principal message selection component1670may display one or more user interfaces allowing a sidebar participant such as an initiator to select a fork point and/or a return point where the accepted principal thread message from the sidebar thread is posted to the principal thread, for example, using a GUI such as shown inFIG.9. Received sidebar message display component1680displays received messages for a sidebar thread, whether that sidebar thread was initiated with the user of this communication application1655instance, or another user, for example, by displaying a GUI as shown inFIG.10. Action component1685may determine one or more available actions for this user and display UI controls that allow the user to take the actions on a GUI. These actions may be described as metadata in the sidebar message that is received. For example, metadata of the sidebar message may describe actions available to each user. Action component1685may not only determine the actions available to this user but also: display user interface elements to allow the user to take those actions; respond to a user activating the user interface elements; and send an activation indication of the action to the sidebar message command component1695. Sidebar message command component1695may create and send a command to create a sidebar thread to the communication service1605, send one or more action notifications (e.g., accept, approve, reply, edit the proposed principal thread message), or send other sidebar thread messages or commands to the communication service1605.
FIG.17illustrates a flowchart of a method1700of providing a sidebar thread according to some examples of the present disclosure. At operation1710the communication service may receive a command from an initiator user account via a computing device of the initiator to create a sidebar communication thread forked from a first communication thread (the principal communication thread). The principal communication thread may be between a first plurality of principal communication thread participants including the initiator (e.g., the sender of the command to create the sidebar thread). The first plurality of principal communication thread participants may include a final recipient user account. The final recipient user account is designated to receive the accepted principal thread message if the sidebar thread terminates in a manner that causes the proposed principal thread message to become the accepted principal thread message. The final recipient may be all of the participants of the principal thread, some members of the principal thread, a single member of the principal thread, or may not be a member of the principal thread until added by the post from the sidebar thread. In some examples, the sidebar communication thread may be a thread between a second plurality of thread participants including the sender and a sidebar recipient. In some examples, the command to create the sidebar communication thread includes a proposed principal thread message, an address of the final recipient, and an address of a sidebar thread recipient. The command may be in the form of a sidebar thread message, a specific command, or the like and may be received from the initiator's computing device over a network. In some examples, the command may include one or more sidebar thread messages that may be delivered along with the proposed principal thread message to the sidebar participants. The proposed principal thread message and the sidebar thread messages (e.g., the initial sidebar thread message as well as subsequent sidebar thread messages) may be displayed in one or more GUIs in a manner to visually distinguish them from each other. In response, the communication system creates the sidebar thread (e.g., as described with respect toFIG.16) by initializing one or more data structures (such as those shown inFIG.13). At operation1712, the communication system may cause a notification to be sent to a sidebar thread recipient. For example, by sending a sidebar thread message, or other command to the sidebar thread recipient over a communication network. At operation1714, the system may provide the sidebar thread until a termination event occurs. This may include sending and receiving one or more sidebar thread messages from one or more sidebar thread participants. This may also include modifications to the proposed principal thread message. This may include tracking those modifications. This may also include tracking and responding to various actions of the sidebar thread participants. At operation1716a termination event may occur, which may be received, or otherwise recognized. At operation1718, the system may determine whether the termination event corresponds to an event that is an approval or modification of the proposed principal thread message that causes the proposed principal thread message to be posted as the accepted principal thread message to the principal thread. 
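The determination at operation1718, together with the posting and cleanup operations described below, can be summarized in a short control-flow sketch. The event.accepts_proposal attribute and the post_to_principal_thread and cleanup_sidebar helpers are hypothetical stand-ins for the communication service components described earlier, not elements of the disclosure.

```python
def handle_termination(event, sidebar, post_to_principal_thread, cleanup_sidebar):
    """Sketch of the branch taken after a termination event of the sidebar thread."""
    if event.accepts_proposal:                         # approval or accepted modification
        communication = {
            "from": sidebar.initiator_address,         # posted on behalf of the initiator
            "to": sidebar.final_recipient_addresses,   # the final recipient(s)
            "body": sidebar.proposed_principal_message,  # becomes the accepted message
        }
        post_to_principal_thread(communication)        # post to the principal thread
    else:
        cleanup_sidebar(sidebar)                        # end without posting any message
```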
If the termination event does not correspond to an event that is an approval or modification of the proposed principal thread message that causes the proposed principal thread message to be posted as the accepted principal thread message to the principal thread (e.g., the proposed principal thread message is rejected), then at operation1724, the sidebar thread may be ended without a message to the principal thread. Additional cleanup operations may be undertaken to remove the sidebar thread message history from inboxes or other communication stores and either delete it or move it to a different location that does not clutter up the inbox or communication store of the sidebar thread participants. If, on the other hand, the termination event does correspond to an event that is an approval or modification of the proposed principal thread message that causes the proposed principal thread message to be posted as the accepted principal thread message to the principal thread, then operations1720and1722may be performed. At operation1720, the system may create a principal thread communication from an address of the initiator addressed to the address of the final recipient (e.g., one or more participants of the principal communication thread). The principal thread communication includes the accepted principal thread message. The accepted principal thread message may be the proposed principal thread message or any modification of that message as specified by the settings of the sidebar thread. The modifications may be made by sidebar thread participants, the initiator, or the like and may be approved by one or more other participants, depending on the sidebar settings and the approved actions and roles of sidebar thread participants. At operation1722the principal thread communication may be posted to the principal thread. In some examples, the approval at operation1718may be received from a sidebar thread participant that is not the initiator. In some examples, the operations1720and1722may be performed automatically in response to the approval at operation1718. In some examples, the message posted to the principal thread may be sent or posted to the principal thread on behalf of the initiator of the sidebar thread. In some examples, the principal thread communication may be created without including any of the sidebar thread messages and only includes the accepted principal thread message (which is the proposed principal thread message and accepted edits according to the settings of the sidebar thread). FIG.18illustrates a flowchart of a method1800in which a reply position in the principal thread for the accepted principal thread message is specified according to some examples of the present disclosure. At operation1810, the system may receive an indication of a reply position in the principal thread. The indication may be included in the command to create the sidebar thread at operation1710ofFIG.17. In other examples, the indication may be a setting of the user, the system, or an administrator. The setting may be that the reply position is the position at which the sidebar thread was forked, a current principal thread message (which may be a message later in the thread than the position at which the sidebar thread was forked), or the like. In other examples, the indication may be received after the command to create the sidebar thread; for example, a sidebar thread participant (the initiator or another participant) may specify the position in the principal thread while the sidebar thread is ongoing.
The reply position may be a message identifier relative to the thread identifier. At operation1812, the system may determine a thread identifier of the principal thread—e.g., from the sidebar thread creation command or the like. For example, based upon the principal thread pointer1328of the sidebar thread data structure1322. At operation1814, the data store of the communication service may be searched for a message data structure with a thread identifier matching the determined thread identifier from operation1812and with a message index matching the reply position. At operation1816, the system may determine a communication of the principal thread using the data structure. For example, a message body of the message and a message history. At operation1818, the system may utilize the message data structure from operation1814to determine recipient principal thread participants. The principal thread communication determined at operation1816and the recipients determined at operation1818may be used to create the principal thread communication at operation1720and as recipients that receive the communication posted at operation1722. FIG.19illustrates a flowchart of a method1900of providing a sidebar thread as part of a related pre-existing thread according to some examples of the present disclosure. At operation1910the communication service may receive a command from an initiator user account to create a sidebar communication thread forked from a principal communication thread. The principal communication thread may be between a first plurality of principal communication thread participants including the initiator and also including a final recipient user account. In some examples, the sidebar communication thread may be a thread between a second plurality of sidebar communication thread participants including the sender and at least one sidebar recipient. In some examples, the command to create the sidebar communication thread includes a proposed principal thread message, an address of the final recipient, and an address of the sidebar recipient. The command may be a sidebar thread message, a specific command, or the like and may be received from the initiator user device over a network. In some examples, the command may include one or more sidebar thread messages that may be delivered along with the proposed principal thread message to the sidebar participants. The proposed principal thread message and the sidebar thread messages (e.g., the initial sidebar thread message as well as subsequent sidebar thread messages) may be displayed in one or more GUIs in a manner to visually distinguish them from each other. At operation1912, the system may find a second, pre-existing communication thread. In some examples, the second thread may be selected based upon a similarity of the second thread to the principal thread, the sidebar thread, or both the principal thread and the sidebar thread. In some examples, the second thread may be selected based upon at least one common participant of the sidebar communication thread and the second communication thread. In some examples, the second thread must be a thread to which the initiator user is a participant. In other examples, the second thread may not be a thread to which the initiator user is a participant. In response, the communication system creates the sidebar thread (e.g., as described with respect toFIG.16) by initializing one or more data structures.
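Referring back to the reply-position lookup ofFIG.18(operations1812through1818), the following is a minimal, illustrative sketch; the StoredMessage fields (thread_id, message_index, participants), the in-memory list standing in for the data store, and the find_reply_target function are assumed names for this example and are not the data structures ofFIG.13.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class StoredMessage:
    thread_id: str           # identifier of the thread the message belongs to
    message_index: int       # position of the message within that thread
    body: str                # message body and history
    participants: List[str]  # thread participants as of this message

def find_reply_target(store: List[StoredMessage], principal_thread_id: str,
                      reply_position: int) -> Optional[StoredMessage]:
    """Search the data store for the message whose thread identifier matches
    the principal thread and whose message index matches the reply position."""
    for message in store:
        if (message.thread_id == principal_thread_id
                and message.message_index == reply_position):
            return message
    return None

# The matched message supplies the principal thread communication context and
# the recipient principal thread participants used when the accepted principal
# thread message is later created and posted.
```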
At operation1913, the communication system may create the sidebar thread by posting a message of the sidebar thread to the second communication thread. For example, by sending a sidebar thread message, or other command, to the sidebar thread recipients over a communication network, where the message is posted to the second communication thread. In some examples, the sidebar thread is part of, or a child of, the second communication thread. In some examples, once the sidebar thread terminates, the messages of the sidebar thread are removed from the second communication thread. In other examples, the messages stay, but the second communication thread resumes the conversations and/or discussions of that thread. In some examples, the sidebar communication thread may be a fork of the second communication thread. Thus, the original conversation of the second communication thread may continue alongside the forked sidebar thread. The system may provide a UI dialogue to the user to ask if the user wishes to utilize the second thread for the sidebar thread. If the user accepts, then the method continues. If the user does not accept, then the sidebar thread continues as perFIG.17. At operation1914, if the user accepts, the system may provide the sidebar thread as part of the second communication thread until a termination event occurs. This may include sending and receiving one or more sidebar thread messages from one or more sidebar thread participants. This may also include modifications to the proposed principal thread message. At operation1916a termination event may occur, which may be received, or otherwise recognized from the second communication thread. At operation1918, the system may determine whether the termination event corresponds to an event that is an approval or modification of the proposed principal thread message that causes the proposed principal thread message to be posted as the accepted principal thread message to the principal thread. If the termination event does not correspond to an event that is an approval or modification of the proposed principal thread message that causes the proposed principal thread message to be posted as the accepted principal thread message to the principal thread (e.g., the proposed principal thread message is rejected), then at operation1924, the sidebar thread may be ended without a message to the principal thread. If, on the other hand, the termination event does correspond to an event that is an approval or modification of the proposed principal thread message that causes the proposed principal thread message to be posted as the accepted principal thread message to the principal thread, then operations1920and1922may be performed. At operation1920, the system may create a principal thread communication from an address of the initiator addressed to the address of the final recipient (e.g., one or more participants of the principal communication thread). The final recipient may be a reply, a reply-all, a subset of the principal communication thread participants, or the like. The principal thread communication includes the accepted principal thread message. The accepted principal thread message may be the proposed principal thread message or any modification of that message as specified by the settings of the sidebar thread.
The modifications may be made by sidebar thread participants, the initiator, or the like and may be approved by one or more other participants, depending on the sidebar settings and the approved actions and roles of sidebar thread participants. At operation1922the message created at operation1920may be posted to the principal thread. In some examples, the approval at operation1918may be received from a sidebar thread participant that is not the initiator. In some examples, the operations1920and1922may be performed automatically in response to the approval at operation1918. In some examples, the message posted to the principal thread may be sent or posted to the principal thread on behalf of the initiator of the sidebar thread. In some examples, the principal thread communication may be created without including any of the sidebar thread messages and only includes the accepted principal thread message (which is the proposed principal thread message and accepted edits according to the settings of the sidebar thread). In the example ofFIG.19, if there are differences in the participants of the sidebar thread and the second communication thread, there are several options. In some examples, the initiator specifies the participants in the sidebar thread, and those participants selected to be in the sidebar thread but who are not also in the second communication thread may not be included in the sidebar thread. That is, the selection, by the user, of the second thread to host the sidebar thread may be an acknowledgement to remove users indicated for the sidebar thread that are not in the second communication thread. This may also be an acknowledgement that users not indicated for the sidebar thread that are present in the second communication thread are to be included as sidebar thread participants. In other examples, the users in the sidebar thread that are not part of the second communication thread may be invited to participate in the second communication thread. In some examples, this invite terminates when the termination event of the sidebar thread occurs. Once that occurs, the additional participants may not participate in the second communication thread. In some examples, the additional participants in the sidebar thread that were not previously part of the second communication thread may see previous history of the second thread. In other examples, the additional participants in the sidebar thread that were not previously part of the second communication thread may not see previous history of the second thread prior to the sidebar thread. In some examples, participants of the second communication thread that are not invited by the initiator to be part of the sidebar thread may not see the sidebar thread communications as part of the second communication thread. In other examples, these users can see the sidebar thread communications but cannot reply or participate. In still other examples, they may be invited as participants of the sidebar thread. Participants of the communication threads, including the sidebar communication threads, may change over time. That is, a first message of the communication thread may have a set of participants and a second message may have different participants. Thus, the set of participants of a thread may depend on the message of the thread being considered. In some examples, threads, messages, or the like may be compared to determine whether they are similar. In some examples, these similarity metrics may use textual or topical similarity.
These similarity metrics may use various algorithms, such as NLP algorithms for similarity between two texts. One algorithm may be a cosine similarity of text vectors where each dimension is a different term or word appearing in the texts being compared (e.g., a one-hot encoding). The vectors for the first text and the second text are then scored based upon the cosine of the angle between the vectors. Other algorithms may include a Euclidean distance algorithm, a Word2Vec algorithm, neural networks, Latent Dirichlet Allocation algorithms, and the like. FIG.20illustrates a GUI2002of a unified communication platform and illustrates creation of a sidebar message that creates a sidebar thread according to some examples of the present disclosure. The GUI2002includes a title bar2006with a search and command bar2004where users may search for matching communications, files, folders, and other content. Status icon2005shows a user's status and availability. Clicking or otherwise selecting the status icon allows users to set their status and availability as well as other account and application options. A function bar2010allows users to select various functions of the unified communication application, such as: observing recent activity on the unified communication service related to the user; participating in one or more chat sessions; viewing one or more communication topic groups (called Teams); attending or scheduling online meetings; participating in one or more voice calls; viewing files shared within the unified communication service; accessing an add-on store which adds functionality to the unified communication service; providing feedback; and the like. As shown inFIG.20, the Teams function is selected and in the team selection bar2008, the communication groups the user is subscribed to are shown. In the example ofFIG.20, the user is subscribed to two teams: team 1 and team 2. Communication topic groups may have sub-groups called channels for discussion of one or more sub-topics of the communication topic groups. In the example ofFIG.20, there are two Teams, each with four channels. The Team selection bar2008allows users to select between the Teams to which the user is subscribed as well as the channels. In the example ofFIG.20, the first Channel of the second Team is selected. Each channel may in turn have multiple tabs which are selected through a tab selection bar2012. Tabs may separate different types of content and communications as well as provide another way to organize a Channel. In the example shown inFIG.20, the selected Channel has four tabs: Posts, Files, Content Tab 1, and Content Tab 2. The Posts tab is selected. The tab area2013displays the contents of the selected tab. InFIG.20, this is a bulletin board for the Channel. The bulletin board includes posts organized by threads. Two threads are shown: a first thread2026comprising two posts,2014and2018, and a third post that is being entered into the sidebar thread message creation input area2022. Second thread2028is shown with one post2024, but additional posts may be part of the thread and accessible using the vertical scroll bar. Posts may include one or more file attachments. At the bottom of a thread, a user may reply to a post using the action bar2020. Shown is a reply function and a sidebar thread function. In the example ofFIG.20, the user has activated the sidebar thread function and a sidebar thread message creation input area2022may be shown.
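Referring back to the text-similarity comparison described at the start of this passage, the following is a minimal, illustrative sketch of a one-hot cosine similarity between two texts; the whitespace tokenization and the one_hot_cosine_similarity function name are assumptions made for this example rather than a required implementation.

```python
import math

def one_hot_cosine_similarity(text_a: str, text_b: str) -> float:
    """Compare two texts using one-hot term vectors.

    Each dimension corresponds to a distinct term appearing in either text,
    and the score is the cosine of the angle between the two binary vectors.
    """
    terms_a = set(text_a.lower().split())
    terms_b = set(text_b.lower().split())
    if not terms_a or not terms_b:
        return 0.0
    shared = len(terms_a & terms_b)  # dot product of the two binary vectors
    return shared / math.sqrt(len(terms_a) * len(terms_b))

# A higher score suggests two threads or messages cover similar topics, which
# may inform selection of a related pre-existing thread.
print(one_hot_cosine_similarity("budget review for project x",
                                "review the project x budget numbers"))
```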
The sidebar thread message creation input area2022may allow users to specify the sidebar thread participants ("sidebar to:"), sidebar thread messages ("sidebar thread text"), proposed principal thread text, and the like. The sidebar thread message creation input area2022may also allow users to specify a subset of the principal thread users that will see the proposed reply. FIG.21illustrates a partial GUI2112of a unified communication application and illustrates a sidebar thread according to some examples of the present disclosure.FIG.21shows only the tab area2113. Tab area2113may be the result of posting the sidebar message entered into the sidebar thread message creation input area2022of tab area2013ofFIG.20. The second thread2028is omitted inFIG.21for clarity. Sidebar thread2130may be set apart visually from the principal thread from which it was forked—namely thread2026. In some examples, one or more visual connections (such as an arrow) indicate that it is a sidebar thread forked from a particular message of the principal thread. Sidebar message2122and proposed principal thread message2124may be displayed along with any sidebar thread reply messages, such as reply2126. In some examples, edits to the proposed principal thread message2124may be posted as additional messages, or the proposed principal thread message2124may be edited inline. Sidebar thread participants may have one or more action icons (such as the action icons shown for the reply and sidebar thread). Upon approval, an icon, such as icon2128, may be displayed in the approving user's post. FIG.22shows a partial GUI2212of a unified communication application according to some examples of the present disclosure. LikeFIG.21,FIG.22only illustrates the tab area2213. The GUI2212shows the thread2026after the proposed principal thread message2124becomes the accepted principal thread message and is posted to the thread2026from the initiator of the sidebar thread (in the examples ofFIGS.20-22, "Brad Smith"). Message2260is posted to thread2026as if it were from the initiator of the sidebar thread upon the system detecting the termination event that terminates the sidebar thread and approves the proposed principal thread message. In some examples, sidebar thread2130may remain visible to participants of sidebar thread2130after the sidebar thread termination event occurs. In other examples, the sidebar thread2130may remain visible to participants of the sidebar thread for a specified period of time after the termination event occurs and then disappear. In yet other examples, the sidebar thread may disappear for the participants of the sidebar thread once the termination event occurs. The sidebar thread2130is no longer displayed inFIG.22. Participants in the thread2026who are not also participants of sidebar thread2130would not see sidebar thread2130. These participants would see messages2014and2018of thread2026and then, when the sidebar thread terminates (with a posting to the principal thread), they would see message2260. FIG.23shows a partial GUI of a unified communication application according to some examples of the present disclosure. LikeFIGS.21and22,FIG.23only illustrates the tab area2313of the unified communication application.FIG.23illustrates the example ofFIG.12where a sidebar thread of a first thread is posted within a second thread. First thread2330includes messages2310,2312,2314, and2316. After message2312, sidebar thread2338is created within the thread2330. Sidebar thread2338may include messages2350,2352, and2354.
Sidebar thread2338may be posted as part of thread2330. In some examples, and as shown, sidebar thread2338may be visually distinguished from the other posts of thread2330. Once a sidebar thread termination event happens, the thread2330may continue. In some examples, the sidebar thread2338posts are removed from thread2330once the termination event occurs. In other examples, the sidebar thread posts are left within thread2330. Sidebar thread2338may be considered a sub-thread or child thread of thread2330. GUIs fromFIGS.4-10and20-23may be provided by a client communication application such as communication application1655. In other examples, the communication service, such as communication service1605, may provide these GUIs. For example, by providing data to the communication application1655to create these GUIs. In other examples, the communication service1605may provide these GUIs in the form of one or more GUI descriptors. GUI descriptors may be rendered or otherwise displayed by an application on a client device such as a browser. Example GUI descriptors may include HyperText Markup Language (HTML), Cascading Style Sheets (CSS), scripts, Java, or other files. As used herein, a message is "posted" to another message thread by causing it to be added to the thread by transmitting it or delivering it according to the communication modality of the thread. For example, a message may be posted by sending an email, posting it to a bulletin board, posting it to a message thread of a unified communication application, posting it to a chat room by sending a chat message, or the like. FIG.24illustrates a block diagram of an example machine2400upon which any one or more of the techniques (e.g., methodologies) discussed herein may be performed. In alternative embodiments, the machine2400may operate as a standalone device or may be connected (e.g., networked) to other machines. In a networked deployment, the machine2400may operate in the capacity of a server machine, a client machine, or both in server-client network environments. In an example, the machine2400may act as a peer machine in a peer-to-peer (P2P) (or other distributed) network environment. The machine2400may be in the form of a server computer, a personal computer (PC), a tablet PC, a set-top box (STB), a personal digital assistant (PDA), a mobile telephone, a smart phone, a web appliance, a network router, switch or bridge, or any machine capable of executing instructions (sequential or otherwise) that specify actions to be taken by that machine. Machine2400may be configured to provide the communication threads shown in FIGS.1-3,11, and12; provide the GUIs ofFIGS.4-10; create, store, and manage the data structures ofFIG.13; be a communication server providing the communication service; be a user device of a user account shown inFIGS.14and15; implement one or more components shown inFIG.16; and implement the methods ofFIGS.17-19. Further, while only a single machine is illustrated, the term "machine" shall also be taken to include any collection of machines that individually or jointly execute a set (or multiple sets) of instructions to perform any one or more of the methodologies discussed herein, such as cloud computing, software as a service (SaaS), or other computer cluster configurations. Examples, as described herein, may include, or may operate on, one or more logic units, components, or mechanisms (hereinafter "components").
Components are tangible entities (e.g., hardware) capable of performing specified operations and may be configured or arranged in a certain manner. In an example, circuits may be arranged (e.g., internally or with respect to external entities such as other circuits) in a specified manner as a component. In an example, the whole or part of one or more computer systems (e.g., a standalone, client or server computer system) or one or more hardware processors may be configured by firmware or software (e.g., instructions, an application portion, or an application) as a component that operates to perform specified operations. In an example, the software may reside on a machine readable medium. In an example, the software, when executed by the underlying hardware of the component, causes the hardware to perform the specified operations of the component. Accordingly, the term "component" is understood to encompass a tangible entity, be that an entity that is physically constructed, specifically configured (e.g., hardwired), or temporarily (e.g., transitorily) configured (e.g., programmed) to operate in a specified manner or to perform part or all of any operation described herein. Considering examples in which components are temporarily configured, each of the components need not be instantiated at any one moment in time. For example, where the components comprise a general-purpose hardware processor configured using software, the general-purpose hardware processor may be configured as respective different components at different times. Software may accordingly configure a hardware processor, for example, to constitute a particular component at one instance of time and to constitute a different component at a different instance of time. Machine (e.g., computer system)2400may include one or more hardware processors, such as processor2402. Processor2402may be a central processing unit (CPU), a graphics processing unit (GPU), a hardware processor core, or any combination thereof. Machine2400may include a main memory2404and a static memory2406, some or all of which may communicate with each other via an interlink (e.g., bus)2408. Examples of main memory2404may include Synchronous Dynamic Random-Access Memory (SDRAM), such as Double Data Rate memory (e.g., DDR4 or DDR5). Interlink2408may be one or more different types of interlinks such that one or more components may be connected using a first type of interlink and one or more components may be connected using a second type of interlink. Example interlinks may include a memory bus, a peripheral component interconnect (PCI), a peripheral component interconnect express (PCIe) bus, a universal serial bus (USB), or the like. The machine2400may further include a display unit2410, an alphanumeric input device2412(e.g., a keyboard), and a user interface (UI) navigation device2414(e.g., a mouse). In an example, the display unit2410, input device2412, and UI navigation device2414may be a touch screen display. The machine2400may additionally include a storage device (e.g., drive unit)2416, a signal generation device2418(e.g., a speaker), a network interface device2420, and one or more sensors2421, such as a global positioning system (GPS) sensor, compass, accelerometer, or other sensor. The machine2400may include an output controller2428, such as a serial (e.g., universal serial bus (USB)), parallel, or other wired or wireless (e.g., infrared (IR), near field communication (NFC), etc.)
connection to communicate or control one or more peripheral devices (e.g., a printer, card reader, etc.). The storage device2416may include a machine readable medium2422on which is stored one or more sets of data structures or instructions2424(e.g., software) embodying or utilized by any one or more of the techniques or functions described herein. The instructions2424may also reside, completely or at least partially, within the main memory2404, within static memory2406, or within the hardware processor2402during execution thereof by the machine2400. In an example, one or any combination of the hardware processor2402, the main memory2404, the static memory2406, or the storage device2416may constitute machine readable media. While the machine readable medium2422is illustrated as a single medium, the term "machine readable medium" may include a single medium or multiple media (e.g., a centralized or distributed database, and/or associated caches and servers) configured to store the one or more instructions2424. The term "machine readable medium" may include any medium that is capable of storing, encoding, or carrying instructions for execution by the machine2400and that cause the machine2400to perform any one or more of the techniques of the present disclosure, or that is capable of storing, encoding, or carrying data structures used by or associated with such instructions. Non-limiting machine readable medium examples may include solid-state memories, and optical and magnetic media. Specific examples of machine readable media may include: non-volatile memory, such as semiconductor memory devices (e.g., Electrically Programmable Read-Only Memory (EPROM), Electrically Erasable Programmable Read-Only Memory (EEPROM)) and flash memory devices; magnetic disks, such as internal hard disks and removable disks; magneto-optical disks; Random Access Memory (RAM); Solid State Drives (SSD); and CD-ROM and DVD-ROM disks. In some examples, machine readable media may include non-transitory machine readable media. In some examples, machine readable media may include machine readable media that is not a transitory propagating signal. The instructions2424may further be transmitted or received over a communications network2426using a transmission medium via the network interface device2420. The machine2400may communicate with one or more other machines wired or wirelessly utilizing any one of a number of transfer protocols (e.g., frame relay, internet protocol (IP), transmission control protocol (TCP), user datagram protocol (UDP), hypertext transfer protocol (HTTP), etc.). Example communication networks may include a local area network (LAN), a wide area network (WAN), a packet data network (e.g., the Internet), mobile telephone networks (e.g., cellular networks), Plain Old Telephone Service (POTS) networks, and wireless data networks such as an Institute of Electrical and Electronics Engineers (IEEE) 802.11 family of standards known as Wi-Fi®, an IEEE 802.15.4 family of standards, a 5G New Radio (NR) family of standards, a Long Term Evolution (LTE) family of standards, a Universal Mobile Telecommunications System (UMTS) family of standards, peer-to-peer (P2P) networks, among others. In an example, the network interface device2420may include one or more physical jacks (e.g., Ethernet, coaxial, or phone jacks) or one or more antennas to connect to the communications network2426.
In an example, the network interface device2420may include a plurality of antennas to wirelessly communicate using at least one of single-input multiple-output (SIMO), multiple-input multiple-output (MIMO), or multiple-input single-output (MISO) techniques. In some examples, the network interface device2420may wirelessly communicate using Multiple User MIMO techniques.

OTHER NOTES AND EXAMPLES

Example 1 is a method of providing a sidebar thread for a communication, the method comprising: using one or more processors of a communication server: receiving a command, over a communication network, from a sender, to create a sidebar thread forked from a first communication thread, the first communication thread between a first plurality of first communication thread participants including the sender, the sidebar thread being a thread between a second plurality of sidebar thread participants including the sender and a sidebar recipient, the command to create the sidebar thread including a first message, an address of a final recipient, and an address of the sidebar recipient; responsive to receiving the command, causing a notification of the sidebar thread to be sent over the communication network to the address of the sidebar recipient, the notification including the first message and a portion or a link to a portion of the first communication thread; receiving an approval or a modification of the first message, over the communication network, from the sidebar recipient in the sidebar thread; responsive to receiving the approval or the modification of the first message, automatically: creating a communication message from an address of the sender addressed to the address of the final recipient by including the approved first message or the modification of the first message; and posting the communication message to the final recipient from the address of the sender in the first communication thread. In Example 2, the subject matter of Example 1 includes, wherein the sidebar thread comprises a plurality of sidebar communications in the sidebar thread between sidebar thread participants prior to the receipt of the approval or the modification of the first message, the plurality of sidebar communications including a plurality of sidebar thread message contents that are not the first message or a modification of the first message. In Example 3, the subject matter of Example 2 includes, wherein creating the communication message comprises excluding the plurality of sidebar thread message contents. In Example 4, the subject matter of Examples 1-3 includes, wherein the notification of the sidebar thread includes a message history of a second communication thread, and wherein the method further comprises: identifying the second communication thread, the second communication thread different than the first communication thread and the sidebar thread; and posting the communication message to the second communication thread in addition to the posting the communication message to the final recipient from the address of the sender in the first communication thread. In Example 5, the subject matter of Examples 1-4 includes, wherein posting the communication message to the final recipient from the address of the sender in the first communication thread comprises replying to a message in the first communication thread. In Example 6, the subject matter of Example 5 includes, wherein the message in the first communication thread is a message that the sidebar thread was forked from.
In Example 7, the subject matter of Examples 5-6 includes, wherein the message in the first communication thread is a message in the first communication thread that was posted to the first communication thread after the sidebar thread was forked from the first communication thread. In Example 8, the subject matter of Examples 1-7 includes, wherein the first communication thread is hosted by a first communication modality and the sidebar thread is hosted by a second communication modality. In Example 9, the subject matter of Examples 1-8 includes, wherein the first communication thread and the sidebar thread are both one of an electronic mail (email) communication thread, a chat thread, a message board thread, a text message thread, or a thread from a unified communication application. In Example 10, the subject matter of Examples 1-9 includes, wherein receiving the command comprises receiving an email and wherein posting the communication message comprises sending an email. In Example 11, the subject matter of Examples 1-10 includes, wherein the final recipient is a participant of the first communication thread and wherein posting the communication message to the final recipient from the address of the sender in the first communication thread comprises posting the communication message to the final recipient and all the other participants of the first communication thread, the posting comprising one of: emailing the communication message, posting the communication to a chat session, posting the communication message to a message board, or posting the communication message to a group of a unified communication service. In Example 12, the subject matter of Examples 1-11 includes, providing a graphical user interface (GUI) to the sender, the GUI displaying a message of the first communication thread and a selectable option to create the sidebar thread from the first communication thread; receiving an indication of a selection of the selectable option to create the sidebar thread; responsive to the selection of the selectable option, providing a second GUI to the sender, the second GUI providing GUI controls that accept a designation of the sidebar recipient, a designation of the final recipient, the first message, and a creation control to create the sidebar thread; and wherein receiving the command is responsive to a selection of the creation control in the second GUI. 
Example 13 is a computing device for providing a sidebar thread for a communication, the computing device comprising: a processor; a memory, storing instructions, which when executed by the processor cause the computing device to perform operations comprising: receiving a command, over a communication network, from a sender, to create a sidebar thread forked from a first communication thread, the first communication thread between a first plurality of first communication thread participants including the sender, the sidebar thread being a thread between a second plurality of sidebar thread participants including the sender and a sidebar recipient, the command to create the sidebar thread including a first message, an address of a final recipient, and an address of the sidebar recipient; responsive to receiving the command, causing a notification of the sidebar thread to be sent over the communication network to the address of the sidebar recipient, the notification including the first message and a portion or a link to a portion of the first communication thread; receiving an approval or a modification of the first message, over the communication network, from the sidebar recipient in the sidebar thread; responsive to receiving the approval or the modification of the first message, automatically: creating a communication message from an address of the sender addressed to the address of the final recipient by including the approved first message or the modification of the first message; and posting the communication message to the final recipient from the address of the sender in the first communication thread. In Example 14, the subject matter of Example 13 includes, wherein the sidebar thread comprises a plurality of sidebar communications in the sidebar thread between sidebar thread participants prior to the receipt of the approval or the modification of the first message, the plurality of sidebar communications including a plurality of sidebar thread message contents that are not the first message or a modification of the first message. In Example 15, the subject matter of Example 14 includes, wherein the operation of creating the communication message comprises excluding the plurality of sidebar thread message contents. In Example 16, the subject matter of Examples 13-15 includes, wherein the notification of the sidebar thread includes a message history of a second communication thread, and wherein the operations further comprise: identifying the second communication thread, the second communication thread different than the first communication thread and the sidebar thread; and posting the communication message to the second communication thread in addition to the posting the communication message to the final recipient from the address of the sender in the first communication thread. In Example 17, the subject matter of Examples 13-16 includes, wherein the operation of posting the communication message to the final recipient from the address of the sender in the first communication thread comprises replying to a message in the first communication thread. In Example 18, the subject matter of Example 17 includes, wherein the message in the first communication thread is a message that the sidebar thread was forked from.
In Example 19, the subject matter of Examples 17-18 includes, wherein the message in the first communication thread is a message in the first communication thread that was posted to the first communication thread after the sidebar thread was forked from the first communication thread. In Example 20, the subject matter of Examples 13-19 includes, wherein the first communication thread is hosted by a first communication modality and the sidebar thread is hosted by a second communication modality. In Example 21, the subject matter of Examples 13-20 includes, wherein the first communication thread and the sidebar thread are both one of an electronic mail (email) communication thread, a chat thread, a message board thread, a text message thread, or a thread from a unified communication application. In Example 22, the subject matter of Examples 13-21 includes, wherein the operation of receiving the command comprises receiving an email and wherein posting the communication message comprises sending an email. In Example 23, the subject matter of Examples 13-22 includes, wherein the final recipient is a participant of the first communication thread and wherein the operation of posting the communication message to the final recipient from the address of the sender in the first communication thread comprises posting the communication message to the final recipient and all the other participants of the first communication thread, the posting comprising one of: emailing the communication message, posting the communication to a chat session, posting the communication message to a message board, or posting the communication message to a group of a unified communication service. In Example 24, the subject matter of Examples 13-23 includes, providing a Graphical User Interface (GUI) to the sender, the GUI displaying a message of the first communication thread and a selectable option to create the sidebar thread from the first communication thread; receiving an indication of a selection of the selectable option to create the sidebar thread; responsive to the selection of the selectable option, providing a second GUI to the sender, the second GUI providing GUI controls that accept a designation of the sidebar recipient, a designation of the final recipient, the first message, and a creation control to create the sidebar thread; and wherein the operation of receiving the command is responsive to a selection of the creation control in the second GUI. 
Example 25 is a machine-readable medium, storing instructions for providing a sidebar thread for a communication, the instructions, when executed by a machine, cause the machine to perform operations comprising: receiving a command, over a communication network, from a sender, to create a sidebar thread forked from a first communication thread, the first communication thread between a first plurality of first communication thread participants including the sender, the sidebar thread being a thread between a second plurality of sidebar thread participants including the sender and a sidebar recipient, the command to create the sidebar thread including a first message, an address of a final recipient, and an address of the sidebar recipient; responsive to receiving the command, causing a notification of the sidebar thread to be sent over the communication network to the address of the sidebar recipient, the notification including the first message and a portion or a link to a portion of the first communication thread; receiving an approval or a modification of the first message, over the communication network, from the sidebar recipient in the sidebar thread; responsive to receiving the approval or the modification of the first message, automatically: creating a communication message from an address of the sender addressed to the address of the final recipient by including the approved first message or the modification of the first message; and posting the communication message to the final recipient from the address of the sender in the first communication thread. In Example 26, the subject matter of Example 25 includes, wherein the sidebar thread comprises a plurality of sidebar communications in the sidebar thread between sidebar thread participants prior to the receipt of the approval or the modification of the first message, the plurality of sidebar communications including a plurality of sidebar thread message contents that are not the first message or a modification of the first message. In Example 27, the subject matter of Example 26 includes, wherein the operation of creating the communication message comprises excluding the plurality of sidebar thread message contents. In Example 28, the subject matter of Examples 25-27 includes, wherein the notification of the sidebar thread includes a message history of a second communication thread, and wherein the operations further comprise: identifying the second communication thread, the second communication thread different than the first communication thread and the sidebar thread; and posting the communication message to the second communication thread in addition to the posting the communication message to the final recipient from the address of the sender in the first communication thread. In Example 29, the subject matter of Examples 25-28 includes, wherein the operation of posting the communication message to the final recipient from the address of the sender in the first communication thread comprises replying to a message in the first communication thread. In Example 30, the subject matter of Example 29 includes, wherein the message in the first communication thread is a message that the sidebar thread was forked from. In Example 31, the subject matter of Examples 29-30 includes, wherein the message in the first communication thread is a message in the first communication thread that was posted to the first communication thread after the sidebar thread was forked from the first communication thread. 
In Example 32, the subject matter of Examples 25-31 includes, wherein the first communication thread is hosted by a first communication modality and the sidebar thread is hosted by a second communication modality. In Example 33, the subject matter of Examples 25-32 includes, wherein the first communication thread and the sidebar thread are both one of an electronic mail (email) communication thread, a chat thread, a message board thread, a text message thread, or a thread from a unified communication application. In Example 34, the subject matter of Examples 25-33 includes, wherein the operation of receiving the command comprises receiving an email and wherein posting the communication message comprises sending an email. In Example 35, the subject matter of Examples 25-34 includes, wherein the final recipient is a participant of the first communication thread and wherein the operation of posting the communication message to the final recipient from the address of the sender in the first communication thread comprises posting the communication message to the final recipient and all the other participants of the first communication thread, the posting comprising one of: emailing the communication message, posting the communication to a chat session, posting the communication message to a message board, or posting the communication message to a group of a unified communication service. In Example 36, the subject matter of Examples 25-35 includes, providing a Graphical User Interface (GUI) to the sender, the GUI displaying a message of the first communication thread and a selectable option to create the sidebar thread from the first communication thread; receiving an indication of a selection of the selectable option to create the sidebar thread; responsive to the selection of the selectable option, providing a second GUI to the sender, the second GUI providing GUI controls that accept a designation of the sidebar recipient, a designation of the final recipient, the first message, and a creation control to create the sidebar thread; and wherein the operation of receiving the command is responsive to a selection of the creation control in the second GUI. 
Example 37 is a device for providing a sidebar thread for a communication, the device comprising: means for receiving a command, over a communication network, from a sender, to create a sidebar thread forked from a first communication thread, the first communication thread between a first plurality of first communication thread participants including the sender, the sidebar thread being a thread between a second plurality of sidebar thread participants including the sender and a sidebar recipient, the command to create the sidebar thread including a first message, an address of a final recipient, and an address of the sidebar recipient; means for, responsive to receiving the command, causing a notification of the sidebar thread to be sent over the communication network to the address of the sidebar recipient, the notification including the first message and a portion or a link to a portion of the first communication thread; means for receiving an approval or a modification of the first message, over the communication network, from the sidebar recipient in the sidebar thread; responsive to receiving the approval or the modification of the first message, means for automatically: creating a communication message from an address of the sender addressed to the address of the final recipient by including the approved first message or the modification of the first message; and posting the communication message to the final recipient from the address of the sender in the first communication thread. In Example 38, the subject matter of Example 37 includes, wherein the sidebar thread comprises a plurality of sidebar communications in the sidebar thread between sidebar thread participants prior to the receipt of the approval or the modification of the first message, the plurality of sidebar communications including a plurality of sidebar thread message contents that are not the first message or a modification of the first message. In Example 39, the subject matter of Example 38 includes, wherein the means for automatically creating the communication message comprises means for excluding the plurality of sidebar thread message contents. In Example 40, the subject matter of Examples 37-39 includes, wherein the notification of the sidebar thread includes a message history of a second communication thread, and wherein the device further comprises: means for identifying the second communication thread, the second communication thread different than the first communication thread and the sidebar thread; and means for posting the communication message to the second communication thread in addition to the posting the communication message to the final recipient from the address of the sender in the first communication thread. In Example 41, the subject matter of Examples 37-40 includes, wherein the posting the communication message to the final recipient from the address of the sender in the first communication thread comprises means for replying to a message in the first communication thread. In Example 42, the subject matter of Example 41 includes, wherein the message in the first communication thread is a message that the sidebar thread was forked from. In Example 43, the subject matter of Examples 41-42 includes, wherein the message in the first communication thread is a message in the first communication thread that was posted to the first communication thread after the sidebar thread was forked from the first communication thread. 
In Example 44, the subject matter of Examples 37-43 includes, wherein the first communication thread is hosted by a first communication modality and the sidebar thread is hosted by a second communication modality. In Example 45, the subject matter of Examples 37-44 includes, wherein the first communication thread and the sidebar thread are both one of an electronic mail (email) communication thread, a chat thread, a message board thread, a text message thread, or a thread from a unified communication application. In Example 46, the subject matter of Examples 37-45 includes, wherein the means for receiving the command comprises means for receiving an email and wherein posting the communication message comprises sending an email. In Example 47, the subject matter of Examples 37-46 includes, wherein the final recipient is a participant of the first communication thread and wherein the posting the communication message to the final recipient from the address of the sender in the first communication thread comprises posting the communication message to the final recipient and all the other participants of the first communication thread, the posting comprising one of: emailing the communication message, posting the communication to a chat session, posting the communication message to a message board, or posting the communication message to a group of a unified communication service. In Example 48, the subject matter of Examples 37-47 includes, means for providing a Graphical User Interface (GUI) to the sender, the GUI displaying a message of the first communication thread and a selectable option to create the sidebar thread from the first communication thread; means for receiving an indication of a selection of the selectable option to create the sidebar thread; means for, responsive to the selection of the selectable option, providing a second GUI to the sender, the second GUI providing GUI controls that accept a designation of the sidebar recipient, a designation of the final recipient, the first message, and a creation control to create the sidebar thread; and wherein receiving the command is responsive to a selection of the creation control in the second GUI. Example 49 is a method of providing a sidebar thread for a communication, the method comprising: using one or more processors of a communication server: receiving a command, over a communication network, from a sender, to create a sidebar thread forked from a first communication thread, the first communication thread between a first plurality of first communication thread participants including the sender, the sidebar thread being a thread between a second plurality of sidebar thread participants including the sender and a sidebar recipient, the command to create the sidebar thread including a first message, an address of a final recipient, a sidebar thread message, and an address of the sidebar recipient; responsive to receiving the command, causing a notification of the sidebar thread to be sent over the communication network to the address of the sidebar recipient, the notification including the first message and a portion or a link to a portion of the first communication thread and the sidebar thread message; receiving an approval or a modification of the first message, over the communication network, from the sidebar recipient in the sidebar thread; responsive to receiving the approval or the modification of the first message, automatically: creating a communication message from an address of the sender addressed to the address of
the final recipient by including the approved first message or the modification of the first message and excluding the sidebar thread message; and posting the communication message to the final recipient from the address of the sender in the first communication thread. In Example 50, the subject matter of Example 49 includes, identifying the first message or the modification of the first message using a natural language processing algorithm from communications of the sidebar thread. In Example 51, the subject matter of Examples 49-50 includes, subsequent to the causing the notification of the sidebar thread to be sent over the communication network to the address of the sidebar recipient and prior to receiving the approval, receiving a command to post a message to the sidebar thread; and responsive to receiving the command to post the message to the sidebar thread, notifying participants of the sidebar thread of the message. In Example 52, the subject matter of Examples 49-51 includes, identifying the first message or the modification of the first message using a data structure storing messages of the sidebar thread. In Example 53, the subject matter of Examples 49-52 includes, wherein the first communication thread is hosted by a first communication modality and the sidebar thread is hosted by a second communication modality. In Example 54, the subject matter of Examples 49-53 includes, wherein the first communication thread is an electronic mail (email) communication thread. In Example 55, the subject matter of Examples 49-54 includes, wherein receiving the command comprises receiving an email and wherein posting the communication message comprises sending an email. In Example 56, the subject matter of Examples 49-55 includes, wherein the sidebar thread is posted as a child thread of a second thread. In Example 57, the subject matter of Examples 49-56 includes, wherein creating the communication message comprises excluding all messages from the sidebar thread aside from the first message. In Example 58, the subject matter of Examples 49-57 includes, providing a Graphical User Interface (GUI) to the sender, the GUI displaying a message of the first communication thread and a selectable option to create the sidebar thread from the first communication thread; receiving an indication of a selection of the selectable option to create the sidebar thread; responsive to the selection of the selectable option, providing a second GUI to the sender, the second GUI providing GUI controls that accept a designation of the sidebar recipient, a designation of the final recipient, the first message, and a creation control to create the sidebar thread; and wherein receiving the command is responsive to a selection of the creation control in the second GUI. In Example 59, the subject matter of Examples 49-58 includes, wherein the final recipient is a participant of the first communication thread and wherein the posting the communication message to the final recipient from the address of the sender in the first communication thread comprises posting the communication message to the final recipient and all the other participants of the first communication thread, the posting comprising one of: emailing the communication message, posting the communication to a chat session, posting the communication message to a message board, or posting the communication message to a group of a unified communication service.
Example 60 is a computing device providing a sidebar thread for a communication, the computing device comprising: a processor; a memory, storing instructions, which when executed by the processor cause the computing device to perform operations comprising: receiving a command, over a communication network, from a sender, to create a sidebar thread forked from a first communication thread, the first communication thread between a first plurality of first communication thread participants including the sender, the sidebar thread being a thread between a second plurality of sidebar thread participants including the sender and a sidebar recipient, the command to create the sidebar thread including a first message, an address of a final recipient, a sidebar thread message, and an address of the sidebar recipient; responsive to receiving the command, causing a notification of the sidebar thread to be sent over the communication network to the address of the sidebar recipient, the notification including the first message and a portion or a link to a portion of the first communication thread and the sidebar thread message; receiving an approval or a modification of the first message, over the communication network, from the sidebar recipient in the sidebar thread; responsive to receiving the approval or the modification of the first message, automatically: creating a communication message from an address of the sender addressed to the address of the final recipient by including the approved first message or the modification of the first message and excluding the sidebar thread message; and posting the communication message to the final recipient from the address of the sender in the first communication thread. In Example 61, the subject matter of Example 60 includes, wherein the operations further comprise: identifying the first message or the modification of the first message using a natural language processing algorithm from communications of the sidebar thread. In Example 62, the subject matter of Examples 60-61 includes, wherein the operations further comprise: subsequent to the causing the notification of the sidebar thread to be sent over the communication network to the address of the sidebar recipient and prior to receiving the approval, receiving a command to post a message to the sidebar thread; and responsive to receiving the command to post the message to the sidebar thread, notifying participants of the sidebar thread of the message. In Example 63, the subject matter of Examples 60-62 includes, wherein the operations further comprise: identifying the first message or the modification of the first message using a data structure storing messages of the sidebar thread. In Example 64, the subject matter of Examples 60-63 includes, wherein the first communication thread is hosted by a first communication modality and the sidebar thread is hosted by a second communication modality. In Example 65, the subject matter of Examples 60-64 includes, wherein the first communication thread is an electronic mail (email) communication thread. In Example 66, the subject matter of Examples 60-65 includes, wherein receiving the command comprises receiving an email and wherein posting the communication message comprises sending an email. In Example 67, the subject matter of Examples 60-66 includes, wherein the sidebar thread is posted as a child thread of a second thread. 
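Examples 50, 52, and 61-63 above recite identifying the first message, or its modification, from the communications of the sidebar thread, either with a natural language processing algorithm or with a data structure storing the sidebar messages. The sketch below is a simple stand-in for either approach; the tagged message store, the cue-phrase pattern, and the function name are all invented for illustration, and a real system could use any richer NLP technique.

```python
import re
from typing import Dict, List, Optional

def identify_draft(sidebar_messages: List[Dict]) -> Optional[str]:
    """Return the text to send to the final recipient: prefer the most recent message the
    store has tagged as a draft revision; otherwise fall back to a crude textual cue."""
    # 1) Data-structure route: messages tagged as draft revisions when they were stored.
    for message in reversed(sidebar_messages):
        if message.get("kind") == "draft_revision":
            return message["text"]
    # 2) Crude "NLP" route: look for an explicit cue phrase in the latest messages.
    cue = re.compile(r"(?:send this|final wording)\s*:\s*(.+)", re.IGNORECASE | re.DOTALL)
    for message in reversed(sidebar_messages):
        match = cue.search(message["text"])
        if match:
            return match.group(1).strip()
    return None

if __name__ == "__main__":
    thread = [
        {"kind": "note", "text": "Does Friday work for your team?"},
        {"kind": "note", "text": "Friday is tight. Final wording: we can ship by Wednesday next week."},
    ]
    print(identify_draft(thread))  # -> "we can ship by Wednesday next week."
```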
In Example 68, the subject matter of Examples 60-67 includes, wherein the operation of creating the communication message comprises excluding all messages from the sidebar thread aside from the first message. In Example 69, the subject matter of Examples 60-68 includes, wherein the operations further comprise: providing a Graphical User Interface (GUI) to the sender, the GUI displaying a message of the first communication thread and a selectable option to create the sidebar thread from the first communication thread; receiving an indication of a selection of the selectable option to create the sidebar thread; responsive to the selection of the selectable option, providing a second GUI to the sender, the second GUI providing GUI controls that accept a designation of the sidebar recipient, a designation of the final recipient, the first message, and a creation control to create the sidebar thread; and wherein receiving the command is responsive to a selection of the creation control in the second GUI. In Example 70, the subject matter of Examples 60-69 includes, wherein the final recipient is a participant of the first communication thread and wherein the operation of posting the communication message to the final recipient from the address of the sender in the first communication thread comprises posting the communication message to the final recipient and all the other participants of the first communication thread, the posting comprising one of: emailing the communication message, posting the communication to a chat session, posting the communication message to a message board, or posting the communication message to a group of a unified communication service. Example 71 is a machine-readable medium, storing instructions providing a sidebar thread for a communication, the instructions, which when executed by a machine, cause the machine to perform operations comprising: receiving a command, over a communication network, from a sender, to create a sidebar thread forked from a first communication thread, the first communication thread between a first plurality of first communication thread participants including the sender, the sidebar thread being a thread between a second plurality of sidebar thread participants including the sender and a sidebar recipient, the command to create the sidebar thread including a first message, an address of a final recipient, a sidebar thread message, and an address of the sidebar recipient; responsive to receiving the command, causing a notification of the sidebar thread to be sent over the communication network to the address of the sidebar recipient, the notification including the first message and a portion or a link to a portion of the first communication thread and the sidebar thread message; receiving an approval or a modification of the first message, over the communication network, from the sidebar recipient in the sidebar thread; responsive to receiving the approval or the modification of the first message, automatically: creating a communication message from an address of the sender addressed to the address of the final recipient by including the approved first message or the modification of the first message and excluding the sidebar thread message; and posting the communication message to the final recipient from the address of the sender in the first communication thread. 
In Example 72, the subject matter of Example 71 includes, wherein the operations further comprise: identifying the first message or the modification of the first message using a natural language processing algorithm from communications of the sidebar thread. In Example 73, the subject matter of Examples 71-72 includes, wherein the operations further comprise: subsequent to the causing the notification of the sidebar thread to be sent over the communication network to the address of the sidebar recipient and prior to receiving the approval, receiving a command to post a message to the sidebar thread; and responsive to receiving the command to post the message to the sidebar thread, notifying participants of the sidebar thread of the message. In Example 74, the subject matter of Examples 71-73 includes, wherein the operations further comprise: identifying the first message or the modification of the first message using a data structure storing messages of the sidebar thread. In Example 75, the subject matter of Examples 71-74 includes, wherein the first communication thread is hosted by a first communication modality and the sidebar thread is hosted by a second communication modality. In Example 76, the subject matter of Examples 71-75 includes, wherein the first communication thread is an electronic mail (email) communication thread. In Example 77, the subject matter of Examples 71-76 includes, wherein receiving the command comprises receiving an email and wherein posting the communication message comprises sending an email. In Example 78, the subject matter of Examples 71-77 includes, wherein the sidebar thread is posted as a child thread of a second thread. In Example 79, the subject matter of Examples 71-78 includes, wherein the operation of creating the communication message comprises excluding all messages from the sidebar thread aside from the first message. In Example 80, the subject matter of Examples 71-79 includes, wherein the operations further comprise: providing a Graphical User Interface (GUI) to the sender, the GUI displaying a message of the first communication thread and a selectable option to create the sidebar thread from the first communication thread; receiving an indication of a selection of the selectable option to create the sidebar thread; responsive to the selection of the selectable option, providing a second GUI to the sender, the second GUI providing GUI controls that accept a designation of the sidebar recipient, a designation of the final recipient, the first message, and a creation control to create the sidebar thread; and wherein receiving the command is responsive to a selection of the creation control in the second GUI. In Example 81, the subject matter of Examples 71-80 includes, wherein the final recipient is a participant of the first communication thread and wherein the operation of posting the communication message to the final recipient from the address of the sender in the first communication thread comprises posting the communication message to the final recipient and all the other participants of the first communication thread, the posting comprising one of: emailing the communication message, posting the communication to a chat session, posting the communication message to a message board, or posting the communication message to a group of a unified communication service. 
Example 82 is a device for providing a sidebar thread for a communication, the device comprising: means for receiving a command, over a communication network, from a sender, to create a sidebar thread forked from a first communication thread, the first communication thread between a first plurality of first communication thread participants including the sender, the sidebar thread being a thread between a second plurality of sidebar thread participants including the sender and a sidebar recipient, the command to create the sidebar thread including a first message, an address of a final recipient, a sidebar thread message, and an address of the sidebar recipient; means for, responsive to receiving the command, causing a notification of the sidebar thread to be sent over the communication network to the address of the sidebar recipient, the notification including the first message and a portion or a link to a portion of the first communication thread and the sidebar thread message; means for receiving an approval or a modification of the first message, over the communication network, from the sidebar recipient in the sidebar thread; responsive to receiving the approval or the modification of the first message, means for automatically: creating a communication message from an address of the sender addressed to the address of the final recipient by including the approved first message or the modification of the first message and excluding the sidebar thread message; and posting the communication message to the final recipient from the address of the sender in the first communication thread. In Example 83, the subject matter of Example 82 includes, means for identifying the first message or the modification of the first message using a natural language processing algorithm from communications of the sidebar thread. In Example 84, the subject matter of Examples 82-83 includes, means for, subsequent to the causing the notification of the sidebar thread to be sent over the communication network to the address of the sidebar recipient and prior to receiving the approval, receiving a command to post a message to the sidebar thread; and means for, responsive to receiving the command to post the message to the sidebar thread, notifying participants of the sidebar thread of the message. In Example 85, the subject matter of Examples 82-84 includes, means for identifying the first message or the modification of the first message using a data structure storing messages of the sidebar thread. In Example 86, the subject matter of Examples 82-85 includes, wherein the first communication thread is hosted by a first communication modality and the sidebar thread is hosted by a second communication modality. In Example 87, the subject matter of Examples 82-86 includes, wherein the first communication thread is an electronic mail (email) communication thread. In Example 88, the subject matter of Examples 82-87 includes, wherein the means for receiving the command comprises means for receiving an email and wherein posting the communication message comprises sending an email. In Example 89, the subject matter of Examples 82-88 includes, wherein the sidebar thread is posted as a child thread of a second thread. In Example 90, the subject matter of Examples 82-89 includes, wherein the means for creating the communication message comprises excluding all messages from the sidebar thread aside from the first message. 
In Example 91, the subject matter of Examples 82-90 includes, means for providing a Graphical User Interface (GUI) to the sender, the GUI displaying a message of the first communication thread and a selectable option to create the sidebar thread from the first communication thread; means for receiving an indication of a selection of the selectable option to create the sidebar thread; means for, responsive to the selection of the selectable option, providing a second GUI to the sender, the second GUI providing GUI controls that accept a designation of the sidebar recipient, a designation of the final recipient, the first message, and a creation control to create the sidebar thread; and wherein receiving the command is responsive to a selection of the creation control in the second GUI. In Example 92, the subject matter of Examples 82-91 includes, wherein the final recipient is a participant of the first communication thread and wherein the means for posting the communication message to the final recipient from the address of the sender in the first communication thread comprises posting the communication message to the final recipient and all the other participants of the first communication thread, the posting comprising one of: emailing the communication message, posting the communication to a chat session, posting the communication message to a message board, or posting the communication message to a group of a unified communication service. Example 93 is a method of providing a sidebar thread for a communication, the method comprising: using one or more processors of a communication server: receiving a command, over a communication network, from a sender, to create a sidebar thread forked from a specified message of a first communication thread, the first communication thread between a first plurality of first communication thread participants including the sender, the sidebar thread being a thread between a second plurality of sidebar thread participants including the sender and a sidebar recipient, the command to create the sidebar thread including a first message, an address of a final recipient, and an address of the sidebar recipient; responsive to receiving the command: causing a notification of the sidebar thread to be sent over the communication network to the address of the sidebar recipient, the notification including the first message and a portion or a link to a portion of the first communication thread; receiving an indication of a reply position in the first communication thread to post a reply; determining an identifier of the first communication thread using a thread identifier field in a data structure of the specified message of the first communication thread; searching a communication server data store to find a message data structure with a thread identifier field that matches the identifier of the first communication thread and that has a message index matching the reply position; receiving an approval or a modification of the first message, over the communication network, from the sidebar recipient in the sidebar thread; responsive to receiving the approval or the modification of the first message, automatically: creating a communication message from an address of the sender addressed to the address of the final recipient by including the approved first message or the modification of the first message; and posting the communication message to the final recipient from the address of the sender in the first communication thread as a reply to the message corresponding 
to the found message data structure, the message addressed to the final recipient and including at least a portion of a message body field of the message data structure. In Example 94, the subject matter of Example 93 includes, wherein receiving the indication of the reply position in the first communication thread to post the reply comprises identifying a configured option and determining the reply position based upon the configured option and messages of the first communication thread. In Example 95, the subject matter of Example 94 includes, wherein the configured option is set to a value indicating that the reply position is a last message in a thread, and wherein the reply position is determined based upon a last message in the first communication thread. In Example 96, the subject matter of Examples 93-95 includes, wherein receiving an indication of a reply position in the first communication thread to post a reply comprises receiving a selection of a reply position in the first communication thread from a sidebar thread participant. In Example 97, the subject matter of Example 96 includes, wherein the indication is part of the command. In Example 98, the subject matter of Examples 96-97 includes, wherein the sidebar thread participant is the sender. In Example 99, the subject matter of Examples 93-98 includes, wherein the sidebar thread comprises a plurality of communications prior to the approval or the modification of the first message. In Example 100, the subject matter of Examples 93-99 includes, wherein the method further comprises: providing a graphical user interface (GUI) with a display of a plurality of messages in the first communication thread; and wherein receiving an indication of a reply position in the first communication thread to post a reply comprises receiving a selection of one of the plurality of messages in the first communication thread from the GUI. In Example 101, the subject matter of Examples 93-100 includes, wherein the final recipient is a participant of the first communication thread and wherein posting the communication message to the final recipient from the address of the sender in the first communication thread comprises posting the communication message to the final recipient and all the other participants of the first communication thread, the posting comprising one of: emailing the communication message, posting the communication to a chat session, posting the communication message to a message board, or posting the communication message to a group of a unified communication service. 
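Examples 93-101 (and their device and medium counterparts that follow) turn on locating where in the first communication thread the approved message should be posted: the thread identifier is read from the specified message, and the data store is searched for the message in that thread whose index matches the reply position. A minimal sketch of that lookup, under assumed data shapes, might look like the following; StoredMessage, the field names, and the linear search are illustrative only and loosely mirror the "thread identifier field", "message index", and "message body field" language above.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class StoredMessage:
    thread_id: str       # stand-in for the thread identifier field
    message_index: int   # stand-in for the message index
    body: str            # stand-in for the message body field

def find_reply_target(data_store: List[StoredMessage],
                      specified_message: StoredMessage,
                      reply_position: int) -> Optional[StoredMessage]:
    """Determine the thread identifier from the specified (forked-from) message, then search
    the store for the message in that thread whose index matches the reply position."""
    thread_id = specified_message.thread_id
    for record in data_store:
        if record.thread_id == thread_id and record.message_index == reply_position:
            return record
    return None

def post_as_reply(target: StoredMessage, sender: str, final_recipient: str, approved_text: str) -> dict:
    """Post the approved or modified first message as a reply to the found message,
    including at least a portion of that message's body for context."""
    return {
        "from": sender,
        "to": final_recipient,
        "in_reply_to": (target.thread_id, target.message_index),
        "body": approved_text + "\n\n> " + target.body[:200],
    }

if __name__ == "__main__":
    store = [StoredMessage("thread-42", i, f"earlier message {i}") for i in range(5)]
    # The reply position could come from a sender selection in a GUI or from a configured
    # option such as "last message in the thread" (here, index 4).
    target = find_reply_target(store, specified_message=store[0], reply_position=4)
    if target is not None:
        print(post_as_reply(target, "alice@example.com", "carol@example.com", "Approved wording."))
```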
Example 102 is a computing device for providing a sidebar thread for a communication, the computing device comprising: a processor; a memory, storing instructions, which when executed by the processor cause the computing device to perform operations comprising: receiving a command, over a communication network, from a sender, to create a sidebar thread forked from a specified message of a first communication thread, the first communication thread between a first plurality of first communication thread participants including the sender, the sidebar thread being a thread between a second plurality of sidebar thread participants including the sender and a sidebar recipient, the command to create the sidebar thread including a first message, an address of a final recipient, and an address of the sidebar recipient; responsive to receiving the command: causing a notification of the sidebar thread to be sent over the communication network to the address of the sidebar recipient, the notification including the first message and a portion or a link to a portion of the first communication thread; receiving an indication of a reply position in the first communication thread to post a reply; determining an identifier of the first communication thread using a thread identifier field in a data structure of the specified message of the first communication thread; searching a communication server data store to find a message data structure with a thread identifier field that matches the identifier of the first communication thread and that has a message index matching the reply position; receiving an approval or a modification of the first message, over the communication network, from the sidebar recipient in the sidebar thread; responsive to receiving the approval or the modification of the first message, automatically: creating a communication message from an address of the sender addressed to the address of the final recipient by including the approved first message or the modification of the first message; and posting the communication message to the final recipient from the address of the sender in the first communication thread as a reply to the message corresponding to the found message data structure, the message addressed to the final recipient and including at least a portion of a message body field of the message data structure. In Example 103, the subject matter of Example 102 includes, wherein the operation of receiving the indication of the reply position in the first communication thread to post the reply comprises identifying a configured option and determining the reply position based upon the configured option and messages of the first communication thread. In Example 104, the subject matter of Example 103 includes, wherein the configured option is set to a value indicating that the reply position is a last message in a thread, and wherein the operations comprise determining the reply position based upon a last message in the first communication thread. In Example 105, the subject matter of Examples 102-104 includes, wherein the operation of receiving the indication of the reply position in the first communication thread to post a reply comprises receiving a selection of a reply position in the first communication thread from a sidebar thread participant. In Example 106, the subject matter of Example 105 includes, wherein the indication is part of the command. In Example 107, the subject matter of Examples 105-106 includes, wherein the sidebar thread participant is the sender. 
In Example 108, the subject matter of Examples 102-107 includes, wherein the sidebar thread comprises a plurality of communications prior to the approval or the modification of the first message. In Example 109, the subject matter of Examples 102-108 includes, wherein the operations further comprise: providing a graphical user interface (GUI) with a display of a plurality of messages in the first communication thread; and wherein receiving an indication of a reply position in the first communication thread to post a reply comprises receiving a selection of one of the plurality of messages in the first communication thread from the GUI. In Example 110, the subject matter of Examples 102-109 includes, wherein the final recipient is a participant of the first communication thread and wherein the operation of posting the communication message to the final recipient from the address of the sender in the first communication thread comprises posting the communication message to the final recipient and all the other participants of the first communication thread, the posting comprising one of: emailing the communication message, posting the communication to a chat session, posting the communication message to a message board, or posting the communication message to a group of a unified communication service. Example 111 is a machine-readable medium, storing instructions for providing a sidebar thread for a communication, the instructions, when executed by a machine, cause the machine to perform operations comprising: receiving a command, over a communication network, from a sender, to create a sidebar thread forked from a specified message of a first communication thread, the first communication thread between a first plurality of first communication thread participants including the sender, the sidebar thread being a thread between a second plurality of sidebar thread participants including the sender and a sidebar recipient, the command to create the sidebar thread including a first message, an address of a final recipient, and an address of the sidebar recipient; responsive to receiving the command: causing a notification of the sidebar thread to be sent over the communication network to the address of the sidebar recipient, the notification including the first message and a portion or a link to a portion of the first communication thread; receiving an indication of a reply position in the first communication thread to post a reply; determining an identifier of the first communication thread using a thread identifier field in a data structure of the specified message of the first communication thread; searching a communication server data store to find a message data structure with a thread identifier field that matches the identifier of the first communication thread and that has a message index matching the reply position; receiving an approval or a modification of the first message, over the communication network, from the sidebar recipient in the sidebar thread; responsive to receiving the approval or the modification of the first message, automatically: creating a communication message from an address of the sender addressed to the address of the final recipient by including the approved first message or the modification of the first message; and posting the communication message to the final recipient from the address of the sender in the first communication thread as a reply to the message corresponding to the found message data structure, the message addressed to the final recipient and 
including at least a portion of a message body field of the message data structure. In Example 112, the subject matter of Example 111 includes, wherein the operation of receiving the indication of the reply position in the first communication thread to post the reply comprises identifying a configured option and determining the reply position based upon the configured option and messages of the first communication thread. In Example 113, the subject matter of Example 112 includes, wherein the configured option is set to a value indicating that the reply position is a last message in a thread, and wherein the operations comprise determining the reply position based upon a last message in the first communication thread. In Example 114, the subject matter of Examples 111-113 includes, wherein the operation of receiving the indication of the reply position in the first communication thread to post a reply comprises receiving a selection of a reply position in the first communication thread from a sidebar thread participant. In Example 115, the subject matter of Example 114 includes, wherein the indication is part of the command. In Example 116, the subject matter of Examples 114-115 includes, wherein the sidebar thread participant is the sender. In Example 117, the subject matter of Examples 111-116 includes, wherein the sidebar thread comprises a plurality of communications prior to the approval or the modification of the first message. In Example 118, the subject matter of Examples 111-117 includes, wherein the operations further comprise: providing a graphical user interface (GUI) with a display of a plurality of messages in the first communication thread; and wherein receiving an indication of a reply position in the first communication thread to post a reply comprises receiving a selection of one of the plurality of messages in the first communication thread from the GUI. In Example 119, the subject matter of Examples 111-118 includes, wherein the final recipient is a participant of the first communication thread and wherein the operation of posting the communication message to the final recipient from the address of the sender in the first communication thread comprises posting the communication message to the final recipient and all the other participants of the first communication thread, the posting comprising one of: emailing the communication message, posting the communication to a chat session, posting the communication message to a message board, or posting the communication message to a group of a unified communication service. 
Example 120 is a device for providing a sidebar thread for a communication, the device comprising: means for receiving a command, over a communication network, from a sender, to create a sidebar thread forked from a specified message of a first communication thread, the first communication thread between a first plurality of first communication thread participants including the sender, the sidebar thread being a thread between a second plurality of sidebar thread participants including the sender and a sidebar recipient, the command to create the sidebar thread including a first message, an address of a final recipient, and an address of the sidebar recipient; responsive to receiving the command: means for causing a notification of the sidebar thread to be sent over the communication network to the address of the sidebar recipient, the notification including the first message and a portion or a link to a portion of the first communication thread; means for receiving an indication of a reply position in the first communication thread to post a reply; means for determining an identifier of the first communication thread using a thread identifier field in a data structure of the specified message of the first communication thread; means for searching a communication server data store to find a message data structure with a thread identifier field that matches the identifier of the first communication thread and that has a message index matching the reply position; means for receiving an approval or a modification of the first message, over the communication network, from the sidebar recipient in the sidebar thread; means for, responsive to receiving the approval or the modification of the first message, automatically: creating a communication message from an address of the sender addressed to the address of the final recipient by including the approved first message or the modification of the first message; and posting the communication message to the final recipient from the address of the sender in the first communication thread as a reply to the message corresponding to the found message data structure, the message addressed to the final recipient and including at least a portion of a message body field of the message data structure. In Example 121, the subject matter of Example 120 includes, wherein the means for receiving the indication of the reply position in the first communication thread to post the reply comprises means for identifying a configured option and determining the reply position based upon the configured option and messages of the first communication thread. In Example 122, the subject matter of Example 121 includes, wherein the configured option is set to a value indicating that the reply position is a last message in a thread, and wherein the reply position is determined based upon a last message in the first communication thread. In Example 123, the subject matter of Examples 120-122 includes, wherein the means for receiving an indication of a reply position in the first communication thread to post a reply comprises means for receiving a selection of a reply position in the first communication thread from a sidebar thread participant. In Example 124, the subject matter of Example 123 includes, wherein the indication is part of the command. In Example 125, the subject matter of Examples 123-124 includes, wherein the sidebar thread participant is the sender. 
In Example 126, the subject matter of Examples 120-125 includes, wherein the sidebar thread comprises a plurality of communications prior to the approval or the modification of the first message. In Example 127, the subject matter of Examples 120-126 includes, wherein the device further comprises: means for providing a graphical user interface (GUI) with a display of a plurality of messages in the first communication thread; and wherein the means for receiving an indication of a reply position in the first communication thread to post a reply comprises means for receiving a selection of one of the plurality of messages in the first communication thread from the GUI. In Example 128, the subject matter of Examples 120-127 includes, wherein the final recipient is a participant of the first communication thread and wherein the means for posting the communication message to the final recipient from the address of the sender in the first communication thread comprises posting the communication message to the final recipient and all the other participants of the first communication thread, the posting comprising one of: emailing the communication message, posting the communication to a chat session, posting the communication message to a message board, or posting the communication message to a group of a unified communication service. Example 129 is a method of providing a sidebar thread for a communication, the method comprising: using one or more processors of a communication server: receiving a command, over a communication network, from a sender, to create a sidebar thread forked from a first communication thread, the first communication thread between a first plurality of participants including the sender, the sidebar thread being a thread between a second plurality of participants including the sender and a sidebar recipient, the command to create the sidebar thread including a first message, an address of a final recipient, and an address of the sidebar recipient; responsive to receiving the command: selecting a second communication thread based upon the sender being a common participant of the sidebar thread and the second communication thread, the second communication thread preexisting prior to the receipt of the command and including a third plurality of participants; posting a message to the second communication thread including the first message of the sidebar thread and including the second plurality of participants as members of the second communication thread; receiving an approval or a modification of the first message in the second communication thread over the communication network, from the third plurality of participants; responsive to receiving the approval or the modification of the first message, automatically: creating a communication message from an address of the sender to the final recipient, that includes, the first message or the modification of the first message; and posting the communication message to the final recipient from the address of the sender in the first communication thread. In Example 130, the subject matter of Example 129 includes, wherein the first communication thread is provided over a first communication modality and the second communication thread is provided over a second communication modality. In Example 131, the subject matter of Example 130 includes, wherein the first communication modality is one of: electronic mail or instant messaging and the second communication modality is the other of the electronic mail or instant messaging. 
In Example 132, the subject matter of Examples 129-131 includes, wherein the first communication thread and the second communication thread are both provided over a same communication modality. In Example 133, the subject matter of Examples 129-132 includes, wherein selecting the second communication thread comprises selecting the second communication thread also based upon a similarity in a subject of the first communication thread and either the second communication thread or the sidebar communication thread. In Example 134, the subject matter of Examples 129-133 includes, wherein a participant of the sidebar thread is added to the second communication thread. In Example 135, the subject matter of Examples 129-134 includes, wherein selecting the second communication thread based upon the sender being the common participant of the sidebar thread and the second communication thread comprises also selecting the second communication thread based upon a textual similarity metric of messages of the second communication thread and the sidebar thread. In Example 136, the subject matter of Example 135 includes, determining the similarity metric using a natural language processing algorithm. In Example 137, the subject matter of Example 136 includes, determining the similarity metric using text string matching. In Example 138, the subject matter of Examples 129-137 includes, wherein posting a message to the second communication thread including the first message of the sidebar thread and including the second plurality of participants as members of the second communication thread comprises one of: sending an email with the message, posting the message in a chat room, posting the message in a discussion forum, or posting the message as part of a group discussion in a unified communications service. In Example 139, the subject matter of Examples 129-138 includes, providing a Graphical User Interface (GUI) to the sender, the GUI displaying a message of the first communication thread and a selectable option to create the sidebar thread from the first communication thread; receiving an indication of a selection of the selectable option to create the sidebar thread; responsive to the selection of the selectable option, providing a second GUI to the sender, the second GUI providing GUI controls that accept a designation of the sidebar recipient, a designation of the final recipient, the first message, and a creation control to create the sidebar thread; and wherein receiving the command is responsive to a selection of the creation control in the second GUI. In Example 140, the subject matter of Examples 129-139 includes, providing a GUI to the sender with GUI elements providing the sender with a choice to post the sidebar thread in the second communication thread or start a new thread; and wherein posting the message to the second communication thread is responsive to a receipt of a selection in the GUI to post the sidebar thread in the second communication thread. 
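Examples 129-140 describe routing the sidebar discussion into a preexisting second thread chosen because the sender participates in it and because its content resembles the sidebar topic (Examples 135-137). A minimal selection sketch is shown below; the thread records are invented, and difflib string matching stands in for whatever natural language processing or text-matching similarity metric an actual system would use.

```python
from difflib import SequenceMatcher
from typing import Dict, List, Optional

def text_similarity(a: str, b: str) -> float:
    """Cheap textual similarity metric in [0, 1] based on string matching."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def select_second_thread(candidate_threads: List[Dict],
                         sidebar_messages: List[str],
                         sender: str,
                         threshold: float = 0.2) -> Optional[Dict]:
    """Pick a preexisting thread that shares the sender as a participant and whose
    messages are most textually similar to the sidebar thread's messages."""
    sidebar_text = " ".join(sidebar_messages)
    best, best_score = None, threshold
    for thread in candidate_threads:
        if sender not in thread["participants"]:
            continue  # the sender must be a common participant of both threads
        score = text_similarity(" ".join(thread["messages"]), sidebar_text)
        if score > best_score:
            best, best_score = thread, score
    return best

if __name__ == "__main__":
    threads = [
        {"id": "t1", "participants": {"alice", "dave"}, "messages": ["budget review for Q3"]},
        {"id": "t2", "participants": {"alice", "erin"}, "messages": ["shipping date for the Friday release"]},
    ]
    sidebar = ["can we commit to shipping this Friday?"]
    chosen = select_second_thread(threads, sidebar, sender="alice")
    print(chosen["id"] if chosen else "no suitable thread; start a new one")
```

When no candidate clears the threshold, the fallback of starting a new thread corresponds to the GUI choice described in Example 140.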
In Example 141, the subject matter of Examples 129-140 includes, wherein the final recipient is a participant of the first communication thread and wherein posting the communication message to the final recipient from the address of the sender in the first communication thread comprises posting the communication message to the final recipient and all the other participants of the first communication thread, the posting comprising one of: emailing the communication message, posting the communication to a chat session, posting the communication message to a message board, or posting the communication message to a group of a unified communication service. Example 142 is a computing device for providing a sidebar thread for a communication, the computing device comprising: a processor; a memory, storing instructions, which when executed by the processor cause the computing device to perform operations comprising: receiving a command, over a communication network, from a sender, to create a sidebar thread forked from a first communication thread, the first communication thread between a first plurality of participants including the sender, the sidebar thread being a thread between a second plurality of participants including the sender and a sidebar recipient, the command to create the sidebar thread including a first message, an address of a final recipient, and an address of the sidebar recipient; responsive to receiving the command: selecting a second communication thread based upon the sender being a common participant of the sidebar thread and the second communication thread, the second communication thread preexisting prior to the receipt of the command and including a third plurality of participants; posting a message to the second communication thread including the first message of the sidebar thread and including the second plurality of participants as members of the second communication thread; receiving an approval or a modification of the first message in the second communication thread over the communication network, from the third plurality of participants; responsive to receiving the approval or the modification of the first message, automatically: creating a communication message from an address of the sender to the final recipient, that includes, the first message or the modification of the first message; and posting the communication message to the final recipient from the address of the sender in the first communication thread. In Example 143, the subject matter of Example 142 includes, wherein the first communication thread is provided over a first communication modality and the second communication thread is provided over a second communication modality. In Example 144, the subject matter of Example 143 includes, wherein the first communication modality is one of: electronic mail or instant messaging and the second communication modality is the other of the electronic mail or instant messaging. In Example 145, the subject matter of Examples 142-144 includes, wherein the first communication thread and the second communication thread are both provided over a same communication modality. In Example 146, the subject matter of Examples 142-145 includes, wherein the operation of selecting the second communication thread comprises selecting the second communication thread also based upon a similarity in a subject of the first communication thread and either the second communication thread or the sidebar thread. 
In Example 147, the subject matter of Examples 142-146 includes, wherein a participant of the sidebar thread is added to the second communication thread. In Example 148, the subject matter of Examples 142-147 includes, wherein the operation of selecting the second communication thread based upon the sender being the common participant of the sidebar thread and the second communication thread comprises also selecting the second communication thread based upon a textual similarity metric of messages of the second communication thread and the sidebar thread. In Example 149, the subject matter of Example 148 includes, wherein the operations further comprise determining the similarity metric using a natural language processing algorithm. In Example 150, the subject matter of Example 149 includes, wherein the operations further comprise determining the similarity metric using text string matching. In Example 151, the subject matter of Examples 142-150 includes, wherein the operation of posting a message to the second communication thread including the first message of the sidebar thread and including the second plurality of participants as members of the second communication thread comprises one of sending an email with the message, posting the message in a chat room, posting the message in a discussion forum, or posting the message as part of a group discussion in a unified communications service. In Example 152, the subject matter of Examples 142-151 includes, wherein the operations further comprise: providing a Graphical User Interface (GUI) to the sender, the GUI displaying a message of the first communication thread and a selectable option to create the sidebar thread from the first communication thread; receiving an indication of a selection of the selectable option to create the sidebar thread; responsive to the selection of the selectable option, providing a second GUI to the sender, the second GUI providing GUI controls that accept a designation of the sidebar recipient, a designation of the final recipient, the first message, and a creation control to create the sidebar thread; and wherein receiving the command is responsive to a selection of the creation control in the second GUI. In Example 153, the subject matter of Examples 142-152 includes, wherein the operations further comprise: providing a GUI to the sender with GUI elements providing the sender with a choice to post the sidebar thread in the second communication thread or start a new thread; and wherein posting the message to the second communication thread is responsive to a receipt of a selection in the GUI to post the sidebar thread in the second communication thread. In Example 154, the subject matter of Examples 142-153 includes, wherein the final recipient is a participant of the first communication thread and wherein the operation of posting the communication message to the final recipient from the address of the sender in the first communication thread comprises posting the communication message to the final recipient and all the other participants of the first communication thread, the posting comprising one of: emailing the communication message, posting the communication to a chat session, posting the communication message to a message board, or posting the communication message to a group of a unified communication service. 
Example 155 is a machine-readable medium, storing instructions for providing a sidebar thread for a communication, the instructions, when executed by a machine, cause the machine to perform operations comprising: receiving a command, over a communication network, from a sender, to create a sidebar thread forked from a first communication thread, the first communication thread between a first plurality of participants including the sender, the sidebar thread being a thread between a second plurality of participants including the sender and a sidebar recipient, the command to create the sidebar thread including a first message, an address of a final recipient, and an address of the sidebar recipient; responsive to receiving the command: selecting a second communication thread based upon the sender being a common participant of the sidebar thread and the second communication thread, the second communication thread preexisting prior to the receipt of the command and including a third plurality of participants; posting a message to the second communication thread including the first message of the sidebar thread and including the second plurality of participants as members of the second communication thread; receiving an approval or a modification of the first message in the second communication thread over the communication network, from the third plurality of participants; responsive to receiving the approval or the modification of the first message, automatically: creating a communication message from an address of the sender to the final recipient, that includes, the first message or the modification of the first message; and posting the communication message to the final recipient from the address of the sender in the first communication thread. In Example 156, the subject matter of Example 155 includes, wherein the first communication thread is provided over a first communication modality and the second communication thread is provided over a second communication modality. In Example 157, the subject matter of Example 156 includes, wherein the first communication modality is one of: electronic mail or instant messaging and the second communication modality is the other of the electronic mail or instant messaging. In Example 158, the subject matter of Examples 155-157 includes, wherein the first communication thread and the second communication thread are both provided over a same communication modality. In Example 159, the subject matter of Examples 155-158 includes, wherein the operation of selecting the second communication thread comprises selecting the second communication thread also based upon a similarity in a subject of the first communication thread and either the second communication thread or the sidebar thread. In Example 160, the subject matter of Examples 155-159 includes, wherein a participant of the sidebar thread is added to the second communication thread. In Example 161, the subject matter of Examples 155-160 includes, wherein the operation of selecting the second communication thread based upon the sender being the common participant of the sidebar thread and the second communication thread comprises also selecting the second communication thread based upon a textual similarity metric of messages of the second communication thread and the sidebar thread. In Example 162, the subject matter of Example 161 includes, wherein the operations further comprise determining the similarity metric using a natural language processing algorithm. 
In Example 163, the subject matter of Example 162 includes, wherein the operations further comprise determining the similarity metric using text string matching. In Example 164, the subject matter of Examples 155-163 includes, wherein the operation of posting a message to the second communication thread including the first message of the sidebar thread and including the second plurality of participants as members of the second communication thread comprises one of sending an email with the message, posting the message in a chat room, posting the message in a discussion forum, or posting the message as part of a group discussion in a unified communications service. In Example 165, the subject matter of Examples 155-164 includes, wherein the operations further comprise: providing a Graphical User Interface (GUI) to the sender, the GUI displaying a message of the first communication thread and a selectable option to create the sidebar thread from the first communication thread; receiving an indication of a selection of the selectable option to create the sidebar thread; responsive to the selection of the selectable option, providing a second GUI to the sender, the second GUI providing GUI controls that accept a designation of the sidebar recipient, a designation of the final recipient, the first message, and a creation control to create the sidebar thread; and wherein receiving the command is responsive to a selection of the creation control in the second GUI. In Example 166, the subject matter of Examples 155-165 includes, wherein the operations further comprise: providing a GUI to the sender with GUI elements providing the sender with a choice to post the sidebar thread in the second communication thread or start a new thread; and wherein posting the message to the second communication thread is responsive to a receipt of a selection in the GUI to post the sidebar thread in the second communication thread. In Example 167, the subject matter of Examples 155-166 includes, wherein the final recipient is a participant of the first communication thread and wherein the operation of posting the communication message to the final recipient from the address of the sender in the first communication thread comprises posting the communication message to the final recipient and all the other participants of the first communication thread, the posting comprising one of: emailing the communication message, posting the communication to a chat session, posting the communication message to a message board, or posting the communication message to a group of a unified communication service. 
Example 168 is a device for providing a sidebar thread for a communication, the device comprising: means for receiving a command, over a communication network, from a sender, to create a sidebar thread forked from a first communication thread, the first communication thread between a first plurality of participants including the sender, the sidebar thread being a thread between a second plurality of participants including the sender and a sidebar recipient, the command to create the sidebar thread including a first message, an address of a final recipient, and an address of the sidebar recipient; responsive to receiving the command: means for selecting a second communication thread based upon the sender being a common participant of the sidebar thread and the second communication thread, the second communication thread preexisting prior to the receipt of the command and including a third plurality of participants; means for posting a message to the second communication thread including the first message of the sidebar thread and including the second plurality of participants as members of the second communication thread; means for receiving an approval or a modification of the first message in the second communication thread over the communication network, from the third plurality of participants; means for, responsive to receiving the approval or the modification of the first message, automatically: creating a communication message from an address of the sender to the final recipient, that includes, the first message or the modification of the first message; and posting the communication message to the final recipient from the address of the sender in the first communication thread. In Example 169, the subject matter of Example 168 includes, wherein the first communication thread is provided over a first communication modality and the second communication thread is provided over a second communication modality. In Example 170, the subject matter of Example 169 includes, wherein the first communication modality is one of: electronic mail or instant messaging and the second communication modality is the other of the electronic mail or instant messaging. In Example 171, the subject matter of Examples 168-170 includes, wherein the first communication thread and the second communication thread are both provided over a same communication modality. In Example 172, the subject matter of Examples 168-171 includes, wherein the means for selecting the second communication thread comprises means for selecting the second communication thread also based upon a similarity in a subject of the first communication thread and either the second communication thread or the sidebar thread. In Example 173, the subject matter of Examples 168-172 includes, wherein a participant of the sidebar thread is added to the second communication thread. In Example 174, the subject matter of Examples 168-173 includes, wherein the means for selecting the second communication thread based upon the sender being the common participant of the sidebar thread and the second communication thread comprises also means for selecting the second communication thread based upon a textual similarity metric of messages of the second communication thread and the sidebar thread. In Example 175, the subject matter of Example 174 includes, determining the similarity metric using a natural language processing algorithm. In Example 176, the subject matter of Example 175 includes, determining the similarity metric using text string matching. 
In Example 177, the subject matter of Examples 168-176 includes, wherein the means for posting the message to the second communication thread including the first message of the sidebar thread and including the second plurality of participants as members of the second communication thread comprises one of sending an email with the message, posting the message in a chat room, posting the message in a discussion forum, or posting the message as part of a group discussion in a unified communications service. In Example 178, the subject matter of Examples 168-177 includes, means for providing a Graphical User Interface (GUI) to the sender, the GUI displaying a message of the first communication thread and a selectable option to create the sidebar thread from the first communication thread; means for receiving an indication of a selection of the selectable option to create the sidebar thread; means for, responsive to the selection of the selectable option, providing a second GUI to the sender, the second GUI providing GUI controls that accept a designation of the sidebar recipient, a designation of the final recipient, the first message, and a creation control to create the sidebar thread; and wherein receiving the command is responsive to a selection of the creation control in the second GUI. In Example 179, the subject matter of Examples 168-178 includes, means for providing a GUI to the sender with GUI elements providing the sender with a choice to post the sidebar thread in the second communication thread or start a new thread; and wherein posting the message to the second communication thread is responsive to a receipt of a selection in the GUI to post the sidebar thread in the second communication thread. In Example 180, the subject matter of Examples 168-179 includes, wherein the final recipient is a participant of the first communication thread and wherein the means for posting the communication message to the final recipient from the address of the sender in the first communication thread comprises means for posting the communication message to the final recipient and all the other participants of the first communication thread, the posting comprising one of: emailing the communication message, posting the communication to a chat session, posting the communication message to a message board, or posting the communication message to a group of a unified communication service. Example 181 is at least one machine-readable medium including instructions that, when executed by processing circuitry, cause the processing circuitry to perform operations to implement any of Examples 1-180. Example 182 is an apparatus comprising means to implement any of Examples 1-180. Example 183 is a system to implement any of Examples 1-180. Example 184 is a method to implement any of Examples 1-180.
11943191 | Like reference symbols in the various drawings indicate like elements. DETAILED DESCRIPTION Exemplary Live Location Sharing FIG.1is a diagram illustrating exemplary live location sharing. Mobile device102can communicate with mobile device104over communications network110using an IM program. The IM program can be hosted on a server to which mobile device102and mobile device104connect. Alternatively, each of mobile device102and mobile device104can host a separate copy of an IM program. A first user of mobile device102and a second user of mobile device104may chat (112A,112B) with each other online using the IM program. During the chat, mobile device104can display location sharing interface114in response to an input from the second user. Location sharing interface114allows the second user to enable location sharing. Location sharing can include allowing mobile device102to see a real-time location of mobile device104in the IM program. Allowing mobile device102to see the location of mobile device104can include allowing mobile device102to access the location through a server. The location can be stored on mobile device104, or submitted by mobile device104to be stored on the server temporarily for duration of location sharing. Mobile device104receives the input to enable location sharing. In response, mobile device104notifies mobile device102of the location sharing. Mobile device102acquires the location of mobile device104. Mobile device102can display virtual map116in the IM program. Mobile device102can represent the real-time location of mobile device104using marker118in virtual map116. Marker118can move in virtual map116, corresponding to physical movement of mobile device104. Exemplary User Interface FIGS.2A-2Dillustrate exemplary user interfaces for live location sharing. Each user interface can be a user interface of an IM program executing on either mobile device102or mobile device104ofFIG.1. For convenience, each user interface will be described in reference to mobile device102. FIG.2Aillustrates exemplary user interface202for initiating live location sharing. The live location sharing can be sharing a location of mobile device102with a device that is in communication with mobile device through an IM program. The sharing can be limited to the IM program, where the shared location is visible in an IM program on the other device. User interface202can include settings user interface item204. Settings user interface item204can have a label “details” or any other label indicating that a user can access detailed settings of the IM program. Upon receiving a user input in settings user interface item204, mobile device102can display a list of settings. One of the settings can be location sharing user interface item206. Location sharing user interface item206can include a virtual button that, when touched, can cause mobile device102to display location sharing user interface208. FIG.2Billustrates exemplary location sharing user interface208. Location sharing user interface208can include various user interface items for specifying when to share a location of mobile device102with another mobile device in an IM program. Location sharing user interface208can include virtual button210that, when selected, causes mobile device102to share location of mobile device102in an IM program for a first time period, e.g., one hour. 
Location sharing user interface208can include virtual button212that, when selected, causes mobile device102to share location of mobile device102in an IM program for a second time period, e.g., one day. Location sharing user interface208can include virtual button214that, when selected, causes mobile device102to share location of mobile device102in an IM program for a third time period, e.g., indefinitely. Location sharing user interface208can include virtual button216that, when selected, causes mobile device102to share location of mobile device102in an program with another device when mobile device102is in proximity with the other device and in communication with the other device. The proximity can be user defined, e.g., within a same country, within a same city, or within X miles or meters of one another. FIG.2Cillustrates exemplary map user interface218of an IM program executing on mobile device102. Mobile device102can display map user interface218upon receiving a user confirmation for sharing the location. Map user interface218can include marker220indicating a current location of mobile device102, as can be visible in an IM program on another device that receives the shared location. Accordingly, a user of mobile device102can be aware of what a user of the other device sees. FIG.2Dillustrates exemplary map user interface222of an IM program executing on mobile device102. Mobile device102is in communication with another mobile device using the program. Mobile device102shared location of mobile device102with the other device. The other device, in return, shared location of that device with mobile device102. Mobile device102can display map user interface222that includes a virtual map, marker224indicating a real-time location of mobile device102, and marker226indicating the real-time location of the other device. Exemplary System Components FIG.3is a block diagram illustrating exemplary interaction between mobile devices and their respective servers for live location sharing. Mobile device102and mobile device104can communicate with one another using communication channel302. Communication channel302can be a communication channel for IM programs and can be based on a first telephone number PN1of mobile device102and a second telephone number PN2of mobile device104. Mobile device102has logged into a user account on first server304. The user account is associated with an account identifier ID1, e.g., an account name. Mobile device104has logged into a user account on second server306. The user account is associated with an account identifier ID2. Mobile device102received a user input requesting mobile device102to share a location of mobile device102with mobile device104in the IM program. In response, mobile device102can submit request308to server304requesting server304to provide location sharing information for passing to mobile device104through communication channel302. In response, server304can provide mapping packet310A to mobile device102. Mapping packet310A can include PN1and ID1, and information on how long the location will be shared. Mobile device102can submit mapping packet310B, which can be the same as mapping packet310A, to mobile device104through communication channel302. Mobile device104provides the mapping packet310B to server306as request310C. Server306may already store the second telephone number PN2of mobile device104and account identifier ID2. Server306can submit the number PN1and ID1to an identity service (IDS)312. 
The IDS312can include one or more computers configured to determine, based on PN1and ID1, whether mobile device102is still logged in to server304. The IDS312can send token314to server306. Server306can submit token314to server304. Server304can retrieve the location of mobile device102and provide the location to server306. Server306can, in turn, provide the location to mobile device104for displaying in the IM program. FIG.4is a block diagram illustrating components of an exemplary server and an exemplary mobile device for live location sharing. The server can be either server304or server306(ofFIG.3). The mobile device can be either mobile device102or mobile device104(ofFIG.3). For convenience,FIG.4will be described in reference to server304and mobile device102. Mobile device102can include instant messaging subsystem402. Instant messaging subsystem402is a component of mobile device102configured to execute an IM program and to share a location of mobile device102in the IM program with another device. Instant messaging subsystem402can include location interface module404configured to share the location in the IM program. Instant messaging subsystem402can include map module406configured to display a map in the IM program, including displaying in the map the location of the mobile device102and, if a location of another device is shared, the location of the other device. Instant messaging subsystem402can include device communication module408configured to establish a telephone number based communication channel with another device and communicate with the other device using an IM program over that channel. Mobile device102can include server communication subsystem410. Server communication subsystem410is a component of mobile device102configured to send a request to server304for a mapping packet upon receiving instructions from location interface module404to share location. Server communication subsystem410can receive the mapping packet from server304. If another device shares a location with mobile device102, the other device can notify mobile device102of the sharing through device communication module408. Location interface module404can then instruct server communication subsystem410to request the shared location from server304. Location interface module404can provide the shared location to map module406for displaying in a map of the IM program. Mobile device102can include location subsystem412. Location subsystem412is a component of mobile device102configured to determine a location of mobile device102, for example, by using signals from a cellular communication system, one or more wireless access points, or a global satellite navigation system. Location subsystem412can provide the location to server communication subsystem410for submitting to the server for sharing. Exemplary Procedures FIG.5is a flowchart of an exemplary process500of live location sharing. A first mobile device, e.g., mobile device102, can submit (502) a notification to a second mobile device, e.g., mobile device104, through an instant message program. The notification can indicate that the first mobile device shall provide a first location of the first mobile device for sharing with the second mobile device. At the time of submitting the notification, the first mobile device and the second mobile device can be in communication through the instant message program. The communication can be established based on a phone number of the first mobile device and a phone number of the second mobile device.
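By way of illustration only, the sender-side portion of the exchange ofFIG.3andFIG.4can be sketched in Python. The sketch assumes hypothetical helper objects, a server proxy standing in for server304and an im_channel wrapper standing in for communication channel302; the method names on those objects are assumptions made for the example, and only the packet fields mirror PN1, ID1, and the sharing duration described above.

from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class MappingPacket:
    # Illustrative stand-in for mapping packet 310A/310B of FIG. 3.
    phone_number: str      # PN1, the sharing device's telephone number
    account_id: str        # ID1, the sharing device's account identifier
    expires_at: datetime   # how long the location will be shared

def share_location(server, im_channel, phone_number, account_id, share_for: timedelta):
    # Request 308: ask the sender's server (server 304) for a mapping packet
    # describing PN1, ID1, and how long the location will be shared.
    # issue_mapping_packet is a hypothetical call that would return a
    # MappingPacket like the one defined above.
    packet = server.issue_mapping_packet(
        phone_number, account_id, expires_at=datetime.utcnow() + share_for)
    # Mapping packet 310B: forward the packet to the peer device over the
    # telephone-number-based IM channel (communication channel 302).
    im_channel.send(packet)
    return packet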
The first mobile device can receive (504), through the instant message program and from the second mobile device, a response to the notification. The response can be triggered by the notification. The response can be approved by a user of the second mobile device. The response can indicate that the second mobile device shall provide a second location of the second mobile device for sharing with the first mobile device. The first mobile device can obtain (506), from a server, the second location. The first mobile device can then provide (508) a marker representing the second location for display on a virtual map in the instant message program on the first mobile device. Likewise, the second mobile device can provide a marker representing the first location of the first mobile device for display on a virtual map in an instant message program on the second mobile device. The first mobile device can obtain, from the server, one or more updates of the second location. The updates can correspond to a movement of the second mobile device. The first mobile device can provide a representation of the updated second location for display in the instant message program on the first mobile device. The representation of the updated second location can indicate a path of the movement. FIG.6is a flowchart of an exemplary process600of live location sharing. An instant message program executing on a first mobile device, e.g., mobile device102, can receive (602) a notification from a second mobile device, e.g., mobile device104. The notification can indicate that the second mobile device shares a location of the second mobile device with the first mobile device. The notification can include a mapping packet including a phone number of the second mobile device and an account identifier of the second mobile device. The first mobile device can submit (604), to a server, the mapping packet including the phone number and the account identifier for retrieving the location of the second mobile device. Upon successful authentication by the server indicating that the second mobile device is logged in and that a location of the second mobile device is available, the first mobile device can receive (606) the location from the server during a time period as specified by the second device for sharing the location. The time period can be an hour, a day, or an indefinite time period as specified by the second mobile device according to a user input in the instant message program. The first mobile device then provides (608) a marker representing the location for display on a virtual map in the instant message program on the first mobile device. During the time period, the first mobile device can provide the marker representing the location of the second mobile device for display in one or more other programs for displaying locations. The programs can include, for example, a "find my friend" application program. FIG.7is a flowchart of an exemplary process700of live location sharing. A first server, e.g., server304ofFIG.3, can receive (702) a mapping packet from an instant message program of a first mobile device, e.g., mobile device102. The mapping packet can include a phone number of a second mobile device, e.g., mobile device104. The mapping packet can include an account identifier of the second mobile device. The mapping packet can indicate that the second mobile device has shared a location of the second mobile device with the first mobile device in the instant message program.
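A receiver-side sketch of process600, for illustration only: the notification, server, and map_view objects and the attributes on them are assumptions made for the example rather than an interface defined by this description.

import time
from datetime import datetime

def handle_sharing_notification(notification, server, map_view, refresh_seconds=30):
    # 602: the notification carries a mapping packet with the sharing device's
    # phone number and account identifier.
    packet = notification.mapping_packet
    # 604: submit the packet to the server, which authenticates it and returns
    # the shared location if the sharing device is logged in and sharing.
    location = server.lookup_location(packet)
    # 606/608: while the sharing period specified by the other device is open,
    # keep the marker on the virtual map up to date.
    while location is not None and datetime.utcnow() < packet.expires_at:
        map_view.show_marker(location)
        time.sleep(refresh_seconds)
        location = server.lookup_location(packet)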
The first server can be connected to the first mobile device by a communications network. The second server can be connected to the second mobile device by the communications network. The first mobile device and the second mobile device can be connected to one another by the same communications network or a different communications network. The first server can submit (704) the phone number and the account identifier to an identity service for determining whether the second mobile device is logged into the account on a second server. The identity service can provide a token indicating that the second mobile device is logged into the account. Upon receiving the token from the identity service, the first server can submit (706) a request to the second server for retrieving a current location of the second mobile device. The request can include the account identifier of the second mobile device. The current location of the second mobile device can be received by the second server from the second mobile device in response to an input on the second mobile device indicating that the second mobile device shares location of the second mobile device with the first mobile device. Upon receiving the current location from the second server, the first server can submit (708) the current location to the first mobile device for display in the instant message program. Exemplary Mobile Device Architecture FIG.8is a block diagram of an exemplary architecture800for the mobile devices ofFIGS.1-7. A mobile device (e.g., mobile device102) can include memory interface802, one or more data processors, image processors and/or processors804, and peripherals interface806. Memory interface802, one or more processors804and/or peripherals interface806can be separate components or can be integrated in one or more integrated circuits. Processors804can include application processors, baseband processors, and wireless processors. The various components in mobile device102, for example, can be coupled by one or more communication buses or signal lines. Sensors, devices, and subsystems can be coupled to peripherals interface806to facilitate multiple functionalities. For example, motion sensor810, light sensor812, and proximity sensor814can be coupled to peripherals interface806to facilitate orientation, lighting, and proximity functions of the mobile device. Location processor815(e.g., GPS receiver) can be connected to peripherals interface806to provide geopositioning. Electronic magnetometer816(e.g., an integrated circuit chip) can also be connected to peripherals interface806to provide data that can be used to determine the direction of magnetic North. Thus, electronic magnetometer816can be used as an electronic compass. Motion sensor810can include one or more accelerometers configured to determine change of speed and direction of movement of the mobile device. Barometer818can include one or more devices connected to peripherals interface806and configured to measure pressure of atmosphere around the mobile device. Camera subsystem820and an optical sensor822, e.g., a charged coupled device (CCD) or a complementary metal-oxide semiconductor (CMOS) optical sensor, can be utilized to facilitate camera functions, such as recording photographs and video clips. Communication functions can be facilitated through one or more wireless communication subsystems824, which can include radio frequency receivers and transmitters and/or optical (e.g., infrared) receivers and transmitters. 
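The server-side counterpart, process700, can be sketched as follows; identity_service and peer_server are hypothetical client objects standing in for the identity service and for the second server, and their method names are assumptions.

def handle_mapping_packet(packet, identity_service, peer_server):
    # 702: the mapping packet arrives from the instant message program of the
    # first mobile device and names the second device's phone number and account.
    # 704: ask the identity service whether the second device is logged in; the
    # service answers with an opaque token when it is.
    token = identity_service.check_login(packet.phone_number, packet.account_id)
    if token is None:
        return None                    # not logged in; no location to return
    # 706: present the token to the second server to retrieve the current location.
    location = peer_server.get_current_location(packet.account_id, token)
    # 708: the caller submits this location back to the first mobile device for
    # display in the instant message program.
    return location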
The specific design and implementation of the communication subsystem824can depend on the communication network(s) over which a mobile device is intended to operate. For example, a mobile device can include communication subsystems824designed to operate over a GSM network, a CPRS network, an EDGE network, a Wi-Fi™ or WiMax™ network, and a Bluetooth™ network. In particular, the wireless communication subsystems824can include hosting protocols such that the mobile device can be configured as a base station for other wireless devices. Audio subsystem826can be coupled to a speaker828and a microphone830to facilitate voice-enabled functions, such as voice recognition, voice replication, digital recording, and telephony functions. Audio subsystem826can be configured to receive voice commands from the user. I/O subsystem840can include touch surface controller842and/or other input controller(s)844. Touch surface controller842can be coupled to a touch surface846or pad. Touch surface846and touch surface controller842can, for example, detect contact and movement or break thereof using any of a plurality of touch sensitivity technologies, including but not limited to capacitive, resistive, infrared, and surface acoustic wave technologies, as well as other proximity sensor arrays or other elements for determining one or more points of contact with touch surface846. Touch surface846can include, for example, a touch screen. Other input controller(s)844can be coupled to other input/control devices848, such as one or more buttons, rocker switches, thumb-wheel, infrared port, USB port, and/or a pointer device such as a stylus. The one or more buttons (not shown) can include an up/down button for volume control of speaker828and/or microphone830. In one implementation, a pressing of the button for a first duration may disengage a lock of the touch surface846; and a pressing of the button for a second duration that is longer than the first duration may turn power to mobile device102on or off. The user may be able to customize a functionality of one or more of the buttons. The touch surface846can, for example, also be used to implement virtual or soft buttons and/or a keyboard. In some implementations, mobile device102can present recorded audio and/or video files, such as MP3, AAC, and MPEG files. In some implementations, mobile device102can include the functionality of an MP3 player. Mobile device102may, therefore, include a pin connector that is compatible with the MP3 player. Other input/output and control devices can also be used. Memory interface802can be coupled to memory850. Memory850can include high-speed random access memory and/or non-volatile memory, such as one or more magnetic disk storage devices, one or more optical storage devices, and/or flash memory (e.g., NAND, NOR). Memory850can store operating system852, such as Darwin, RTXC, LINUX, UNIX, OS X, WINDOWS, or an embedded operating system such as VxWorks. Operating system852may include instructions for handling basic system services and for performing hardware dependent tasks. In some implementations, operating system852can include a kernel (e.g., UNIX kernel). Memory850may also store communication instructions854to facilitate communicating with one or more additional devices, one or more computers and/or one or more servers. 
Memory850may include graphical user interface instructions856to facilitate graphic user interface processing; sensor processing instructions858to facilitate sensor-related processing and functions; phone instructions860to facilitate phone-related processes and functions; electronic messaging instructions862to facilitate electronic-messaging related processes and functions; web browsing instructions864to facilitate web browsing-related processes and functions; media processing instructions866to facilitate media processing-related processes and functions; GPS/Navigation instructions868to facilitate GPS and navigation-related processes and functions; camera instructions870to facilitate camera-related processes and functions; and magnetometer data872and calibration instructions874to facilitate magnetometer calibration. The memory850may also store other software instructions (not shown), such as security instructions, web video instructions to facilitate web video-related processes and functions, and/or web shopping instructions to facilitate web shopping-related processes and functions. In some implementations, the media processing instructions866are divided into audio processing instructions and video processing instructions to facilitate audio processing-related processes and functions and video processing-related processes and functions, respectively. An activation record and International Mobile Equipment Identity (IMEI) or similar hardware identifier can also be stored in memory850. Memory850can store live location sharing instructions876that, when executed, can cause processor804to perform operations of live location sharing, e.g., procedures as described in reference toFIG.5andFIG.6. Each of the above identified instructions and applications can correspond to a set of instructions for performing one or more functions described above. These instructions need not be implemented as separate software programs, procedures, or modules. Memory850can include additional instructions or fewer instructions. Furthermore, various functions of the mobile device may be implemented in hardware and/or in software, including in one or more signal processing and/or application specific integrated circuits. Exemplary Operating Environment FIG.9is a block diagram of an exemplary network operating environment900for the mobile devices ofFIGS.1-7. Mobile devices902aand902bcan, for example, communicate over one or more wired and/or wireless networks910in data communication. For example, a wireless network912, e.g., a cellular network, can communicate with a wide area network (WAN)914, such as the Internet, by use of a gateway916. Likewise, an access device918, such as an 802.11g wireless access point, can provide communication access to the wide area network914. Each of mobile devices902aand902bcan be mobile device102and mobile device104, respectively, configured to communicate with one another using an instant messaging program and to share a respective location in the instant messaging program. In some implementations, both voice and data communications can be established over wireless network912and the access device918.
For example, mobile device902acan place and receive phone calls (e.g., using voice over Internet Protocol (VoIP) protocols), send and receive e-mail messages (e.g., using Post Office Protocol 3 (POP3)), and retrieve electronic documents and/or streams, such as web pages, photographs, and videos, over wireless network912, gateway916, and wide area network914(e.g., using Transmission Control Protocol/Internet Protocol (TCP/IP) or User Datagram Protocol (UDP)). Likewise, in some implementations, the mobile device902bcan place and receive phone calls, send and receive e-mail messages, and retrieve electronic documents over the access device918and the wide area network914. In some implementations, mobile device902aor902bcan be physically connected to the access device918using one or more cables and the access device918can be a personal computer. In this configuration, mobile device902aor902bcan be referred to as a “tethered” device. Mobile devices902aand902bcan also establish communications by other means. For example, wireless device902acan communicate with other wireless devices, e.g., other mobile devices, cell phones, etc., over the wireless network912. Likewise, mobile devices902aand902bcan establish peer-to-peer communications920, e.g., a personal area network, by use of one or more communication subsystems, such as the Bluetooth™ communication devices. Other communication protocols and topologies can also be implemented. The mobile device902aor902bcan, for example, communicate with one or more services930and940over the one or more wired and/or wireless networks. For example, instant messaging services930can allow mobile devices902aand902bto communicate with one another using an instant messaging program. Location service940can provide the location and map data to mobile devices902aand902bfor determining locations of mobile devices902aand902b. Mobile device902aor902bcan also access other data and content over the one or more wired and/or wireless networks. For example, content publishers, such as news sites, Really Simple Syndication (RSS) feeds, web sites, blogs, social networking sites, developer networks, etc., can be accessed by mobile device902aor902b. Such access can be provided by invocation of a web browsing function or application (e.g., a browser) in response to a user touching, for example, a Web object. A number of implementations of the invention have been described. Nevertheless, it will be understood that various modifications can be made without departing from the spirit and scope of the invention. Exemplary System Architecture FIG.10is a block diagram of an exemplary system architecture for implementing the features and operations ofFIGS.1-7. Other architectures are possible, including architectures with more or fewer components. In some implementations, architecture1000includes one or more processors1002(e.g., dual-core Intel® Xeon® Processors), one or more output devices1004(e.g., LCD), one or more network interfaces1006, one or more input devices1008(e.g., mouse, keyboard, touch-sensitive display) and one or more computer-readable media1012(e.g., RAM, ROM, SDRAM, hard disk, optical disk, flash memory, etc.). These components can exchange communications and data over one or more communication channels1010(e.g., buses which can utilize various hardware and software for facilitating the transfer of data and control signals between components. 
The term “computer-readable medium” refers to a medium that participates in providing instructions to processor1002for execution, including without limitation, non-volatile media (e.g., optical or magnetic disks), volatile media (e.g., memory) and transmission media. Transmission media includes, without limitation, coaxial cables, copper wire and fiber optics. Computer-readable media1012can further include operating system1014(e.g., a Linux® operating system), network communication module1016, location sharing manager1020, location manager1030, and identity service manager1040. Operating system1014can be multi-user, multiprocessing, multitasking, multithreading, real time, etc. Operating system1014performs basic tasks, including but not limited to: recognizing input from and providing output to devices1006,1008; keeping track and managing files and directories on computer-readable media1012(e.g., memory or a storage device); controlling peripheral devices; and managing traffic on the one or more communication channels1010. Network communications module1016includes various components for establishing and maintaining network connections (e.g., software for implementing communication protocols, such as TCP/IP, HTTP, etc.). Location sharing manager1020can include computer instructions that, when executed, cause processor1002to perform operations of location sharing, e.g., procedure700as described in reference toFIG.7. Location manager1030can include computer instructions that, when executed, cause processor1002to provide location of mobile device and virtual maps to a mobile device. Identity service manager1040can include computer instructions that, when executed, cause processor1002to perform functions of identity services312as described in reference toFIG.3. Architecture1000can be implemented in a parallel processing or peer-to-peer infrastructure or on a single device with one or more processors. Software can include multiple software components or can be a single body of code. The described features can be implemented advantageously in one or more computer programs that are executable on a programmable system including at least one programmable processor coupled to receive data and instructions from, and to transmit data and instructions to, a data storage system, at least one input device, and at least one output device. A computer program is a set of instructions that can be used, directly or indirectly, in a computer to perform a certain activity or bring about a certain result. A computer program can be written in any form of programming language (e.g., Objective-C, Java), including compiled or interpreted languages, and it can be deployed in any form, including as a stand-alone program or as a module, component, subroutine, a browser-based web application, or other unit suitable for use in a computing environment. Suitable processors for the execution of a program of instructions include, by way of example, both general and special purpose microprocessors, and the sole processor or one of multiple processors or cores, of any kind of computer. Generally, a processor will receive instructions and data from a read-only memory or a random access memory or both. The essential elements of a computer are a processor for executing instructions and one or more memories for storing instructions and data. 
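As an illustration of the identity service role played by identity service manager1040(and by the IDS ofFIG.3), the sketch below keeps an in-memory session table and hands out opaque tokens; the class name, the session table, and the token format are assumptions for the example only and do not describe the actual identity service.

import secrets

class IdentityServiceManager:
    def __init__(self):
        self._sessions = set()   # (phone_number, account_id) pairs currently logged in
        self._tokens = {}        # token -> (phone_number, account_id)

    def register_login(self, phone_number, account_id):
        self._sessions.add((phone_number, account_id))

    def register_logout(self, phone_number, account_id):
        self._sessions.discard((phone_number, account_id))

    def check_login(self, phone_number, account_id):
        # Issue an opaque token only while the device is still logged in; the
        # token can later be presented to that device's server to fetch the location.
        if (phone_number, account_id) not in self._sessions:
            return None
        token = secrets.token_urlsafe(16)
        self._tokens[token] = (phone_number, account_id)
        return token

    def validate(self, token):
        return self._tokens.get(token)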
Generally, a computer will also include, or be operatively coupled to communicate with, one or more mass storage devices for storing data files; such devices include magnetic disks, such as internal hard disks and removable disks; magneto-optical disks; and optical disks. Storage devices suitable for tangibly embodying computer program instructions and data include all forms of non-volatile memory, including by way of example semiconductor memory devices, such as EPROM, EEPROM, and flash memory devices; magnetic disks such as internal hard disks and removable disks; magneto-optical disks; and CD-ROM and DVD-ROM disks. The processor and the memory can be supplemented by, or incorporated in, ASICs (application-specific integrated circuits). To provide for interaction with a user, the features can be implemented on a computer having a display device such as a CRT (cathode ray tube) or LCD (liquid crystal display) monitor or a retina display device for displaying information to the user. The computer can have a touch surface input device (e.g., a touch screen) or a keyboard and a pointing device such as a mouse or a trackball by which the user can provide input to the computer. The computer can have a voice input device for receiving voice commands from the user. The features can be implemented in a computer system that includes a back-end component, such as a data server, or that includes a middleware component, such as an application server or an Internet server, or that includes a front-end component, such as a client computer having a graphical user interface or an Internet browser, or any combination of them. The components of the system can be connected by any form or medium of digital data communication such as a communication network. Examples of communication networks include, e.g., a LAN, a WAN, and the computers and networks forming the Internet. The computing system can include clients and servers. A client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. In some embodiments, a server transmits data (e.g., an HTML page) to a client device (e.g., for purposes of displaying data to and receiving user input from a user interacting with the client device). Data generated at the client device (e.g., a result of the user interaction) can be received from the client device at the server. A system of one or more computers can be configured to perform particular actions by virtue of having software, firmware, hardware, or a combination of them installed on the system that in operation causes or cause the system to perform the actions. One or more computer programs can be configured to perform particular actions by virtue of including instructions that, when executed by data processing apparatus, cause the apparatus to perform the actions. While this specification contains many specific implementation details, these should not be construed as limitations on the scope of any inventions or of what may be claimed, but rather as descriptions of features specific to particular embodiments of particular inventions. Certain features that are described in this specification in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable subcombination.
Moreover, although features may be described above as acting in certain combinations and even initially claimed as such, one or more features from a claimed combination can in some cases be excised from the combination, and the claimed combination may be directed to a subcombination or variation of a subcombination. Similarly, while operations are depicted in the drawings in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order, or that all illustrated operations be performed, to achieve desirable results. In certain circumstances, multitasking and parallel processing may be advantageous. Moreover, the separation of various system components in the embodiments described above should not be understood as requiring such separation in all embodiments, and it should be understood that the described program components and systems can generally be integrated together in a single software product or packaged into multiple software products. Thus, particular embodiments of the subject matter have been described. Other embodiments are within the scope of the following claims. In some cases, the actions recited in the claims can be performed in a different order and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In certain implementations, multitasking and parallel processing may be advantageous. | 35,918 |
11943192 | DETAILED DESCRIPTION A messaging server system, which hosts a backend service for an associated messaging client, is configured to detect a co-location event indicating that two devices executing respective messaging clients are located within a certain physical proximity and respond to the co-location event by unlocking one or more user experiences previously designated as co-location experiences. The technical problem of providing an online experience to a pair of users represented by respective user profiles in the messaging server system, in a way that the experience served to the respective associated messaging clients changes based on the users' physical proximity to each other, is addressed by an online co-location connection service configured to selectively pair user profiles associated with respective client devices equipped with sensors that communicate with each other within the predetermined physical proximity range, monitor physical proximity of the client devices based on the sensor data obtained by the co-location connection service from the respective messaging clients executing at the respective client devices and, in response to detecting that the client devices are within a predetermined physical proximity range, modify the user interface in the respective messaging clients. A predetermined physical proximity range may be referred to as the co-location range. A user interface modified in response to detecting that the client devices are within a predetermined physical proximity range is an example of a co-location experience. The operation of pairing two user profiles associated with respective client devices comprises designating these two user profiles, in a database that stores profiles representing users in the messaging server system, as co-location buddies. For example, each of the paired profiles may include an identification of the other profile and a flag indicating that the other profile is its co-location buddy. In some embodiments, the process of pairing includes receiving, from a user, a request to be paired with another user, obtaining a consent to be paired from the other user, and determining that the respective client devices of the two users are configured to communicate with each other directly over a near field communication technology, such as, e.g., a wireless personal area network technology, radio-frequency identification (RFID), etc. The profiles representing the two users are then designated as co-location buddies in the database. Obtaining the consent to be paired from a user may entail communicating, from the messaging server system to the associated client device, a message or a user interface including a selectable option to grant or to deny consent to be paired. The messaging server system effectuates the pairing if the option to grant consent was selected and does not effectuate the pairing if the option to deny consent was selected or if no response was received. For the purposes of this description, the messaging clients associated with the paired user profiles are referred to as paired messaging clients, and the associated client devices are referred to as paired client devices. When the paired client devices come within the co-location range of each other, a co-location event is sent from one client device to the other, and, also, the co-location event is sent to the messaging server system.
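A minimal sketch of the pairing step, for illustration only: the record layout and the short_range_capable check are assumptions standing in for the database designation of co-location buddies and for the determination that both devices support a near field communication technology.

from dataclasses import dataclass
from typing import Optional

@dataclass
class Profile:
    user_id: str
    device_id: str
    short_range_capable: bool = True        # device has a usable short-range sensor
    colocation_buddy: Optional[str] = None  # identification of the paired profile

def pair_profiles(requester: Profile, other: Profile, consent_granted: bool) -> bool:
    # The pairing is effectuated only when both devices can communicate over a
    # near field technology and the other user selected the option to grant
    # consent; a denial, or no response at all, leaves both profiles unpaired.
    if not (consent_granted and requester.short_range_capable and other.short_range_capable):
        return False
    requester.colocation_buddy = other.user_id
    other.colocation_buddy = requester.user_id
    return True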
As mentioned above, an example of a co-location experience is a user interface modified in response to detecting that the client devices are within a predetermined physical proximity range, also referred to as a co-location user interface (UI). The co-location UI may include an indication of co-location of the devices, as well as a visual control actionable to activate a feature that is not otherwise made available to the users, such as, e.g., an HTML5-based app or a game. The co-location UI may, in some embodiments, include animation configured to playback overlaid over a screen of the messaging client. Such animation may be an animated image with a transparent background, e.g., of a couple engaged in an activity that in non-virtual realm is only possible when two people are in close proximity, such as hugging or dancing. Another example of such animation is a depiction of hearts or balloons floating through the screen of the messaging client. The co-location UI may show respective custom avatars representing the paired user profiles, where the avatars are modified in a manner indicating that the other person is nearby. When the messaging server system detects that paired devices are no longer within a predetermined physical proximity range, the co-location experience is made unavailable to the users of the paired messaging clients. While some less resource-intensive co-location experiences (sharing a simple animation) may be provided by the paired messaging clients to their users directly, without a roundtrip to the messaging server system, other co-location experiences (a more complex animation or a two player game) may include interaction with the messaging server system. Furthermore, while a co-location connection service is described in the context of a messaging system, the co-location methodology described herein may be utilized beneficially in any scenario where users interact via their client devices. For example, when users are engaged in an interactive game via their client devices, co-location methodology may be used to unlock additional power-ups in response to detecting co-location of the client devices. An online co-location connection service may be provided in an online messaging system comprising a messaging client and an associated backend service, which is described with reference toFIG.1below. Networked Computing Environment FIG.1is a block diagram showing an example messaging system100for exchanging data (e.g., messages and associated content) over a network. The messaging system100includes multiple instances of a client device102, each of which hosts a number of applications, including a messaging client104. Each messaging client104is communicatively coupled to other instances of the messaging client104and a messaging server system108via a network106(e.g., the Internet). A messaging client104is able to communicate and exchange data with another messaging client104and with the messaging server system108via the network106. The data exchanged between messaging client104, and between a messaging client104and the messaging server system108, includes functions (e.g., commands to invoke functions) as well as payload data (e.g., text, audio, video or other multimedia data). A client device hosting a messaging client104may be equipped with sensors permitting the messaging client104to communicate and exchange data (e.g., a Bluetooth UUID) with another messaging client104over a near field communication technology, such as, e.g., Bluetooth Low Energy technology. 
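The unlock and lock behavior described above can be sketched as a small gate object; the experience identifiers and the enable and disable calls on the messaging client are illustrative assumptions rather than an interface defined by this description.

class ColocationExperienceGate:
    def __init__(self, messaging_client, experiences):
        # experiences names the user experiences previously designated as
        # co-location experiences, e.g. an overlay animation or a two-player game.
        self.messaging_client = messaging_client
        self.experiences = list(experiences)
        self.unlocked = False

    def on_colocation_event(self):
        # Paired devices reported they are within the co-location range:
        # surface the co-location UI and make the designated experiences available.
        self.unlocked = True
        for experience in self.experiences:
            self.messaging_client.enable(experience)

    def on_distancing_event(self):
        # The devices are no longer within the co-location range: the
        # experiences are made unavailable again.
        self.unlocked = False
        for experience in self.experiences:
            self.messaging_client.disable(experience)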
The messaging server system108provides server-side functionality via the network106to a particular messaging client104. While certain functions of the messaging system100are described herein as being performed by either a messaging client104or by the messaging server system108, the location of certain functionality either within the messaging client104or the messaging server system108may be a design choice. For example, it may be technically preferable to initially deploy certain technology and functionality within the messaging server system108but to later migrate this technology and functionality to the messaging client104where a client device102has sufficient processing capacity. The messaging server system108supports various services and operations that are provided to the messaging client104. Such operations include transmitting data to, receiving data from, and processing data generated by the messaging client104. This data may include message content, client device information, geolocation information, media augmentation and overlays, message content persistence conditions, social network information, and live event information, as examples. Data exchanges within the messaging system100are invoked and controlled through functions available via user interfaces (UIs) of the messaging client104. Turning now specifically to the messaging server system108, an Application Program Interface (API) server110is coupled to, and provides a programmatic interface to, application servers112. The application servers112are communicatively coupled to a database server118, which facilitates access to a database120. A web server124is coupled to the application servers112and provides web-based interfaces to the application servers112. To this end, the web server124processes incoming network requests over the Hypertext Transfer Protocol (HTTP) and several other related protocols. The database120stores data associated with messages processed by the application servers112, such as, e.g., profile data about a particular entity. Where the entity is an individual, the profile data includes, for example, a user name, notification and privacy settings, as well as records related to changes made by the user to their profile data. Where a first user profile and a second user profile have been designated as co-location buddies for the purpose of accessing the co-location connection service, the first user profile includes a unique identification of the user's client device and an identification of the second user profile. The second user profile, in turn, includes a unique identification of their client device and an identification of the first user profile. An example of profile data that represents a profile paired with another user profile in the messaging system, where the paired profiles represent users of the co-location connection service is shown inFIG.7, which is described further below. The Application Program Interface (API) server110receives and transmits message data (e.g., commands and message payloads) between the client device102and the application servers112. Specifically, the Application Program Interface (API) server110provides a set of interfaces (e.g., routines and protocols) that can be called or queried by the messaging client104in order to invoke functionality of the application servers112. 
The Application Program Interface (API) server110exposes various functions supported by the application servers112, including account registration, login functionality, the sending of messages, via the application servers112, from a particular messaging client104to another messaging client104, the sending of media files (e.g., images or video) from a messaging client104to a messaging server114, and for possible access by another messaging client104, opening an application event (e.g., relating to the messaging client104), as well as various functions supported by developer tools provided by the messaging server system108for use by third party computer systems. The application servers112host a number of server applications and subsystems, including for example a messaging server114, an image processing server116, and a social network server122. The messaging server114implements a number of message processing technologies and functions, particularly related to the aggregation and other processing of content (e.g., textual and multimedia content) included in messages received from multiple instances of the messaging client104. The image processing server116is dedicated to performing various image processing operations, typically with respect to images or video within the payload of a message sent from or received at the messaging server114. The social network server122supports various social networking functions and services and makes these functions and services available to the messaging server114. Also shown inFIG.1is a co-location server117. The co-location server117provides an online co-location connection service configured to selectively pair user profiles associated with respective client devices equipped with sensors that communicate with each other within the predetermined physical range, monitor physical proximity of the client devices based on the sensor data obtained by the co-location connection service from the respective messaging clients executing at the respective client devices and, in response to detecting that the client devices are within a predetermined physical proximity range, generate a co-location experience by modifying the user interface in the respective messaging clients. While, as shown inFIG.1, an online co-location connection service is provided at the co-location server117, in some examples, an online co-location connection service may be provided at a messaging server, e.g., by the messaging server114. The location of a co-location functionality may be either within the messaging client104or the messaging server system108or both. An example co-location system, which is supported on the client-side by the messaging client104and on the server-side by the application servers112, is discussed below with reference toFIG.6. System Architecture FIG.6is a block diagram illustrating further details regarding the messaging system100, according to some examples. Specifically, the messaging system100is shown to comprise the messaging client104and the application servers112. The messaging system100embodies a number of subsystems, which are supported on the client-side by the messaging client104and on the server-side by the application servers112. These subsystems include, for example, an augmentation system606, a map system608, a game system610, as well as a co-location connection system612.
The co-location connection system612is configured to selectively pair user profiles associated with respective client devices equipped with sensors that communicate with each other within the predetermined physical proximity range. The co-location connection system612monitors physical proximity of the client devices based on the sensor data obtained by the co-location connection service from the respective messaging clients executing at the respective client devices. In response to detecting that the client devices are within a predetermined co-location range, the co-location connection system612serves a co-location experience to the respective associated messaging clients executing at the respective client devices by modifying the user interface in the respective messaging clients. An example of a co-location experience is an augmented reality experience provided by the augmentation system606. The augmentation system606provides various functions that enable a user to augment (e.g., annotate or otherwise modify or edit) media content associated with a message. For example, the augmentation system606provides functions related to the generation and publishing of media overlays for messages processed by the messaging system100. The augmentation system606operatively supplies a media overlay or augmentation (e.g., an image filter) to the messaging client104based on a geolocation of the client device102. In another example, the augmentation system606operatively supplies a media overlay to the messaging client104based on other information, such as in response to the co-location connection system612detecting that the client devices are within a predetermined co-location range. A media overlay may include audio and visual content and visual effects. Examples of audio and visual content include pictures, texts, logos, animations, and sound effects. An example of a visual effect includes color overlaying. The audio and visual content or the visual effects can be applied to a media content item (e.g., a photo) at the client device102. For example, the media overlay may include text or image that can be overlaid on top of a photograph taken by the client device102. In another example, the media overlay includes an identification of a location overlay (e.g., Venice beach), a name of a live event, or a name of a merchant overlay (e.g., Beach Coffee House). In another example, the co-location connection system612and/or the augmentation system606cooperate with the map system608, provides various geographic location functions, and supports the presentation of map-based media content and messages by the messaging client104. Other examples of co-location experiences are experiences provided by the game system610, where the co-location connection system612generates a co-location UI that includes a visual control actionable to activate a game. The game system610provides various gaming functions within the context of the messaging client104. The messaging client104provides a game interface that includes a list of available games that can be launched by a user within the context of the messaging client104, and played with other users of the messaging system100. The messaging system100further enables a particular user to invite other users to participate in the play of a specific game, by issuing invitations to such other users from the messaging client104. 
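For illustration, overlay selection conditioned on co-location might look like the following sketch; the catalog keys, the in_colocation_range call, and the geolocation lookup are assumptions made for the example, not the behavior of the augmentation system itself.

def select_media_overlay(client_device, colocation_system, overlay_catalog):
    # A co-location experience: when the co-location connection system reports
    # that the paired devices are within the co-location range, supply an
    # overlay reserved for that situation.
    if colocation_system.in_colocation_range(client_device):
        return overlay_catalog["colocation"]
    # Otherwise fall back to a geolocation-based overlay (e.g., a venue filter),
    # or to a default overlay when none matches the device's location.
    return overlay_catalog.get(client_device.geolocation, overlay_catalog["default"])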
The messaging client104also supports both the voice and text messaging (e.g., chats) within the context of gameplay, provides a leaderboard for the games, and, also, supports the provision of in-game rewards (e.g., coins and items). In some examples, a co-location experience provided by the co-location connection system612includes providing access to certain external resources, e.g., applications or applets that the respective messaging clients associated with the paired client devices may launch, e.g., by accessing an HTML5 file from a third-party servers. HTML5 is used as an example technology for programming games, but applications and resources programmed based on other technologies can also be used. As mentioned above, where two user profiles have been paired for the purpose of accessing the co-location connection service, the database that stores profile data (e.g., database120ofFIG.1) reflects such pairing. Example data architecture is illustrated inFIG.7, which is discussed below. Data Architecture FIG.7is a schematic diagram illustrating data structures700, which may be stored in the database120of the messaging server system108, according to certain examples. While the content of the database120is shown to comprise a number of tables, it will be appreciated that the data could be stored in other types of data structures (e.g., as an object-oriented database). The database120includes message data stored within a message table702. This message data includes, for any particular one message, at least message sender data, message recipient (or receiver) data, and a payload. Further details regarding information that may be included in a message, and included within the message data stored in the message table702is described below with reference toFIG.4. An entity table704stores entity data, and is linked (e.g., referentially) to an entity graph706and profile data708. Entities for which records are maintained within the entity table704may include individuals, corporate entities, organizations, objects, places, events, and so forth. Regardless of entity type, any entity regarding which the messaging server system108stores data may be a recognized entity. Each entity is provided with a unique identifier, as well as an entity type identifier (not shown). The entity graph706stores information regarding relationships and associations between entities. Such relationships may be social, professional (e.g., work at a common corporation or organization) interested-based or activity-based, merely for example. The entity graph706may also store information reflecting the pairing of user profiles representing users of the co-location connection system612ofFIG.6. The profile data708stores multiple types of profile data about a particular entity. The profile data708may be selectively used and presented to other users of the messaging system100, based on privacy settings specified by a particular entity. Where the entity is an individual, the profile data708includes, for example, a user name, telephone number, address, settings (e.g., notification and privacy settings), as well as a user-selected avatar representation (or collection of such avatar representations). A particular user may then selectively include one or more of these avatar representations within the content of messages communicated via the messaging system100, and on map interfaces displayed by messaging clients104to other users. 
The collection of avatar representations may include "status avatars," which present a graphical representation of a status or activity that the user may select to communicate at a particular time. The profile data708that represents a profile paired with another user profile, where the paired profiles represent users of the co-location connection service117, includes, in addition to a user identification718, a user device identification720and a paired user identification722. In one example, given a user profile that includes a user identification, a user device identification and a paired user identification, the location data exchange component of the power optimization system206shown inFIG.2obtains location data of a user device (represented by the user device identification), determines the paired profile based on the paired user identification, and communicates the obtained location data of the user device to the paired device represented by a user device identification stored in the paired profile. The database120also stores augmentation data, such as overlays or filters, in an augmentation table710. The augmentation data is associated with and applied to videos (for which data is stored in a video table714) and images (for which data is stored in an image table716). As mentioned above, the video table714stores video data that, in one example, is associated with messages for which records are maintained within the message table702. Similarly, the image table716stores image data associated with messages for which message data is stored in the entity table704. The entity table704may associate various augmentations from the augmentation table710with various images and videos stored in the image table716and the video table714. FIG.2is a block diagram illustrating an example system200for providing a co-location experience to users of the co-location connection system612ofFIG.6. In some examples, the system200corresponds to the co-location connection system612shown inFIG.6. The system200includes a pairing component210, a co-location detector220, and a co-location UI generator230. The pairing component210is configured to pair two user profiles. In some embodiments, only paired user profiles can access the co-location service provided by the co-location connection system612. The pairing of a first user profile associated with a first client device and a second user profile associated with a second client device is performed online. The pairing comprises determining that the first client device and the second client device include respective short range communication sensors configured to communicate with each other within the predetermined physical range. The pairing operation may be performed without requiring that the two client devices are, at the time of pairing, within a communication range permitted by their respective short range communication sensors and without requiring a communication between the first client device and the second client device via a short-range wireless communication technology. The pairing comprises receiving, from the first client device, a pairing request to pair the first user profile with the second user profile; in response to the pairing request, obtaining a consent response from the second device, the consent associated with the second user profile; and subsequent to the obtaining of the consent response, pairing the first user profile and the second user profile.
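The paired-profile lookup just described can be sketched as follows; the field names mirror user identification718, user device identification720, and paired user identification722, while the push_to_device callable and the in-memory profile dictionary are assumptions for the example.

from dataclasses import dataclass

@dataclass
class ProfileData:
    user_identification: str          # 718
    user_device_identification: str   # 720
    paired_user_identification: str   # 722

def forward_location(profiles, sender_user_id, location, push_to_device):
    # Obtain the sender's profile, find the paired profile through the paired
    # user identification, and relay the sender's location data to the device
    # recorded in that paired profile.
    sender = profiles[sender_user_id]
    paired = profiles[sender.paired_user_identification]
    push_to_device(paired.user_device_identification, location)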
The co-location detector220is configured to detect a co-location event indicating that a first client device executing a messaging client and a second client device executing the messaging client are located within a predetermined physical range. The detecting of the co-location event comprises receiving, from the first client device, an indication of a connection established between the first client device and the second client device via a short-range wireless communication technology. The co-location detector220is further configured to detect a distancing event with respect to two client devices and, in response to the detecting of the distancing event, communicate, to the client devices, a visual indication of the distancing event. A distancing event indicates that the first client device and the second client device are located outside of the predetermined physical range. The detecting of the distancing event comprises receiving, from the first client device, an indication that a previously established connection between the first client device and the second client device via a short-range wireless communication technology has been terminated. The co-location UI generator230is configured to generate, in response to the co-location detector220detecting the co-location event, a co-location user interface. The co-location user interface may include, e.g., an indication of co-location of the first client device and the second client device, a visual control actionable to activate an HTML5-based application, and/or an animation configured to play back overlaid over a screen of the messaging client executing at the first client device. Each of the various components of the system200may be provided at the client device102and/or at the messaging server system108ofFIG.1. Further details regarding the operation of the system200are described below. FIG.3is a flowchart of a method300for providing co-location experience. The method300may be performed by processing logic that may comprise hardware (e.g., dedicated logic, programmable logic, microcode, etc.), software, or a combination of both. In one example embodiment, some or all processing logic resides at the client device102ofFIG.1and/or at the messaging server system108ofFIG.1. Although the described flowchart can show operations as a sequential process, many of the operations can be performed in parallel or concurrently. In addition, the order of the operations may be re-arranged. A process is terminated when its operations are completed. A process may correspond to a method, a procedure, an algorithm, etc. The operations of methods may be performed in whole or in part, may be performed in conjunction with some or all of the operations in other methods, and may be performed by any number of different systems, such as the systems described herein, or any portion thereof, such as a processor included in any of the systems. At operation310, the co-location detector220of the co-location connection system612detects a co-location event indicating that a first client device executing a messaging client and a second client device executing the messaging client are located within a predetermined physical range. At operation320, the co-location UI generator230, in response to the detecting of the co-location event, generates a co-location user interface including an indication of co-location of the first client device and the second client device. The co-location user interface is communicated to the first client device and to the second client device at operation330.
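A minimal sketch of operations 310-330 of method300, assuming a simple callback for delivering the user interface to each device; the function names and the dictionary-based user interface are illustrative only.

```python
def handle_colocation(first_device, second_device, short_range_link_up, send_ui):
    # Operation 310: a co-location event is detected when the first client device reports
    # an established short-range wireless connection to the second client device.
    if not short_range_link_up:
        return
    # Operation 320: generate the co-location user interface.
    colocation_ui = {
        "colocation_indicator": True,          # indication that the devices are co-located
        "launch_control": "html5_app",         # visual control actionable to start an app
        "animation": "float_upwards_overlay",  # animation played back over the screen
    }
    # Operation 330: communicate the user interface to both client devices.
    send_ui(first_device, colocation_ui)
    send_ui(second_device, colocation_ui)

handle_colocation("device-A", "device-B", short_range_link_up=True,
                  send_ui=lambda device, ui: print(device, ui))
```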
FIG.4is a diagrammatic representation400of an example co-location experience manifested on respective display devices of co-location buddies. As shown inFIG.4, paired client devices410and420host respective messaging clients. Respective screens412and422of the messaging clients display respective indications414and424of the client devices410and420being located within the communication range of a signal430and thus identified by a co-location connection service442hosted at a messaging server440as co-located. The paired client devices410and420communicate with the messaging server440via a network, such as, e.g., the Internet. Respective screens412and422of the messaging clients also display respective animations416and426configured to play back (e.g., float upwards) overlaid over the respective screens412and422and respective visual controls418and428actionable to activate a further application, e.g., an HTML5-based app. Machine Architecture FIG.5is a diagrammatic representation of the machine500within which instructions508(e.g., software, a program, an application, an applet, an app, or other executable code) for causing the machine500to perform any one or more of the methodologies discussed herein may be executed. For example, the instructions508may cause the machine500to execute any one or more of the methods described herein. The instructions508transform the general, non-programmed machine500into a particular machine500programmed to carry out the described and illustrated functions in the manner described. The machine500may operate as a standalone device or may be coupled (e.g., networked) to other machines. In a networked deployment, the machine500may operate in the capacity of a server machine or a client machine in a server-client network environment, or as a peer machine in a peer-to-peer (or distributed) network environment. The machine500may comprise, but not be limited to, a server computer, a client computer, a personal computer (PC), a tablet computer, a laptop computer, a netbook, a set-top box (STB), a personal digital assistant (PDA), an entertainment media system, a cellular telephone, a smartphone, a mobile device, a wearable device (e.g., a smartwatch), a smart home device (e.g., a smart appliance), other smart devices, a web appliance, a network router, a network switch, a network bridge, or any machine capable of executing the instructions508, sequentially or otherwise, that specify actions to be taken by the machine500. Further, while only a single machine500is illustrated, the term “machine” shall also be taken to include a collection of machines that individually or jointly execute the instructions508to perform any one or more of the methodologies discussed herein. The machine500, for example, may comprise the client device102or any one of a number of server devices forming part of the messaging server system108. In some examples, the machine500may also comprise both client and server systems, with certain operations of a particular method or algorithm being performed on the server-side and with certain operations of the particular method or algorithm being performed on the client-side. The machine500may include processors502, memory504, and input/output (I/O) components538, which may be configured to communicate with each other via a bus540.
In an example, the processors502(e.g., a Central Processing Unit (CPU), a Reduced Instruction Set Computing (RISC) Processor, a Complex Instruction Set Computing (CISC) Processor, a Graphics Processing Unit (GPU), a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Radio-Frequency Integrated Circuit (RFIC), another processor, or any suitable combination thereof) may include, for example, a processor506and a processor510that execute the instructions508. The term “processor” is intended to include multi-core processors that may comprise two or more independent processors (sometimes referred to as “cores”) that may execute instructions contemporaneously. AlthoughFIG.5shows multiple processors502, the machine500may include a single processor with a single core, a single processor with multiple cores (e.g., a multi-core processor), multiple processors with a single core, multiple processors with multiple cores, or any combination thereof. The memory504includes a main memory512, a static memory514, and a storage unit516, each accessible to the processors502via the bus540. The main memory512, the static memory514, and the storage unit516store the instructions508embodying any one or more of the methodologies or functions described herein. The instructions508may also reside, completely or partially, within the main memory512, within the static memory514, within machine-readable medium518within the storage unit516, within at least one of the processors502(e.g., within the processor's cache memory), or any suitable combination thereof, during execution thereof by the machine500. The I/O components538may include a wide variety of components to receive input, provide output, produce output, transmit information, exchange information, capture measurements, and so on. The specific I/O components538that are included in a particular machine will depend on the type of machine. For example, portable machines such as mobile phones may include a touch input device or other such input mechanisms, while a headless server machine will likely not include such a touch input device. It will be appreciated that the I/O components538may include many other components that are not shown inFIG.5. In various examples, the I/O components538may include user output components524and user input components526. The user output components524may include visual components (e.g., a display such as a plasma display panel (PDP), a light-emitting diode (LED) display, a liquid crystal display (LCD), a projector, or a cathode ray tube (CRT)), acoustic components (e.g., speakers), haptic components (e.g., a vibratory motor, resistance mechanisms), other signal generators, and so forth. The user input components526may include alphanumeric input components (e.g., a keyboard, a touch screen configured to receive alphanumeric input, a photo-optical keyboard, or other alphanumeric input components), point-based input components (e.g., a mouse, a touchpad, a trackball, a joystick, a motion sensor, or another pointing instrument), tactile input components (e.g., a physical button, a touch screen that provides location and force of touches or touch gestures, or other tactile input components), audio input components (e.g., a microphone), and the like. In further examples, the I/O components538may include biometric components528, motion components530, environmental components532, or position components534, among a wide array of other components.
For example, the biometric components528include components to detect expressions (e.g., hand expressions, facial expressions, vocal expressions, body gestures, or eye-tracking), measure biosignals (e.g., blood pressure, heart rate, body temperature, perspiration, or brain waves), identify a person (e.g., voice identification, retinal identification, facial identification, fingerprint identification, or electroencephalogram-based identification), and the like. The motion components530include acceleration sensor components (e.g., accelerometer), gravitation sensor components, and rotation sensor components (e.g., gyroscope). The environmental components532include, for example, one or more cameras (with still image/photograph and video capabilities), illumination sensor components (e.g., photometer), temperature sensor components (e.g., one or more thermometers that detect ambient temperature), humidity sensor components, pressure sensor components (e.g., barometer), acoustic sensor components (e.g., one or more microphones that detect background noise), proximity sensor components (e.g., infrared sensors that detect nearby objects), gas sensors (e.g., gas detection sensors to detect concentrations of hazardous gases for safety or to measure pollutants in the atmosphere), or other components that may provide indications, measurements, or signals corresponding to a surrounding physical environment. With respect to cameras, the client device102may have a camera system comprising, for example, front cameras on a front surface of the client device102and rear cameras on a rear surface of the client device102. The front cameras may, for example, be used to capture still images and video of a user of the client device102(e.g., “selfies”), which may then be augmented with augmentation data (e.g., filters) described above. The rear cameras may, for example, be used to capture still images and videos in a more traditional camera mode, with these images similarly being augmented with augmentation data. In addition to front and rear cameras, the client device102may also include a 360° camera for capturing 360° photographs and videos. Further, the camera system of a client device102may include dual rear cameras (e.g., a primary camera as well as a depth-sensing camera), or even triple, quad or penta rear camera configurations on the front and rear sides of the client device102. These multiple camera systems may include a wide camera, an ultra-wide camera, a telephoto camera, a macro camera and a depth sensor, for example. The position components534include location sensor components (e.g., a GPS receiver component), altitude sensor components (e.g., altimeters or barometers that detect air pressure from which altitude may be derived), orientation sensor components (e.g., magnetometers), and the like. Communication may be implemented using a wide variety of technologies. The I/O components538further include communication components536operable to couple the machine500to a network520or devices522via respective coupling or connections. For example, the communication components536may include a network interface component or another suitable device to interface with the network520.
In further examples, the communication components536may include wired communication components, wireless communication components, cellular communication components, Near Field Communication (NFC) components, Bluetooth® components (e.g., Bluetooth® Low Energy), Wi-Fi® components, and other communication components to provide communication via other modalities. The devices522may be another machine or any of a wide variety of peripheral devices (e.g., a peripheral device coupled via a USB). Moreover, the communication components536may detect identifiers or include components operable to detect identifiers. For example, the communication components536may include Radio Frequency Identification (RFID) tag reader components, NFC smart tag detection components, optical reader components (e.g., an optical sensor to detect one-dimensional bar codes such as Universal Product Code (UPC) bar code, multi-dimensional bar codes such as Quick Response (QR) code, Aztec code, Data Matrix, Dataglyph, MaxiCode, PDF417, Ultra Code, UCC RSS-2D bar code, and other optical codes), or acoustic detection components (e.g., microphones to identify tagged audio signals). In addition, a variety of information may be derived via the communication components536, such as location via Internet Protocol (IP) geolocation, location via Wi-Fi® signal triangulation, location via detecting an NFC beacon signal that may indicate a particular location, and so forth. The various memories (e.g., main memory512, static memory514, and memory of the processors502) and storage unit516may store one or more sets of instructions and data structures (e.g., software) embodying or used by any one or more of the methodologies or functions described herein. These instructions (e.g., the instructions508), when executed by processors502, cause various operations to implement the disclosed examples. The instructions508may be transmitted or received over the network520, using a transmission medium, via a network interface device (e.g., a network interface component included in the communication components536) and using any one of several well-known transfer protocols (e.g., hypertext transfer protocol (HTTP)). Similarly, the instructions508may be transmitted or received using a transmission medium via a coupling (e.g., a peer-to-peer coupling) to the devices522. Glossary “Carrier signal” refers to any intangible medium that is capable of storing, encoding, or carrying instructions for execution by the machine, and includes digital or analog communications signals or other intangible media to facilitate communication of such instructions. Instructions may be transmitted or received over a network using a transmission medium via a network interface device. “Client device” refers to any machine that interfaces to a communications network to obtain resources from one or more server systems or other client devices. A client device may be, but is not limited to, a mobile phone, desktop computer, laptop computer, portable digital assistant (PDA), smartphone, tablet, ultrabook, netbook, multi-processor system, microprocessor-based or programmable consumer electronics, game console, set-top box, or any other communication device that a user may use to access a network.
“Communication network” refers to one or more portions of a network that may be an ad hoc network, an intranet, an extranet, a virtual private network (VPN), a local area network (LAN), a wireless LAN (WLAN), a wide area network (WAN), a wireless WAN (WWAN), a metropolitan area network (MAN), the Internet, a portion of the Internet, a portion of the Public Switched Telephone Network (PSTN), a plain old telephone service (POTS) network, a cellular telephone network, a wireless network, a Wi-Fi® network, another type of network, or a combination of two or more such networks. For example, a network or a portion of a network may include a wireless or cellular network and the coupling may be a Code Division Multiple Access (CDMA) connection, a Global System for Mobile communications (GSM) connection, or other types of cellular or wireless coupling. In this example, the coupling may implement any of a variety of types of data transfer technology, such as Single Carrier Radio Transmission Technology (1×RTT), Evolution-Data Optimized (EVDO) technology, General Packet Radio Service (GPRS) technology, Enhanced Data rates for GSM Evolution (EDGE) technology, third Generation Partnership Project (3GPP) including 3G, fourth generation wireless (4G) networks, Universal Mobile Telecommunications System (UMTS), High Speed Packet Access (HSPA), Worldwide Interoperability for Microwave Access (WiMAX), Long Term Evolution (LTE) standard, others defined by various standard-setting organizations, other long-range protocols, or other data transfer technology. “Component” refers to a device, physical entity, or logic having boundaries defined by function or subroutine calls, branch points, APIs, or other technologies that provide for the partitioning or modularization of particular processing or control functions. Components may be combined via their interfaces with other components to carry out a machine process. A component may be a packaged functional hardware unit designed for use with other components and a part of a program that usually performs a particular function of related functions. Components may constitute either software components (e.g., code embodied on a machine-readable medium) or hardware components. A “hardware component” is a tangible unit capable of performing certain operations and may be configured or arranged in a certain physical manner. In various example embodiments, one or more computer systems (e.g., a standalone computer system, a client computer system, or a server computer system) or one or more hardware components of a computer system (e.g., a processor or a group of processors) may be configured by software (e.g., an application or application portion) as a hardware component that operates to perform certain operations as described herein. A hardware component may also be implemented mechanically, electronically, or any suitable combination thereof. For example, a hardware component may include dedicated circuitry or logic that is permanently configured to perform certain operations. A hardware component may be a special-purpose processor, such as a field-programmable gate array (FPGA) or an application specific integrated circuit (ASIC). A hardware component may also include programmable logic or circuitry that is temporarily configured by software to perform certain operations. For example, a hardware component may include software executed by a general-purpose processor or other programmable processor. 
Once configured by such software, hardware components become specific machines (or specific components of a machine) uniquely tailored to perform the configured functions and are no longer general-purpose processors. It will be appreciated that the decision to implement a hardware component mechanically, in dedicated and permanently configured circuitry, or in temporarily configured circuitry (e.g., configured by software), may be driven by cost and time considerations. Accordingly, the phrase “hardware component” (or “hardware-implemented component”) should be understood to encompass a tangible entity, be that an entity that is physically constructed, permanently configured (e.g., hardwired), or temporarily configured (e.g., programmed) to operate in a certain manner or to perform certain operations described herein. Considering embodiments in which hardware components are temporarily configured (e.g., programmed), each of the hardware components need not be configured or instantiated at any one instance in time. For example, where a hardware component comprises a general-purpose processor configured by software to become a special-purpose processor, the general-purpose processor may be configured as respectively different special-purpose processors (e.g., comprising different hardware components) at different times. Software accordingly configures a particular processor or processors, for example, to constitute a particular hardware component at one instance of time and to constitute a different hardware component at a different instance of time. Hardware components can provide information to, and receive information from, other hardware components. Accordingly, the described hardware components may be regarded as being communicatively coupled. Where multiple hardware components exist contemporaneously, communications may be achieved through signal transmission (e.g., over appropriate circuits and buses) between or among two or more of the hardware components. In embodiments in which multiple hardware components are configured or instantiated at different times, communications between such hardware components may be achieved, for example, through the storage and retrieval of information in memory structures to which the multiple hardware components have access. For example, one hardware component may perform an operation and store the output of that operation in a memory device to which it is communicatively coupled. A further hardware component may then, at a later time, access the memory device to retrieve and process the stored output. Hardware components may also initiate communications with input or output devices, and can operate on a resource (e.g., a collection of information). The various operations of example methods described herein may be performed, at least partially, by one or more processors that are temporarily configured (e.g., by software) or permanently configured to perform the relevant operations. Whether temporarily or permanently configured, such processors may constitute processor-implemented components that operate to perform one or more operations or functions described herein. As used herein, “processor-implemented component” refers to a hardware component implemented using one or more processors. Similarly, the methods described herein may be at least partially processor-implemented, with a particular processor or processors being an example of hardware. 
For example, at least some of the operations of a method may be performed by one or more processors or processor-implemented components. Moreover, the one or more processors may also operate to support performance of the relevant operations in a “cloud computing” environment or as a “software as a service” (SaaS). For example, at least some of the operations may be performed by a group of computers (as examples of machines including processors), with these operations being accessible via a network (e.g., the Internet) and via one or more appropriate interfaces (e.g., an API). The performance of certain of the operations may be distributed among the processors, not only residing within a single machine, but deployed across a number of machines. In some example embodiments, the processors or processor-implemented components may be located in a single geographic location (e.g., within a home environment, an office environment, or a server farm). In other example embodiments, the processors or processor-implemented components may be distributed across a number of geographic locations. “Computer-readable storage medium” refers to both machine-storage media and transmission media. Thus, the terms include both storage devices/media and carrier waves/modulated data signals. The terms “machine-readable medium,” “computer-readable medium” and “device-readable medium” mean the same thing and may be used interchangeably in this disclosure. “Machine storage medium” refers to a single or multiple storage devices and media (e.g., a centralized or distributed database, and associated caches and servers) that store executable instructions, routines and data. The term shall accordingly be taken to include, but not be limited to, solid-state memories, and optical and magnetic media, including memory internal or external to processors. Specific examples of machine-storage media, computer-storage media and device-storage media include non-volatile memory, including by way of example semiconductor memory devices, e.g., erasable programmable read-only memory (EPROM), electrically erasable programmable read-only memory (EEPROM), FPGA, and flash memory devices; magnetic disks such as internal hard disks and removable disks; magneto-optical disks; and CD-ROM and DVD-ROM disks. The terms “machine-storage medium,” “device-storage medium,” “computer-storage medium” mean the same thing and may be used interchangeably in this disclosure. The terms “machine-storage media,” “computer-storage media,” and “device-storage media” specifically exclude carrier waves, modulated data signals, and other such media, at least some of which are covered under the term “signal medium.” “Non-transitory computer-readable storage medium” refers to a tangible medium that is capable of storing, encoding, or carrying the instructions for execution by a machine. “Signal medium” refers to any intangible medium that is capable of storing, encoding, or carrying the instructions for execution by a machine and includes digital or analog communications signals or other intangible media to facilitate communication of software or data. The term “signal medium” shall be taken to include any form of a modulated data signal, carrier wave, and so forth. The term “modulated data signal” means a signal that has one or more of its characteristics set or changed in such a manner as to encode information in the signal. The terms “transmission medium” and “signal medium” mean the same thing and may be used interchangeably in this disclosure. | 51,350
11943193 | DETAILED DESCRIPTION In the following description of various illustrative embodiments, reference is made to the accompanying drawings, which form a part hereof, and in which is shown, by way of illustration, various embodiments in which aspects of the disclosure may be practiced. It is to be understood that other embodiments may be utilized, and structural and functional modifications may be made, without departing from the scope of the present disclosure. Various connections between elements are discussed in the following description. It is noted that these connections are general and, unless specified otherwise, may be direct or indirect, wired or wireless, and that the specification is not intended to be limiting in this respect. As a brief introduction to the concepts described further below, one or more aspects of the disclosure relate to data loss due to misdirected email and prevention thereof. For example, employees and/or other individuals may accidentally send an email to an unintended recipient (e.g., due to simple negligence, auto-suggestion, small key size, and/or other reasons), and in some instances, these emails may include sensitive data. This may result in, among other things, financial loss due to General Data Protection Regulation (GDPR) fines, loss of brand reputation, and/or loss in productivity. Potential use cases may include: 1) sending a message to an unintended recipient (e.g., wrong domain name, or the like), 2) sending a message to a personal account rather than a business account, 3) adding recipients in the CC line instead of the BCC line (e.g., people listed in the CC field may have their identity exposed to other recipients of the message), 4) replying all instead of replying to a single individual, 5) making spelling mistakes in an email address, and/or other use cases. Accordingly, the disclosure herein describes integrating a feature into the email gateway that may pull email information and send it to a cloud-based system. The system may then identify whether the target recipient is an intended or unintended recipient. Both heuristics and machine learning techniques may be used to make this identification. In some examples, historical data may be analyzed to identify relationships between users, context of communications between users, and the like. In some arrangements, historical email data may be used to train a machine learning model. The analyzed historical data and/or machine learning model may detect potentially misdirected email based on types of data included in the email, whether the email contains sensitive information, email handles of the email recipients, whether a reply or reply-all selection was made, and the like. Subsequently, for each new email, a page ranking may be determined by searching previous communications for similar contexts and performing one or more calculations, e.g., a Levenshtein distance calculation, to identify a potential misdirected email. Querying historical data may include querying specific information in the communications history of a user as well as independent information to determine a potential misdirected email. If an unintended recipient, or potentially unintended recipient, is identified, real-time notifications may be provided to indicate potential risk and/or to provide additional security awareness training to the sender.
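As one hedged illustration of the Levenshtein distance calculation mentioned above, the following sketch flags a recipient address that is within a small edit distance of a known contact (a likely typo). The helper names and the edit-distance threshold of 2 are assumptions and are not specified by the disclosure.

```python
def levenshtein(a, b):
    # Classic dynamic-programming edit distance between two strings.
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

def possible_typo(recipient, known_contacts, max_edits=2):
    # A small, nonzero edit distance to a known contact suggests a misspelled address.
    for contact in known_contacts:
        distance = levenshtein(recipient, contact)
        if 0 < distance <= max_edits:
            return contact
    return None

print(possible_typo("alice@exmaple.com", ["alice@example.com", "bob@example.com"]))
```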
For instance, a notification may be displayed to the user prior to the email being sent, e.g., asking the user to confirm the accuracy of the recipient or whether the recipient was intended prior to sending the email. FIG.1depicts an illustrative operating environment for preventing data loss due to misdirected emails in accordance with one or more example embodiments. Referring toFIG.1, computing environment100may include various computer systems, computing devices, networks, and/or other operating infrastructure. For example, computing environment100may include misdirected email identification platform110, enterprise network gateway system120, initiating user device130, administrator user device140, electronic messaging server150, recipient user device160, data loss prevention system170, and a network190. Network190may include one or more wired networks and/or one or more wireless networks that interconnect misdirected email identification platform110, enterprise network gateway system120, initiating user device130, administrator user device140, electronic messaging server150, recipient user device160, data loss prevention system170, and/or other computer systems and/or devices. In addition, each of misdirected email identification platform110, enterprise network gateway system120, initiating user device130, administrator user device140, electronic messaging server150, recipient user device160, and data loss prevention system170, may be special purpose computing devices configured to perform specific functions, as illustrated in greater detail below, and may include specific computing components such as processors, memories, communication interfaces, and/or the like. Misdirected email identification platform110may include one or more processor(s)111, one or more memory(s)112, and one or more communication interface(s)113. In some instances, misdirected email identification platform110may be made up of a plurality of different computing devices, which may be distributed within a single data center or a plurality of different data centers. In these instances, the one or more processor(s)111, one or more memory(s)112, and one or more communication interface(s)113included in misdirected email identification platform110may be part of and/or otherwise associated with the different computing devices that form misdirected email identification platform110. In one or more arrangements, processor(s)111may control operations of misdirected email identification platform110. Memory(s)112may store instructions that, when executed by processor(s)111, cause misdirected email identification platform110to perform one or more functions, as discussed below. Communication interface(s)113may include one or more wired and/or wireless network interfaces, and communication interface(s)113may connect misdirected email identification platform110to one or more networks (e.g., network190) and/or enable misdirected email identification platform110to exchange information and/or otherwise communicate with one or more devices connected to such networks. In one or more arrangements, memory(s)112may store and/or otherwise provide a plurality of modules (which may, e.g., include instructions that may be executed by processor(s)111to cause misdirected email identification platform110to perform various functions) and/or databases (which may, e.g., store data used by misdirected email identification platform110in performing various functions). 
For example, memory(s)112may store and/or otherwise provide user graph module112aand misdirected email identification module112b. In some instances, user graph module112amay store instructions that cause misdirected email identification platform110to identify user connections, which may, e.g., inform misdirected email determinations, and/or execute one or more other functions described herein to prevent data loss. Additionally, misdirected email identification module112bmay store instructions that cause misdirected email identification platform110to identify whether an email is misdirected, initiate data loss prevention actions, and/or execute one or more other functions described herein. For example, the misdirected email identification module112bmay be configured to train, host, and/or otherwise refine a machine learning model that may be used to perform these functions. In some instances, the misdirected email identification platform110may host or otherwise support an electronic messaging plugin, which may be used to perform any of the below-described features performed by the misdirected email identification platform110. Enterprise network gateway system120may be or include one or more devices configured to route messages to message recipients (e.g., based on message routing commands received from the misdirected email identification platform110). In some instances, the enterprise network gateway system120may be associated with an enterprise organization of the misdirected email identification platform110. Initiating user device130may be configured to be used by an individual who may, e.g., be an employee or otherwise associated with an enterprise organization of the misdirected email identification platform110and/or enterprise network gateway system120. For example, the individual may use the initiating user device130to compose and/or otherwise send an electronic message. In some instances, the initiating user device130may be one of a mobile device, smartphone, tablet, laptop computer, desktop computer, and/or other device configured for electronic messaging. In some instances, initiating user device130may be configured to present one or more user interfaces (which may, e.g., enable the individual to create electronic messages, and/or otherwise provide user input). Administrator user device140may be configured to be used by an individual who may, e.g., be an employee or otherwise associated with an enterprise organization of the misdirected email identification platform110and/or enterprise network gateway system120. For example, the individual may use the administrator user device140to define initial data loss prevention rules, policies, and/or other information. In some instances, the administrator user device140may be one of a mobile device, smartphone, tablet, laptop computer, desktop computer, and/or other device configured for electronic messaging. In some instances, administrator user device140may be configured to present one or more user interfaces (which may, e.g., enable the individual to define data loss prevention rules, policies, and/or other information). In some instances, the administrator user device140may be configured to communicate with the misdirected email identification platform110and/or data loss prevention system170. Electronic messaging server150may be or include one or more devices configured to route messages to message recipients, maintain historical message information, and/or perform other functions.
In some instances, the electronic messaging server150may be associated with an enterprise organization of the misdirected email identification platform110. Recipient user device160may be configured to be used by an individual who may, e.g., be an employee or otherwise associated with an enterprise organization affiliated with the misdirected email identification platform110and/or enterprise network gateway system120. For example, the individual may use the recipient user device160to receive or otherwise access an electronic message. In some instances, the recipient user device160may be one of a mobile device, smartphone, tablet, laptop computer, desktop computer, and/or other device configured for electronic messaging. In some instances, recipient user device160may be configured to present one or more user interfaces (which may, e.g., be electronic messaging interfaces and/or other interfaces). Data loss prevention system170may be or include one or more devices configured to store data loss prevention rules configured to identify and/or otherwise prevent data loss. In some instances, data loss prevention system170may be independent of misdirected email identification platform110(e.g., separate products), or included within the misdirected email identification platform110(e.g., an integrated product). In some instances, the enterprise network gateway system120may be associated with an enterprise organization of the misdirected email identification platform110. In some instances, the data loss prevention system170may host or otherwise support an electronic messaging plugin, which may be used to perform any of the below-described features performed by the data loss prevention system170. FIGS.2A-2Idepict an illustrative event sequence for preventing data loss due to misdirected emails in accordance with one or more example embodiments. Referring toFIG.2A, at step201, the misdirected email identification platform110may monitor the electronic messaging server150for historical message information. For example, the misdirected email identification platform110may monitor the electronic messaging server150to detect previously sent messages and their corresponding senders, recipients, content, timestamps, metadata, and/or other message information. At step202, the misdirected email identification platform110may generate a user graph based on the historical message information. In these instances, the user graph may include nodes for each identified recipient and sender, and may represent various messages as edges between the nodes. For example, if sender #1 sent message #1 to recipient #1, the misdirected email identification platform110may represent this message as an edge between the nodes of sender #1 and recipient #1. In some instances, the misdirected email identification platform110may also include content, timestamps, metadata, and/or other message information within this relationship (e.g., embedded within or otherwise attached to the relationship). In doing so, the misdirected email identification platform110may generate a graph representative of all communications (e.g., as related to an enterprise network or otherwise), storing connections between individuals (including additional layers such as friends of friends, and so on).
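For illustration, the user graph built at step202 might be represented as follows: senders and recipients become nodes, and each historical message becomes a directed edge carrying its metadata. The class and method names are hypothetical, not the disclosed implementation.

```python
from collections import defaultdict

class UserGraph:
    def __init__(self):
        # edges[sender][recipient] is a list of message metadata dictionaries
        self.edges = defaultdict(lambda: defaultdict(list))

    def add_message(self, sender, recipient, timestamp, content):
        # Each previously sent message becomes an edge from sender to recipient,
        # with its metadata attached to the relationship.
        self.edges[sender][recipient].append({"timestamp": timestamp, "content": content})

    def contacts_of(self, sender):
        return list(self.edges[sender].keys())

graph = UserGraph()
graph.add_message("sender1", "recipient1", "2024-01-05T10:00:00", "Q4 budget draft")
print(graph.contacts_of("sender1"))
```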
In some instances, in generating the user graph, the misdirected email identification platform110may generate a multi-modal directed graph, with edges between each node representing emails, instant messaging or other chat messages, meetings initiated by the corresponding user, and/or other messages. In some instances, the misdirected email identification platform may weight each mode of communication. In some instances, the misdirected email identification platform110may establish a collaboration trust rank (e.g., a weighted average of an email trust rank, chat trust rank, and/or a meeting trust rank). In these instances, the email, chat, and/or meeting trust ranks may be personalized edge-weighted page ranks of emails, chats, and/or meetings respectively (which may be identified, e.g., using machine learning and/or other techniques based on, for example, a number of communications between the corresponding individuals, content of the communications, a number of previously identified misdirected messages, a number of data loss prevention violations, and/or other information). In some instances, the misdirected email identification platform110may regularly update the user graph (e.g., as new messaging information is received, at a predetermined time interval, and/or otherwise). At step203, the misdirected email identification platform110may train a misdirected email model. For example, using the historical message information, the misdirected email identification platform may train a machine learning model to detect potentially misdirected email based on types of data included in the email (e.g., sensitive information, email handles of recipients, whether reply-all selections were made, and/or otherwise). In some instances, the misdirected email identification platform110may train the misdirected email model to calculate a page ranking for each new email. For example, the misdirected email model may be trained to identify similar contexts for a new email based on previous communications and to perform one or more calculations to identify the page ranking (e.g., use a Levenshtein distance to identify a potential typo mismatch, context mismatch, and/or otherwise). In some instances, in training the misdirected email model, the misdirected email identification platform110may use labelled data to train a supervised and/or unsupervised machine learning model (e.g., latent Dirichlet allocation (LDA) model, named entity recognition (NER) model, text summarization model, decision tree, natural language processing model, and/or other model). For example, the misdirected email identification platform110may train the LDA model to identify one or more topics in a message. Additionally or alternatively, the misdirected email identification platform110may train the NER model to identify one or more named entities (e.g., people, organizations, products, and/or other entities) in a message. Additionally or alternatively, the misdirected email identification platform110may train the text summarization model to identify a predetermined number of most frequently used keywords in messages. In some instances, the misdirected email identification platform110may train different models for different individuals, groups, teams, and/or other subset of individuals. At step204, the administrator user device140may send data loss prevention information to the misdirected email identification platform110and/or a data loss prevention system170(which may, e.g., communicate with the misdirected email identification platform110).
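A minimal sketch of the collaboration trust rank described above, assuming it is computed as a weighted average of per-mode trust ranks; the example weights and rank values are invented for illustration and are not taken from the disclosure.

```python
def collaboration_trust_rank(email_rank, chat_rank, meeting_rank, weights=(0.5, 0.3, 0.2)):
    # Weighted average of the per-mode (email, chat, meeting) trust ranks;
    # the weights here are placeholders, not values from the disclosure.
    w_email, w_chat, w_meeting = weights
    total = w_email + w_chat + w_meeting
    return (w_email * email_rank + w_chat * chat_rank + w_meeting * meeting_rank) / total

# Example: a contact the sender emails often but rarely chats with or meets.
print(round(collaboration_trust_rank(0.9, 0.2, 0.1), 3))
```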
For example, the administrator user device140may send manually defined heuristics rules, which may be used to identify misdirected emails. For example, the administrator user device140may send heuristic rules such as: 1) all other recipients are on a different domain than the target recipient, 2) there are recipients with multiple domains listed on a CC line, 3) comparing the target recipient with an auto-populated list (e.g., populated to include similar addresses with a webmail or company domain), 4) loose data loss prevention (DLP) rules that may be used to warn users, and/or other rules. In some instances, the loose DLP rules may include: 1) emails with pre-configured keywords in a subject line or the content, 2) emails to pre-configured sensitive clients, domains, domain categories, or the like, 3) emails with confidential tags in attachments to external recipients, 4) emails with links to sensitive documents, and/or other rules. In some instances, the misdirected email identification platform110may store the heuristics in a data loss prevention model, and input of the messaging information into the data loss prevention model may cause the data loss prevention model to output the data loss prevention result (which may, e.g., indicate whether or not any of the heuristics rules are violated). In some instances, the administrator user device140may send different data loss prevention information for different individuals, groups, teams, and/or other subset of individuals. Additionally or alternatively, the data loss prevention information may be sent to the data loss prevention system170. At step205, the data loss prevention system170and/or misdirected email identification platform110may receive and store the data loss prevention information. Referring toFIG.2B, at step206, the initiating user device130may send messaging information (e.g., for a first message) to the misdirected email identification platform110. In some instances, the initiating user device130may send the messaging information to the misdirected email identification platform110while the first message is being composed and before the first message is sent (e.g., for analysis of the first message in real time). Additionally or alternatively, the initiating user device130may send the messaging information to the misdirected email identification platform110once a “send” button is selected (e.g., for analysis of the first message once it has been completed). In some instances, in sending the messaging information, the initiating user device130may send any information corresponding to the first message (e.g., sender, recipient, content, timestamp, metadata, and/or other information that may be analyzed using the user graph, misdirected email identification model, and/or heuristics as defined above. In some instances, the initiating user device130may send the messaging information via a plugin to an electronic mailbox or other messaging service. At step207, the misdirected email identification platform110may receive the messaging information sent at step206. In some instances, the misdirected email identification platform110may continuously monitor the initiating user device130to detect input of a message recipient (e.g., a first target recipient domain) and/or corresponding context information. 
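The manually defined heuristics listed above might be expressed, in simplified form, along the following lines. The rule set is intentionally partial, and the helper names, parameters, and example keyword set are assumptions rather than the disclosed implementation.

```python
def heuristic_warnings(target, all_recipients, cc_recipients, subject, sensitive_keywords):
    def domain(address):
        return address.rsplit("@", 1)[-1].lower()

    warnings = []
    # 1) All other recipients are on a different domain than the target recipient.
    others = [r for r in all_recipients if r != target]
    if others and all(domain(r) != domain(target) for r in others):
        warnings.append("target recipient is on a different domain than every other recipient")
    # 2) Recipients with multiple domains are listed on the CC line.
    if len({domain(r) for r in cc_recipients}) > 1:
        warnings.append("CC line mixes recipients from multiple domains")
    # Loose DLP rule: pre-configured keywords appear in the subject line.
    if any(keyword.lower() in subject.lower() for keyword in sensitive_keywords):
        warnings.append("subject line contains a pre-configured sensitive keyword")
    return warnings

print(heuristic_warnings("pat@partner.example",
                         ["pat@partner.example", "lee@corp.example"],
                         ["lee@corp.example", "sam@other.example"],
                         "Confidential forecast", {"confidential"}))
```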
At step208, the misdirected email identification platform110may identify, using the messaging information and the user graph, nearest neighbor recipients corresponding to a message sender using initiating user device130(e.g., sender of the first message). For example, the misdirected email identification platform110may identify, using the user graph, all individuals with whom the message sender has communicated or a subset of individuals with whom the message sender has communicated (e.g., communicated with within a predetermined amount of time of composing the first message, a predetermined number of individuals with whom the message sender has communicated the most, and/or otherwise). In some instances, the misdirected email identification platform110may identify the nearest neighbors as team members reporting to a common manager, a top x % of users with the highest collaboration trust rank (e.g., a largest quantity of messages between the message sender and corresponding recipient), where the message sender initiated the communication, recent contacts with whom the sending user initiated the communication, and/or other group individuals. At step209, the misdirected email identification platform110may input the identified nearest neighbor information and the messaging information into the misdirected email identification model to identify whether or not the context of the first message is an exact match with the context of other, previously sent, messages between the message sender and the message recipient. In some instances, this may be referred to as a first level match. In some instances, this may cause the misdirected email model to compare the messaging information to historical messaging information between the message sender and the identified nearest neighbors to identify whether or not the context of the first message matches the context of other, previously sent, messages between the message sender and the message recipient. For example, the misdirected email identification model may identify one or more topics in the first email message using the LDA. Additionally or alternatively, the misdirected email identification model may identify one or more named entities (e.g., people, organizations, products, and/or other entities) in the first email message using the NER. Additionally or alternatively, the misdirected email identification model may identify a predetermined number of most frequently used keywords in the first email message using the text summarization model (which may, e.g., be a TF IDF model, or other text summarization model). In these instances, the misdirected email identification model may identify a context of the first message based on the identified one or more topics, one or more named entities, most frequently used keywords, and/or other messaging information. Once the context of the first message is identified, the misdirected email identification model may identify whether the context matches the context of historical messages between the message sender and the message recipient (and/or nearest neighbors of the message sender). For example, the misdirected email identification model may identify whether a predetermined threshold number of topics, named entities, keywords, and/or other information matches the topics, named entities, keywords, and/or other information of the historical messages. 
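One way to illustrate the nearest-neighbor selection described above is to rank the sender's contacts by collaboration trust rank and keep the top fraction; the disclosure also mentions other criteria (e.g., team members reporting to a common manager, recent contacts), which this sketch omits. The function name and the top_fraction value are illustrative only.

```python
def nearest_neighbors(trust_ranks, top_fraction=0.2):
    # trust_ranks maps a contact's address to the sender's collaboration trust rank for them.
    ranked = sorted(trust_ranks.items(), key=lambda item: item[1], reverse=True)
    keep = max(1, int(len(ranked) * top_fraction))
    return [contact for contact, _ in ranked[:keep]]

ranks = {"a@corp.example": 0.91, "b@corp.example": 0.42,
         "c@corp.example": 0.05, "d@other.example": 0.67}
print(nearest_neighbors(ranks, top_fraction=0.5))   # ['a@corp.example', 'd@other.example']
```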
In some instances, the misdirected email identification model may have specific match thresholds for each of the topics, named entities, keywords, and/or other information. In other instances, the thresholds may be general context thresholds, corresponding to a number of matches between any of the categories (e.g., topics, named entities, keywords, and/or other information). In some instances, the misdirected email identification model may identify an exact match if at least one topic, named entity, and keyword are identified in the first message that match the historical messages. In some instances, the misdirected email identification platform110may also analyze the message sender, message recipients, dates, times, subject lines, attachments (e.g., content of the attachment, file name, attachment label, and/or other information), and/or other information of the first message. In some instances, the misdirected email identification platform110may output a page rank indicating a trustworthiness of the message recipient (e.g., the collaboration trust rank). In some instances, the misdirected email identification platform110may perform one or more calculations to identify the page ranking (e.g., use a Levenshtein distance to identify a potential typo mismatch, context mismatch, and/or otherwise). In these instances, the output of the misdirected email identification model may be based on the collaboration trust rank, the Levenshtein distance, and/or other information. If the misdirected email identification platform110does detect a context match between the message sender and the message recipient (and/or the nearest neighbors), the misdirected email identification platform110may proceed to step210. If the misdirected email identification platform does not detect a match, it may proceed to step216. At step210, the misdirected email identification platform110and/or data loss prevention system170may identify a data loss prevention result indicating whether or not the data loss prevention information/criteria (sent at step204) is satisfied. For example, the misdirected email identification platform110may analyze the messaging information using the heuristics described above at step204, such as: 1) are all other recipients on a different domain than the target recipient, 2) are there recipients with multiple domains listed on a CC line, 3) comparing the target recipient with an auto-populated list (e.g., populated to include similar addresses with a webmail or company domain), 4) loose DLP rules that may be used to warn users, and/or other rules. In some instances, the loose DLP rules may include: 1) emails with pre-configured keywords in a subject line or the content, 2) emails to pre-configured sensitive clients, domains, domain categories, or the like, 3) emails with confidential tags in attachments to external recipients, 4) emails with links to sensitive documents, and/or other rules. In some instances, the misdirected email identification platform110may store the heuristics in a data loss prevention model, and input of the messaging information into the data loss prevention model may cause the data loss prevention model to output the data loss prevention result (which may, e.g., indicate whether or not any of the heuristics rules are violated).
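A hedged sketch of the first-level (exact) context match described above, using the example criterion of at least one matching topic, named entity, and keyword between the first message and the historical messages; all names and example values are illustrative only.

```python
def exact_context_match(new_msg, history):
    # Exact (first-level) match: at least one overlap in each category between the new
    # message and the historical messages with the recipient and nearest neighbors.
    categories = ("topics", "named_entities", "keywords")
    return all(set(new_msg.get(c, [])) & set(history.get(c, [])) for c in categories)

new_msg = {"topics": ["q4 budget"], "named_entities": ["Acme Corp"], "keywords": ["forecast"]}
history = {"topics": ["q4 budget", "hiring"], "named_entities": ["Acme Corp"],
           "keywords": ["forecast", "headcount"]}
print(exact_context_match(new_msg, history))   # True
```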
In some instances, the misdirected email identification platform110may apply different data loss prevention information for different individuals, groups, teams, and/or other subset of individuals (e.g., in instances where the respective individuals are enrolled in email data loss prevention). In these instances, the misdirected email identification platform110and/or data loss prevention system170may perform a method similar to the method shown inFIG.8. For example, referring toFIG.8, at step805, the misdirected email identification platform110may identify a message. At step810, once a message is identified, the misdirected email identification platform110may identify whether the context of the message violates a user specific data loss prevention rule for the message sender. If so, the misdirected email identification platform110may proceed to step820. Otherwise, the misdirected email identification platform110may proceed to step815. At step820, the misdirected email identification platform110may identify whether a preconfigured setting indicates that messages flagged as data loss prevention violations should be blocked. If such messages should be blocked, the misdirected email identification platform110may block the message and notify the message sender. Otherwise, if such messages should not be blocked, the method may end. Returning to step810, if there is no violation of a user specific data loss prevention rule, the misdirected email identification platform110may identify whether the message contains confidential or other sensitive content based on a generic data loss prevention scan at step815. If not, the method may end. Otherwise, the misdirected email identification platform110may block the message and notify the message sender as described above with regard to step825. Additionally or alternatively, a generic data loss prevention analysis may be performed. In these instances, the misdirected email identification platform110and/or data loss prevention system170may perform only steps815and825as described above (e.g., without an analysis based on user specific rules). With further reference toFIG.2B, in some instances, the analysis described at step210may be performed by the misdirected email identification platform110and/or the data loss prevention system170. In some instances, the data loss prevention system170may identify the data loss prevention result, and may send the data loss prevention result (e.g., a success code, violation code, and/or other information) to the misdirected email identification platform110. Additionally or alternatively, the misdirected email identification platform110may send a result of the analysis performed at step209to the data loss prevention system170(e.g., a success code, violation code, and/or other information), which may then identify the data loss prevention result, and proceed from there. Accordingly, actions described at step210may be performed by and/or communicated between misdirected email identification platform110and/or data loss prevention system170without departing from the scope of the disclosure. If the data loss prevention rules are satisfied, the misdirected email identification platform110may proceed to step211. If the data loss prevention rules are not satisfied, the misdirected email identification platform110may proceed to step214. 
Referring toFIG.2C, at step211, based on identifying that the messaging information was a context match for the message sender (based on the knowledge graph and machine learning analysis), as well as satisfied the data loss prevention information/criteria, the misdirected email identification platform110may send one or more commands directing the enterprise network gateway system120to route the first message to the target recipient (e.g., the recipient user device160). At step212, based on or in response to the one or more commands directing the enterprise network gateway system120to route the first message to the recipient user device160, the enterprise network gateway system120may route the first message to the recipient user device160. At step213, the recipient user device160may receive and display the first message routed at step212. Returning to step210, if the misdirected email identification platform110determined that the messaging information did not satisfy the data loss prevention information/criteria, the misdirected email identification platform110may proceed to step214. At step214, the misdirected email identification platform110may send a data loss prevention notification, indicating that data loss prevention criteria was not satisfied, to the initiating user device130. In some instances, the misdirected email identification platform110may also send one or more commands directing the initiating user device130to display the data loss prevention notification. At step215, the initiating user device130may receive the data loss prevention notification sent at step214. Based on or in response to the one or more commands directing the initiating user device130to display the data loss prevention notification, the initiating user device130may display the data loss prevention notification. For example, the initiating user device may display a graphical user interface similar to graphical user interface300, which is shown inFIG.3, and which indicates that sensitive or otherwise confidential information should be removed from the first message. Once such information has been removed, or an attempt to re-send the first message is otherwise detected, the misdirected email identification platform110may return to step210to re-assess the first message based on the data loss prevention criteria. In some instances, the data loss prevention notification may also include an option to engage in email security compliance training. In some instances, the notification may include an indication that the target recipient is compromised (e.g., business email compromise notifications, or the like). In some instances, the notification may include options to send the first message to the target recipient anyway or to modify the intended recipient domain. In some instances, the notification may include one or more additional information components or selectable options, such as an indication of a type of data compliance at risk, or an option to select compliance training for reviewing. Returning to step209, if the misdirected email identification platform110does not identify a nearest neighbors context match, the misdirected email identification platform110may proceed to step216. At step216, the misdirected email identification platform110may identify whether or not the recipient domain is included in the identified nearest neighbor domains (e.g., identified at step208).
Referring toFIG.2D, at step217, the misdirected email identification platform110may input the messaging information into the misdirected email identification model and/or revisit the results of the machine learning analysis performed at step209to identify whether or not the message information is an approximate context match with historical messages corresponding to the message recipient (e.g., as opposed to an exact match, as the misdirected email identification model attempted to identify at step209). For example, the misdirected email identification platform110may use similar techniques to those described above at step209and/or fuzzy matching to identify an approximate match. In some instances, to identify whether there is an approximate match, the misdirected email identification model may compare any identified topics, named entities, keywords, and/or other information to less strict thresholds than those described above with regard to step209. For example, the misdirected email identification model may, in some instances, have an exact match threshold of 5 (e.g., 5 matching topics, named entities, keywords, and/or other information), whereas the approximate match threshold may be 2. Additionally or alternatively, the misdirected email identification model may identify at least one matching topic, named entity, and keyword to identify an exact match, whereas an approximate match may be identified if at least one matching topic, named entity, or keyword is identified, but not all three. Additionally or alternatively, the misdirected email identification model may identify that topics, named entities, and/or keywords identified in the first message do not match the historical messages, but are related to topics, named entities, and/or keywords of the historical messages, and thus may identify an approximate match. If both the recipient domain is included in the identified nearest neighbor domains and the messaging information indicates an approximate context match (which may, e.g., be referred to as a second level match), the misdirected email identification platform110may proceed to step218. Otherwise, the misdirected email identification platform110may proceed to step224. At step218, the misdirected email identification platform110may identify a data loss prevention result indicating whether or not the data loss prevention information/criteria (sent at step204) is satisfied. For example, the misdirected email identification platform110may analyze the messaging information using the heuristics described above at step204, such as 1) are all other recipients on a different domain than the target recipient, 2) are there recipients with multiple domains listed on a CC line, 3) comparing the target recipient with an auto-populated list (e.g., populated to include similar addresses with a webmail or company domain), 4) loose DLP rules that may be used to warn users, and/or other rules. In some instances, the loose DLP rules may include: 1) emails with pre-configured keywords in a subject line or the content, 2) emails to pre-configured sensitive clients, domains, domain categories, or the like, 3) emails with confidential tags in attachments to external recipients, 4) emails with links to sensitive documents, and/or other rules.
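For illustration, the exact and approximate thresholds described above might be applied as in the following sketch; the numeric thresholds (5 and 2) are the illustrative values from the passage above rather than fixed requirements.

def match_level(topic_hits, entity_hits, keyword_hits, exact_total=5, approx_total=2):
    # Classify a context comparison as "exact", "approximate", or "none".
    total = topic_hits + entity_hits + keyword_hits
    all_categories = topic_hits >= 1 and entity_hits >= 1 and keyword_hits >= 1
    any_category = topic_hits >= 1 or entity_hits >= 1 or keyword_hits >= 1
    if total >= exact_total or all_categories:
        return "exact"
    if total >= approx_total or any_category:
        return "approximate"
    return "none"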
In some instances, the misdirected email identification platform110may store the heuristics in a data loss prevention model, and input of the messaging information into the data loss prevention model may cause the data loss prevention model to output the data loss prevention result (which may, e.g., indicate whether or not any of the heuristics rules are violated). In some instances, the misdirected email identification platform110may apply different data loss prevention information for different individuals, groups, teams, and/or other subset of individuals (e.g., as described with regard toFIG.8). Additionally or alternatively, a generic analysis may be performed. In some instances, the analysis described at step210may be performed by the misdirected email identification platform110and/or the data loss prevention system170. In some instances, the data loss prevention system170may identify the data loss prevention result, and may send the data loss prevention result to the misdirected email identification platform110. Additionally or alternatively, the misdirected email identification platform110may send a result of the analysis performed at steps216/217to the data loss prevention system170, which may then identify the data loss prevention result, and proceed from there. Accordingly, actions described at step218may be performed by and/or communicated between misdirected email identification platform110and/or data loss prevention system170without departing from the scope of the disclosure. If the data loss prevention rules are satisfied, the misdirected email identification platform110may proceed to step219. If the data loss prevention rules are not satisfied, the misdirected email identification platform110may proceed to step222. In some instances, actions performed at step218may be similar to those described above with regard to step210. At step219, based on identifying that the messaging information was an approximate context match for the message sender and that the message recipient was included in the identified nearest neighbors (based on the knowledge graph and machine learning analysis), as well as satisfied the data loss prevention information/criteria, the misdirected email identification platform110may send one or more commands directing the enterprise network gateway system120to route the first message to the target recipient (e.g., the recipient user device160). In some instances, prior to sending the one or more commands directing the enterprise network gateway system120to route the first message, the misdirected email identification platform110may send or otherwise cause display, at the initiating user device130, of a prompt or other notification indicating that an exact context match was not identified, but that an approximate context match was identified, which may prompt the message sender to confirm that the first message should be sent and/or to correct a potentially unintended recipient. For example, the initiating user device130may display a graphical user interface similar to graphical user interface400, which is shown inFIG.4. In some instances, the notification may also include an option to engage in email security compliance training. In some instances, the notification may include an indication that the target recipient is compromised (e.g., business email compromise notifications, or the like). In some instances, the notification may include options to send the first message to the target recipient anyway or to modify the intended recipient domain. 
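The loose heuristic rules recited above could be captured in a simple rule function such as the following sketch; the message field layout (recipients, cc, subject, body, attachments) and the assumption that the first listed recipient is the target recipient are made only for illustration.

def dlp_result(message, sensitive_domains, confidential_keywords):
    # Returns a list of violated heuristics; an empty list means the data loss
    # prevention criteria are satisfied.
    violations = []
    target = message["recipients"][0]                 # assumed target recipient
    target_domain = target.split("@")[-1]
    other_domains = {r.split("@")[-1] for r in message["recipients"][1:]}
    if other_domains and target_domain not in other_domains:
        violations.append("all other recipients on a different domain")
    if len({r.split("@")[-1] for r in message["cc"]}) > 1:
        violations.append("multiple domains listed on the CC line")
    text = (message["subject"] + " " + message["body"]).lower()
    if any(keyword in text for keyword in confidential_keywords):
        violations.append("pre-configured keyword in subject line or content")
    if target_domain in sensitive_domains:
        violations.append("recipient in a pre-configured sensitive domain")
    if any(attachment.get("confidential_tag") for attachment in message["attachments"]):
        violations.append("confidential tag in an attachment to an external recipient")
    return violations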
In some instances, the notification may include one or more additional information components or selectable options, such as an indication of a type of data compliance at risk, or an option to select compliance training for reviewing. In these instances, if the first message should be sent, the event sequence may proceed to step220. Otherwise, if the message should not be sent, the event sequence may proceed to step245. Actions performed at step219may be similar to those described above with regard to step211. At step220, based on or in response to the one or more commands directing the enterprise network gateway system120to route the first message to the recipient user device160, the enterprise network gateway system120may route the first message to the recipient user device160. Actions performed at step220may be similar to those described above with regard to step212. At step221, the recipient user device160may receive and display the first message routed at step220. Actions performed at step221may be similar to those described above with regard to step213. Returning to step218, if the misdirected email identification platform110determined that the messaging information did not satisfy the data loss prevention information/criteria, the misdirected email identification platform110may proceed to step222. Referring toFIG.2E, at step222, the misdirected email identification platform110may send a data loss prevention notification, indicating that data loss prevention criteria was not satisfied, to the initiating user device130. In some instances, the misdirected email identification platform110may also send one or more commands directing the initiating user device130to display the data loss prevention notification. In some instances, actions performed at step222may be similar to those described above with regard to step214. At step223, the initiating user device130may receive the data loss prevention notification sent at step222. Based on or in response to the one or more commands directing the initiating user device130to display the data loss prevention notification, the initiating user device130may display the data loss prevention notification. For example, the initiating user device130may display a graphical user interface similar to graphical user interface300, which is shown inFIG.3, and which indicates that sensitive or otherwise confidential information should be removed from the first message. In some instances, the data loss prevention notification may also include an option to engage in email security compliance training. In some instances, the notification may include an indication that the target recipient is compromised (e.g., business email compromise notifications, or the like). In some instances, the notification may include options to send the first message to the target recipient anyway or to modify the intended recipient domain. In some instances, the notification may include one or more additional information components or selectable options, such as an indication of a type of data compliance at risk, or an option to select compliance training for reviewing. Once such information has been removed, or an attempt to re-send the first message is otherwise detected, the misdirected email identification platform110may return to step218to re-assess the first message based on the data loss prevention criteria. In some instances, actions performed at step223may be similar to those described above with regard to step215. 
Returning to step217, if the recipient domain is not included in the identified nearest neighbor domains and/or the messaging information is not an approximate context match with the historical messaging information, the misdirected email identification platform110may proceed to step224. At step224, the misdirected email identification platform110may identify, using the user graph, an additional layer of nearest neighbors (e.g., using a similar technique as described above with regard to the identification of the nearest neighbors at step208). For example, at step224, rather than identifying nearest neighbors on the user graph for only the message sender, the misdirected email identification platform110may identify nearest neighbor groups for each of the originally identified nearest neighbors (e.g., the nearest neighbor network for each originally identified nearest neighbor, friends of friends, or the like). At step225, the misdirected email identification platform110may identify whether or not the recipient domain is included in the expanded list of nearest neighbor domains (e.g., identified at step224). In some instances, this may be referred to as a third level match. For example, actions performed at step225may be similar to those performed at step216, though may be performed with an expanded set of possible recipient domains. If the recipient domain is included in the expanded list of nearest neighbor domains, the misdirected email identification platform110may proceed to step226. Otherwise, if the recipient domain is not included in the expanded list of nearest neighbor domains, the misdirected email identification platform may proceed to step234. At step226, the misdirected email identification platform110may identify a data loss prevention result indicating whether or not the data loss prevention information/criteria (sent at step204) is satisfied. For example, the misdirected email identification platform110may analyze the messaging information using the heuristics described above at step204, such as 1) are all other recipients on a different domain than the target recipient, 2) are there recipients with multiple domains listed on a CC line, 3) comparing the target recipient with an auto-populated list (e.g., populated to include similar addresses with a webmail or company domain), 4) loose DLP rules that may be used to warn users, and/or other rules. In some instances, the loose DLP rules may include: 1) emails with pre-configured keywords in a subject line or the content, 2) emails to pre-configured sensitive clients, domains, domain categories, or the like, 3) emails with confidential tags in attachments to external recipients, 4) emails with links to sensitive documents, and/or other rules. In some instances, the misdirected email identification platform110may store the heuristics in a data loss prevention model, and input of the messaging information into the data loss prevention model may cause the data loss prevention model to output the data loss prevention result (which may, e.g., indicate whether or not any of the heuristics rules are violated). In some instances, the misdirected email identification platform110may apply different data loss prevention information for different individuals, groups, teams, and/or other subset of individuals (e.g., as described with regard toFIG.8). Additionally or alternatively, a generic analysis may be performed.
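As an illustration of the nearest neighbor expansion described at steps 224 and 225, the user graph could be represented as a simple adjacency mapping and expanded to friends of friends as in the following sketch; the data layout is an assumption chosen for this example.

def nearest_neighbors(user_graph, sender):
    # First-level nearest neighbors: direct contacts of the sender.
    return set(user_graph.get(sender, ()))

def expanded_neighbors(user_graph, sender):
    # Second-level expansion (friends of friends), as described at step 224.
    # `user_graph` is assumed to be a mapping {address: [contact addresses]}
    # built from historical messages.
    first = nearest_neighbors(user_graph, sender)
    second = set()
    for contact in first:
        second |= nearest_neighbors(user_graph, contact)
    second.discard(sender)
    return first | second

def domain_in_neighbors(recipient, neighbors):
    # Compare only the domain portion, as at steps 216 and 225.
    return recipient.split("@")[-1] in {n.split("@")[-1] for n in neighbors}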
In some instances, the analysis described at step226may be performed by the misdirected email identification platform110and/or the data loss prevention system170. In some instances, the data loss prevention system170may identify the data loss prevention result, and may send the data loss prevention result to the misdirected email identification platform110. Additionally or alternatively, the misdirected email identification platform110may send a result of the analysis performed at step225to the data loss prevention system170, which may then identify the data loss prevention result, and proceed from there. Accordingly, actions described at step226may be performed by and/or communicated between misdirected email identification platform110and/or data loss prevention system170without departing from the scope of the disclosure. If the data loss prevention rules are satisfied, the misdirected email identification platform110may proceed to step227. If the data loss prevention rules are not satisfied, the misdirected email identification platform110may proceed to step232. In some instances, actions performed at step226may be similar to those described above with regard to step210. Referring toFIG.2F, at step227, the misdirected email identification platform110may send a notification to the initiating user device130indicating that a friends historical match is identified. For example, the misdirected email identification platform110may send a notification indicating that although the content of the first message may be unusual between the message sender and the message recipient, similar content has been exchanged in messages between the identified nearest neighbors and/or nearest neighbors of those identified individuals. In some instances, the notification may prompt the message sender to confirm whether or not the first message should be sent. In some instances, the misdirected email identification platform110may also send one or more commands directing the initiating user device130to display the friends historical match notification. At step228, the initiating user device130may receive the friends historical match notification. In some instances, based on or in response to one or more commands directing the initiating user device130to display the friends historical match notification, the initiating user device130may display the friends historical match notification (which may, e.g., be similar to graphical user interface500, which is shown inFIG.5). In some instances, the friends historical match notification may indicate that although there are no historical messages between the message sender and the message recipient (e.g., the message recipient is not included in the identified nearest neighbors), there are historical messages between the identified nearest neighbors for the message sender and the message recipient (e.g., the message recipient is included in the expanded group of nearest neighbors, corresponding to contacts of the message sender's contacts (e.g., friends of friends)). In some instances, the friends historical match notification may also include an option to engage in email security compliance training. In some instances, the notification may include an indication that the target recipient is compromised (e.g., business email compromise notifications, or the like). In some instances, the notification may include options to send the first message to the target recipient anyway or to modify the intended recipient domain.
In some instances, the notification may include one or more additional information components or selectable options, such as an indication of a type of data compliance at risk, or an option to select compliance training for reviewing. If the initiating user device130receives input indicating that the first message should be sent, the event sequence may proceed to step229. Otherwise, if the initiating user device receives input indicating that the first message should not be sent, the event sequence may proceed to step245. At step229, based on identifying that the messaging information was a friends historical match for the message sender, as well as satisfied the data loss prevention information/criteria, the misdirected email identification platform110may send one or more commands directing the enterprise network gateway system120to route the first message to the target recipient (e.g., the recipient user device160). Actions performed at step229may be similar to those described above with regard to step211. At step230, based on or in response to the one or more commands directing the enterprise network gateway system120to route the first message to the recipient user device160, the enterprise network gateway system120may route the first message to the recipient user device160. Actions performed at step230may be similar to those described above with regard to step212. At step231, the recipient user device160may receive and display the first message routed at step230. Actions performed at step231may be similar to those described above with regard to step213. Returning to step226, if the data loss prevention criteria were not satisfied, the misdirected email identification platform110may proceed to step232. Referring toFIG.2G, at step232, the misdirected email identification platform110may send a data loss prevention notification, indicating that data loss prevention criteria was not satisfied, to the initiating user device130. In some instances, the misdirected email identification platform110may also send one or more commands directing the initiating user device130to display the data loss prevention notification. In some instances, actions performed at step232may be similar to those described above with regard to step214. At step233, the initiating user device130may receive the data loss prevention notification sent at step232. Based on or in response to the one or more commands directing the initiating user device130to display the data loss prevention notification, the initiating user device130may display the data loss prevention notification. For example, the initiating user device may display a graphical user interface similar to graphical user interface300, which is shown inFIG.3, and which indicates that sensitive or otherwise confidential information should be removed from the first message. In some instances, the data loss prevention notification may also include an option to engage in email security compliance training. In some instances, the notification may include an indication that the target recipient is compromised (e.g., business email compromise notifications, or the like). In some instances, the notification may include options to send the first message to the target recipient anyway or to modify the intended recipient domain. In some instances, the notification may include one or more additional information components or selectable options, such as an indication of a type of data compliance at risk, or an option to select compliance training for reviewing.
Once such information has been removed, or an attempt to re-send the first message is otherwise detected, the misdirected email identification platform110may return to step226to re-assess the first message based on the data loss prevention criteria. In some instances, actions performed at step233may be similar to those described above with regard to step215. Returning to step225, if the recipient domain is not included in the expanded nearest neighbor domains, the misdirected email identification platform110may proceed to step234. At step234, the misdirected email identification platform110may input the messaging information and the nearest neighbor information into the misdirected email identification model to identify whether or not there is an approximate match between the messaging information and historical message recipient information of messages between the message sender and/or the nearest neighbors (e.g., using similar techniques as described above with regard to the analysis described above at step217). In some instances, this may be referred to as a fourth level match. For example, the misdirected email identification model may identify a Levenshtein distance between the message recipient address and each of the addresses for the nearest neighbors (e.g., the originally identified nearest neighbors rather than the expanded nearest neighbor group). In these instances, the misdirected email identification model may compare the smallest identified Levenshtein distance to an approximate historical match threshold. If the Levenshtein distance exceeds the approximate historical match threshold, an approximate match might not be determined. If the Levenshtein distance does not exceed the approximate historical match threshold, an approximate match may be determined. If an approximate match is determined, the misdirected email identification platform110may proceed to step235. Otherwise, if no approximate match is determined, the misdirected email identification platform110may proceed to step243. At step235, the misdirected email identification platform110may identify a data loss prevention result indicating whether or not the data loss prevention information/criteria (sent at step204) is satisfied. For example, the misdirected email identification platform110may analyze the messaging information using the heuristics described above at step204, such as 1) are all other recipients on a different domain than the target recipient, 2) are there recipients with multiple domains listed on a CC line, 3) comparing the target recipient with an auto-populated list (e.g., populated to include similar addresses with a webmail or company domain), 4) loose DLP rules that may be used to warn users, and/or other rules. In some instances, the loose DLP rules may include: 1) emails with pre-configured keywords in a subject line or the content, 2) emails to pre-configured sensitive clients, domains, domain categories, or the like, 3) emails with confidential tags in attachments to external recipients, 4) emails with links to sensitive documents, and/or other rules. In some instances, the misdirected email identification platform110may store the heuristics in a data loss prevention model, and input of the messaging information into the data loss prevention model may cause the data loss prevention model to output the data loss prevention result (which may, e.g., indicate whether or not any of the heuristics rules are violated).
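The fourth level (approximate historical recipient) check described above could be sketched as follows, reusing an edit-distance helper such as the levenshtein function shown earlier; the threshold value is an assumed example rather than a prescribed setting.

def approximate_recipient_match(recipient, neighbor_addresses, threshold=2):
    # Compare the typed recipient address against each nearest-neighbor address
    # and accept the smallest edit distance if it does not exceed `threshold`.
    # Returns the suggested alternative address, or None if no approximate
    # match is determined.
    best = None
    best_distance = threshold + 1
    for candidate in neighbor_addresses:
        distance = levenshtein(recipient, candidate)
        if 0 < distance < best_distance:
            best, best_distance = candidate, distance
    return best if best_distance <= threshold else None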
In some instances, the misdirected email identification platform110may apply different data loss prevention information for different individuals, groups, teams, and/or other subset of individuals (e.g., as described with regard toFIG.8). Additionally or alternatively, a generic analysis may be performed. In some instances, the analysis described at step235may be performed by the misdirected email identification platform110and/or the data loss prevention system170. In some instances, the data loss prevention system170may identify the data loss prevention result, and may send the data loss prevention result to the misdirected email identification platform110. Additionally or alternatively, the misdirected email identification platform110may send a result of the analysis performed at step234to the data loss prevention system170, which may then identify the data loss prevention result, and proceed from there. Accordingly, actions described at step235may be performed by and/or communicated between misdirected email identification platform110and/or data loss prevention system170without departing from the scope of the disclosure. If the data loss prevention rules are satisfied, the misdirected email identification platform110may proceed to step236. If the data loss prevention rules are not satisfied, the misdirected email identification platform110may proceed to step241. In some instances, actions performed at step235may be similar to those described above with regard to step210. At step236, the misdirected email identification platform110may send a notification to the initiating user device130indicating that an approximate friends historical match is detected. For example, the misdirected email identification platform110may send a notification indicating a potential spelling mistake in the recipient address, and, in some instances, a recommended correction. In some instances, the misdirected email identification platform110may also send one or more commands directing the initiating user device130to display the approximate friends historical match notification. At step237, the initiating user device130may receive the approximate friends historical match notification. In some instances, the initiating user device130may display the approximate friends historical match notification based on or in response to the one or more commands directing the initiating user device130to display the approximate friends historical match notification. In some instances, the initiating user device130may display a graphical user interface similar to graphical user interface600, which indicates that although no approximate context matches have been identified in the message sender's network, an approximate historical recipient has been identified (which may, e.g., be due to a spelling mistake in the recipient address). In some instances, the initiating user device130may display a difference between the recipient address and an alternative, suggested recipient address. In some instances, the approximate friends historical match notification may also include an option to engage in email security compliance training. In some instances, the notification may include an indication that the target recipient is compromised (e.g., business email compromise notifications, or the like). In some instances, the notification may include options to send the first message to the target recipient anyway or to modify the intended recipient domain.
In some instances, the notification may include one or more additional information components or selectable options, such as an indication of a type of data compliance at risk, or an option to select compliance training for reviewing. In some instances, the approximate friends historical match notification may prompt the message sender as to whether or not the first message should still be sent. If the first message should still be sent, the event sequence may proceed to step238. If the first message should not be sent, the event sequence may proceed to step245. With reference toFIG.2H, at step238, based on identifying that the messaging information was a friends historical match for the message sender, as well as satisfied the data loss prevention information/criteria, the misdirected email identification platform110may send one or more commands directing the enterprise network gateway system120to route the first message to the target recipient (e.g., the recipient user device160). Actions performed at step238may be similar to those described above with regard to step211. At step239, based on or in response to the one or more commands directing the enterprise network gateway system120to route the first message to the recipient user device160, the enterprise network gateway system120may route the first message to the recipient user device160. Actions performed at step239may be similar to those described above with regard to step212. At step240, the recipient user device160may receive and display the first message routed at step239. Actions performed at step240may be similar to those described above with regard to step213. Returning to step235, if the data loss prevention criteria are not satisfied, the misdirected email identification platform110may proceed to step241. At step241, the misdirected email identification platform110may send a data loss prevention notification, indicating that data loss prevention criteria was not satisfied, to the initiating user device130. In some instances, the misdirected email identification platform110may also send one or more commands directing the initiating user device130to display the data loss prevention notification. In some instances, actions performed at step241may be similar to those described above with regard to step214. At step242, the initiating user device130may receive the data loss prevention notification sent at step241. Based on or in response to the one or more commands directing the initiating user device130to display the data loss prevention notification, the initiating user device130may display the data loss prevention notification. In some instances, the data loss prevention notification may also include an option to engage in email security compliance training. In some instances, the notification may include an indication that the target recipient is compromised (e.g., business email compromise notifications, or the like). In some instances, the notification may include options to send the first message to the target recipient anyway or to modify the intended recipient domain. In some instances, the notification may include one or more additional information components or selectable options, such as an indication of a type of data compliance at risk, or an option to select compliance training for reviewing.
For example, the initiating user device may display a graphical user interface similar to graphical user interface300, which is shown inFIG.3, and which indicates that sensitive or otherwise confidential information should be removed from the first message. Once such information has been removed, or an attempt to re-send the first message is otherwise detected, the misdirected email identification platform110may return to step235to re-assess the first message based on the data loss prevention criteria. In some instances, actions performed at step242may be similar to those described above with regard to step215. At step243, the misdirected email identification platform110may send a misdirected email notification to the initiating user device130. In some instances, the misdirected email identification platform110may also send one or more commands directing the initiating user device130to display the misdirected email notification. At step244, the initiating user device130may receive the misdirected email notification. In some instances, based on or in response to the one or more commands directing the initiating user device130to display the misdirected email notification, the initiating user device130may display the misdirected email notification. For example, the initiating user device130may display a notification indicating that the first message appears to be misdirected (and no alternative recipient could be identified based on the message senders message history and/or contacts), and will not be sent. In some instances, the misdirected email notification may also include an option to engage in email security compliance training. In some instances, the notification may include an indication that the target recipient is compromised (e.g., business email compromise notifications, or the like). In some instances, the notification may include options to send the first message to the target recipient anyway or to modify the intended recipient domain. In some instances, the notification may include one or more additional information components or selectable options, such as an indication of a type of data compliance at risk, or an option to select compliance training for reviewing. Referring toFIG.2I, at step245, the misdirected email identification platform110may send one or more security commands directing the enterprise network gateway system120to execute one or more security actions in response. For example, the misdirected email identification platform110may direct the enterprise network gateway system120to block future messages from the message sender, quarantine the message, update one or more network security policies, and/or perform other actions. At step246, the enterprise network gateway system120may receive the one or more security commands sent at step245. At step247, based on or in response to the one or more security commands, the enterprise network gateway system120may execute one or more security actions. At step248, the misdirected email identification platform110may feed the messaging information and any outputs from the misdirected email identification model back into the model. Additionally or alternatively, the misdirected email identification platform110may feed any user feedback (e.g., from the message sender) back into the misdirected email identification model. 
In doing so, the misdirected email identification platform110may establish a dynamic feedback loop that may continuously improve accuracy of the misdirected email identification model by updating based on any newly received or otherwise current information and/or model outputs. Additionally or alternatively, the misdirected email identification platform110may update the user graph based on the messaging information (e.g., add the message recipient and/or increase a trustworthiness of an existing recipient). In doing so, the misdirected email identification platform110may improve data loss prevention techniques performed by the misdirected email identification platform110over time. By implementing the methods described in steps201-248, both misdirected email identification methods and email data loss prevention methods may be integrated. For example, if an email is identified as misdirected, but does not violate data loss prevention rules, the email may nevertheless be sent (e.g., to minimize notifications to a user). In contrast, if an email is identified as properly directed, but does violate data loss prevention rules, the message may be blocked (e.g., to prevent unauthorized transfer of confidential or other sensitive information). If a message is flagged using both the misdirected email identification and data loss prevention methods, it may similarly be blocked. Although shown as being performed in sequence, this is for illustrative purposes only, and in some instances, the misdirected email identification and data loss prevention methods/techniques may be performed in parallel. Furthermore, in some instances, outputs of each method/technique may be sent to a separate system for a final determination of how to proceed and/or to notify the message sender. In doing so, user experience may be balanced with message security and data loss, so as to prevent the sending of misdirected messages only when necessary. In some instances, the results of these methods for different use cases may be summarized in table905, which is shown inFIG.9. The steps described in the illustrative event sequence herein may be performed in any alternative sequence or order without departing from the scope of the disclosure. Furthermore, the above described systems, event sequence, and methods may be applied in any messaging contexts (e.g., text messages, chat messages, emails, and/or other messages) without departing from the scope of the disclosure. In some instances, an output of the misdirected email identification method may be sent to the data loss prevention system170to finalize the analysis (and/or back and forth communication between the two systems may be performed). In some instances, an output may be sent from the data loss prevention system170to the misdirected email identification platform110to finalize the analysis (and/or back and forth communication between the two systems may be performed). In some instances, the misdirected email identification platform110and the data loss prevention system170may be separate distinct systems, and in other instances, may be combined into a single system. FIGS.7A-7Cdepict an illustrative method for preventing data loss due to misdirected emails in accordance with one or more example embodiments. Referring toFIG.7A, at step703, a computing platform having at least one processor, a communication interface, and memory may receive historical message information. At step706, the computing platform may generate a user graph based on the historical message information. 
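The integration of the two analyses summarized above (and in table905ofFIG.9) can be illustrated with the following sketch; the action labels are placeholders chosen only for this example, and the misdirected flag is retained in the signature to mirror the two independent analyses even though, under this policy, the data loss prevention result controls the outcome.

def final_action(misdirected, dlp_violation):
    # Row-by-row encoding of the integration described above:
    #   misdirected only   -> may nevertheless be sent (minimize notifications)
    #   DLP violation only -> blocked
    #   both flagged       -> blocked
    #   neither flagged    -> sent
    return "block_and_notify" if dlp_violation else "send"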
At step709, the computing platform may train a misdirected email identification model using the historical message information. At step712, the computing platform may receive data loss prevention information/criteria. At step715, the computing platform may receive message information for a first message. At step718, the computing platform may identify the nearest neighbors of the message sender using the user graph. At step721, the computing platform may identify whether or not the message recipient is a context match (e.g., whether context of the message matches context of previous messages between the message sender and the message recipient). If the message recipient is not a context match, the computing platform may proceed to step724. At step724, the computing platform may identify whether the intended recipient is one of the identified nearest neighbors. If the intended recipient is one of the nearest neighbors, the computing platform may proceed to step727. Otherwise, if the intended recipient is not one of the nearest neighbors, the computing platform may proceed to step730. At step727, the computing platform may identify whether the context of the first message is an approximate match with context of historical messages between the message sender and the identified nearest neighbors. If the context is an approximate match, the computing platform may proceed to step739. If the context is not an approximate match, the computing platform may proceed to step730. At step730, the computing platform may expand the nearest neighbors set, using the user graph, to include a nearest neighbor set for each originally identified nearest neighbor. At step733, the computing platform may identify whether there is a context match between the first message and previous messages sent between the message sender and/or the individuals of the expanded nearest neighbors set. If there is a context match, the computing platform may proceed to step739. If there is not a context match, the computing platform may proceed to step736inFIG.7B. Referring toFIG.7B, at step736, the computing platform may identify whether the first message context is an approximate historical match with historical messages sent by the message sender. If an approximate historical recipient match is not identified, the computing platform may proceed to step742. If an approximate historical recipient match is identified, the computing platform may proceed to step739. At step739, the computing platform may identify whether the content of the first message satisfies the data loss prevention rules/criteria. If the data loss prevention rules are satisfied, the computing platform may proceed to step754. If the data loss prevention rules are not satisfied, the computing platform may proceed to step742. At step742, the computing platform may send a misdirected email notification indicating that the first message is potentially misdirected, and prompting for confirmation to send the first message. At step745, the computing platform may identify whether confirmation to send the first message was received. If confirmation was not received, the computing platform may proceed to step748. At step748, the computing platform may block the first message from being sent and/or send security action commands directing a network gateway to execute one or more additional security actions.
At step751, the computing platform may update the misdirected email identification model based on any information of the first message, outputs of the misdirected email identification model, and/or user feedback. Returning to step745, if confirmation to send the first message was received, the computing platform may proceed to step754. At step754, the computing platform may send one or more commands directing the network gateway to route the first message to the corresponding recipient. Returning to step721inFIG.7A, if the computing platform identified that the message recipient is a context match, the computing platform may proceed to step757. Referring toFIG.7C, at step757, the computing platform may identify whether the content of the first message satisfies the data loss prevention rules/criteria. If the data loss prevention rules are satisfied, the computing platform may proceed to step769. If the data loss prevention rules are not satisfied, the computing platform may proceed to step760. At step760, the computing platform may send a data loss prevention notification, indicating that the first message includes sensitive and/or confidential information, and will not be sent. At step763, the computing platform may block the first message from being sent and/or send security action commands directing a network gateway to execute one or more additional security actions. At step766, the computing platform may update the misdirected email identification model based on any information of the first message, outputs of the misdirected email identification model, and/or user feedback. Returning to step757, if the data loss prevention rules are satisfied, the computing platform may proceed to step769. At step769, the computing platform may send one or more commands directing the network gateway to route the first message to the corresponding recipient. FIG.10depicts a simplified version of the misdirected email detection method, described in the event sequence above. For example, at step1005, the misdirected email identification platform110may identify whether historical messages between the message sender and the message recipient have a matching context with the new message. If there is a matching context, the misdirected email identification platform110may proceed to step1025. Otherwise, the misdirected email identification platform110may proceed to step1010. At step1010, the misdirected email identification platform110may identify whether the recipient is one of the nearest neighbors of the message sender and whether the context of the message is an approximate match with previously sent messages from the message sender. If both conditions are satisfied, the misdirected email identification platform may proceed to step1025. Otherwise, the misdirected email identification platform110may proceed to step1015. At step1015, the misdirected email identification platform110may identify whether the recipient is within an expanded group of nearest neighbors for the message sender (e.g., friends of friends). If the recipient is within the expanded group of nearest neighbors, the misdirected email identification platform110may proceed to step1025. Otherwise, the misdirected email identification platform110may proceed to step1020. At step1020, the misdirected email identification platform110may identify whether the recipient address is an approximate match with addresses of nearest neighbors of the message sender.
If the recipient address is an approximate match, the misdirected email identification platform may proceed to step1025. For example, at step1025, the misdirected email identification platform110may perform a data loss prevention analysis as described above. Otherwise, if the recipient address is not an approximate match, the misdirected email identification platform110may block the message, and may notify the message sender at step1030. It should be understood that the analysis processes, method steps, and/or methods described herein may be performed in different orders and/or in alternative arrangements from those illustrated herein, without departing from the scope of this disclosure. Additionally or alternatively, one or more of the analysis processes, method steps, and/or methods described herein may be optional and/or omitted in some arrangements, without departing from the scope of this disclosure. One or more aspects of the disclosure may be embodied in computer-usable data or computer-executable instructions, such as in one or more program modules, executed by one or more computers or other devices to perform the operations described herein. Program modules may include routines, programs, objects, components, data structures, and the like that perform particular tasks or implement particular abstract data types when executed by one or more processors in a computer or other data processing device. The computer-executable instructions may be stored as computer-readable instructions on a computer-readable medium such as a hard disk, optical disk, removable storage media, solid-state memory, RAM, and the like. The functionality of the program modules may be combined or distributed as desired in various embodiments. In addition, the functionality may be embodied in whole or in part in firmware or hardware equivalents, such as integrated circuits, application-specific integrated circuits (ASICs), field programmable gate arrays (FPGA), and the like. Particular data structures may be used to more effectively implement one or more aspects of the disclosure, and such data structures are contemplated to be within the scope of computer executable instructions and computer-usable data described herein. One or more aspects described herein may be embodied as a method, an apparatus, or as one or more computer-readable media storing computer-executable instructions. Accordingly, those aspects may take the form of an entirely hardware embodiment, an entirely software embodiment, an entirely firmware embodiment, or an embodiment combining software, hardware, and firmware aspects in any combination. In addition, various signals representing data or events as described herein may be transferred between a source and a destination in the form of light or electromagnetic waves traveling through signal-conducting media such as metal wires, optical fibers, or wireless transmission media (e.g., air or space). The one or more computer-readable media may be and/or include one or more non-transitory computer-readable media. As described herein, the various methods and acts may be operative across one or more computing servers and one or more networks. The functionality may be distributed in any manner, or may be located in a single computing device (e.g., a server, a client computer, and the like). 
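The simplified cascade ofFIG.10described above may be expressed, purely for illustration, as the following sketch; each check is assumed to be supplied as a callable, which is a convenience of this example rather than a requirement of the underlying method.

def route_decision(message, context_match, neighbor_approx_match,
                   expanded_neighbor_match, address_approx_match, dlp_violation):
    # Steps 1005-1020: fall through the progressively looser matching levels.
    if (context_match(message)                  # step 1005
            or neighbor_approx_match(message)   # step 1010
            or expanded_neighbor_match(message) # step 1015
            or address_approx_match(message)):  # step 1020
        # Step 1025: run the data loss prevention analysis before routing.
        return "block_and_notify" if dlp_violation(message) else "route"
    # Step 1030: no match at any level; block the message and notify the sender.
    return "block_and_notify"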
For example, in alternative embodiments, one or more of the computing platforms discussed above may be combined into a single computing platform, and the various functions of each computing platform may be performed by the single computing platform. In such arrangements, any and/or all of the above-discussed communications between computing platforms may correspond to data being accessed, moved, modified, updated, and/or otherwise used by the single computing platform. Additionally or alternatively, one or more of the computing platforms discussed above may be implemented in one or more virtual machines that are provided by one or more physical computing devices. In such arrangements, the various functions of each computing platform may be performed by the one or more virtual machines, and any and/or all of the above-discussed communications between computing platforms may correspond to data being accessed, moved, modified, updated, and/or otherwise used by the one or more virtual machines. Aspects of the disclosure have been described in terms of illustrative embodiments thereof. Numerous other embodiments, modifications, and variations within the scope and spirit of the appended claims will occur to persons of ordinary skill in the art from a review of this disclosure. For example, one or more of the steps depicted in the illustrative figures may be performed in other than the recited order, and one or more depicted steps may be optional in accordance with aspects of the disclosure.
11943194 | PREFERRED MODE FOR CARRYING OUT THE INVENTION Embodiments of the present invention will be described below with reference to the drawings. It should be noted that these embodiments are merely examples, and the technical scope of the present invention is not limited to the embodiments. First Embodiment <Overall Constitution of SNS System 100> FIG.1is a diagram showing an overall constitution and functional blocks of an SNS system 100 according to a first embodiment. As shown inFIG.1, the SNS system 100 is a system that performs processing related to SNS, and in particular, is configured such that a classification item can be arbitrarily set in advance that is commonly used for messages (contents) to be sent by all users belonging to a preset community. Thus, the SNS system 100 can be used as a communication system capable of sending all messages (contents) according to the classification item arbitrarily set in advance. Further, items related to all messages (contents) collected in the community can be comprehensively collected and classified. As will be described below, the SNS system 100 can be used, for example, among a user who performs a machining process on a workpiece, for example, using an industrial machine running in a factory, a member of the industrial machine tool builder who supports the machining process of the user, maintenance personnel of the industrial machine, and a call center. As shown inFIG.1, the SNS system 100 includes an SNS server1as a message exchanging device, a plurality of mobile terminals3(terminals), and an administrator terminal5. The SNS server1, the mobile terminals3, and the administrator terminal5can communicate with each other via a communication network N. The communication network N is, for example, the Internet or a mobile phone communication network. Further, the communication network N may be, for example, a local area network (LAN) or a wide area network (WAN) including a wired line. Hereinafter, a description will be given as an example with respect to an exchange operation of the SNS system 100 in which a message is mainly exchanged among the user who performs the machining process on the workpiece, for example, using the industrial machine running in the factory, the member of the industrial machine-tool builder who supports the machining process of the user, the maintenance personnel of the industrial machine, and a receptionist of the call center, as described above. A user who uses the SNS system is not limited to the users described above. In any community set in the SNS system 100, the classification item can be arbitrarily set in advance that is commonly used for messages (contents) to be sent by all users belonging to the community, and all messages (contents) can be sent according to the classification item set in advance. <SNS Server1> The SNS server1is a message exchanging device configured to perform communication processing between the plurality of mobile terminals3. In an open SNS environment, a virtual SNS space dedicated to a virtual community can be created. Here, the communication processing means, for example, message exchanging, posting, browsing, and calling along a timeline, and means information delivery in which intentions, feelings, thoughts, and the like are delivered between users via characters, voice, images, and the like. In this way, the messages include contents posted by the user within the group (virtual community).
The SNS server1may be constituted by one computer, or may be constituted as a distributed processing system by a plurality of computers as will be described below. Specifically, for example, a message exchanged within a specific community set in advance, which will be described below, may be managed by a server managed by a host of the community or the like. Thus, the message exchanged within the specific community can physically ensure information security. When a plurality of computers are used, these computers are connected via the communication network N. Further, the SNS server1may be constituted as a virtual server (virtual machine) provided on a cloud, for example. The SNS server1includes a control unit10, a storage unit20, and a communication interface unit29. The control unit10is a central processing unit (CPU) that controls the entire SNS server1. The control unit10appropriately reads and executes an operating system (OS) and an application program stored in the storage unit20to cooperate with the above-described hardware and execute various functions. Specifically, an example will be described in the present embodiment in which the SNS server1is realized by causing the computer to execute the program. The program can be recorded in a computer-readable non-transitory information recording medium such as a compact disk, a flexible disk, a hard disk, a magneto-optical disk, a digital video disk, a magnetic tape, a ROM (Read Only Memory), an EEPROM (Electrically Erasable Programmable ROM), a flash memory, or a semiconductor memory. Such an information recording medium can be distributed and sold independently of the computer. Generally, the computer reads the program recorded in the non-transitory information recording medium into a RAM (Random Access Memory), which is a temporary storage device included in the storage unit20, and then executes a command included in the program read by the CPU serving as the control unit10. The program can be distributed and sold from a program distribution server or the like (not shown) to a computer or the like via a temporary transmission medium such as a communication network N, independently of the computer on which the program is executed. Further, the program can also be described by a programming language for operation level description of an electronic circuit. In this case, various design drawings such as wiring diagrams or timing charts of electronic circuits are Generated from the program described by the programming language for operation level description of the electronic circuit, and an electronic circuit constituting the SNS server1can be created based on the design drawings. For example, from the program described by the programming language for operation level description of the electronic circuit, the SNS server1can be constituted on hardware that can be reprogrammed by FPGA (Field Programmable Gate Array) technology, and an electronic circuit dedicated to specific use can also be constituted by ASIC (Application Specific Integrated Circuit) technology. The details of the control unit10will be described below, and first, the storage unit20will be briefly described. The storage unit20is a storage zone for a hard disk, a semiconductor memory element, or the like for storing programs, data, and the like necessary for the control unit10to execute various kinds of processing. 
The storage unit20includes a program storage unit21, a user storage unit22, a group storage unit23, a metadata storage unit24, and an item-specific message storage unit25. The program storage unit21is a storage zone that stores various programs. The program storage unit21stores a user management program. The user management program is a program for executing each function of the control unit10to be described below. The user storage unit22is a storage zone that stores data related to the user in association with the group. The user storage unit22stores, for example, a user ID, a user name, and a group ID for identifying the group, in association with each other. The group storage unit23is a storage zone that stores data related to the group. The group storage unit23stores, for example, a group ID and a group name (“item name” to be described below) in association with each other. Further, as will be described below, since the group (“item name” to be described below) can be set in a hierarchical structure, the group storage unit23may store a hierarchical relationship between the groups. In other words, members belonging to a high-ranking group belong to a low-ranking group in the high-ranking group. On the contrary, members only belonging to the low-ranking group may not belong to a high-ranking group or the like in the low-ranking group. For this reason, the group storage unit23may, for example, store a high-ranking group ID, which is a high rank closest to the group ID, in association with each other. The metadata storage unit24stores an item data dictionary that stores metadata for each item name with respect to item names, tags, and stamps set by the item setting unit13to be described below. The item names have a hierarchical structure, and each of the item names may be stored in association with an item name which is a high rank closest to the corresponding item name, for example. The item data dictionary may include the item ID by setting of an item ID corresponding to the item name. The item-specific message storage unit25stores all messages exchanged within a specific group (community) corresponding to each of the item names set by the item setting unit13to be described below, tags attached to the messages, and stamps for evaluating the messages, in association with each other. Further, the item-specific message storage unit25stores a set of item-specific messages for each item name. The communication interface unit29is an interface used to perform communication between the SNS server1, the mobile terminal3, and the administrator terminal5. Next, the control unit10will be described. The SNS server1is constituted such that the control unit10controls respective components shown below to execute respective processing described in the present embodiment. The control unit10includes an SNS basic function unit11, an item setting unit13, an item-specific message creation unit14, and an item-specific message management unit15. The SNS basic function unit11includes an SNS group management unit111, an SNS message exchanging unit112, and a SNS message management unit113. <SNS Group Management Unit111> The SNS group management unit111sets and manages, for example, a group (also referred to as “community”) including users who are interested in a common theme among SNS users and users who belong to a specific organization or the like. 
Specifically, for example, the SNS group management unit111may give a member management authority regarding addition or deletion of members to a user who is a group administrator (also referred to as "host"). Thus, the administrator can set and manage users belonging to the group (community) that is hosted by himself/herself. For example, the administrator transmits setting information regarding the group (community) from the administrator terminal5to the SNS group management unit111, and the SNS group management unit111may store data regarding the user who participates in the group (community) by associating the group (community) with the user ID stored in the user storage unit22in the storage unit20. In this way, the SNS group management unit111stores the data regarding the user, who participates in each group (community), in the user storage unit22in association with the group (community). Thus, when the user belonging to the group logs in to the SNS service, a message can be exchanged within the group (community) in which the user participates, for example. For example, the user may specify a group (community) at the time of log-in. Further, when the user logs in to the SNS service, the user may select the group (community) in which the message is exchanged by outputting a list of groups (communities) in which the user participates. In SNS, a user belonging to a group (community) may sometimes transmit information (invitation information) to invite, for example, an acquaintance who is an SNS user to the group (community). At this time, the SNS user receiving the invitation may be able to participate in the group (community) by accepting the invitation. In such a case, in a specific group (community) set in advance, the administrator of the group (community) may add a function of executing check processing. Thus, the administrator serving as the host of the community can decide whether to allow/disallow the invited SNS user to join the group (community). In other words, when a person belonging to a specific community invites a user who does not belong to the community, the administrator determines whether to add the invited user to the community. For example, as will be described below, in the group (community) set among the user who performs the machining process on the workpiece, for example, using the industrial machine running in the factory, the member of the industrial machine-tool builder who supports the machining process of the user, the maintenance personnel of the industrial machine, and the call center, the administrator decides whether to allow/disallow a user invited by the user belonging to the group to join. Thereby, it is possible to prevent a leakage of confidential information in the group (community) to a third party. Further, even when the administrator serving as the host overlooks the registration of useful users in the group, the members of the group can invite those overlooked useful users to add them to the group. <SNS Message Exchanging Unit112> The SNS message exchanging unit112manages (exchanges) transmission/reception of the message such that the message created by the user belonging to the group (community) can be browsed in the group (community). Specifically, the SNS message exchanging unit112provides the mobile terminal3of the user with a user interface corresponding to the group (community), thereby controlling input/output of the message exchanged within the group (community) to and from the mobile terminal3of the user.
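Before the message exchanging unit is described in more detail, the member management and invitation approval described above under <SNS Group Management Unit111> can be illustrated by the following minimal sketch in Python. The sketch is not part of the embodiment; the class, method, and field names (GroupManager, invite, approve, and so on) are hypothetical and serve only to show one possible way of queuing invitations until the administrator (host) allows or disallows them.

```python
# Minimal sketch of member management with administrator approval of invitations.
# All names are illustrative assumptions, not the embodiment's actual interfaces.
class GroupManager:
    def __init__(self):
        self.members: dict[str, set[str]] = {}   # group_id -> member user ids
        self.pending: dict[str, set[str]] = {}   # group_id -> invited user ids awaiting approval
        self.admins: dict[str, str] = {}         # group_id -> administrator (host) user id

    def invite(self, group_id: str, inviter_id: str, invitee_id: str) -> None:
        # Any existing member may invite, but the invitee is only queued for approval.
        if inviter_id in self.members.get(group_id, set()):
            self.pending.setdefault(group_id, set()).add(invitee_id)

    def approve(self, group_id: str, admin_id: str, invitee_id: str, allow: bool) -> bool:
        # Only the administrator decides whether the invited user joins the community.
        if self.admins.get(group_id) != admin_id:
            return False
        self.pending.get(group_id, set()).discard(invitee_id)
        if allow:
            self.members.setdefault(group_id, set()).add(invitee_id)
        return allow
```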
As will be described below, the SNS message exchanging unit112may exchange a message added with a tag set for each set setting item with respect to a message in a specific group (community) in which an item name or the like is set in advance by an administrator, and also exchange, as a message, a stamp set for each set setting item. More specifically, during message exchange within a specific group (community), the control unit10includes a function unit (“item-specific message creation unit14” to be described below) that executes a function of adding the tag set for each setting item to the message and a function of using the stamp set for each setting item as a message, and the SNS message exchanging unit112may realize the above functions by cooperating with the function unit (“item-specific message creation unit14”). Details of the function unit (“item-specific message creation unit14”) will be described below. <SNS Message Management Unit113> The SNS message management unit113manages the message exchanged within the group (community) in association with the group (community). Specifically, the SNS message management unit113may store the message in an item-specific message storage zone, which is associated with the group (community), within the item-specific message storage unit25. Further, the SNS message management unit113may store the message exchanged within the preset specific group (community) by associating the tag set in advance in the storage zone provided for each specific group (community) with the message. Further, the SNS message management unit113may store the stamp exchanged within the preset specific group (community) in association with the target message of the stamp. More specifically, during message exchange within a specific group (community), the control unit10includes a function unit (“item-specific message management unit15” to be described below) that executes a function of storing and managing the tag set for each setting item in association with the message and a function of storing and managing the stamp set for each setting item in association with a target message of the stamp, and the SNS message management unit113may realize the above functions by cooperating with the function unit (“item-specific message management unit15”). Details of the function unit (“item-specific message management unit15”) will be described below. The basic functions of the SNS basic function unit11have been described above. Next, function units other than the SNS basic function unit11will be described. <Item Setting Unit13> When any member belonging to the group (community) sends, based on the designation from the administrator who manages the group (community), a message within the group (community) with respect to all messages exchanged within the group (community), the item setting unit13sets a common item name for the group (community) in advance such that the message is sent according to the common item name for the group (community). Specifically, the SNS group management unit111grants a common item setting authority for setting the common item name for the group (community) to the administrator who manages the group (community). For example, when the administrator newly hosts a group (community), the SNS group management unit111may grant the common item setting authority in addition to the member management authority in the group (community). 
Thus, the item setting unit13can set the common item name for the group (community) based on an instruction from the administrator having the common item setting authority. The item setting unit13can manage the item name set corresponding to the group (community) by storing the item data dictionary associated with the group (community) in the metadata storage unit24of the storage unit20which will be described below. <Item Name> The item name will be described with reference to an example before details of the item setting unit13are described. FIG.2is a diagram showing an example of the item name that is set hierarchically by taking, as an example, a community including, for example, a user who performs a machining process on a workpiece, for example, using the industrial machine running in the factory, a member of the industrial machine-tool builder who supports the machining process of the user, maintenance personnel of the industrial machine, and a receptionist of the call center. In the following example, an example is described in which the item name is set in a text format, but the present invention is not limited thereto. The item name may be set in any format, for example, a figure, a picture, or a pictogram other than text. As shown inFIG.2, items can be set hierarchically for the preset group. With reference toFIG.2, the items are set hierarchically as follows. For example, four items, that is, "Contact management", "Task management", "Search", and "Leave-from-company analysis" are set in a first layer. Then, when the user belonging to the group selects the group (community) after logging in to the SNS server, a selection menu screen may be displayed on the mobile terminal3of the user such that the user selects any item from the four items set in the first layer, that is, "Contact management", "Task management", "Search", and "Leave-from-company analysis". Thus, the user can select any item from the items set in the first layer. When the user selects, for example, "Search" or "Leave-from-company analysis" constituted only by the first layer, the user can participate in a virtual community with the theme of "Search" or "Leave-from-company analysis", and communicate in the virtual community. Further, a second layer is set for each of "Contact management" and "Task management". For example, four items, that is, "Emergency management", "Business contact", "Task request", and "Calendar" are set as a second layer for "Contact management" of the first layer. Further, five items, that is, "Mechanical unit-related", "Control unit", "Operation-related", "Screen setting", and "Machining-related" are set as a second layer for "Task management" of the first layer. Since a third layer is not set for each item of the second layer, a virtual community with a theme of each of "Emergency management", "Business contact", "Task request", and "Calendar" is set in the contact management. When the user selects "Contact management" in the first layer, and then selects one item, for example, "Business contact", from "Emergency management", "Business contact", "Task request", and "Calendar" in the second layer, the user can participate in the virtual community with the theme of "Contact management (Business contact)", and communicate in the virtual community. In this way, the item name can be set hierarchically. Then, the user can communicate in the virtual community by theme corresponding to the item set in the lowest layer by following the hierarchy.
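As an illustration of the layered selection just described, the following is a minimal Python sketch of how a terminal might walk the item hierarchy of FIG.2 until a leaf theme (a virtual community) is reached. The tree literal and the function name select_theme are assumptions made purely for this example and do not describe the actual implementation.

```python
# Hypothetical hierarchy mirroring FIG.2; an empty dict marks a leaf, i.e. a
# theme-specific virtual community the user can participate in.
ITEM_TREE = {
    "Contact management": {
        "Emergency management": {}, "Business contact": {},
        "Task request": {}, "Calendar": {},
    },
    "Task management": {
        "Mechanical unit-related": {}, "Control unit": {},
        "Operation-related": {}, "Screen setting": {}, "Machining-related": {},
    },
    "Search": {},
    "Leave-from-company analysis": {},
}

def select_theme(tree: dict, choose) -> list[str]:
    """Walk the layers until a leaf item is reached; `choose` picks one name
    from the small set of options displayed for the current layer."""
    path = []
    while tree:                       # a non-empty dict means another layer exists
        name = choose(list(tree.keys()))
        path.append(name)
        tree = tree[name]
    return path                       # e.g. ["Task management", "Mechanical unit-related"]
```

Under these assumptions, select_theme(ITEM_TREE, choose=lambda options: options[0]) would return ["Contact management", "Emergency management"], i.e. the path obtained by always picking the first of the few options shown per layer.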
The SNS group management unit111described above sets members belonging to the virtual community by theme based on the designation of the administrator. Thus, the theme corresponding to each item and the members belonging to the virtual community of the theme are associated with each other. As described above, the members belonging to an upper-ranking hierarchy are associated with each other so as to automatically belong to a lower-ranking hierarchy. In this way, the number of options Presented to the user at one time is restricted to a relatively small number (the number that can be reasonably displayed on the mobile terminal3, for example, about 4 and 5), and the user is sequentially selected, whereby it is possible to present a virtual community by theme that meets the restrictions of a display means and an input means of the mobile terminal3. In this way, selection from at most 4 and 5 branch options is suitable for a branch selection type user interface that goes on the road, and even when the number of times of selection is large, since an operation can be made with a sense of game, it has the effect of not causing any pain to the user. According to the item setting illustrated inFIG.2, as the virtual community by theme, a total of 11 virtual communities by theme have been set, that is, “Contact management (Emergency contact)”, “Contact management (Business contact)”, “Contact management (Task request)”, “Contact management. (Calendar)”, “Task management (Mechanical unit-related)”, “Task management (Control unit)”, “Task management (Operation-related)”, “Task management (Screen setting)”, “Task management (Machining-related)”, “Search”, and “Leave-from-company analysis”. In this case, a user belonging to any of the virtual community groups by theme selects the virtual community, and thus can exchange messages on the selected virtual community by theme. For example, in the case of the virtual community such as Contact management (Business contact), it is assumed that the community is not a community constituted with all members including an operator of the industrial machine and a maintenance group of the industrial machine, but an organization unit (for example, xx department) within a company belonging to the community, for example. Then, the virtual community by theme may be constituted by a plurality of independent virtual subcommunities for each subgroup. Specifically, as described above, the administrator may set items corresponding to the plurality of independent virtual subcommunities for each subgroup in the lower-ranking layer with respect to the items corresponding to the virtual community by theme. Thus, as described above, a user belonging to a virtual subcommunity by theme can follow the hierarchy and perform communication in a virtual sub-community of the same theme to which the user belongs, by tracing up to the virtual subcommunity of the same theme to which the user belongs, the virtual sub-community being a lower-ranking layer of the virtual community by theme. <Tag> When the messages are exchanged in the virtual community by theme, it is useful to be able to use tags for classifying the contents of the messages, which are commonly used by the members belonging to the virtual community within the virtual community and exchange the messages by attaching a corresponding tag to any message. 
Therefore, the item setting unit13can set a plurality of tags for classifying the contents of the message in the item name corresponding to the virtual community, in which the messages are exchanged, based on the setting instruction of the administrator. Here, the tag is attached to the message exchanged in the virtual community by theme corresponding to the item name, and is used for classifying the contents of the message. In the following example, an example is describe in which a tag name is set in a text format, but the present invention is not limited thereto. Similarly to the item name, the tag name may be set in any format, for example, a figure, a picture, or a pictogram other than text. Referring toFIG.2, an example is shown in which a tag is set for each of the item names of the second layer, that is, “Mechanical unit-related”, “Control unit”, “Operation-related”, “Screen setting”, and “Machining-related” which are provided hierarchically in the “task management” of the first layer. Here, it is shown that two tags of “Question” and “Answer” are set in the item (second layer) set according to the structure of the machine tool (“Mechanical unit-related”, “Control unit”, “Operation-related”, “Screen setting”, and “Machining-related”), under the task management (first layer) related to the machine tool. Since it is assumed that the contents of the message exchanged in the virtual community by theme (“Mechanical unit-related”, “Control unit”, “Operation-related”, “Screen setting”, and “Machining-related”) are “Question” and its “Answer”, two tags including “Question” and its “Answer” are set herein. InFIG.2, an example is shown in which a common tag is set for five items corresponding to the second layer, but the present invention is not limited thereto. For example, a tag may be set for each item (theme) with respect to the five items (themes) corresponding to the second layer. Further, during the setting of the tags, the number of tags is not limited to two. The tags may be set as many as needed in each community by theme. In addition, the tags may be set for each community by theme. For example, as tags other than “Question” and “Answer”, tags can be arbitrarily set as “Inquiry”, “Do you know?”, “Please tell me”, and “Please tell me a little more”, for example. Further, in order to classify information about machines installed in the factory, for example, a machine identification number such as “Unit 1” or “Unit 2” may be set as a tag. <Stamp> In order to evaluate a quality of the contents of the message as contents, the item setting unit13can set an arbitrary stamp corresponding to the item name based on the setting instruction of the administrator. Similarly to the item name and the tag name, the stamp may be set in any format, for example, a text, a figure, a picture, or a pictogram. The stamp set corresponding to the item name may be displayed on a stamp screen for selecting the stamp at a lower side of the display screen (for example, the timeline) of the message. Thus, the user can evaluate the quality of the contents of the message by selecting an appropriate stamp from the stamp screen for the message.FIG.6is a screen example showing a stamp set in the task management (mechanical unit-related) as an item (theme). As shown inFIG.6, examples of stamps include “Recovery” and “Failure”. In this way, for example, a questioner selects a stamp for evaluating the contents of the response to his/her question. 
InFIG.6, “Recovery” and “Failure” are exemplified, but the present invention is not limited thereto. As described above, in the SNS, all messages (contents) to be exchanged can be comprehensively collected and classified by related items using, such as item names (themes), tag names, and stamps which are arbitrarily set in advance. The item name (theme), the tag, and the stamp have been described above. Next, the item setting unit13will be described. As described above, the item setting unit13sets a common item name in advance to the group (community), based on a setting instruction from the administrator who manages the group (community), such that the message is sent based on item names, tags, and stamps common to the group (community) when any member belonging to the group (community) sends a message within the group (community) with respect to all messages exchanged within the group (community). Specifically, the item setting unit13can hierarchically set the item name based on the setting instruction from the administrator. Further, the item setting unit13can set a plurality of tags for classifying the contents of the message in the item name corresponding to the virtual community, in which the message is exchanged, based on the setting instruction from the administrator. The tags can also be set hierarchically. The item setting unit13can further set a plurality of stamps for evaluating the quality of the contents of the message as contents in the item name corresponding to the virtual community, in which the message is exchanged, based on the setting instruction from the administrator. The stamps can also be set hierarchically. Here, each of the item name, the tag, and the stamp may be set in any format, for example, a text, a figure, a picture, or a pictogram. The item setting unit13may provide the administrator with the item setting screen via the administrator terminal in order to set the item name, the tag, and the stamp.FIGS.3A and3Bare diagrams showing examples of an item setting screen for setting an item name for classifying messages to be exchanged, taking as an example a community including an operator of an industrial machine and a maintenance group of the industrial machine. A process of setting the item name will be described with reference toFIGS.3A and3E. The item setting unit13may set the item name as a hierarchical structure, and may provide the item setting screen so as to set the item name in the order of the hierarchy.FIG.3Ashows an example of an item setting screen interface (hereinafter, referred to as “first layer item setting screen”) for setting an item name in the first layer. As shown inFIG.3A, on the item setting screen of the first layer, an interface for setting a plurality of item names in the first layer, the number of hierarchies of each item name and a common tag for each setting item of the first layer is provided. Referring toFIG.3A, four item names, that is, “Contact management”, “Task management”, “Search”, and “Leave-from-company analysis” are set as the first layer. Then, each of the item names “Contact management” and “Task management” has a second layer, and the item names “Search” and “Leave-from-company analysis” are set to be constituted only by the first layer. In addition, the common tag is not required in the item names included in the first layer. 
FIG.3Bshows an example of an item setting screen (hereinafter, referred to as "second layer item setting screen") for setting an item name in the second layer for "Task management" set in the first layer. As shown inFIG.3B, on the item setting screen of the second layer, an interface for setting a plurality of item names in the second layer, the number of hierarchies of each item name and a common tag for each setting item of the second layer is provided. Referring toFIG.3B, five item names, that is, "Mechanical unit-related", "Control unit", "Operation-related", "Screen setting", and "Machining-related" are set for the "Task management" set in the first layer. Then, all item names are set so as not to require a third layer and subsequent layers. Further, it can be seen that the common tags ("question", "response") are set for the item names included in the second layer.FIGS.3A and3Billustrate a template for setting two tags, but the present invention is not limited thereto. It may be a template for setting three or more tags. Further, when a plurality of tags are set that cannot be displayed on one screen, a plurality of tags may be set by scrolling with a scroll function, for example. The examples inFIGS.3A and3Binclude setting examples of item names having a hierarchical structure up to the second layer, but the present invention is not limited thereto. The number of hierarchies may be any number. Although the example of stamp setting is not shown, the template may be provided with a stamp setting screen for setting a stamp in a bottom zone of the item setting screen, for example. <Item-Specific Message Creation Unit14> As described above, the SNS message exchanging unit112can add a tag set for each set setting item to the message within the specific group (community) in which the item names or the like are set in advance, or can handle the stamp set for each set setting item as a message. Specifically, the SNS message exchanging unit112enables the users belonging to the virtual community by theme set by the item setting unit13to add a tag to the message or exchange a stamp as a message, based on a tag for classifying messages and a stamp for evaluating a message to be evaluated. For this reason, the control unit10may include a function unit (referred to as "item-specific message creation unit14") that assigns the item name, the tag list, and the stamp list set corresponding to the specific group (community) by the item setting unit13to the common user interface screen set in advance. The SNS message exchanging unit112may cooperate with the item-specific message creation unit14. Thus, the user can exchange messages (contents) via the user interface screen to which the item name, the tag, and the stamp are assigned. When a plurality of tags and a plurality of stamps are assigned that cannot be displayed on one screen, a plurality of tags and a plurality of stamps can be selected by scrolling with a scroll function, for example. Specifically, for example, when the user transmits a message, the item-specific message creation unit14may add the tag selected by the user to the message from the tag list assigned to the user interface screen. Similarly, when the user transmits a stamp, the item-specific message creation unit14may use, as a message, the stamp selected by the user from the stamp list assigned to the user interface screen. As shown inFIG.1, the item-specific message creation unit14may be a function unit independent of the SNS message exchanging unit112.
Further, the SNS message exchanging unit112may include the item-specific message creation unit14. Each ofFIGS.4,5,6, and7is a diagram showing an example of a common user interface created by the item-specific message creation unit14in "Contact management (Business contact)", "Leave-from-company analysis", "Task management (Mechanical unit-related)", and "Search" which are exemplified as theme-specific virtual communities set by the item setting described above. Here, the processing of the item-specific message creation unit14will be described with reference to each ofFIGS.4,5,6, and7. As described above, the item-specific message creation unit14creates a common user interface for inputting/outputting to/from the mobile terminal3of the user belonging to the virtual community. First, cooperation processing between the SNS message exchanging unit112and the item-specific message creation unit14in "Contact management (Business contact)" will be described with reference toFIG.4. In this case, as shown inFIG.2, the SNS message exchanging unit112first outputs the screen of the first layer to the mobile terminal3of the user who belongs to the community or any low-ranking group (community), and thus causes the user to select an item. For example, when "Contact management" is selected on the screen of the first layer, the SNS message exchanging unit112outputs the screen of the second layer to the mobile terminal3of the user based on the selection input. Here, when "Business contact" is selected, the SNS message exchanging unit112cooperates with the item-specific message creation unit14, and as shown inFIG.4, outputs the common user interface for exchanging messages in "Contact management (Business contact)" created by the item-specific message creation unit14to the mobile terminal3of the user. In this example, a section chief informs a section staff of a company leaving date on time with a message, and thus message exchange with the section staff who is a member belonging to the community starts. Referring toFIG.4, when a section staff 1 replies "I will go home first", the SNS message exchanging unit112cooperates with the item-specific message creation unit14to display a message of the section staff 1 who is a responder and a response time. Subsequently, a section staff 2 replies "I left the company". In this way, the section chief and the section staff belonging to the community can check the leave-from-company status while communicating with each other on the SNS. FIG.5illustrates a case in which the SNS message exchanging unit112outputs a user interface for analyzing the leave-from-company status to the mobile terminal3of the user (for example, the section chief), based on the message from the section staff collected using the messages in which the section chief informs the section staff of the company leaving date on time in the "Contact management (Business contact)" described above. In this case, as shown inFIG.2, the SNS message exchanging unit112outputs the screen of the first layer to the mobile terminal3of the user (for example, the section chief) belonging to the community. Here, when the user (for example, the section chief) selects "Leave-from-company analysis" that is an item in the first layer, the SNS message exchanging unit112cooperates with the item-specific message creation unit14, and, as shown inFIG.5, outputs the common user interface for exchanging messages in "Leave-from-company analysis" to the mobile terminal3of the user (for example, the section chief).
Thus, for example, the message exchange (communication) is performed between the section chief who is a user belonging to the community and "Leave-from-company analysis service software" created in advance, and thus the message (content) regarding the leave-from-company status can be easily confirmed. Specifically, the leave-from-company status may be confirmed by, for example, "Leave-from-company analysis service software" created in advance. In that case, the "Leave-from-company analysis service software" is executed on the SNS server1to search the business contact messages comprehensively collected in "Contact management (Business contact)" of this day, and the determination is made based on information on leaving the company, the number of section staff members who have input the information, and the number of section staff members. An AI engine, which will be described below, may be used to determine leaving the company. Next, message exchange in the virtual community in "Task management" will be described.FIG.6is a diagram showing a user interface output by the item-specific message creation unit14in "Task management (Mechanical unit-related)". As in the case of "Contact management (Business contact)", when the user selects "Task management" on the screen of the first layer and "Mechanical unit-related" on the screen of the second layer, the SNS message exchanging unit112cooperates with the item-specific message creation unit14, and, as shown inFIG.6, outputs the common user interface for exchanging messages in "Task management (Mechanical unit-related)" on the screen of the second layer created by the item-specific message creation unit14to the mobile terminal3of the user. The SNS message exchanging unit112exchanges a message "Part A being machined at Unit 3 stopped during replacement of the tool by an air pressure drop alarm when the tool was replaced" (hereinafter, referred to as "task A") added with the tag (question) selected by the section staff 1, and thus the section staff 1 belonging to the community and another section staff start communication regarding a task A in the virtual community. As shown inFIG.6, when a message "Try setting tool information in a tool life management to 0. If it is more than the setting value, an alarm will occur" from a section staff 3 is output in a state of being added with a tag (response), the section staff 1 can evaluate the response message, for example. Here, the section staff 1 expresses an evaluation of "Recovery" for the message of the section staff 3 by a stamp ("Recovery") selected from a stamp screen. Although not shown, a "Unit 3" tag may be added together with a "Question" tag to identify a machine. In this way, when message exchange (communication) is performed between the section staffs who are users belonging to the community, messages (contents) related to alarms of mechanical unit-related (alarms during replacement of the tool) are comprehensively collected and classified, and this makes it easy to consider how to respond to the alarm. In addition, although the virtual community among the section staffs is exemplified, message exchange may be performed in the virtual community, including the maintenance group including the call center of the manufacturer and the machining group including the machining member of the machine-tool builder, for example.
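To make the tagging, stamping, and per-theme collection described in this example concrete, the following is a minimal, self-contained Python sketch. It is an illustration only: the names ThemeMessage, ItemMessageStore, PRESET, and the literal tag and stamp values are assumptions chosen to mirror the item-specific message creation unit14 and the item-specific message management unit15, not the actual implementation.

```python
# Minimal sketch: exchange messages with preset tags/stamps and store them per theme.
from dataclasses import dataclass
from typing import Optional
from collections import defaultdict

@dataclass
class ThemeMessage:
    theme: str                      # e.g. "Task management (Mechanical unit-related)"
    sender: str
    body: str
    tag: Optional[str] = None       # e.g. "Question" / "Answer" / "Unit 3"
    stamp: Optional[str] = None     # e.g. "Recovery" / "Failure"

# Tags and stamps preset per theme by the item setting unit (illustrative values only).
PRESET = {
    "Task management (Mechanical unit-related)": {
        "tags": {"Question", "Answer", "Unit 1", "Unit 2", "Unit 3"},
        "stamps": {"Recovery", "Failure"},
    },
}

class ItemMessageStore:
    """Keeps every exchanged message under its theme, together with tag and stamp."""
    def __init__(self):
        self._by_theme = defaultdict(list)

    def post(self, theme, sender, body, tag=None, stamp=None):
        preset = PRESET.get(theme, {"tags": set(), "stamps": set()})
        if tag is not None and tag not in preset["tags"]:
            raise ValueError(f"tag {tag!r} is not preset for {theme!r}")
        if stamp is not None and stamp not in preset["stamps"]:
            raise ValueError(f"stamp {stamp!r} is not preset for {theme!r}")
        msg = ThemeMessage(theme, sender, body, tag, stamp)
        self._by_theme[theme].append(msg)
        return msg

    def search(self, theme, keyword, tag=None):
        # Later search over the comprehensively collected messages (cf. "Search").
        return [m for m in self._by_theme[theme]
                if keyword in m.body and (tag is None or m.tag == tag)]
```

Under these assumptions, the question of section staff 1 would be posted with tag "Question", the advice of section staff 3 with tag "Answer", and the evaluation as a body-less message carrying the stamp "Recovery"; a later call such as search(theme, "air pressure", tag="Answer") would retrieve the stored advice.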
InFIG.6, a case is illustrated where the machine is recovered by implementing the advice of the section staff 3, but when the machine is not recovered by implementing the advice of the section staff 3, it is possible to transmit the fact by selection of a stamp "Failure", for example. In this way, the use of stamps or the like can be expected to have the effect of increasing familiarity among users and activating communication. FIG.7shows a display example in which an appropriate coping method is searched for based on a set of messages (contents) stored in the storage unit20in a state where the messages related to alarms are comprehensively collected and classified. As in the case of "Contact management (Business contact)", when the user selects "Search" on the screen of the first layer, the SNS message exchanging unit112cooperates with the item-specific message creation unit14, and, as shown inFIG.7, outputs the common user interface for exchanging messages in "Search" on the screen of the first layer to the mobile terminal3of the user. As shown inFIG.7, the section staff 3 inquires to the search system that "Alarm 601 has occurred" via the common user interface for performing message exchange in "Search", and thus a corresponding message for alarm 601 can be obtained from the search system. In this case, the section staff 3 expresses the evaluation of "Recovery" for the corresponding message from the search system by the stamp ("Recovery") selected from the stamp screen. The processing contents when the common user interface is created by the item-specific message creation unit14have been described above with reference to the example. As described above, the SNS message exchanging unit112cooperates with the item-specific message creation unit14, and thus it is possible to obtain the effect that all messages can be comprehensively collected based on the preset item names, tags, and stamps in the message exchange between the members of any group in the SNS. <Item-Specific Message Management Unit15> As described above, the SNS message management unit113may store the message exchanged within the preset specific group (community) by associating the tag set in advance in the item-specific message storage unit25provided for each specific group (community) with the message. Further, the SNS message management unit113may store the stamp exchanged within the preset specific group (community) in association with the target message of the stamp. For this reason, during message exchange within a specific group (community), the control unit10includes a function unit ("item-specific message management unit15") that executes a function of storing and managing the tag set for each setting item in the item-specific message storage unit25in association with the message and a function of storing and managing the stamp set for each setting item in the item-specific message storage unit25in association with a target message of the stamp. The SNS message management unit113may realize the above functions by cooperating with the item-specific message management unit15. As shown inFIG.1, the item-specific message management unit15may be a function unit independent of the SNS message exchanging unit112. Further, the SNS message exchanging unit112may include the item-specific message management unit15. The case of "Task management (Mechanical unit-related)" described above will be described as an example with reference toFIG.6.
The item-specific message management unit15stores a message with a question tag (referred to as "question message"), a message with a response tag (referred to as "response message"), and an evaluation message for the response message listed in "Task management (Mechanical unit-related)", in the storage zone provided corresponding to "Task management (Mechanical unit-related)", by comprehensively collecting and classifying these messages. For this reason, at a later date, by searching based on the messages that are comprehensively collected and classified, the knowledge about each task can be generated by machine learning an appropriate response or an illegal response to each task. In addition, Q&A can be created by analysis of these data. Similarly, for example, response messages with respect to respective alarms included in the question message listed in "Task management (Mechanical unit-related)", "Task management (Control unit)", "Task management (Operation-related)", "Task management (Screen setting)", and "Task management (Machining-related)" and evaluation messages for the response messages are stored in the storage zone provided corresponding to each theme, in a state of being comprehensively collected and classified. For this reason, as described with reference toFIG.7, the user can search for an appropriate response or an illegal response to an arbitrary alarm based on these content sets in "Search". The constitution of the SNS server1has been described above. <Mobile Terminal3> The mobile terminal3is a terminal owned and used by each user. The mobile terminal3communicates with the SNS server1, and communicates with other users of the mobile terminal3according to the contents of communication processing. The mobile terminal3is, for example, an information terminal represented by a smartphone. The mobile terminal3includes a control unit30, a storage unit40, a touch panel display46, and a communication interface unit49. The control unit30is a CPU that controls the entire mobile terminal3. The control unit30appropriately reads and executes an OS and an application program (hereinafter, the application program of the mobile terminal3being also simply referred to as "application") stored in the storage unit40and executes various functions by collaborating with the above-described hardware. The control unit30includes a log-in processing unit31, a display control unit32, and a transmission/reception unit33. These functions are known to those skilled in the art and will not be described. The storage unit40is a storage zone of a semiconductor memory element, or the like for storing programs, data, and the like necessary for the control unit30to execute various processes. The storage unit40includes a program storage unit41. The touch panel display46has a function as a display unit constituted by a liquid crystal panel or the like and a function as an input unit for detecting a touch input by a finger from a user or the like. The communication interface unit49is an interface for communicating with the SNS server1via the communication network N. <Administrator Terminal5> The administrator terminal5is a terminal used by an administrator who hosts a specific group (community) in the SNS system 100 by using the item setting unit13described above. Here, the administrator may be set for each specific group (community) to be hosted. The administrator terminal5is, for example, a mobile terminal or a PC (personal computer).
The administrator terminal5includes a control unit, a storage unit, a communication interface unit, a display unit, and an input unit (all the components not being shown), for example. The constitution of each of the function units of the SNS server1according to the first embodiment has been described above. Next, an operation of the SNS server1will be described with reference to a flowchart shown inFIG.8.FIG.8is a flowchart showing a process in which the SNS server1sets item names, tags, and stamps commonly used by members belonging to a group that uses contents, based on an instruction of the administrator who is a host of the group. The flowchart shown inFIG.8shows a process of setting item names corresponding to one or more layers set in an i-th layer (i≥1), setting a tag common to the respective layers set in the i-th layer, and setting a common stamp. The SNS server1has already authenticated that the user logged in from the administrator terminal5is an administrator having common item setting authority. It should be noted that these layers have a hierarchical structure with i=0 as a root. When a lower-ranking layer is set for each (i−1)-th layer, one or more i-th layers are set. Referring toFIG.8, the process of setting the item name in the i-th layer (i≥1) is as follows. In Step S10, the SNS server1(item setting unit13) receives a setting request for one or more i-th layers that are lower-ranking layers with respect to the (i−1)-th layer from the administrator terminal5. In Step S11, the SNS server1(item setting unit13) may provide to the administrator terminal5with the item name setting screen (template screen) shown inFIGS.3A and3B, for example, in order to set item names of respective layers, the number of hierarchies located below the item names, and a common tag name and stamp name for these layers in response to the setting request for one or more i-th layers that are lower-ranking layers with respect to the (i−1)-th layer from the administrator terminal5. The user may add an input field for setting the item names and the number of hierarchies located below the item names, an input field for setting the tag name, and an input field for setting the stamp name with an instruction. When a plurality of input fields do not fit in one screen, scroll processing may be performed, for example. In Step S12, the SNS server1(item setting unit13) acquires the item names input via the administrator terminal5, the number of hierarchies located below the item names, the tag name, and the stamp name. In Step S13, the SNS server1(item setting unit13) specifies an item name, in which the number of hierarchies located below the item names is set to a value of 2 or more, from the acquired one or more item names. In Step S14, for each item name in which the number of hierarchies is set to the value of 2 or more, the process proceeds to the item name setting process in a (i+1)-th layer (i≥1). In this way, the item name setting process is performed sequentially in the order of a first layer setting process, a second layer setting process for each item name in which the number of hierarchies is set to the value of 2 or more in the first layer, a third layer setting process for each item name in which the number of hierarchies is set to the value of 2 or more in the second layer, . . . , and the item name setting process is executed until the setting process in all hierarchies is completed. Thus, the SNS server1(item setting unit13) can set the item names hierarchically. 
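As a rough illustration of the layer-by-layer setting process of FIG.8, the following Python sketch registers item names recursively into an item data dictionary. The nested-dict input format, the item-id scheme, and the function name set_items are assumptions made purely for illustration, not a description of the actual item setting unit13.

```python
import itertools

def set_items(admin_input, dictionary, parent_item_id=None, counter=None):
    """Recursively register item names layer by layer (cf. Steps S10-S14 of FIG.8).
    admin_input maps each item name of the current layer to a spec such as
    {"layers_below": 2, "tags": ["Question", "Answer"],
     "stamps": ["Recovery", "Failure"], "children": {...}}
    (all keys are illustrative assumptions)."""
    counter = counter or itertools.count(1)
    for name, spec in admin_input.items():                   # S12: acquire names, hierarchy counts, tags, stamps
        item_id = f"item-{next(counter)}"
        dictionary[item_id] = {
            "item_name": name,
            "parent": parent_item_id,
            "tags": spec.get("tags", []),
            "stamps": spec.get("stamps", []),
        }
        if spec.get("layers_below", 1) >= 2:                  # S13: a lower-ranking layer is requested
            set_items(spec.get("children", {}), dictionary,   # S14: proceed to the (i+1)-th layer
                      parent_item_id=item_id, counter=counter)
```

For instance, calling set_items with a first-layer input containing "Task management" (layers_below set to 2) and its five second-layer children would populate the dictionary with the hierarchy of FIG.2, each entry keeping a reference to its closest high-ranking item.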
The first embodiment has been described above. Next, a second embodiment will be described. Second Embodiment In the SNS server1according to the first embodiment described above, the CPU as the control unit10executes various programs stored in the program storage unit21, and thus the SNS basic function unit11(the SNS group management unit111, the SNS message exchanging unit112, and the SNS message management unit113), the item setting unit13, the item-specific message creation unit14, and the item-specific message management unit15are implemented. On the other hand, in the second embodiment, a distributed processing system is exemplified as shown inFIG.9in which a server 1 is arranged on a cloud 1 and a server 2 is arranged on a cloud 2, for example.FIG.9is a diagram showing a constitution in which the server 1, the server 2, and one or more mobile terminals3are network-connected via a communication network N. Here, the server 1 may be a server managed by an operator who operates the SNS, and the server 2 may be a server operated by an administrator who hosts message exchange based on a setting item arbitrarily set using the SNS. Specifically, for example, an SNS server1A is constituted as a distributed processing system in which the control unit executes an SNS basic program on the server 1 to implement the SNS basic function unit11(the SNS group management unit111, the SNS message exchanging unit112, and the SNS message management unit113), and in which the control unit executes an item setting program on the server 2 to implement the item setting unit13, the item-specific message creation unit14, and the item-specific message management unit15. The server 1 may execute a basic message exchange function, a search function, or the like in the SNS service. Further, the server 2 includes, for example, a metadata storage unit24and an item-specific message storage unit25, and may store an item data dictionary set by the item setting unit13and an item-specific message managed by the item-specific message management unit15and exchanged based on the item name set by the item setting unit13. The server 2 may be a server managed by an administrator who hosts the community including the user who performs a machining process on a workpiece using the industrial machine running in the factory, the member of the industrial machine-tool builder who supports the machining process of the user, the maintenance personnel of the industrial machine, and the receptionist of the call center as exemplified in the first embodiment. In the second embodiment, the above-described community is described as an example, but the community is not limited thereto. For example, the server 2 may be a server managed for each community, which is arbitrarily set, by an administrator who hosts the community. Thus, all the messages exchanged in the community can be accumulated in the server 2 managed by the host, and information security of the message can be ensured. In addition, the accumulated messages can be comprehensively searched by tags or the like set by the item setting unit13. Hereinafter, the constitution in the second embodiment different from that in the first embodiment will be described, but the same constitution as that in the first embodiment will not be described. FIG.9is a diagram showing a constitution in which the server 1, the server 2, and one or more mobile terminals3are network-connected via the communication network N. 
Further, the server 2 may acquire running data output by the industrial machine, a peripheral device, a sensor device, or the like (also referred to as "edge device") installed in the factory directly from the edge device or via a factory management device (also referred to as "edge server"), and may store it as a factory running database. Thus, for example, when it is desired to insert current running information and operation information of an edge device (for example, Unit 1) installed in the factory during message exchange between the users, running status information (for example, status information during running, alarming, and stopping) and operation information (for example, alarm number, program number) of Unit 1 can be inserted by tag information such as "status of Unit 1" set in advance by the item setting unit13. In addition, a time when messages are exchanged by the SNS can be linked to a phenomenon that has occurred in the factory. Thus, it can be used for running analysis of the edge device in the factory or the like. Further, for example, characteristics of the question message collected in "Task management (Mechanical unit-related)" shown inFIG.6are extracted using an AI function or the like, and a response message for the question message with the same characteristics is created, whereby the response message can be automatically provided to the question message, for example. Further, the storage unit provided in the server 2 may store document data such as manual data including various specification data related to the edge device. Thus, for example, when it is desired to cite the contents described in the manual during the message exchange between the users, the contents can be cited. Further, the storage unit provided in the server 2 may store a model of the common user interface for exchanging messages created by the item-specific message creation unit14as shown inFIG.6. In addition, when the function of the item set in the server 2 is a unique service such as "Leave-from-company analysis", an application program may be arranged in the server 2 to achieve the unique service. The application program that provides such a service may be downloaded from, for example, an application sales store. The SNS server1A according to the second embodiment exemplified as the present embodiment has been described above. In other words, the message exchanging device and the message exchanging method of the present disclosure can take various embodiments as follows. (1) According to the present embodiment, the message exchanging device is a message exchanging device (for example, "SNS server1,1A") that exchanges messages between members belonging to any group, using an SNS group function, the message exchanging device including: a storage unit (for example, "storage unit20") that stores the messages exchanged between the members; an item setting unit (for example, "item setting unit13") that sets items for classifying the messages exchanged between the members belonging to the group; a message exchanging unit (for example, "SNS message exchanging unit112") that exchanges a message for each of the items set by the item setting unit; and an item-specific message management unit (for example, "item-specific message management unit15") that stores, for each of the items, the message exchanged for each of the items in the storage unit.
Thus, when the messages (contents) are exchanged in any community that uses the SNS, the classification items can be arbitrarily set in advance to be commonly used for the messages (contents) sent by all the users belonging to the community. (2) The message exchanging device (for example, "SNS server1,1A") described in (1) above may include an item-specific message creation unit (for example, "item-specific message creation unit14") that adds a tag, which is used for further classifying the message, to the message exchanged for each of the items, the item setting unit (for example, "item setting unit13") may further set the tag, which is used for further classifying the message exchanged for each of the items, and the item-specific message management unit (for example, "item-specific message management unit15") may further store the tag in the storage unit by associating the tag with the message exchanged for each of the items. Thus, when the messages (contents) are exchanged in any community that uses the SNS, a plurality of tags for classifying the contents of the messages can be arbitrarily set and can be attached to the messages to be exchanged. (3) In the message exchanging device (for example, "SNS server1,1A") described in (2), the item setting unit (for example, "item setting unit13") may further set a stamp used to evaluate the message, the item-specific message creation unit (for example, "item-specific message creation unit14") may further create the stamp in association with the message to be evaluated and exchanged for each of the items, and the item-specific message management unit (for example, "item-specific message management unit15") may further store the stamp in the storage unit (for example, "storage unit20") in association with the message to be evaluated and exchanged for each of the items. Thus, when the messages (contents) are exchanged in any community that uses the SNS, a plurality of stamps for evaluating the quality of the contents of the message can be arbitrarily set and can impart evaluation to the message. (4) In the message exchanging device (for example, "SNS server1,1A") described in any one of (1) to (3) above, the item setting unit (for example, "item setting unit13") may further set the plurality of items to have a hierarchical structure. Thus, the number of items presented to the user at one time is limited to the number that can be reasonably displayed on the mobile terminal3, for example, and the user sequentially selects the items, whereby it is possible to present the virtual community by item that meets the restrictions of the display means or the input means of the mobile terminal3. (5) According to the present embodiment, the message exchanging method executed by the computer is a message exchanging method of exchanging messages between members belonging to any group, using an SNS group function, the computer including a storage unit that stores the messages exchanged between the members, the method including: an item setting step of setting items for classifying the messages exchanged between the members belonging to the group; a message exchange step of exchanging a message for each of the items set in the item setting step; and an item-specific message management step of storing, for each of the items, the message exchanged for each of the items in the storage unit. Thus, the same effect as in (1) above can be obtained.
(6) The message exchanging method described in (5) above may further include an item-specific message creation step of adding a tag, which is used for further classifying the message, to the message exchanged for each of the items, the item setting step may further include a step of setting the tag, which is used for further classifying the message exchanged for each of the items, and the item-specific message management step may further include a step of storing the tag in the storage unit by associating the tag with the message. Thus, the same effect as in (2) above can be obtained. (7) In the message exchanging method described in (6) above, the item setting step may further include a step of setting a stamp used to evaluate the message, the item-specific message creation step may further include a step of creating the stamp in association with the message to be evaluated and exchanged for each of the items, and the item-specific message management step may further include a step of storing the stamp in the storage unit in association with the message to be evaluated and exchanged for each of the items. Thus, the same effect as in (3) above can be obtained. (8) According to the present embodiment, in the message exchanging method described in any one of (5) to (7) above, the item setting step may further include a step of setting the plurality of items to have a hierarchical structure. Thus, the same effect as in (4) can be obtained. Although embodiments of the present invention have been described above, the present invention is not limited to the above-described embodiments. Further, the effects described in the present embodiments are merely a list of the most preferable effects resulting from the present invention, and the effects according to the present invention are not limited to those described in the present embodiments. EXPLANATION OF REFERENCE NUMERALS 1, 1A SNS server; 3 mobile terminal; 5 administrator terminal; 10 control unit; 11 SNS basic function unit; 111 SNS group management unit; 112 SNS message exchanging unit; 113 SNS message management unit; 13 item setting unit; 14 item-specific message creation unit; 15 item-specific message management unit; 20 storage unit; 21 program storage unit; 22 user storage unit; 23 group storage unit; 24 metadata storage unit; 25 item-specific message storage unit; 29 communication interface unit; 30 control unit; 40 storage unit; 46 touch panel display; 49 communication interface unit; 100 SNS system | 64,593
11943195 | DETAILED DESCRIPTION Large employee workforces work remotely. Employees access various local and remote (e.g., cloud) company resources directly from the Internet, through a corporate network (e.g., via a virtual private network (VPN) connection), or from computing devices within the corporate network. Remote company resources may be available via internet connected endpoints, and include services and applications available through Infrastructure as a Service (IaaS), platform as a service (PaaS), business software as a service (SaaS), Security as a service (SECaaS), or consumer SaaS. Large numbers of users accessing cloud resources may strain network capacity and security stamps. Employees may access company resources in branch offices, for example, via a wide area network (WAN) connecting corporate headquarters to branch offices, which may allow compromised users/devices (e.g., attackers) to move laterally through corporate networks. Company network IT (information technology) administrators may be unable to monitor viruses and malware when employees access the Internet from outside a company network. Admins may be unable to identify compromised employee computing devices that may be used to access company resources. A compromised computing device may be used to get into company headquarters and expand an attack to other resources in the network, e.g., via e-mail servers, financial applications, servers hosting the IP of the company, etc. A public service edge, such as a secure access service edge (SASE), may be implemented to improve security for local and remote access to public and private resources. A SASE may be implemented as a set of security services enabling network access. A SASE may be used to provide Network as a Service (NaaS) and/or SaaS for a managed cloud service. A SASE security layer may broker connectivity to one or more resources (e.g., IaaS/PaaS such as Microsoft Azure® or Amazon Web Services (AWS), business SaaS applications such as Office 365®, consumer SaaS applications such as Facebook®), regardless of whether the access is by remote workers, remote operations (e.g., from branch offices), or access from within a company network. A SASE may be implemented with zero-trust network access (ZTNA). ZTNA enables users to securely access private resources when users are working off-premises (e.g., remote). On-premises users accessing the private resources may be provided with high performance access and the same security posture as off-premises users. ZTNA enables secure private access by providing application segmentation. Application segmentation provides for protection against attacks due to lateral movement and provides for per-application policies based on identity. In an example implementation of zero-trust based access from users to private applications, user access granularity may be provided at the level of an endpoint represented by an internet protocol (IP) address or a fully qualified domain name (FQDN) port pair. ZTNA may be less expensive, easier to manage, and more scalable with high performance, for example, compared to on-premises infrastructure providing a local proxy to authenticate and route traffic. In comparison, a VPN may provide direct access to a private network while ZTNA may broker connections more securely to specific private applications in a private network. ZTNA may be implemented with several principles. For example, ZTNA may be implemented to provide a user with the least privilege.
In an example, a user may attempt to connect to an application. ZTNA may connect the user (e.g., if authenticated and authorized) to the application, rather than to an entire network. ZTNA may be implemented with explicit verification to an application, a network, etc., implying or assuming no (zero) trust of a user. ZTNA may be implemented with an assumption that the access is a breach or attack that must be contained. For example, an employee's computing device (e.g., machine) may get hacked or otherwise attacked. The attacker may use the machine to move into the company network. Accordingly, under an assumption that an access is an attack, access may be limited or restricted. By analogy, a key may be provided to access a single room, rather than an entire floor or building. ZTNA may be implemented to assume that a key to a room may be used on another room mistakenly or purposefully, blocking such usage. ZTNA may be implemented with one or more zero-trust services and zero-trust clients. Security policies to access resources (e.g., applications, network) may be set on a security server (e.g., zero-trust service) providing a security service (e.g., a security engine). A ZTNA security service/engine may have a managed client layer that provides network security and access based on security policies. Security policies may indicate, for example, identity-based access to private networks and applications, private application discovery, allowed traffic (e.g., what kind of traffic should go through a managed client layer for a customer's resources), etc. A managed client layer may implement default or customer-specified security policies. A managed client layer may provide threat intelligence, perform deep packet inspection (DPI), traffic filtering, transport layer security (TLS) termination and inspection, cloud access security (e.g., data loss/information protection), a web application firewall, and so on. A managed client layer may be implemented as a cloud service providing NaaS and SaaS to one or more customers, which may be individuals, businesses, government entities, etc. Security policies for access/connection to private networks and/or applications may be managed (e.g., through a user interface provided by a zero-trust service) by administrators (admins) for one or more resources accessed through the managed client layer. For example, an admin for company A may set security policy for access to the resources of company A through the managed client layer, an admin for company B may set security policy for access to the resources of company B through the managed client layer, and so on for access to resources owned, leased, etc. by customers. A customer's private network may be a combination of multiple networks. A customer's private network may have applications on premises and/or applications in the cloud. Some applications may be in one or more clouds (e.g., with different service operators, such as Microsoft, Google and Amazon).
For example, a managed client layer may, based on respective security policies, control access/connection to private networks and/or applications on one or more clouds (e.g., Microsoft Azure, Amazon Web Services (AWS), Google Cloud) and/or one or more private on-premises (e.g., corporate) networks and/or applications (e.g., Web applications, remote server authentication access, such as remote desktop protocol (RDP) or secure shell (SSH) applications, transfer protocols, such as server message block (SMB) or file transfer protocol (FTP) applications, enterprise applications such as Systems, Applications and Products (SAP) or PeopleSoft applications, printing, etc.). A managed client layer may operate at the edge of a cloud service, e.g., wherever the cloud service may be accessible. The managed client layer may include one or more security engines, data traffic filtering, application of security policies to allowable traffic, etc. One or more security engines running in the managed client layer may enforce security policies, such as user authentication and user authorization for access to resources, traffic filtering, traffic routing, etc. Traffic from authenticated and authorized users and/or computing devices may be provided to a private network and/or application after the traffic is cleared. A connection may be established between the managed client layer and a managed service (e.g., an application) that a user/computing device is permitted to access. For example, connectors or agents may create an outbound connection from an application to the managed client layer. Connectors may be deployed in various networks with applications users can access through the managed client layer. Private access may piggyback on the outbound connection. For example, a flow may be established in the outbound connection for each application the user is authorized to and does access. A flow may encapsulate traffic in a tunnel. A connection may have multiple flows. Each flow may limit access to a respective resource (e.g., controlled access similar to a key limiting access to a specific room rather than a key to all rooms in an entire floor or building). Connectors may forward incoming traffic to intended applications. User computing devices may execute a client application. The client application may transform user computing devices into managed clients, e.g., for access to private resources, such as those of an employer. A user device may be referred to as a zero-trust client. A zero-trust client application running on a computing device may receive security policies from a zero-trust service. A zero-trust client may route traffic from a computing device to the managed client layer (e.g., in a cloud service) based on the security policies. A security policy may indicate, for example, where the zero-trust client should route traffic for an attempt to use a private application or network. A security policy may indicate that a user and/or a device must first authenticate, e.g., by entering credentials for analysis and approval by the managed client layer. In an example, conditional access to managed services (e.g., networks, applications) based on security policies in a managed client layer may be implemented by Microsoft Azure Active Directory (AAD). A connection between a zero-trust client and the managed client layer (e.g., at a cloud computing edge) may be established for authenticated and authorized users/computing devices. 
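The outbound connection and per-application flows described above can be pictured with a minimal sketch. It is a non-normative illustration, assuming hypothetical Connector/Connection/Flow names; a real deployment would carry encrypted tunnel traffic rather than plain objects.

```python
# Illustrative sketch: one outbound connection per connector, with one flow per
# application a user actually accesses. Names are hypothetical assumptions.
from dataclasses import dataclass, field

@dataclass
class Flow:
    user: str
    application: str      # the single resource this flow may reach ("one key, one room")

@dataclass
class Connection:
    connector_id: str     # connector deployed alongside the private applications
    flows: list[Flow] = field(default_factory=list)

    def open_flow(self, user: str, application: str) -> Flow:
        # A new flow is added for each application the user accesses; traffic
        # inside the flow would be encapsulated in a tunnel in practice.
        flow = Flow(user=user, application=application)
        self.flows.append(flow)
        return flow

    def forward(self, flow: Flow, payload: bytes) -> None:
        # The connector forwards incoming traffic only to the flow's application.
        print(f"deliver {len(payload)} bytes to {flow.application}")
```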
A flow may be established in the connection for each network and/or application the user accesses. A connection may have multiple flows. Each flow may limit access to a respective resource (e.g., controlled access similar to a key limiting access to a specific room rather than a key to all rooms in an entire floor or building). Previously, ZTNA required a dedicated DNS server proxy to ensure security when attempting to access internet connected endpoints. Embodiments illustrated herein implement an improved zero-trust client and improved zero-trust server that allow the zero-trust client to create a synthetic IP address that can be used by the zero-trust client and the zero-trust server to route data to internet connected endpoints. Referring now toFIG.1, an example embodiment is illustrated.FIG.1illustrates an environment100where zero-trust domain name resolution may be performed.FIG.1illustrates a local machine102. The local machine102may be, for example, a user's desktop or laptop computer, a tablet device, a smartphone, or other computing device. The local machine102may be a device used by an employee user to access company resources. The local machine102includes a zero-trust client104. The zero-trust client104is a security application running on the local machine102. As will be illustrated below, the zero-trust client104is configured to facilitate secure communications between the local machine102and company resources and/or other internet connected endpoints by using synthetic IP addresses generated by the zero-trust client104. The local machine102further includes one or more applications106having components implemented for connecting to internet connected endpoints. For example, the applications106may include routines configured to package/unpackage data for transmission/reception on networks, and interfaces to send/receive data external to the applications106. In the example illustrated inFIG.1, apps108and clients110are illustrated as examples of applications106. As further examples, a given application in the applications106may be an internet browser, a browser-based app, a rich client, a thin client, an app (e.g., a simplified application configured for mobile applications or having simplified user interface elements as compared to more feature-rich versions of an application), or other application. In the example illustrated inFIG.1, the applications106may be implemented in the environment100and on the local machine102in a fashion that facilitates the applications connecting with internet connected endpoints.FIG.1illustrates a number of different internet connected endpoints112. For example,FIG.1illustrates a website internet connected endpoint112-1(e.g., an endpoint hosting commonly available websites), cloud service internet connected endpoints112-2,112-3, and112-4(e.g., SaaS, PaaS, IaaS endpoints), and a private access corporate network internet connected endpoint112-5(e.g., a corporate network). Referring now toFIG.2, in addition to reference toFIG.1, additional details are illustrated. An application106-1(i.e., one of the applications106) can send a request to the zero-trust client104for an IP address corresponding to an endpoint identifier114for an internet connected endpoint. For example, the application may send an FQDN, PQDN, IP address, GUID, text string description, or other endpoint identifier that can be used to uniquely identify the internet connected endpoint. As a specific example, the application106-1may be an internet browser.
A user may interact with the application106-1by typing the FQDN “www.contoso.com” in an address bar of the application106-1. The application106-1then sends the FQDN endpoint identifier114“www.contoso.com”, in a request for an IP address, to the zero-trust client104. In particular, the local machine102may be configured to route network traffic through the zero-trust client104. The zero-trust client104receives the endpoint identifier from the application106-1. The zero-trust client104may then determine whether or not to use a synthetic IP address for the endpoint identifier114. Generally, the zero-trust client will use policy available to the zero-trust client104to determine whether or not to use a synthetic IP address for the endpoint identifier.FIG.2illustrates a policy service116, which represents the ability of the local machine102to apply policy to endpoint identifiers. While the policy service116may be implemented as an application running at the local machine102, it may be implemented in other fashions, additionally or alternatively, such as by including elements off of the local machine102. The policy service116, as explained in more detail below, can determine what types of network traffic should be collected and have a synthetic IP address applied. In the example illustrated inFIG.2, if the zero-trust client104determines to apply a synthetic IP address, then the zero-trust client104identifies a synthetic IP address118for the endpoint identifier114. A synthetic IP address118is generally an IP address for the endpoint identifier114that is allowed to differ from an IP address that is assigned to the endpoint identifier by a trusted DNS. The synthetic IP address118has meaning within the context of a zero-trust security framework while having no, limited, or even incorrect meaning outside of the context of the zero-trust security framework. For example, the synthetic IP address may be a private IP address. As an example, in the IPv4 internet protocol, the synthetic IP address118may be assigned in the private address range 10.0.0.0 to 10.255.255.255, or some other private address range. The synthetic IP address118may be a public address but a public address that does not correspond with the endpoint identifier114. For example, the actual public IP address for www.contoso.com, available from a trustworthy DNS service, may be 20.84.181.62. However, the synthetic IP address118identified by the zero-trust client104may be 13.107.6.156, which is actually the public IP address for www.office.com. Thus, in this case, if the synthetic IP address were used directly outside of the context of the zero-trust security framework, incorrect network resolution would occur. Indeed, the synthetic IP address118could even be an Internet-wide reserved address such as 240.0.0.0, which has been reserved for future use. This is allowed inasmuch as the synthetic IP address118will be used as such within the context of the zero-trust security framework. In response to the request for an IP address corresponding to the endpoint identifier114, the zero-trust client104provides the synthetic IP address118for the endpoint identifier114to the application106-1.FIG.2illustrates that the zero-trust client104includes a synthetic DNS service120. The synthetic DNS service can obtain and/or produce synthetic IP addresses such as the synthetic IP address118. The synthetic DNS service is an application run on the hardware of the local machine102or in another appropriate location.
The synthetic DNS service120further includes a synthetic DNS data store134which correlates endpoint identifiers with synthetic IP addresses. The application106-1, having the synthetic IP address, can now send data traffic124, which is received by the zero-trust client104. The data traffic124is directed to the internet connected endpoint, which could be any appropriate internet connected endpoint, such as internet connected endpoints112-1through112-5. The data traffic124is associated with the synthetic IP address118by the application106-1. For example, the application106-1may create a data packet including a destination header, where the data packet includes the data traffic124and the destination header includes the synthetic IP address118. This data packet is then sent to the zero-trust client104. The zero-trust client104then sends the data traffic124to a zero-trust service126. The synthetic IP address118is also sent to the zero-trust service126. Typically, this is done by sending the packet received from the application106-1to the zero-trust service126. Often, the data traffic124and synthetic IP address118are sent through an encrypted tunnel128, which may be, for example, a conventional encrypted tunnel. The zero-trust client104also sends the endpoint identifier114to the zero-trust service in a fashion that allows the synthetic IP address118to be correlated to the endpoint identifier114at the zero-trust service126. For example, this could be done by correlating a session used for sending the packet containing the synthetic IP address118with communications containing the endpoint identifier114. Note that in other embodiments, this may be done in a stateless fashion but with identifiers in the various communications allowing them to be correlated. Note that the endpoint identifier114is typically sent, out of band, on a side channel130. As will be discussed in more detail below, the zero-trust service126directs the data traffic124to the internet connected endpoint by using the endpoint identifier114to obtain a globally valid IP address from a trusted DNS service132. The zero-trust service126may then receive response data traffic131directed from the internet connected endpoint to the application106-1. Thus, the local machine102may receive response data traffic131in response to the data traffic124. The response data traffic131may be, for example, web pages, cloud service data, user interface code, acknowledgments, or other types of data provided by internet connected endpoints112. The response data traffic131is associated with the synthetic IP address118. For example, the zero-trust service126may create a data packet having the response data traffic131in a data field of the packet, and the synthetic IP address in a destination field of the data packet. Thus, in some embodiments the zero-trust client104uses the synthetic IP address to provide the response data traffic131to the application106-1. As noted previously, the zero-trust client104can evaluate policy. In such embodiments, the zero-trust client104identifies the synthetic IP address118as a result of the endpoint identifier114meeting a particular condition of the policy. Thus, for example, the policy service116may compare endpoint identifiers to determine if communications from an application should be captured and subjected to the protections of the zero-trust security framework.
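As one concrete reading of the synthetic DNS service120and its data store134, the sketch below hands out synthetic addresses from the private 10.0.0.0/8 range mentioned earlier and keeps the forward and reverse mappings needed when traffic later arrives carrying a synthetic address. The class name, method names, and the choice of a private pool are assumptions; as noted above, reserved or unrelated public addresses could equally be used.

```python
# Sketch of a synthetic DNS service: maps endpoint identifiers (e.g., FQDNs)
# to synthetic IP addresses drawn from a private range, and supports the
# reverse lookup needed when outbound traffic carries the synthetic address.
import ipaddress

class SyntheticDnsService:
    def __init__(self, pool: str = "10.0.0.0/8"):
        self._free = ipaddress.ip_network(pool).hosts()   # iterator over candidate addresses
        self._by_name: dict[str, str] = {}   # endpoint identifier -> synthetic IP
        self._by_ip: dict[str, str] = {}     # synthetic IP -> endpoint identifier

    def resolve(self, endpoint_identifier: str) -> str:
        """Return the synthetic IP for this identifier, allocating one if needed."""
        if endpoint_identifier not in self._by_name:
            ip = str(next(self._free))
            self._by_name[endpoint_identifier] = ip
            self._by_ip[ip] = endpoint_identifier
        return self._by_name[endpoint_identifier]

    def identifier_for(self, synthetic_ip: str) -> str:
        """Reverse lookup used when traffic arrives addressed to a synthetic IP."""
        return self._by_ip[synthetic_ip]

# Example: the application asks for www.contoso.com and receives a synthetic IP.
dns = SyntheticDnsService()
synthetic_ip = dns.resolve("www.contoso.com")     # e.g., 10.0.0.1
assert dns.identifier_for(synthetic_ip) == "www.contoso.com"
```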
For example, policy may indicate that any endpoint identifiers to websites should be captured by the zero-trust client104and synthetic IP addresses should be identified for those endpoint identifiers. Alternatively, or additionally, policy may indicate that endpoint identifiers to particular types of websites (e.g., social networking, adult content, containing violence, containing or encouraging illegal activity, etc.) should be captured by the zero-trust client104and synthetic IP addresses should be identified for those endpoint identifiers. Alternatively, or additionally, policy may indicate that endpoint identifiers to corporate network internet connected endpoints (e.g., corporate applications, network nodes, services, etc.) should be captured by the zero-trust client104and synthetic IP addresses should be identified for those endpoint identifiers. Alternatively, or additionally, policy may indicate that endpoint identifiers to cloud service internet connected endpoints, or certain cloud service internet connected endpoints, should be captured by the zero-trust client104and synthetic IP addresses should be identified for those endpoint identifiers. Note that conditions of policy may be role-based such that internet connected endpoints are treated differently depending on the role of the user sending data to the internet connected endpoint. For example, individuals in a corporate legal department may have different network traffic captured and synthetic IP addresses identified as compared to individuals in a corporate accounting department. In some embodiments, the zero-trust client104may receive information from the zero-trust service126identifying what suffixes of endpoint identifiers are supported by internet connected entities accessible to the zero-trust service. For example, such suffixes may identify the types of services. Such embodiments use the information identifying what suffixes of endpoint identifiers are supported by the internet connected entities accessible to the zero-trust service to evaluate the policy. In some embodiments, the zero-trust client104may store a correlation of endpoint identifiers with synthetic IP addresses.FIG.2illustrates synthetic IP address data store134. Note that in some embodiments, the data store134may be prepopulated and stored statically such that the correlation remains even when the local machine102is power cycled. In some embodiments, this could further include the zero-trust client104surveying the applications106to determine possible internet connected endpoints that the applications106might attempt to contact, pre-identifying synthetic IP addresses for those endpoints, and storing correlating information in the data store134. Thus, in some embodiments, a particular synthetic IP address could be used over different sessions, power cycles, or other periods. In some embodiments, the zero-trust client104is able to obtain a DNS provided IP address136(seeFIG.3) using the endpoint identifier114and to provide the DNS provided IP address136to the zero-trust service126. In some embodiments, the zero-trust client104can send the endpoint identifier114to a DNS service138used by the local machine102. This may be a DNS service for an ISP for the network on which the local machine102is connected or some other DNS service specified for the local machine102. In some embodiments, the DNS service138is untrusted with respect to the zero-trust service126.
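The policy conditions described at the start of this passage (website categories, corporate endpoints, and role-based differences) could be evaluated by something as small as the following sketch. The policy fields, the category lookup, and the function names are assumptions chosen to mirror the examples in the text rather than a definitive policy format.

```python
# Hypothetical policy check: decide whether traffic to an endpoint identifier
# should be captured and given a synthetic IP address.
from dataclasses import dataclass, field

@dataclass
class CapturePolicy:
    corporate_suffixes: set[str] = field(default_factory=lambda: {".contoso.com"})
    blocked_categories: set[str] = field(default_factory=lambda: {"social", "adult"})
    roles_capturing_all_web: set[str] = field(default_factory=lambda: {"legal"})

def categorize(endpoint_identifier: str) -> str:
    # Stand-in for a real content-categorization service.
    return "social" if "facebook" in endpoint_identifier else "general"

def should_synthesize(policy: CapturePolicy, endpoint_identifier: str, user_role: str) -> bool:
    # Corporate endpoints are always routed through the zero-trust framework.
    if any(endpoint_identifier.endswith(sfx) for sfx in policy.corporate_suffixes):
        return True
    # Certain categories of websites are captured for everyone.
    if categorize(endpoint_identifier) in policy.blocked_categories:
        return True
    # Role-based condition: some roles have all remaining web traffic captured.
    return user_role in policy.roles_capturing_all_web
```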
As will be explained in more detail below, sending the DNS provided IP address136to the zero-trust service126may be useful when the trusted DNS service132is unavailable. Attention is now directed to additional details regarding the zero-trust service126. The zero-trust service126can receive, from zero-trust clients at local machines, data traffic and synthetic IP addresses. In the examples illustrated inFIGS.1-4, the zero-trust service126is also configured to receive endpoint identifiers from zero-trust clients. For example,FIG.3illustrates the zero-trust service126receiving the endpoint identifier114from the zero-trust client104at the local machine102. Recall that the endpoint identifiers are for internet connected endpoints. The endpoint identifiers are received in a fashion that allows the synthetic IP addresses to be correlated to the endpoint identifiers. The zero-trust service126is configured to obtain a trusted IP address140for the endpoint identifier114. For example,FIG.2illustrates that the zero-trust service has access to a trusted DNS service132where it can obtain a trusted IP address140for the endpoint identifier114and therefore, for the corresponding internet connected endpoint (i.e., the internet connected endpoint from among the internet connected endpoints112corresponding with the endpoint identifier114). Using the trusted IP address140, the zero-trust service126sends the data traffic124to the internet connected endpoint corresponding to endpoint identifier114.FIG.4illustrates an example where the data traffic124is sent with the trusted IP address140to the Internet connected endpoints112where the trusted IP address140is used to address the data traffic124to a particular internet connected endpoint. The zero-trust service126receives response data traffic131from the internet connected endpoint. As illustrated inFIG.4, the response data traffic131may be sent with the trusted IP address140. For example, an internet connected endpoint may return a packet where the packet includes the trusted IP address140in a header as a source address and the response data traffic131is included in a data portion of the packet. The zero-trust service126provides the response data traffic131to the local machine102with the response data traffic correlated to the synthetic IP address118. For example, the zero-trust service may create a new packet where the new packet includes the synthetic IP address as the source address in a header of the new packet, with the response data traffic131included in the data portion of the packet. As illustrated previously, inFIG.2, the data traffic124and the synthetic IP address118may be received by the zero-trust service using an encrypted tunnel128. In some embodiments, as discussed previously, and as illustrated inFIG.2, the zero-trust service126may receive the endpoint identifier114for the internet connected endpoint using a side channel130. Side channel130is ancillary to a channel (e.g., the tunnel128) used for receiving the data traffic124and the synthetic IP address118. In some embodiments, the zero-trust service126may receive a DNS provided IP address correlated to the internet connected endpoint from the zero-trust client104. For example, as illustrated inFIG.3, a DNS provided IP address136may be obtained by the zero-trust client104, and provided to the zero-trust service126. In some embodiments, the DNS provided IP address136is provided to the zero-trust service126using the side channel130, or an alternate side channel.
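The service-side handling described above (correlating a synthetic IP address with the endpoint identifier received on the side channel, obtaining a trusted IP address, forwarding the traffic, and rewriting the response source) might be sketched as follows. Here socket.gethostbyname stands in for a query to the trusted DNS service, and the dict-based packets stand in for real IP packets; all names are hypothetical.

```python
# Sketch of the zero-trust service data path. Not the actual implementation,
# only an illustration of the correlate/resolve/forward/rewrite sequence.
import socket

class ZeroTrustService:
    def __init__(self):
        self._by_synthetic: dict[str, str] = {}   # synthetic IP -> endpoint identifier
        self._by_trusted: dict[str, str] = {}     # trusted IP   -> synthetic IP

    def register_identifier(self, synthetic_ip: str, endpoint_identifier: str) -> None:
        # Correlation received out of band on the side channel.
        self._by_synthetic[synthetic_ip] = endpoint_identifier

    def handle_outbound(self, packet: dict) -> dict:
        identifier = self._by_synthetic[packet["dst"]]
        trusted_ip = socket.gethostbyname(identifier)       # stand-in for the trusted DNS lookup
        self._by_trusted[trusted_ip] = packet["dst"]
        return {"dst": trusted_ip, "data": packet["data"]}  # sent on to the internet connected endpoint

    def handle_response(self, response: dict) -> dict:
        # Rewrite the source so the client and application see the synthetic address.
        return {"src": self._by_trusted[response["src"]], "data": response["data"]}
```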
In the example illustrated inFIG.3, the zero-trust service126uses the DNS provided IP address136as a trusted IP address to send data traffic124. In some embodiments, the zero-trust service126may receive a DNS provided IP address correlated to the internet connected endpoint from a plurality of different zero-trust clients at different local machines. The zero-trust service126can use this information to determine that the DNS provided IP address136is trusted. That is, because a number of different machines provided an IP address, presumably without coordination, some trust in the DNS provided IP address136can be inferred. Note that the DNS provided IP address136may be provided using side channels as discussed above. As a result of the DNS provided IP address136being trusted, it can be used as the trusted IP address when sending data to internet connected endpoints. In some embodiments, the zero-trust service126may receive a plurality of DNS provided IP addresses correlated to a given internet connected endpoint from a plurality of different zero-trust clients at different local machines. Note that it is likely that many or all of the DNS provided IP addresses are the same actual address. However, some of the addresses may differ in value from each other. The zero-trust service126may further receive information, from the various zero-trust clients, identifying sources of the DNS provided IP addresses. Often, different DNS services may provide the IP addresses to the different zero-trust clients. For example, work from home employees may use different internet service providers, which each in turn use different DNS services. Such DNS services include, for example, Comcast, Quest, G-Core, Cloudflare, OpenDNS, Google Public DNS, Quad9 DNS, Oracle DNS, Verisign Public DNS, Akamai, Amazon Route 53, etc. The information identifying sources of the DNS provided IP addresses can be used to evaluate the different IP addresses provided by the various zero-trust clients. Some DNS service providers may be perceived as being more reliable than others. The zero-trust service126can determine what IP address to use as a trusted IP address for the internet connected endpoint using the information about the DNS service providers. In some embodiments, the internet connected endpoint is a private internet connected endpoint. For example, the corporate network internet connected endpoint112-5and various cloud service internet connected endpoints112-2,112-3, and112-4will often have sub-internet connected endpoints that are private to the internet connected endpoint. For example, an endpoint identifier provided by an application may be payroll.contoso.com. A public DNS service, such as the trusted DNS service132, will not be able to resolve this to an IP address. To address this, the zero-trust service may include functionality for polling secure connection connectors. These secure connection connectors are implemented at internet connected entities hosting internet connected endpoints and can be polled for an IP address corresponding to the private internet connected endpoint.FIG.4illustrates connectors140-2,140-3,140-4, and140-5corresponding to the internet connected endpoints112-2,112-3,112-4, and112-5respectively. The zero-trust service126can poll the connectors140-2,140-3,140-4, and140-5to identify which connectors are used with the endpoint identifier payroll.contoso.com. In this way, the zero-trust service126can obtain a trusted IP address for an endpoint identifier as a result of polling secure connection connectors.
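The connector polling just described, in which the zero-trust service asks deployed connectors which of them can serve a private endpoint identifier such as payroll.contoso.com, could be sketched roughly as below. The connector interface, the cache (caching is discussed further below), and the method names are illustrative assumptions.

```python
# Hypothetical polling of secure connection connectors for a private endpoint
# identifier that public DNS cannot resolve.
from typing import Optional, Protocol

class Connector(Protocol):
    connector_id: str
    def lookup(self, endpoint_identifier: str) -> Optional[str]:
        """Return an IP address if this connector's network hosts the endpoint, else None."""
        ...

class PrivateEndpointResolver:
    def __init__(self, connectors: list[Connector]):
        self._connectors = connectors
        self._cache: dict[str, str] = {}   # endpoint identifier -> trusted IP address

    def resolve_private(self, endpoint_identifier: str) -> Optional[str]:
        if endpoint_identifier in self._cache:
            return self._cache[endpoint_identifier]
        for connector in self._connectors:
            ip = connector.lookup(endpoint_identifier)
            if ip is not None:
                self._cache[endpoint_identifier] = ip   # reused for later zero-trust clients
                return ip
        return None   # no connector serves this identifier
```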
In some such embodiments, the zero-trust service126can send an IP address corresponding to the private internet connected endpoint to the zero-trust client104on the local machine102. This may be done using the side channel130or other appropriate channel. The zero-trust service126can then receive the same IP address corresponding to the private internet connected endpoint from the zero-trust client104on the local machine102correlated to the endpoint identifier. The zero-trust service uses the IP address corresponding to the private internet connected endpoint as a trusted IP address. In some embodiments, the zero-trust service126can cache the IP address corresponding to the private internet connected endpoint for subsequent communication with zero-trust clients sending the endpoint identifier corresponding to the private internet connected endpoint. In some embodiments, the zero-trust service126may be configured to poll all known connectors for all known private internet connected endpoints and corresponding IP addresses. The IP addresses can be cached and used as appropriate. In particular, embodiments may include functionality for polling secure connection connectors implemented at a plurality of internet connected entities hosting internet connected endpoints to determine what suffixes of endpoint identifiers are supported by the internet connected entities. Information can then be provided to the zero-trust client104identifying what suffixes of endpoint identifiers are supported by the internet connected entities. The following discussion now refers to a number of methods and method acts that may be performed. Although the method acts may be discussed in a certain order or illustrated in a flow chart as occurring in a particular order, no particular ordering is required unless specifically stated, or required because an act is dependent on another act being completed prior to the act being performed. FIG.5is a flowchart of an example method for performing zero-trust domain name resolution. At step510, the method500includes identifying a synthetic IP address for an endpoint identifier for an internet connected endpoint. The synthetic IP address is an IP address that is different from an IP address that is assigned to the endpoint identifier by a trusted DNS service configured to provide globally valid IP addresses. At step520, in response to the request for an IP address corresponding to the endpoint identifier, the method500includes providing the synthetic IP address for the endpoint identifier to the application. At step530, the method includes receiving data traffic at the zero-trust client, from the application directed to the internet connected endpoint. The data traffic is associated with the synthetic IP address by the application. At step540, the method includes sending the data traffic to a zero-trust service with the synthetic IP address. At step550, the method includes sending the endpoint identifier to the zero-trust service in a fashion that allows the synthetic IP address to be correlated to the endpoint identifier at the zero-trust service. The method may also include receiving at the local machine, response data traffic in response to the data traffic. The response data traffic is associated with the synthetic IP address. In such embodiments, the method500may also include using the synthetic IP address, providing the response data traffic to the application. The method500may further include the zero-trust client evaluating policy. 
In such embodiments, the step of the zero-trust client identifying the synthetic IP address is performed as a result of the endpoint identifier meeting a particular condition of the policy. Additionally, in some such embodiments, the method500may further include receiving information from the zero-trust service identifying what suffixes of endpoint identifiers are supported by internet connected entities accessible to the zero-trust service. This information is used to evaluate the policy. The method500may be practiced where sending the endpoint identifier to the zero-trust service comprises sending the endpoint identifier to the zero-trust service on a side channel. The method500may further include the zero-trust client storing a static correlation of endpoint identifiers with synthetic IP addresses. The method500may further include the zero-trust client obtaining a DNS provided IP address using the endpoint identifier. The zero-trust client then provides the DNS provided IP address to the zero-trust service. FIG.6is a flowchart of an example method for performing zero-trust domain name resolution. At step610, the method600includes receiving data traffic and a synthetic IP address from a zero-trust client at a local machine. The synthetic IP address is an IP address that is different from an IP address that is assigned to an endpoint identifier by a trusted DNS service configured to provide globally valid IP addresses. At step620, the method600includes receiving the endpoint identifier for an internet connected endpoint in a fashion that allows the synthetic IP address to be correlated to the endpoint identifier. At step630, the method600includes obtaining a trusted IP address for the endpoint identifier. At step640, the method600includes, using the trusted IP address, sending the data traffic to the internet connected endpoint. At step650, the method600includes receiving response data traffic from the internet connected endpoint. At step660, the method600includes providing the response data traffic to the local machine with the response data traffic correlated to the synthetic IP address. The method600may be practiced where receiving the data traffic and the synthetic IP address is performed using an encrypted tunnel. The method600may be practiced where receiving the endpoint identifier for the internet connected endpoint is performed using a side channel ancillary to a channel used for receiving the data traffic and the synthetic IP address. The method600may be practiced where providing the response data traffic to the local machine with the response data traffic correlated to the synthetic IP address is performed using an encrypted tunnel. The method600may further include receiving a DNS provided IP address correlated to the internet connected endpoint from the zero-trust client104. In this example, using the trusted IP address comprises using the DNS provided IP address. The method600may further include receiving a DNS provided IP address correlated to the internet connected endpoint from a plurality of different zero-trust clients at different local machines. Based on receiving a DNS provided IP address from the plurality of different zero-trust clients at different local machines, the method600may further include determining that the DNS provided IP address is trusted. In this case, using the trusted IP address comprises using the DNS provided IP address.
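The corroboration across clients referred to above, where an address reported independently by several zero-trust clients (optionally weighted by how reliable the reporting DNS sources are perceived to be) comes to be treated as trusted, might be sketched as a simple voting scheme. The weights, threshold, report format, and function name are assumptions for illustration.

```python
# Hypothetical corroboration of DNS provided IP addresses reported by many
# zero-trust clients. Each report carries the address and the DNS source that
# supplied it; more reports (and more reputable sources) increase confidence.
from collections import defaultdict
from typing import Optional

SOURCE_WEIGHT = {"Google Public DNS": 1.0, "Cloudflare": 1.0, "unknown": 0.3}

def choose_trusted_ip(reports: list[tuple[str, str]], threshold: float = 2.0) -> Optional[str]:
    """reports: (ip_address, dns_source) pairs from different zero-trust clients."""
    if not reports:
        return None
    scores: dict[str, float] = defaultdict(float)
    for ip, source in reports:
        scores[ip] += SOURCE_WEIGHT.get(source, SOURCE_WEIGHT["unknown"])
    best_ip, best_score = max(scores.items(), key=lambda kv: kv[1])
    return best_ip if best_score >= threshold else None   # None -> not yet trusted

# Example: three clients, two distinct resolvers, agree on the same address.
reports = [("20.84.181.62", "Google Public DNS"),
           ("20.84.181.62", "Cloudflare"),
           ("20.84.181.62", "unknown")]
assert choose_trusted_ip(reports) == "20.84.181.62"
```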
The method600may further include receiving DNS provided IP addresses correlated to the internet connected endpoint from a plurality of different zero-trust clients at different local machines. This embodiment may further include receiving information identifying sources of the DNS provided IP addresses. The information identifying sources of the DNS provided IP addresses is used to identify the trusted IP address for the internet connected endpoint. Embodiments then determine that the DNS provided IP address is trusted, based on receiving a DNS provided IP address from the plurality of different zero-trust clients at different local machines. As such, using the trusted IP address comprises using the DNS provided IP address. The method600may be practiced where the internet connected endpoint is a private internet connected endpoint. In some such embodiments, the method600may further include polling secure connection connectors implemented at a plurality of internet connected entities hosting internet connected endpoints for an IP address corresponding to the private internet connected endpoint. In some such embodiments, obtaining the trusted IP address for the endpoint identifier is accomplished as a result of polling the secure connection connectors. Some such embodiments may further include sending the IP address corresponding to the private internet connected endpoint to the zero-trust client on the local machine. The IP address corresponding to the internet connected endpoint is received from the zero-trust client on the local machine correlated to the endpoint identifier. The IP address corresponding to the internet connected endpoint is used as the trusted IP address. The method may further include caching the IP address corresponding to the private internet connected endpoint for subsequent communication with zero-trust clients sending the endpoint identifier corresponding to the private internet connected endpoint. The method600may further include polling secure connection connectors implemented at a plurality of internet connected entities hosting internet connected endpoints to determine what suffixes of endpoint identifiers are supported by the internet connected entities. Information is provided to the zero-trust client identifying what suffixes of endpoint identifiers are supported by the internet connected entities. Further, the methods may be practiced by a computer system including one or more processors and computer-readable media such as computer memory. In particular, the computer memory may store computer-executable instructions that when executed by one or more processors cause various functions to be performed, such as the acts recited in the embodiments. Embodiments of the present invention may comprise or utilize a special purpose or general-purpose computer including computer hardware, as discussed in greater detail below. Embodiments within the scope of the present invention also include physical and other computer-readable media for carrying or storing computer-executable instructions and/or data structures. Such computer-readable media can be any available media that can be accessed by a general purpose or special purpose computer system. Computer-readable media that store computer-executable instructions are physical storage media. Computer-readable media that carry computer-executable instructions are transmission media.
Thus, by way of example, and not limitation, embodiments of the invention can comprise at least two distinctly different kinds of computer-readable media: physical computer-readable storage media and transmission computer-readable media. Physical computer-readable storage media includes RAM, ROM, EEPROM, CD-ROM or other optical disk storage (such as CDs, DVDs, etc.), magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store desired program code means in the form of computer-executable instructions or data structures and which can be accessed by a general purpose or special purpose computer. A “network” is defined as one or more data links that enable the transport of electronic data between computer systems and/or modules and/or other electronic devices. When information is transferred or provided over a network or another communications connection (either hardwired, wireless, or a combination of hardwired or wireless) to a computer, the computer properly views the connection as a transmission medium. Transmission media can include a network and/or data links which can be used to carry desired program code means in the form of computer-executable instructions or data structures and which can be accessed by a general purpose or special purpose computer. Combinations of the above are also included within the scope of computer-readable media. Further, upon reaching various computer system components, program code means in the form of computer-executable instructions or data structures can be transferred automatically from transmission computer-readable media to physical computer-readable storage media (or vice versa). For example, computer-executable instructions or data structures received over a network or data link can be buffered in RAM within a network interface module (e.g., a “NIC”), and then eventually transferred to computer system RAM and/or to less volatile computer-readable physical storage media at a computer system. Thus, computer-readable physical storage media can be included in computer system components that also (or even primarily) utilize transmission media. Computer-executable instructions comprise, for example, instructions and data which cause a general purpose computer, special purpose computer, or special purpose processing device to perform a certain function or group of functions. The computer-executable instructions may be, for example, binaries, intermediate format instructions such as assembly language, or even source code. Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the features or acts described above. Rather, the described features and acts are disclosed as example forms of implementing the claims. Those skilled in the art will appreciate that the invention may be practiced in network computing environments with many types of computer system configurations, including personal computers, desktop computers, laptop computers, message processors, hand-held devices, multi-processor systems, microprocessor-based or programmable consumer electronics, network PCs, minicomputers, mainframe computers, mobile telephones, PDAs, pagers, routers, switches, and the like.
The invention may also be practiced in distributed system environments where local and remote computer systems, which are linked (either by hardwired data links, wireless data links, or by a combination of hardwired and wireless data links) through a network, both perform tasks. In a distributed system environment, program modules may be located in both local and remote memory storage devices. Alternatively, or in addition, the functionality described herein can be performed, at least in part, by one or more hardware logic components. For example, and without limitation, illustrative types of hardware logic components that can be used include Field-programmable Gate Arrays (FPGAs), Application-specific Integrated Circuits (ASICs), Application-specific Standard Products (ASSPs), System-on-a-chip systems (SOCs), Complex Programmable Logic Devices (CPLDs), etc. The present invention may be embodied in other specific forms without departing from its characteristics. The described embodiments are to be considered in all respects only as illustrative and not restrictive. The scope of the invention is, therefore, indicated by the appended claims rather than by the foregoing description. All changes which come within the meaning and range of equivalency of the claims are to be embraced within their scope. | 45,991 |
11943196 | DETAILED DESCRIPTION The DNS resolvers herein mitigate adverse effects caused by domain hijacking. When a domain name is hijacked a computing system requesting a network address (e.g., an Internet Protocol (IP) address) associated with the domain name may not receive such a network address. Rather, a nameserver that has hijacked the domain name provides a network address to a computing system associated with the hijacking nameserver. That computing system may be malicious and, when contacted by the requesting computing system, may infect the requesting computing system with malware, may attempt a phishing attack on a user of the requesting system, or may perform some other undesired function. For example, the malicious computing system may spoof a website associated with the domain name. Then, when a user of the requesting computing system enters information (e.g., credit card information, personal information, etc.) into the spoofed website, the malicious computing system receives that information instead of the computing systems for the real website. If the requesting computing system is never provided with a network address from a nameserver that has hijacked a domain name, then the requesting computing system will not be compromised by communicating with a malicious computing server at that network address. As detailed below, the DNS resolvers herein characterize properly associated nameservers (i.e., the nameservers that a domain name owner intends to be associated with the domain name) so that DNS requests do not get resolved using a nameserver that the DNS resolvers suspect of domain hijacking. When the DNS request does not get resolved for the requesting computing system, the requesting computing system cannot contact a network address associated with a domain name in the DNS request. The requesting computing system is, therefore, prevented from accessing a potentially malicious computing system due to domain hijacking. FIG.1illustrates implementation100for detecting domain hijacking during DNS lookup. Implementation100includes DNS resolver101, requesting computing system102, and nameservers103. DNS resolver101and requesting computing system102communicate over communication link111. DNS resolver101and nameservers103communicate over communication links112. Communication links111-112are shown as direct links but may include intervening systems, networks, and/or devices. In operation, DNS resolver101is a computing system that resolves network addresses associated with domain names on behalf of requesting systems, such as requesting computing system102. Nameservers103are nameservers of the global DNS and include records, commonly referred to as A records, of network addresses associated with respective domain names. For redundancy, at least two of nameservers103are typically used to keep network address records for any given domain name. A conventional DNS resolver simply identifies and retrieves a network address from one of the nameservers associated with a domain name in a DNS request to resolve the DNS request. In contrast, if DNS resolver101determines that an identified nameserver is suspect (e.g., may be malicious or otherwise not actually intended to be used for the domain name by the domain name's owner), then DNS resolver101does not resolve the DNS request using the suspect nameserver. 
By not resolving the DNS request, DNS resolver101prevents the computing system that sent the request from unknowingly being directed to a computing system not actually associated with the domain name (e.g., a malicious web server). FIG.2illustrates operation200to detect domain hijacking during DNS lookup. During operation200, DNS resolver101receives a DNS request from requesting computing system102(201). The DNS request includes a domain name for which requesting computing system102is requesting a corresponding network address. Requesting computing system102may be a user system (e.g., telephone, laptop, personal computer, tablet computer, or some other type of user operable computing system) but may also be any other type of computing system capable of using DNS requests to request network addresses associated with domain names. Requesting computing system102may address the DNS request directly to DNS resolver101or, if the DNS request is directed to a different DNS resolver (e.g., a DNS resolver in an internet service provider of requesting computing system102), then an intermediate system (e.g., network firewall) may redirect the DNS request to DNS resolver101. The DNS request may be transferred to DNS resolver101in response to a user of requesting computing system102directing a web browser application executing on requesting computing system102to a particular website using a domain name for that website (e.g., the user may enter a Uniform Resource Locator (URL) with the domain name into the web browser). Other reasons that requesting computing system102may submit a DNS request also exist, however (e.g., a database system may be identified to an application on requesting computing system102using a URL with a domain name). Upon receiving the DNS request from requesting computing system102, DNS resolver101determines that a nameserver for the domain name is suspect based on satisfaction of nameserver criteria associated with the domain name (202). The nameserver criteria defines whether characteristics of a particular nameserver indicate that the nameserver is suspect or is likely a proper nameserver for the domain name. The nameserver criteria may be based on characteristics such as a name/identifier of a nameserver (e.g., ns1.hostingservice.com), a hosting service associated with the nameserver (which is many times indicated in the nameserver's name), a geographic location of the nameserver, an amount of time that the nameserver has been associated with the domain name, or any other type of information that DNS resolver101may be able to determine about the nameserver—including combinations thereof. The nameserver criteria may indicate characteristics that are allowed (i.e., are not suspect) and/or characteristics that are not allowed (i.e., are suspect). For example, the nameserver criteria may list hosting services known to be associated with the domain name. If the nameserver identified by DNS resolver101is a nameserver for a hosting service not in that list of hosting services, then DNS resolver101determines that the nameserver is suspect. In another example, the nameserver criteria may list geographic locations (e.g., countries) that are not allowed. If the nameserver identified by DNS resolver101is in one of those locations, then DNS resolver101determines that the nameserver is suspect. In some cases, the nameserver criteria may indicate combinations of characteristics that are or are not allowed.
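A compact sketch of per-domain nameserver criteria along the lines just described, with an allowed hosting-service list, a blocked-location list, and a per-location exception, is shown below, together with one possible handling that simply skips suspect nameservers and leaves the request unresolved if none remain. The field names, class names, and placeholder query function are assumptions rather than details taken from the embodiments.

```python
# Hypothetical per-domain nameserver criteria and the suspect check applied
# before a nameserver is allowed to resolve a DNS request.
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class NameserverInfo:
    name: str              # e.g., "ns1.hostingservice.com"
    hosting_service: str
    location: str          # e.g., country derived from the nameserver's IP address

@dataclass
class NameserverCriteria:
    allowed_hosting_services: set[str] = field(default_factory=set)
    blocked_locations: set[str] = field(default_factory=set)
    location_exceptions: dict[str, set[str]] = field(default_factory=dict)  # location -> allowed services

    def is_suspect(self, ns: NameserverInfo) -> bool:
        if ns.location in self.blocked_locations:
            # Nameservers in a blocked location are suspect unless run by an excepted service.
            return ns.hosting_service not in self.location_exceptions.get(ns.location, set())
        return ns.hosting_service not in self.allowed_hosting_services

def resolve(domain: str, nameservers: list[NameserverInfo],
            criteria: NameserverCriteria) -> Optional[str]:
    for ns in nameservers:
        if criteria.is_suspect(ns):
            continue                       # never contact a suspect nameserver
        return query_a_record(ns, domain)  # placeholder for the actual DNS query
    return None                            # unresolved: the request times out or is refused

def query_a_record(ns: NameserverInfo, domain: str) -> str:
    raise NotImplementedError("stand-in for querying the nameserver's A record")
```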
For example, the nameserver criteria may indicate that all nameservers in a particular geographic location are suspect except for nameservers associated with a particular hosting service (e.g., nameservers operated by the hosting service). The nameserver criteria associated with the domain in this example may be different from the nameserver criteria associated with another domain name because characteristics that may indicate a nameserver for one domain name is suspect may not be the same for another domain name. For instance, a nameserver associated with a hosting service may not be suspect for a first domain name since a website for the first domain name uses the hosting service. However, that same nameserver may be suspect for a second domain name since a website for the second domain name uses a different hosting service. In other examples, the nameserver criteria may be defined such that it is applicable to (and thereby associated with) multiple, or even all, domain names that DNS resolver501is configured to resolve. After DNS resolver101determines that the nameserver is suspect, DNS resolver101prevents the suspect nameserver from being used to resolve the request in response to determining that the nameserver is suspect (203). To prevent the suspect nameserver from being used to resolve the request, DNS resolver101may simply not contact the nameserver to request the network address associated with the domain name. Since DNS resolver101did not contact the nameserver, DNS resolver101will not have a network address for transfer to requesting computing system102to resolve the DNS request from requesting computing system102. In that case, DNS resolver101may send a message to requesting computing system102indicating that the DNS request could not be resolved or may simply allow the DNS request to time out at requesting computing system102. In some examples, DNS resolver101may attempt to identify one or more other nameservers associated with the domain name. If one of those other nameservers is not determined to be suspect, then DNS resolver101may request the network address from the other nameserver and resolve the DNS request to requesting computing system102using the network address from the other nameserver. Alternatively, DNS resolver101may assume all nameservers associated with the domain name are suspect in response to one nameserver being suspect and not attempt to request the network address from any of the nameservers. Advantageously, requesting computing system102never receives a network address that was provided by a nameserver that DNS resolver101determined to be suspect. Requesting computing system102, consequently, is never able to contact the computing system having a network address that would have been provided by the suspect nameserver. Any potentially adverse effects from contacting that computing system are thereby avoided. FIG.3illustrates operational scenario300for detecting domain hijacking during DNS lookup. Operational scenario300is an example of how the nameserver criteria discussed above is generated. DNS resolver301is an example of DNS resolver101and nameservers303are examples of nameservers103. Administrator system304is a user system (e.g., laptop, desktop workstation, telephone, etc.) that, in this example, is operated by an administrative user for DNS resolver301.
While DNS resolver301performs the steps to generate the nameserver criteria in this example, another system(s) may perform the steps in other examples and then provide the nameserver criteria for use by DNS resolver301to perform operation200. In operational scenario300, DNS resolver301receives domain name whitelist321at step1from administrator system304. The administrative user operating administrator system304may be an employee of an enterprise that is tasked with protecting computing systems in the enterprise. Domain name whitelist321is a list of domain names that the administrator believes computing systems using DNS resolver301(e.g., computing systems within the enterprise) to resolve DNS requests should be allowed to contact. For example, domain name whitelist321may list websites that are appropriate for visitation by users. Websites not on domain name whitelist321may simply be blocked by a firewall, or otherwise not allowed to be accessed by computing systems under the control of the administrative user. As such, the number of domain names for which DNS resolver301needs to generate nameserver criteria is limited by domain name whitelist321. For each domain in domain name whitelist321, DNS resolver301performs a WHOIS lookup at step2. A WHOIS lookup for nameservers will return names of the nameservers that are currently assigned to provide a network address for the domain name that is the subject of the WHOIS lookup. After identifying the names of the nameservers for a domain name, DNS resolver301then determines characteristics of proper nameservers for the domain name at step3. A proper nameserver is a nameserver that is actually supposed to be providing network addresses corresponding to the domain name (e.g., a nameserver that the domain name's owner registered). When first generating nameserver criteria, DNS resolver301may simply assume that all nameservers returned by the WHOIS lookup for a particular domain name are proper. Alternatively, DNS resolver301may use initial criteria and/or query a user, such as the administrator operating administrator system304, to determine whether the nameservers are proper. In the initial criteria examples, the initial criteria may be preinstalled into DNS resolver301, may be provided by a user, or may be obtained from some other source. The initial criteria may include generally applicable characteristics that indicate a nameserver is or is not suspect (e.g., may include nameserver names that are well known and recognized as being proper, or as being suspect, across a wide range of domain names). The nameserver characteristics for a domain name are then determined for the proper nameservers for the domain name. The characteristics may be implicit or explicit in the nameserver name itself (e.g., the name may indicate a hosting service) or DNS resolver301may query other systems, including the nameserver itself, to determine characteristics of a nameserver. For example, DNS resolver301may determine an IP address of a nameserver to determine a geographic location of the nameserver based on the IP address. DNS resolver301generates nameserver criteria for each domain name at step4from the characteristics of the proper nameservers. The nameserver criteria may build upon the initial criteria or may be independent of the initial criteria. 
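Steps 2 through 4 above amount to looking up the current nameservers for each whitelisted domain name, treating them (at least on the first iteration) as proper, and folding their characteristics into per-domain criteria. A rough, self-contained sketch is shown below; the WHOIS lookup and the derivation of a hosting service from a nameserver name are represented by placeholder functions, and the criteria structure is an assumption.

```python
# Rough sketch of nameserver-criteria generation for each whitelisted domain name.
from dataclasses import dataclass, field

@dataclass
class DomainCriteria:
    allowed_hosting_services: set[str] = field(default_factory=set)

def lookup_nameservers(domain: str) -> list[str]:
    raise NotImplementedError("stand-in for a WHOIS lookup of the domain's current nameservers")

def hosting_service_of(ns_name: str) -> str:
    # Many nameserver names embed the hosting service, e.g. "ns1.hostingservice.com".
    return ".".join(ns_name.split(".")[-2:])

def build_criteria(domain_whitelist: list[str]) -> dict[str, DomainCriteria]:
    criteria_by_domain: dict[str, DomainCriteria] = {}
    for domain in domain_whitelist:
        criteria = criteria_by_domain.setdefault(domain, DomainCriteria())
        # First iteration: assume the currently registered nameservers are proper,
        # so their characteristics become allowed characteristics for this domain.
        for ns_name in lookup_nameservers(domain):
            criteria.allowed_hosting_services.add(hosting_service_of(ns_name))
        # Rerunning this periodically folds in legitimate nameserver changes over time.
    return criteria_by_domain
```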
This first iteration of nameserver criteria may be based on a single WHOIS lookup for each domain name or DNS resolver301may continue to perform WHOIS lookups over time to capture characteristics of other nameservers for a domain name since nameservers may change over time (e.g., a domain name owner may choose to switch hosting services and, thereby, switch nameservers accordingly). In some examples, characteristics of nameservers not identified for a particular domain name (e.g., those identified for other domain names) may be used when generating the nameserver criteria for the domain name. For example, if a particular nameserver is known to be universally malicious, then characteristics of that nameserver may be included in the nameserver criteria as a suspect nameserver. In some examples, steps2-4may repeat periodically to update the nameserver criteria for each domain name in domain name whitelist321. That is, at step4, rather than generating completely new criteria, DNS resolver301may update the nameserver criteria that already exists for the domain name. In further examples, a machine learning algorithm may be employed as the nameserver criteria. The machine learning algorithm may be a neural network that automatically learns to create its own nameserver criteria for a domain name. In those examples, the differentiation of nameserver criteria between domain names may be blurred because the algorithm may treat the domain name itself as one of the input factors. For example, the algorithm may be fed training input indicating various domains, such as those in domain name whitelist321, along with nameserver characteristics indicating whether each set of characteristics is for a suspect or proper nameserver for the respective domains. As more and more input sets are provided to the algorithm, the algorithm automatically learns more and more about which nameservers are proper or suspect for any given domain name. In some cases, the algorithm may be trained using domain names that are not included in domain name whitelist321to further add to its robustness. Once the algorithm has been trained on a sufficient number of domain names and nameserver combinations, DNS resolver301can provide the algorithm with a domain name and nameserver characteristics of a nameserver and the algorithm will output whether the nameserver is suspect. FIG.4illustrates operational scenario400for detecting domain hijacking during DNS lookup. In operational scenario400, DNS resolver401is an example of DNS resolver101and nameservers403are examples of nameservers103. Operational scenario400is an example of how DNS resolver401determines when nameserver criteria should be updated for a given domain name. In operational scenario400, DNS resolver401identifies nameservers431-433from nameservers403at step1as being the current nameservers for domain name411. For example, DNS resolver401may perform a WHOIS lookup to identify nameservers431-433as part of step2from operational scenario300or may perform the lookup at some other time. DNS resolver401then determines whether nameservers431-433are proper nameservers for domain name411at step2based on nameserver criteria (or the initial criteria discussed above if nameserver criteria has yet to be generated) for domain name411. Once nameservers431-433are determined to be proper, DNS resolver401generates nameserver hash421at step3from nameservers431-433. Nameserver hash421is a unique hash (i.e., number or character string) that is the output of a hash function and that would be different if one or more of nameservers431-433changed.
The hash function may receive nameserver names, network addresses, and/or some other unique identifier for nameservers431-433as input. DNS resolver401uses nameserver hash421in place of maintaining a record of nameservers431-433being associated with the domain name to determine when one or more of the nameservers for the domain name has changed. While the resources needed to maintain/process a record of nameservers associated with a single domain name are minimal, the resources needed to maintain/process records of nameservers for many domain names are not negligible. Instead, DNS resolver401stores a simple hash, like nameserver hash421, associated with each domain name. If the hash changes from one time to the next, at least one of the nameservers used to generate the hash has also changed. In this example, at a later time, DNS resolver401identifies nameservers431-432,434from nameservers403at step4as being the current nameservers for domain name411. Step4may be performed in response to a DNS request for domain name411or may be performed at some other time (e.g., during a periodic update of nameserver information for domain name411). Nameserver hash422is then generated at step5for nameservers431-432,434in the same way nameserver hash421was generated for nameservers431-433. Since nameserver434replaced nameserver433as a nameserver for domain name411, nameserver hash421does not match nameserver hash422, as DNS resolver401determines at step6. Accordingly, DNS resolver401determines whether any of nameservers431-432,434are suspect at step7based on nameserver criteria for domain name411. If none of nameservers431-432,434are suspect, then nameserver hash422replaces nameserver hash421as the hash to which subsequently generated hashes of nameservers are compared. In some cases, since at least one of nameservers431-432,434is new (i.e., nameserver434in this case), DNS resolver401may also update the nameserver criteria for domain name411using characteristics of nameserver434. If nameserver434is determined to be a suspect nameserver in this example, then DNS resolver401may still use nameserver hash422for future comparisons, although, in this case, DNS resolver401will be looking for a change in nameservers so that it can determine whether all suspect nameservers have been removed. DNS resolver401may also update the nameserver criteria for domain name411based on nameserver434being a suspect nameserver. It should be understood that, while the examples above include DNS resolvers (i.e.,101,301, and401) performing the various operation steps, other systems may perform at least some of the steps instead. For instance, a nameserver validation system in communication with a DNS resolver may be queried by the DNS resolver to determine whether a nameserver identified by the DNS resolver is suspect. That nameserver validation system may also perform the steps necessary to generate nameserver criteria that is used to determine whether the nameserver indicated in the query is suspect. In another example, an intermediate system, such as a firewall, may be located on the data path for DNS requests. The intermediate system may determine a nameserver from which a DNS response to a DNS request is received and query the nameserver validation system about whether the nameserver is suspect. If the nameserver validation system indicates to the intermediate system that the nameserver is suspect, then the intermediate system may block the DNS response from reaching the requesting system.
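For illustration purposes only, the hash-based change detection of operational scenario400described above could be sketched in Python as follows; the storage dictionary, the choice of SHA-256, and the use of nameserver names as the hash input are assumptions made for the example.

# Sketch of scenario 400: keep one hash per domain rather than the full list of
# nameservers, and recompute it to notice that a nameserver was added, removed,
# or replaced. The dict-based storage and SHA-256 choice are assumptions.

import hashlib

stored_hashes = {}  # domain name -> most recently accepted nameserver hash

def nameserver_hash(nameservers):
    # Sort so the hash does not depend on the order the nameservers are returned.
    joined = "|".join(sorted(ns.lower() for ns in nameservers))
    return hashlib.sha256(joined.encode("utf-8")).hexdigest()

def nameservers_changed(domain, current_nameservers):
    new_hash = nameserver_hash(current_nameservers)
    if stored_hashes.get(domain) == new_hash:
        return False                  # same set as before; no re-check needed
    stored_hashes[domain] = new_hash  # steps 5-6: remember the new hash
    return True                       # at least one nameserver changed; step 7 follows

Similarly, the machine-learning form of the nameserver criteria described above with reference to operational scenario300could be sketched as a binary classifier over domain and nameserver features, under the assumption that a conventional classifier stands in for the neural network mentioned above; the scikit-learn estimator, the feature hashing, and the training-row layout are illustrative assumptions.

# Sketch of learned nameserver criteria: a classifier over (domain name,
# nameserver characteristics) pairs. scikit-learn, the feature hashing, and the
# training-data layout are illustrative assumptions, not the required design.

from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction import FeatureHasher

hasher = FeatureHasher(n_features=256, input_type="dict")
model = RandomForestClassifier(n_estimators=100)

def featurize(domain, ns_chars):
    # The domain name itself is treated as one of the input factors.
    return {"domain=" + domain: 1.0,
            "ns_location=" + ns_chars["location"]: 1.0,
            "ns_host=" + ns_chars["hosting_service"]: 1.0}

def train(samples):
    # samples: iterable of (domain, ns_characteristics, is_suspect) training rows
    samples = list(samples)
    X = hasher.transform(featurize(d, c) for d, c, _ in samples)
    y = [label for _, _, label in samples]
    model.fit(X, y)

def predict_suspect(domain, ns_chars):
    return bool(model.predict(hasher.transform([featurize(domain, ns_chars)]))[0])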
FIG.5illustrates implementation500for detecting domain hijacking during DNS lookup. Implementation500includes DNS resolver501, requesting computing system502, root nameserver503, Top Level Domain (TLD) nameserver504, Second Level Domain (SLD) nameserver505, firewall506, and Internet507. In this example, requesting computing system502is a user system operated by a user and is located behind firewall506from Internet507. Firewall506regulates network traffic exchanged with requesting computing system502. While shown separately, requesting computing system502may incorporate firewall506(e.g., via software, a hardware module on a networking card, or otherwise). Though not shown, requesting computing system502may be on a local area network (LAN) and firewall506may regulate network traffic for computing systems on the LAN in addition to requesting computing system502. Root nameserver503, TLD nameserver504, and SLD nameserver505are each different types of nameservers that are found in the global DNS accessible over Internet507. Nameservers503-505are three of many nameservers included in the global DNS. Specifically, nameservers503-505are those of the global DNS nameservers that are relevant to the domain name of operational scenario600described below. It should be understood that other nameservers may be involved in scenarios for different domain names. In fact, since two or more SLD nameservers are typically used for any given domain name, it is possible that a nameserver other than SLD nameserver505may be used instead for the same domain name from operational scenario600. Sub-domain nameservers also exist in the global DNS hierarchy below SLD nameservers, but the example below does not include a sub-domain. FIG.6illustrates operational scenario600for detecting domain hijacking during DNS lookup. In operational scenario600, requesting computing system502is a user system, such as a laptop, desktop workstation, tablet computer, telephone, or other type of user operated device. The user of requesting computing system502inputs a URL for a website at step1into a web browser application executing on requesting computing system502. The URL may be manually input into requesting computing system502using a keyboard of requesting computing system502(e.g., a physical keyboard or a soft keyboard displayed on a touchscreen), the user may use a selection device of requesting computing system502(e.g., a touchscreen, mouse, or the like) to identify a link having the URL, or the user may identify the URL to requesting computing system502via some other type of input. To determine the IP address of a web server hosting the website, the web browser directs requesting computing system502to transfer DNS request601at step2to a DNS resolver. DNS request601requests that a DNS resolver determine (i.e., resolve) a network address associated with the domain name in the URL. For instance, if the URL is http://www.examplesite.com, then the domain name in the URL is “examplesite.com”. DNS request601may be explicitly addressed to DNS resolver501or, as is the case in this example, DNS request601is directed to a different DNS resolver, such as the DNS resolver of an ISP for requesting computing system502. Network traffic exchanged with requesting computing system502passes through firewall506, which allows firewall506to receive the packets carrying DNS request601on its path to the addressed DNS resolver.
Firewall506analyzes the packets to recognize DNS request601being carried in the packets and determines at step3that DNS request601should be redirected to DNS resolver501instead. Firewall506may be configured to redirect all DNS requests to DNS resolver501or may use rules/criteria to determine whether certain DNS requests should be redirected. For instance, a rule may direct firewall506to only redirect DNS requests from certain systems within the LAN protected by firewall506. Since firewall506determined that DNS request601should be redirected, firewall506transfers DNS request601to DNS resolver501at step4for servicing. Upon receiving DNS request601, DNS resolver501determines whether an IP address for the domain name therein is cached and, if so, whether a time-to-live (TTL) associated with the domain has elapsed. Caching IP addresses for domain names allows DNS resolver501to more quickly respond to DNS requests (and use fewer resources) with cached addresses rather than having to reach out to nameservers in the global DNS for the IP addresses, especially when DNS resolver501is handling DNS requests from many different requesting computing systems. Since the IP addresses associated with domain names can change, a TTL is associated with the IP addresses for each domain to minimize the chance that an incorrect IP address for a web server is provided from the cache in response to a DNS request. In this example, the TTL for domain names having nameserver criteria used by DNS resolver501is set to a value representing a very short amount of time (e.g., less than 10 minutes rather than a more typical 12-24 hours). The value may even be set to zero so that DNS resolver501never provides an IP address from the cache. The short TTL ensures that DNS resolver501is more frequently determining whether a domain name has been hijacked based on a nameserver(s) providing an IP address(es) for the domain name. The nameserver criteria used by DNS resolver501to identify whether a domain's nameservers are suspect changes over time. As such, a nameserver that previously provided an IP address for a domain name may later be determined to be suspect by DNS resolver501. It would not be desirable for DNS resolver501to simply provide that IP address out of a cache due to the IP address having been provided by a nameserver that has since been determined to be suspect. In this example, DNS resolver501determines that the TTL for the domain name in DNS request601has expired at step5and determines to obtain an IP address associated with the domain name from the global DNS. To obtain an IP address from the global DNS, DNS resolver501performs TLD query602with root nameserver503at step6to identify a TLD nameserver for the TLD of the domain name (e.g., .com, .net, .org, .uk, .au, etc.). There are relatively few root nameservers and root nameservers are assigned network addresses that do not change, or at least change very rarely, which allows DNS resolver501to store and use those addresses when resolving a domain name. During TLD query602, DNS resolver501requests a TLD nameserver for the domain name in DNS request601(e.g., a TLD nameserver for .com in the domain name “examplesite.com”) and root nameserver503provides the information (e.g., name and IP address) for TLD nameserver504in response. DNS resolver501then uses the information provided by root nameserver503to contact TLD nameserver504and perform SLD query603with TLD nameserver504at step7to identify a SLD nameserver for the domain name.
During SLD query603, DNS resolver501requests a SLD nameserver for the domain name in DNS request601(e.g., the “examplesite” portion of the domain name “examplesite.com”) and TLD nameserver504provides the information for SLD nameserver505in response. Upon receiving the information for SLD nameserver505, DNS resolver501determines that SLD nameserver505is suspect using nameserver criteria for the domain name in DNS request601. That is, the nameserver criteria indicates to DNS resolver501that the characteristics of SLD nameserver505do not conform to what is expected of a SLD nameserver for the domain name. In response to DNS resolver501determining that SLD nameserver505is suspect, DNS resolver501declines to resolve DNS request601using SLD nameserver505. Though not shown, DNS resolver501may transfer a response to DNS request601back to requesting computing system502indicating that an IP address for the domain name could not be resolved and may also indicate the reason for not resolving DNS request601(i.e., that SLD nameserver505is suspect). In other examples, DNS resolver501may query TLD nameserver504again to check for a different SLD nameserver and then determine whether that other SLD nameserver is suspect based on the nameserver criteria. If the other SLD nameserver is not determined to be suspect, then DNS resolver501reaches out to the other nameserver for an IP address of the web server for the domain name and resolves DNS request601with that IP address. While not resolving DNS request601prevents requesting computing system502from contacting a potentially malicious web server supplied by a suspect nameserver, DNS resolver501may go further to protect requesting computing system502or any other system protected by firewall506. For instance, DNS resolver501may still contact SLD nameserver505for an IP address but, instead of resolving DNS request601with the IP address, creates a rule in firewall506to block network traffic exchanged with the IP address. DNS resolver501may also blacklist SLD nameserver505so that anytime SLD nameserver505is identified while resolving a DNS request, the DNS request is not resolved regardless of what the nameserver criteria indicates. In one example, the nameserver criteria may indicate a confidence level that a nameserver is suspect (e.g., the nameserver has a threshold number of characteristics indicating that the nameserver is suspect). Nameservers that reach a threshold level of confidence for being suspect may be placed on the blacklist. FIG.7illustrates computing architecture700for detecting domain hijacking during DNS lookup. Computing architecture700is an example computing architecture for DNS resolvers101,301,401, and501, although those resolvers may use alternative configurations. A similar architecture may also be used for other systems described herein (e.g., requesting systems, nameservers, etc.), although alternative configurations may also be used. Computing architecture700comprises communication interface701, user interface702, and processing system703. Processing system703is linked to communication interface701and user interface702. Processing system703includes processing circuitry705and memory device706that stores operating software707. Communication interface701comprises components that communicate over communication links, such as network cards, ports, RF transceivers, processing circuitry and software, or some other communication devices.
Communication interface701may be configured to communicate over metallic, wireless, or optical links. Communication interface701may be configured to use TDM, IP, Ethernet, optical networking, wireless protocols, communication signaling, or some other communication format—including combinations thereof. User interface702comprises components that interact with a user. User interface702may include a keyboard, display screen, mouse, touch pad, or some other user input/output apparatus. User interface702may be omitted in some examples. Processing circuitry705comprises microprocessor and other circuitry that retrieves and executes operating software707from memory device706. Memory device706comprises a computer readable storage medium, such as a disk drive, flash drive, data storage circuitry, or some other memory apparatus. In no examples would a storage medium of memory device706be considered a propagated signal. Operating software707comprises computer programs, firmware, or some other form of machine-readable processing instructions. Operating software707includes resolver module708. Operating software707may further include an operating system, utilities, drivers, network interfaces, applications, or some other type of software. When executed by processing circuitry705, operating software707directs processing system703to operate computing architecture700as described herein. In particular, resolver module708directs processing system703to, in response to a request to resolve a network address corresponding to a domain name, determine that a nameserver for the domain name is suspect based on satisfaction of nameserver criteria associated with the domain name. Resolver module708also directs processing system703to prevent the nameserver from resolving the request in response to determining that the nameserver is suspect. The descriptions and figures included herein depict specific implementations of the claimed invention(s). For the purpose of teaching inventive principles, some conventional aspects have been simplified or omitted. In addition, some variations from these implementations may be appreciated that fall within the scope of the invention. It may also be appreciated that the features described above can be combined in various ways to form multiple implementations. As a result, the invention is not limited to the specific implementations described above, but only by the claims and their equivalents. | 33,437 |
11943197 | DETAILED DESCRIPTION The following detailed description refers to the accompanying drawings. Wherever convenient, the same reference numbers are used in the drawings and the following description to refer to the same or similar parts. While several examples of embodiments and features of the present disclosure are described herein, modifications, adaptations, and other implementations are possible, without departing from the spirit and scope of the present disclosure. Accordingly, the following detailed description does not limit the present disclosure. Instead, the proper scope of the disclosure is defined by the appended claims. In a simple global DNS, a DNS resolver can receive DNS resolution requests for the same domain name, such as EXAMPLE.COM, and return the same IP address for each request. This may be an adequate process in some instances when using a global top-level domain (TLD), such as .COM. However, the process may require further navigation to reach a useful resource, and such further navigation takes additional time and results in additional use of processing and networking resources. Additionally, as described above, the list of TLDs is continuously expanding, and various organizations and locations use iTLDs for local traffic. Further, the DNS landscape could expand to include dotless domain names, which are domain names that use a single label (e.g., EXAMPLE), without a “dot” to separate labels in the domain name, to offer service identifiers. The expanding list of TLDs, the use of iTLDs, and/or the use of dotless domains can result in an increased likelihood of domain name collisions, where there is an overlap between namespaces. Domain name collisions can yield unintended or harmful results, such as incorrectly navigating a device to the wrong webpage or other resource, allowing unauthorized users access to secure resources, incorrect or inefficient routing of a DNS resolution request, etc. Accordingly, in some embodiments, a user device and/or a DNS resolver (e.g., an internal or a global DNS resolver) can be configured to determine contextual information about a device and/or user of a device to augment the DNS resolution process. For example, a user device can determine a connection interface to use for transmitting a DNS request (e.g., a cellular data network interface or a WIFI interface), an internal DNS resolver can determine whether to resolve a request locally or to transmit the request to a global resolver, a global DNS resolver can determine how to route the request and/or which resource to return in response to the request, etc. In some implementations, the contextual information can be included with the DNS resolution request (e.g., encoded in the domain name (e.g., in a query name), as metadata, etc.), and a user device can generate the DNS resolution request with the contextual information and/or a resolver can extract the contextual information from a received DNS resolution request. Many devices can determine and/or maintain various contextual information about the device and/or a user of the device, such as the device's location (e.g., using a Global Positioning System (GPS) receiver, a geolocated access point, etc.), user credentials (e.g., a username, password, etc.), device credentials (e.g., a service set identifier (SSID), a certificate, a cryptographic key, a media access control (MAC) address, an IP address, etc.), and the like.
Using the contextual information to augment the DNS resolution process can result in, for example: reduced usage of processing resources, networking resources, and time (e.g., by more efficient resolution request routing and/or by reduced navigation after the initial request); improved resolution results; more secure resolution results, and the like. Further, using the contextual information to augment the DNS resolution process can reduce or, in some instances, eliminate domain name collisions, in particular with regard to dotless domain names. FIG.1is a diagram illustrating an example network environment100that uses DNS, consistent with certain disclosed embodiments. In some embodiments, the environment100can include a network110, a DNS resolver120, an internal network130, an internal network140, and a computer150. In some implementations, the internal network130can include a computer132, an internal DNS resolver134, and a local resource storage136. In further implementations, the internal network140can include a computer142, an internal DNS resolver144, and a local resource storage146. In some embodiments, the network110can represent any type of one or more wired and/or wireless telecommunications networks. For example, the network110can represent the Internet and/or one or more telephone networks, one or more cellular data networks, one or more local area networks (LANs), etc. In some implementations, computing devices, such as, for example, the computer132, the computer142, and the computer150, can connect to, for example, the DNS resolver120via the network110. In further embodiments, the environment100can include a geofence138, associated with the internal network130, and a geofence148, associated with the internal network140. As used herein, a geofence can represent a virtual perimeter for a real-world geographic area. For example, a geofence can be a radius around a specific point in a real-world geographic area, can be a predefined set of boundaries (e.g., around a certain real-world property or building), and the like. In some embodiments, the DNS resolver120can represent one or more computing devices. For example, the DNS resolver120can represent one or more DNS name servers, recursive DNS name servers, domain name registry servers, database servers, web servers, mainframe computers, routers, etc. In some implementations, the DNS resolver120can be a global DNS resolver for one or more global TLDs. As used herein, a DNS resolver (internal, global, etc.) can represent any device that can receive a DNS resolution request and either forward the request to another DNS resolver or resolve the request itself by obtaining a resource (e.g., a DNS record, an IP address, a file, etc.) stored locally or on another device (e.g., a database) in response to the request. Accordingly, a DNS resolver, as used herein, can route received DNS resolution requests and not perform any actual resolutions, can route some received DNS resolution requests and can resolve other DNS resolution requests, or may not route received DNS resolution requests to other resolvers but can resolve received DNS resolution requests. In some embodiments, the DNS resolver120can extract contextual information from DNS resolution requests, and can use the contextual information to determine, for example, how to route the request and/or how to resolve the request. In some implementations, the internal network130can be the internal network of an organization (e.g., a business, an agency, an individual, etc.) or a location.
For example, the internal network130can be one or more LANs that include one or more wired and/or wireless connections. In some embodiments, the internal DNS resolver134can represent one or more computing devices connected to the internal network130. For example, the internal DNS resolver134can represent one or more internal name servers, database servers, web servers, mainframe computers, routers, etc. In some implementations, the internal DNS resolver134can be an internal DNS resolver for the network130. In further embodiments, the internal DNS resolver134can extract contextual information from DNS resolution requests, and can use the contextual information to determine, for example, how to route the request (e.g., resolve locally or route to a global resolver) and/or how to resolve the request. In some implementations, the internal DNS resolver134can resolve DNS resolution requests locally by obtaining resources from the local resource storage136in response to the requests. A resource can be, for example, a file (e.g., a webpage, a document, etc.), a network address of a file or a local service, a DNS record, and the like. The local resource storage136can represent one or more computing devices connected to the internal network130. For example, the local resource storage136can represent one or more user devices (e.g., desktop computers, laptops, mobile devices (e.g., tablet devices, smartphones, and the like), database servers, mainframe computers, etc.). In some implementations, the local resource storage136can be part of the internal DNS resolver134. In some embodiments, the computer132can represent one or more end-user computing devices connected to the internal network130, such as, for example, desktop computers, laptops, mobile devices, etc. The computer132can be connected to the internal network130and can be physically located within the geofence138. In various embodiments, contextual information (e.g., GPS coordinates, signal strength from a wireless access point, etc.) about the computer132and/or a user of computer132can be included with DNS resolution requests from the computer132, and the contextual information can be used by a DNS resolver (e.g., the internal DNS resolver134) to determine how to route and/or resolve the DNS resolution requests. For example, the internal DNS resolver134may resolve DNS resolution requests locally based on determining that the computer132is within geofence138using the contextual information. In some embodiments, the computer132may be able to connect to different networks using different connection interfaces. For example, the computer132may be able to connect to the internal network130using a WIFI interface and may be able to connect to the network110using a cellular data network interface. In further embodiments, the computer132can determine which connection interface to use based on contextual information. For example, the computer132can transmit a DNS resolution request to the internal DNS resolver134via a WIFI interface based on determining that the computer132is within the geofence138. As a further example, if the computer132is moved outside of the geofence138, the computer132can transmit a DNS resolution request to the DNS resolver120via a cellular data network interface based on determining that the computer132is outside of the geofence138, even if a WIFI connection to the internal network130is available.
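By way of illustration only, the geofence test that computer132is described as using when choosing between its WIFI interface and its cellular data network interface could be sketched in Python as follows, treating the geofence as a radius around a point; the coordinates, the radius, and the interface names are placeholder assumptions.

# Sketch of a point-in-geofence test used to pick a connection interface.
# The fence center, radius, and interface names are placeholder assumptions.

import math

def within_geofence(lat, lon, fence_lat, fence_lon, radius_m):
    # Haversine great-circle distance between the device and the fence center.
    earth_radius_m = 6_371_000.0
    p1, p2 = math.radians(lat), math.radians(fence_lat)
    dp = math.radians(fence_lat - lat)
    dl = math.radians(fence_lon - lon)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * earth_radius_m * math.asin(math.sqrt(a)) <= radius_m

# Example: prefer the WIFI interface only while the GPS fix is inside the fence.
device_lat, device_lon = 38.9517, -77.4481   # placeholder GPS reading
fence = (38.9517, -77.4481, 200.0)           # placeholder center and radius in meters
interface = "wifi" if within_geofence(device_lat, device_lon, *fence) else "cellular"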
In some implementations, the internal network140can be the internal network of an organization (e.g., a business, an agency, an individual, etc.) or a location. For example, the internal network140can be one or more LANs that include one or more wired and/or wireless connections. In some embodiments, the internal DNS resolver144can represent one or more computing devices connected to the internal network140. For example, the internal DNS resolver144can represent one or more internal name servers, database servers, web servers, mainframe computers, routers, etc. In some implementations, the internal DNS resolver144can be an internal DNS resolver for the network140. In further embodiments, the internal DNS resolver144can extract contextual information from DNS resolution requests, and can use the information to determine, for example, how to route the request and/or how to resolve the request. In some implementations, the internal DNS resolver144can resolve DNS resolution requests locally by obtaining resources from the local resource storage146in response to the requests. The local resource storage146can represent one or more computing devices connected to the internal network140. For example, the local resource storage146can represent one or more user devices, database servers, mainframe computers, etc. In some embodiments, the computer142can represent one or more end-user computing devices connected to the internal network140, such as, for example, desktop computers, laptops, mobile devices, etc. The computer142can be connected to the internal network140, but may not be physically located within the geofence148. In various embodiments, contextual information (e.g., GPS coordinates, signal strength from a wireless access point, etc.) about the computer142and/or a user of computer142can be included with DNS resolution requests from the computer142, and the contextual information can be used by a DNS resolver (e.g., the internal DNS resolver144) to determine how to route and/or resolve the DNS resolution requests. For example, the internal DNS resolver144may forward received DNS resolution requests to a global resolver based on determining that the computer142is not within geofence148(using the contextual information) even though the computer142is connected to the internal network140. In some implementations, the computer150can represent one or more end-user computing devices, such as, for example, desktop computers, laptops, mobile devices, etc. The computer150can be connected to the network110. In various embodiments, contextual information about the computer150and/or a user of computer150can be included with DNS resolution requests from the computer150, and the contextual information can be used by a DNS resolver (e.g., the DNS resolver120) to determine how to route and/or resolve the DNS resolution requests. For example, the DNS resolver120may forward received DNS resolution requests to another resolver based on determining that the computer150is within a certain geofence (using the contextual information), may resolve the DNS resolution request by providing a secure webpage based on determining that the computer150and/or a user of the computer150is authorized to access the secure webpage, etc. The schematic depicted inFIG.1is merely for the purpose of illustration and is not intended to be limiting. Further, the DNS depicted is merely a simplified example of a DNS, consistent with certain disclosed embodiments, but such an example is not intended to be limiting.
For example, in various embodiments, the DNS can include additional networks, servers, computers, storage devices, DNS resolvers, and/or other devices. Additionally, the described devices can be separate devices, as pictured, or various devices can be combined, consistent with certain disclosed embodiments. FIG.2is a flow diagram illustrating an example process for generating a DNS resolution request, consistent with certain disclosed embodiments. In some embodiments, the method described inFIG.2can be performed using a computing device such as, for example, an end-user computing device, a server, a database, etc. For example, the method described inFIG.2can be performed by any one of computers132,142, and150inFIG.1. The process can begin in200, when the computing device receives instructions to obtain a resource. In some embodiments, the computing device can additionally receive a domain name associated with the resource. In further embodiments, the resource can be, for example, a webpage or other type of file, while, in other embodiments, the resource can be an address (e.g., an IP address) of a webpage, a DNS record, or other type of file. For example, the instructions can be generated by a web browser when a user enters a domain name into an address bar of the web browser, by a program on the computing device that attempts to access the resource using the domain name, etc. In210, the computing device can determine contextual data. In some embodiments, the contextual data can include contextual data corresponding to the computing device. For example, the contextual data can include a GPS location, a connection signal strength, a near-field communication (NFC) connection, accelerometer data, ambient light data, a timestamp, a device request history, a list of programs installed on the computing device, an operating system installed on the computing device, installed hardware identifiers, an international mobile subscriber identity (IMSI) and related key, an IP address, a MAC address, an SSID, a locally stored certificate, a locally stored cryptographic key, a radio-frequency identification (RFID), camera detection data, a screen size, a central processing unit (CPU) speed, a total storage size, a total storage available, a total memory available, a network protocol choice, a network topology of a connected network, an IP packet Time To Live (TTL), a resource whitelist, a resource blacklist, etc. In additional embodiments, the contextual data can include contextual data corresponding to a user of the computing device. For example, the contextual data can include a username, a password, demographic data, a user request history, a certificate associated with the user, a cryptographic key associated with the user, etc. In some implementations, contextual data can be determined using one or more sensors of the computing device. For example, GPS location can be determined using a GPS receiver, a connection signal strength can be determined using a WIFI interface or a cellular data network interface, ambient light data and camera detection data can be determined using a camera, etc. In further implementations, contextual data can be determined based on stored information on the computing device (e.g., a list of installed programs, a request history, hardware identifiers, etc.). In other implementations, the contextual data can be input by a user (e.g., a username, a password, etc.). In220, the computing device can generate a DNS resolution request.
The DNS resolution request can be a request that includes a domain name associated with the resource that the computing device was instructed to obtain in200. In some implementations, the domain name can include a second-level domain (SLD), e.g., EXAMPLE in EXAMPLE.COM, and a TLD, e.g., COM in EXAMPLE.COM. In other implementations, the domain name can be a dotless domain name (e.g., EXAMPLE). In some embodiments, the DNS resolution request can include the contextual data that was determined in210. In some implementations, the contextual data can be included with the DNS resolution request as part of a Uniform Resource Locator (URL) included in the DNS resolution request. For example, the contextual data can be encoded using a URL encoding function, such as the encodeURI( ) function in the JavaScript scripting language, the rawurlencode( ) function in the PHP scripting language, or the Server.URLEncode( ) function in the Active Server Pages (ASP) scripting language. In other embodiments, the contextual data can be included with the DNS resolution request encoded in the domain name in the DNS resolution request. For example, the contextual data can be encoded in a query name (qname). In further implementations, the contextual data can be included with the DNS resolution request as metadata included in a DNS resolution request packet generated by the computing device. In230, the computing device can transmit the DNS resolution request to a resolver using, for example, a wired network interface, a WIFI interface, a cellular data network interface, etc. The resolver can be, for example, an internal DNS resolver (e.g., the internal DNS resolvers134or144shown inFIG.1) or a global DNS resolver (e.g., the DNS resolver120shown inFIG.1). In240, the computing device can receive a resource in response to the DNS resolution request transmitted in230. For example, the resource can be an IP address or other network address of a webpage, a DNS record, or other type of file. In various embodiments, once the IP address or other network address is received, the computing device can use the address to obtain a webpage or other type of file from a network location (e.g., a local network location or a global network location). As a further example, the resource received in240can be the webpage or other type of file. While the steps depicted inFIG.2have been described as performed in a particular order, the order described is merely an example, and various different sequences of steps can be performed, consistent with certain disclosed embodiments. For example, the computing device can generate the DNS resolution request, determine the contextual data, and then add the contextual data to the DNS resolution request. Additionally, the steps are described as discrete steps merely for the purpose of explanation, and, in some embodiments, multiple steps may be performed simultaneously and/or as part of a single computation. Further, the steps described are not intended to be exhaustive or absolute, and various steps can be inserted or removed. FIG.3is a flow diagram illustrating an example process for generating a DNS resolution request, consistent with certain disclosed embodiments. In some embodiments, the method described inFIG.3can be performed using a computing device such as, for example, an end-user computing device, a server, a database, etc. For example, the method described inFIG.3can be performed by any one of computers132,142, and150inFIG.1.
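For illustration purposes only, the two ways of carrying contextual data described above for step220, percent-encoding it for inclusion in a URL and embedding a compact token in the query name, could be sketched in Python as follows; the context fields, the base32 token format, and the example domain are assumptions made for the illustration.

# Sketch of step 220: attach contextual data to a DNS resolution request, either
# percent-encoded for a URL or packed into leading qname labels. The context
# fields, token format, and example domain are illustrative assumptions.

import base64
import json
from urllib.parse import quote

context = {
    "lat": 38.9517, "lon": -77.4481,  # e.g. from a GPS receiver
    "user": "alice",                  # e.g. an authenticated username
    "ssid": "corp-lan",               # e.g. the access point the device sees
}

# Percent-encoded form, suitable for a URL query component.
url_encoded = quote(json.dumps(context, separators=(",", ":")))

# Compact form carried in the query name: base32 keeps the labels DNS-safe, and
# the token is split so no single label exceeds the 63-character limit.
token = base64.b32encode(json.dumps(context).encode()).decode().rstrip("=").lower()
labels = [token[i:i + 63] for i in range(0, len(token), 63)]
qname = ".".join(labels + ["example", "internal"])  # hypothetical domain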
The process can begin in300, when the computing device receives instructions to obtain a resource. In some embodiments, the computing device can additionally receive a domain name associated with the resource. In further embodiments, the resource can be, for example, a webpage or other type of file, while, in other embodiments, the resource can be an address of a webpage, a DNS record, or other type of file. In310, the computing device can determine contextual data. In some embodiments, the contextual data can include contextual data corresponding to the computing device. In additional embodiments, the contextual data can include contextual data corresponding to a user of the computing device. In some implementations, the contextual data can be determined using one or more sensors of the computing device. In further implementations, contextual data can be determined based on stored information on the computing device. In other implementations, the contextual data can be input by a user. In320, the computing device can generate a DNS resolution request. The DNS resolution request can be a resolution request that includes a domain name associated with the resource that the computing device was instructed to obtain in300. In some embodiments, the DNS resolution request can include contextual data that was determined in310. In some implementations, the contextual data can be included with the DNS resolution request encoded in the domain name in the DNS resolution request. In further implementations, the contextual data can be included with the DNS resolution request as metadata included in a DNS resolution request packet generated by the computing device. In some embodiments, the computing device may have multiple interfaces for transmitting DNS requests. For example, the computing device may have a wired network interface, a WIFI interface, and/or a cellular data network interface. In some instances, the computing device may have a default interface of the multiple interfaces for performing network interactions (e.g., transmitting DNS requests) and/or a hierarchy for using the multiple interfaces. For example, the computing device may have a default interface of using a WIFI interface, but when a WIFI interface is unavailable the computing device may use a cellular data network interface. In some implementations, the computing device can have contextual conditions corresponding to one or more of the multiple interfaces, and the computing device can compare current contextual data (e.g., as determined in310) to contextual conditions for an interface. If the contextual conditions for an interface are met based on the current contextual data, that interface may be used for transmitting a DNS request. If the contextual conditions are not met, the computing device may determine which interface to use based on a normal procedure (e.g., a default interface, based on hierarchy, etc.). In330, the computing device can determine whether contextual conditions for a first interface are met based on the contextual data determined in310. For example, the contextual conditions can include that a location of the computing device is within a particular geofence (e.g., geofence138or geofence148inFIG.1). A resolution rule corresponding to the contextual condition may be to use the first interface (e.g., a WIFI connection) when the computing device is located within the particular geofence. 
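As a further illustration, the comparison of current contextual data against an interface's contextual conditions described above could be sketched in Python as follows; the condition table, the interface names, and the send_request call are hypothetical assumptions.

# Sketch of picking a transmission interface: use the first interface whose
# contextual conditions are met, otherwise fall back to the default. The
# condition table, interface names, and send_request helper are hypothetical.

def choose_interface(context, conditions, default="cellular"):
    # conditions: interface name -> predicate over the current contextual data
    for interface, condition_met in conditions.items():
        if condition_met(context):
            return interface    # conditions met: transmit via this interface
    return default              # conditions not met: use the normal/default interface

conditions = {
    "wifi": lambda ctx: ctx.get("in_geofence") and ctx.get("wifi_signal_dbm", -100) > -70,
}

current_context = {"in_geofence": True, "wifi_signal_dbm": -55}
interface = choose_interface(current_context, conditions)
# send_request(dns_resolution_request, via=interface)  # hypothetical transmission call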
Other contextual conditions can include, but are not limited to: that the computing device is outside of a particular geofence; that a connection signal strength to a particular access point is greater than a threshold; or that an authorized username and password have been entered by a user. If, in330, the contextual conditions are met (330: YES), the process can proceed to340, and the computing device can transmit the DNS resolution request to a resolver via the first interface. If, in330, the contextual conditions are not met (330: NO), the process can proceed to350, and the computing device can transmit the DNS resolution request to a resolver via a second interface, which can be, for example, a default interface. In360, the computing device can receive a resource in response to the DNS resolution request transmitted in340or350. In some embodiments, the resource can be received via the same interface that transmitted the request while, in other embodiments, the resource can be received via a different interface. While the steps depicted inFIG.3have been described as performed in a particular order, the order described is merely an example, and various different sequences of steps can be performed, consistent with certain disclosed embodiments. For example, the computing device can generate the DNS resolution request, determine the contextual data, and then add the contextual data to the DNS resolution request. Moreover, the computing device may have contextual conditions for multiple interfaces, and can determine which contextual conditions are met based on the contextual data or transmit the request using a default interface if none of the contextual conditions are met. Additionally, the steps are described as discrete steps merely for the purpose of explanation, and, in some embodiments, multiple steps may be performed simultaneously and/or as part of a single computation. Further, the steps described are not intended to be exhaustive or absolute, and various steps can be inserted or removed. FIG.4is a flow diagram illustrating an example process for resolving a DNS resolution request, consistent with certain disclosed embodiments. In some embodiments, the method described inFIG.4can be performed using a computing device such as, for example, a DNS name server, an internal name server, a recursive DNS name server, a domain name registry server, a database server, a web server, a mainframe computer, a router, etc. For example, the method described inFIG.4can be performed by any one of resolvers120,134, and144inFIG.1. The process can begin in400, when the computing device receives a DNS resolution request. The DNS resolution request can be received from an end-user device or a server that generated the request, or the request can be received from another resolver that forwarded the request. In some embodiments, the DNS resolution request can include contextual data, as described above. For example, the contextual data can be encoded in a domain name or can be included as metadata. In410, the computing device can determine the contextual data associated with the DNS resolution request. In some embodiments, the computing device can extract the contextual data from the URL in the request or from the DNS request packet that included the request. In420, the computing device can determine whether a contextual condition for a first resolution rule is met based on the contextual data determined in410.
For example, the contextual data can include an indication of a location of the computing device, and the location can be within a particular geofence (e.g., geofence138or geofence148inFIG.1). Additionally, the contextual data may include a username and password. A contextual condition may be that the computing device is located within the particular geofence and the user is an authorized user. The corresponding resolution rule may be to resolve the request locally by obtaining the resource, that is only available to authorized users, from storage on the computing device or from storage available within an internal network that includes the computing device. If, in420, the contextual condition is met (420: YES), the process can proceed to430, and the computing device can resolve the DNS resolution request based on applying the resolution rule. If, in420, the contextual condition is not met (420: NO), the process can proceed to440. In440, the computing device can determine whether a contextual condition for a second resolution rule is met based on the contextual data determined in410. For example, the contextual data can include an indication of a location of the computing device, and the location can be within a particular geofence. Additionally, the contextual data may not include an authorized username and/or password. A contextual condition may be that the computing device is located within the particular geofence and the user is not an authorized user. The corresponding resolution rule may be to resolve the request locally by obtaining a non-sensitive resource (i.e., that can be provided to unauthorized users) from storage on the computing device or from storage available within an internal network that includes the computing device. For example, the non-sensitive resource can indicate a failure to access a requested resource, can be a DNS record and/or an address of a webpage with non-sensitive information, etc. If, in440, the contextual condition is met (440: YES), the process can proceed to450, and the computing device can resolve the DNS resolution request based on applying the second resolution rule. If, in440, the contextual condition is not met (440: NO), the process can proceed to460. Other example contextual conditions and/or resolution rules can include but are not limited to, obtaining a geo-specific resource based on a location in the contextual data, obtaining a time-specific resource based on a timestamp in the contextual data, obtaining a device-specific resource based on hardware information in the contextual data, obtaining a user-specific resource based on a username or user-browsing history in the contextual data, adding contextually-relevant advertisements to the resource, forwarding the DNS resolution request to a global resolver, forwarding the DNS resolution request to a local or internal resolver, obtaining a resource based on the resource being on a resource whitelist or not on a resource blacklist, obtaining a resource indicating a webpage could not be obtained based on a requested resource not being on a resource whitelist or being on a resource blacklist, etc. In460, the computing device can resolve the DNS resolution request based on applying a default resolution rule. In some embodiments, a default resolution rule can be to forward the DNS resolution request to another resolver. 
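By way of illustration only, the condition-and-rule evaluation of steps420-460described above could be sketched in Python as follows; the rule representation, the AUTHORIZED_USERS set, and the fetch_local and forward_to_global_resolver helpers are hypothetical assumptions.

# Sketch of steps 420-460: walk an ordered list of (contextual condition,
# resolution rule) pairs; the first matching rule resolves the request, and a
# default rule (here, forwarding) applies otherwise. Helpers are hypothetical.

AUTHORIZED_USERS = {"alice", "bob"}  # placeholder set of authorized usernames

def resolve_with_rules(request, context, rules, default_rule):
    for condition, rule in rules:
        if condition(context):
            return rule(request, context)   # 420/440: YES, apply that rule
    return default_rule(request, context)   # 460: apply the default rule

rules = [
    # Inside the geofence and authorized: resolve locally with the protected resource.
    (lambda ctx: ctx.get("in_geofence") and ctx.get("user") in AUTHORIZED_USERS,
     lambda req, ctx: fetch_local(req, sensitive=True)),    # hypothetical local lookup
    # Inside the geofence but not authorized: resolve locally with a non-sensitive resource.
    (lambda ctx: ctx.get("in_geofence"),
     lambda req, ctx: fetch_local(req, sensitive=False)),
]

default_rule = lambda req, ctx: forward_to_global_resolver(req)  # hypothetical forwarder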
For example, the computing device can be an internal resolver for an internal network, and the computing device can forward the DNS resolution request to a global resolver for global resolution based on contextual conditions and corresponding resolution rules. As a further example, the computing device can be a global resolver, but can forward the DNS resolution request to another global resolver based on contextual conditions and corresponding resolution rules. While the steps depicted inFIG.4have been described as performed in a particular order, the order described is merely an example, and various different sequences of steps can be performed, consistent with certain disclosed embodiments. Additionally, the steps are described as discrete steps merely for the purpose of explanation, and, in some embodiments, multiple steps may be performed simultaneously and/or as part of a single computation. Further, the steps described are not intended to be exhaustive or absolute, and various steps can be inserted or removed. FIG.5is a flow diagram illustrating an example process for resolving a DNS resolution request, consistent with certain disclosed embodiments. In some embodiments, the method described inFIG.5can be performed using a computing device such as, for example, an internal name server, a router, etc. For example, the method described inFIG.5can be performed by any one of resolvers134and144inFIG.1. The process can begin in500, when the computing device receives a DNS resolution request. The DNS resolution request can be received from an end-user device or a server that generated the request, or the request can be received from a resolver or other device that forwarded the request. In some embodiments, the DNS resolution request can include contextual data, as described above. For example, the contextual data can be encoded in a domain name or can be included as metadata. In510, the computing device can determine the contextual data associated with the DNS resolution request. In520, the computing device can determine whether a location in the contextual data is within a geofence. For example, the geofence can be associated with the boundaries of a building belonging to an organization. Thus, employees and/or members of the organization can have DNS requests locally resolved when within the building and can have DNS requests resolved globally when outside of the building. If, in520, the location is within the geofence (520: YES), the process can proceed to530. In530, the computing device can resolve the request locally. For example, the computing device can obtain the resource from storage on the computing device or from storage available within an internal network that includes the computing device. In540, the computing device can transmit the resource back to the requestor. If, in520, the location is not within the geofence (520: NO), the process can proceed to550, and the computing device can transmit the DNS resolution request to a global resolver for global resolution. In various embodiments, the computing device may only perform510-540if a domain name associated with the DNS resolution request can be locally resolved.
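For illustration purposes only, the geofence-based routing of steps520-550described above, together with the locally-resolvable check, could be sketched in Python as follows; within_geofence is the helper sketched earlier, and the remaining helpers, the .internal suffix test, and the request layout are hypothetical assumptions.

# Sketch of steps 520-550 plus the locally-resolvable check: an internal
# resolver answers from local storage only when the domain can be resolved
# locally and the reported location is inside the geofence; otherwise the
# request goes to a global resolver. within_geofence is the helper sketched
# earlier; the other helpers and the request fields are hypothetical.

def handle_request(request, context, fence):
    if not request["domain"].lower().endswith(".internal"):
        return forward_to_global_resolver(request)          # not locally resolvable
    if within_geofence(context["lat"], context["lon"], *fence):
        resource = local_storage_lookup(request["domain"])  # 530: hypothetical local lookup
        return resource                                     # 540: returned to the requestor
    return forward_to_global_resolver(request)               # 550: outside the geofence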
For example, if the computing device is in an internal network associated with a .INTERNAL TLD, the computing device may perform510-540when a DNS resolution request is received that is associated with the .INTERNAL TLD (e.g., EXAMPLE.INTERNAL). If the TLD is not .INTERNAL (e.g., EXAMPLE.COM), the computing device may proceed from500to550and transmit the DNS resolution request to a global resolver. While the steps depicted inFIG.5have been described as performed in a particular order, the order described is merely an example, and various different sequences of steps can be performed, consistent with certain disclosed embodiments. Additionally, the steps are described as discrete steps merely for the purpose of explanation, and, in some embodiments, multiple steps may be performed simultaneously and/or as part of a single computation. Further, the steps described are not intended to be exhaustive or absolute, and various steps can be inserted or removed. FIG.6is a flow diagram illustrating an example process for resolving a dotless domain resolution request, consistent with certain disclosed embodiments. In some embodiments, the method described inFIG.6can be performed using a computing device such as, for example, a DNS name server, an internal name server, a recursive DNS name server, a domain name registry server, a database server, a web server, a mainframe computer, a router, etc. For example, the method described inFIG.6can be performed by any one of resolvers120,134, and144inFIG.1. The process can begin in600, when the computing device receives a dotless domain DNS resolution request. The dotless domain DNS resolution request can be received from an end-user device or a server that generated the request or the request can be received from another resolver that forwarded the request. In some embodiments, the dotless domain DNS resolution request can include contextual data, as described above. For example, the contextual data can be encoded in a domain name or can be included as metadata. In610, the computing device can determine the contextual data associated with the dotless domain DNS resolution request. In some embodiments, the computing device can extract the contextual data from the URL in the request or from the DNS request packet that included the request. In620, the computing device can compare the contextual data to contextual conditions and determine a resolution rule to use based on the contextual data. For example, the dotless domain associated with the request can be COFFEE, and the contextual data can include a location. Thus, the resolution rule can be to return a location-specific resource that results in displaying, on the requestor's device, a list of local coffee shops near the location and/or a map to a local coffee shop. Or, the resolution rule can be to forward the dotless domain DNS resolution request to another resolver that resolves COFFEE requests for that particular location (e.g., a resolver maintained by a coffee store company). As a further example, the dotless domain associated with the request can be SECURITY, and the contextual data can include a location. Thus, if the computing device is an internal DNS resolver for a building and the location is within the building, the resolution rule can be to provide a local resource that can be used to contact the building's security, provide hotline information, generate alert statuses for authorized users, etc.
If the computing device is not an internal DNS resolver for the building, then the request can be forwarded to an internal DNS resolver for the building. If the computing device is an internal DNS resolver for the building and the location is outside of the building, the resolution rule can be to forward the request to another resolver (e.g., a global resolver), and that resolver can either forward the request or provide a resource that can be used to contact the police or other security force. In630, the computing device can resolve the request based on the determined resolution rules. Thus, dotless domains can be resolved using contextual information, which can result in avoiding domain name collisions that could otherwise occur due to the reduced namespace and the potential lack of specificity in using dotless domains. While the steps depicted inFIG.6have been described as performed in a particular order, the order described is merely an example, and various different sequences of steps can be performed, consistent with certain disclosed embodiments. Additionally, the steps are described as discrete steps merely for the purpose of explanation, and, in some embodiments, multiple steps may be performed simultaneously and/or as part of a single computation. Further, the steps described are not intended to be exhaustive or absolute, and various steps can be inserted or removed. FIG.7is a diagram illustrating an example of a hardware system for DNS resolution, consistent with certain disclosed embodiments. An example hardware system700includes example system components that may be used. The components and arrangement, however, may be varied. Computer701may include processor710, memory720, storage730, and input/output (I/O) devices (not pictured). The computer701may be implemented in various ways and can be configured to perform any of the embodiments described above. In some embodiments, computer701can be a computer of an end-user such as, for example, a desktop computer, a laptop, a mobile device (e.g., a smartphone or a tablet device), etc. In other embodiments, computer701can be a computing device such as, for example, a database server (e.g., a domain name registry and/or name server), a web server, a mainframe computer, etc. For example, computer701can be DNS resolvers120,134, and144, local resource storages136and146, and/or computers132,142, and150inFIG.1. Computer701may be standalone or may be part of a subsystem, which may, in turn, be part of a larger system. The processor710may include one or more known processing devices, such as a microprocessor from the Intel Core™ family manufactured by Intel™, the Phenom™ family manufactured by AMD™, or the like. Memory720may include one or more storage devices configured to store information and/or instructions used by processor710to perform certain functions and operations related to the disclosed embodiments. Storage730may include a volatile or non-volatile, magnetic, semiconductor, tape, optical, removable, non-removable, or other type of computer-readable medium used as a storage device. In some embodiments, storage730can include, for example, domain name records or other resources, contextual information, contextual rules, and resolution rules, etc. In an embodiment, memory720may include one or more programs or subprograms including instructions that may be loaded from storage730or elsewhere that, when executed by computer701, perform various procedures, operations, or processes consistent with disclosed embodiments. 
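Referring back to the process ofFIG.6, steps610-630can likewise be sketched as a rule table keyed on contextual conditions. The predicates, rule entries, contextual keys and resource values below are illustrative assumptions only; an actual resolver could load equivalent rules from configuration or from a registry.

```python
# A minimal sketch of steps 610-630 of FIG.6; the rule table is hypothetical.
def inside_building(ctx):
    return ctx.get("site") == "hq-building"      # "site" is an assumed contextual key

RESOLUTION_RULES = [
    # (dotless domain, contextual condition, action)
    ("SECURITY", inside_building, {"resolve": "10.0.0.53"}),         # building security desk
    ("SECURITY", lambda ctx: True, {"forward": "global-resolver"}),  # outside: police/other force
    ("COFFEE",   lambda ctx: "location" in ctx, {"forward": "coffee-company-resolver"}),
]

def resolve_dotless(domain, contextual_data):
    """Return the first action whose contextual condition matches (step 620);
    the caller then resolves or forwards accordingly (step 630)."""
    for name, condition, action in RESOLUTION_RULES:
        if name == domain.upper() and condition(contextual_data):
            return action
    return {"forward": "global-resolver"}         # default when nothing matches

print(resolve_dotless("security", {"site": "hq-building"}))
# -> {'resolve': '10.0.0.53'}
```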
For example, memory720may include DNS resolver program725for generating DNS requests, determining contextual data, routing DNS requests based on contextual data, resolving DNS requests based on contextual data, providing DNS responses, and/or receiving DNS responses, according to various disclosed embodiments. Memory720may also include other programs that perform other functions, operations, and processes, such as programs that provide communication support, Internet access, etc. The DNS resolver program725may be embodied as a single program, or alternatively, may include multiple sub-programs that, when executed, operate together to perform the function of the DNS resolver program725according to disclosed embodiments. In some embodiments, DNS resolver program725can perform all or part of the processes ofFIGS.2-6, described above. Computer701may communicate over a link with network740. For example, the link may be a direct communication link, a local area network (LAN), a wide area network (WAN), or other suitable connection. Network740may include the internet, as well as other networks, which may be connected to various systems and devices. Computer701may include one or more input/output (I/O) devices (not pictured) that allow data to be received and/or transmitted by computer701. I/O devices may also include one or more digital and/or analog communication I/O devices that allow computer701to communicate with other machines and devices. I/O devices may also include input devices such as a keyboard or a mouse, and may include output devices such as a display or a printer. Computer701may receive data from external machines and devices and output data to external machines and devices via I/O devices. The configuration and number of input and/or output devices incorporated in I/O devices may vary as appropriate for various embodiments. Example uses of the system700can be described with reference to the embodiments described above. While the teachings have been described with reference to the example embodiments, those skilled in the art will be able to make various modifications to the described embodiments without departing from the true spirit and scope. The terms and descriptions used herein are set forth by way of illustration only and are not meant as limitations. In particular, although the method has been described by examples, the steps of the method may be performed in a different order than illustrated or simultaneously. Furthermore, to the extent that the terms “including”, “includes”, “having”, “has”, “with”, or variants thereof are used in either the detailed description or the claims, such terms are intended to be inclusive in a manner similar to the term “comprising.” As used herein, the term “one or more of” with respect to a listing of items such as, for example, A and B, means A alone, B alone, or A and B. Those skilled in the art will recognize that these and other variations are possible within the spirit and scope as defined in the following claims and their equivalents. | 44,647 |
11943198 | DETAILED DESCRIPTION In order to make the objects, technical schemes and advantages of the present disclosure clear, the present disclosure will be further described in detail with reference to the drawings and embodiments. It should be understood that the embodiments described here are only used to illustrate the present disclosure, and are not intended to limit the present disclosure. It should be noted that although functional modules have been divided in the schematic diagrams of the apparatus, and logical orders have been shown in the flowcharts, in some cases, the modules may be divided in a different manner, or the steps shown or described are executed in an order different from the orders as shown in the flowcharts. The terms “first”, “second” and the like in the description, the claims, and the accompanying drawings are used to distinguish similar objects, and are not necessarily used to describe a specific sequence or a precedence order. The present disclosure provides a method for implementing multiple Packet Data Networks (PDN) between an Indoor Unit (IDU) and an Outdoor Unit (ODU), an ODU, an IDU, and a computer readable storage medium. The IDU may allocate multiple VLAN interfaces to its own RJ45 interface. The ODU may allocate multiple VLAN interfaces to its own RJ45 interface, and at the same time, the ODU may establish PDN data channels that are in one-to-one correspondence with the VLAN interfaces allocated, where the PDN data channels are configured for data interaction with a base station. After the IDU sends address allocation requests corresponding to the PDN data channels to the ODU, the ODU may allocate IP addresses corresponding to the PDN data channel to the IDU according to the address allocation requests. Then, the IDU may route data messages to the corresponding PDN data channels through its own VLAN interfaces according to the IP addresses and then route the data messages to the base station through the PDN data channels. For this technical scheme, the ODU may establish multiple PDN data channels, and the data messages of the IDU may be routed to corresponding PDN data channels in the ODU according to different IP addresses and then routed to the base station through the PDN data channels. Therefore, the present disclosure can realize multiple PDN connections between the IDU and the ODU with one physical network interface. The embodiments of the present disclosure will be further illustrated with reference to the accompanying drawings. As shownFIG.1,FIG.1is a schematic diagram of a system architecture platform for performing a method for implementing multiple PDNs between an IDU and an ODU provided by an embodiment of the present disclosure. In an embodiment inFIG.1, the system architecture platform includes an ODU100and an IDU200, each of which is provided with a memory120and a processor110. The memory120and the processor110may be connected by a bus or other means, and connection by a bus is taken as an embodiment inFIG.1. The memory120, as a non-transient computer readable storage medium, may be used to store non-transient software programs and non-transient computer executable programs. In addition, the memory120may include a high-speed random-access memory, and may also include a non-transient memory, such as at least one magnetic disk storage device, a flash memory device, or other non-transient solid state storage device. 
In some implementations, the memory120may include memories remotely located with respect to the processor110, and these remote memories may be connected to the system architecture platform through networks. Examples of the above networks include, but are not limited to, the Internet, an intranet, a local area network, a mobile communication network and combinations thereof. In some embodiments, in this field, in order to ensure the signal intensity and signal stability, the IDU200and ODU100are increasingly favored by major operators. With the continuous change of network topology, there are more and more demands for multiple PDNs, mainly due to the following two reasons. First, in order to reduce operation and maintenance costs, operators gradually adopt remote management devices. However, remote management may easily consume users' traffic, so the operators adopt an independent PDN, which is free of charge. Second, at present, most Mobile Broadband (MBB) products may support Voice Over Internet Protocol (VOIP) or Voice Over Long-Term Evolution (VoLTE). In order to ensure call quality, operators generally adopt an independent Access Point Name (APN) to ensure the Quality of Service (QOS). Therefore, based on the above two reasons, there may be more and more demands for multiple PDNs. However, the existing IDU200and ODU100are mostly connected by an RJ45 interface, so it is difficult to implement multiple PDN connections between the two units. However, current MBB products are mostly composed of main processors and wireless modules, and for multiple APNs, two schemes are mostly adopted. One is that most of these products have multiple physical channels, for example, most of them have multiple RJ45 interfaces. If there are multiple APNs, all channels are in one-to-one correspondence with corresponding physical channels. The other is to connect directly by a Universal Serial Bus (USB). In this case, most main processors are used for data services, and are used for user Local Area Network (LAN) and Wireless Local Area Network (WLAN) access. Services such as TR069 and VOIP inside other devices may run in the wireless modules without extending multiple channels. For the products of the IDU200and ODU100, the products are positioned such that the ODU100only processes the wireless communication protocol stack, and various services such as data service, device management and VOIP are run on the IDU200, so the above two schemes cannot be directly used. Therefore, in view of the above, in the system architecture platform provided by the example inFIG.1, the IDU200may allocate multiple second VLAN interfaces to its own second physical network interface. The ODU100may allocate multiple first VLAN interfaces to its own first physical network interface, and at the same time, the ODU100may establish PDN data channels that are in one-to-one correspondence with the first VLAN interfaces, where the PDN data channels are configured for data interaction with a base station. After the IDU200sends address allocation requests corresponding to the PDN data channels to the ODU100, the ODU100may allocate first IP addresses corresponding to the PDN data channels to the IDU200according to the address allocation requests, and the IDU200may route data messages to the corresponding PDN data channels through its own second VLAN interfaces according to the first IP addresses and then route the data messages to the base station through the PDN data channels.
For this technical scheme, the ODU100may establish multiple PDN data channels, and the data messages of the IDU200may be routed to corresponding PDN data channels in the ODU100according to different IP addresses and then routed to the base station through the PDN data channels. Therefore, the present disclosure can implement multiple PDN connections between the IDU200and the ODU100with one physical network interface. Those having ordinary skills in the art can understand that the system architecture platform may be applied to 3G communication network system, LTE communication network system, 5G communication network system, subsequent evolved mobile communication network system and the like, which will be not specifically limited in the embodiments. Those having ordinary skills in the art can understand that the system architecture platform shown inFIG.1does not constitute a limitation to the embodiments of the present disclosure and may include more or fewer components than shown, or combinations of components, or different component arrangements. In the system architecture platform shown inFIG.1, the processor110may invoke a program for implementing multiple PDNs stored in the memory120to perform the method for implementing multiple PDNs between the IDU200and the ODU100. Based on the above system architecture platform, various embodiments of the method for implementing multiple PDNs between an IDU and an ODU of the present disclosure are proposed below. As shown inFIG.2,FIG.2is a flowchart of a method for implementing multiple PDNs between an IDU and an ODU provided by an embodiment of the present disclosure. The method for implementing multiple PDNs between an IDU and an ODU may be applied to the ODU, which includes, but not limited to following step S100, step S200, step S300and S400. At S100, at least two first VLAN interfaces are allocated to a first physical network interface. At S200, PDN data channels that are in one-to-one correspondence with the first VLAN interfaces are established, where the PDN data channels are configured for data interaction with a base station. At S300, address allocation requests corresponding to the PDN data channels from the IDU are acquired. At S400, first IP addresses corresponding to the PDN data channels are allocated to the IDU according to the address allocation requests, to allow data messages from the IDU to be routed to the corresponding PDN data channels according to the first IP addresses. In an embodiment, the IDU is configured for running various application services and providing LAN or WLAN services for the downstream, to realize data processing and services, and a second physical network interface on the IDU is configured for connecting with the ODU. The ODU is configured for 2G/3G/4G wireless protocol processing and receives user data messages sent by the IDU through a first physical network interface corresponding to the second physical network interface. Then, the ODU converts the user data messages into radio waves and send the radio waves to a mobile communication base station, and converts the radio waves received from the mobile base station into user data messages and then forwards the user data messages to the IDU through the first physical network interface corresponding to the second physical network interface. 
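As a non-limiting illustration of S100, the following Python sketch creates 802.1Q sub-interfaces on a Linux-style ODU using standard iproute2 commands. The physical interface name and the VLAN tags are assumptions made for the sketch; the disclosure does not mandate particular tags or tooling.

```python
import subprocess

PHYS_IFACE = "eth1"          # first physical network interface (assumed name)
VLAN_TAGS = [10, 20, 30]     # one tag per required PDN (illustrative values)

def sh(cmd):
    """Run an iproute2 command and fail loudly if it errors."""
    subprocess.run(cmd, shell=True, check=True)

def allocate_first_vlan_interfaces():
    """Step S100: split the physical RJ45 interface into 802.1Q sub-interfaces,
    one per planned PDN data channel."""
    for tag in VLAN_TAGS:
        vlan_if = f"{PHYS_IFACE}.{tag}"
        sh(f"ip link add link {PHYS_IFACE} name {vlan_if} type vlan id {tag}")
        sh(f"ip link set {vlan_if} up")
    # The untagged parent interface stays up as the default data path, so
    # devices that do not use VLAN division can still reach the ODU directly.
    sh(f"ip link set {PHYS_IFACE} up")
```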
In the embodiment of the present disclosure, in order to implement multiple PDN connections between the IDU and the ODU, the IDU may first virtualize its own second physical network interface into a plurality of second VLAN interfaces to perform multiple PDN connections with the ODU. In addition, the ODU may first virtualize its own first physical network interface into a plurality of first VLAN interfaces to perform multiple PDN connections with the IDU. Then, the ODU may further establish PDN data channels that are in one-to-one correspondence with the first VLAN interfaces. As the number of the first VLAN interfaces is set to be multiple, the ODU may establish multiple PDN data channels correspondingly. As the first VLAN interfaces are in one-to-one correspondence with the second VLAN interfaces, the PDN data channels are also in one-to-one correspondence with the second VLAN interfaces. After the second VLAN interfaces, the first VLAN interfaces and the PDN data channels are generated, the IDU may send address allocation requests to the ODU through the second VLAN interfaces, and the ODU may allocate first IP addresses corresponding to the PDN data channels to the second VLAN interfaces of the IDU according to the address allocation requests from the IDU. Then, data messages from the second VLAN interfaces of the IDU may be routed to the corresponding PDN data channels in the ODU according to the corresponding first IP addresses and routed to the base station through the PDN data channels. Therefore, the embodiment of the present disclosure can implement multiple PDN connections between the IDU and the ODU with one physical network interface. It is worth noting that the second VLAN interfaces of the IDU and the first VLAN interfaces of the ODU should be divided synchronously, and corresponding tags of the first and second VLAN interfaces should be consistent. In addition, for a default data path, an untag mode is adopted, so that other devices without VLAN division can also use the default data path when being directly connected to the ODU. It can be understood that the first physical network interface and the second physical network interface may be RJ45 interfaces on the IDU and the ODU, respectively. In addition, referring toFIG.3andFIG.4, in an embodiment, the ODU100includes an AP130and a MODEM140, the above-mentioned first physical network interface being arranged on the AP130. The establishing PDN data channels that are in one-to-one correspondence with the first VLAN interfaces in S200includes, but not limited to, steps S510and S520. At S510, according to the first VLAN interfaces, the AP enumerates WAN interfaces that are in one-to-one correspondence with the first VLAN interfaces, and the MODEM generates virtual network interfaces that are in one-to-one correspondence with the WAN interfaces. At S520, according to the first VLAN interfaces, the WAN interfaces and the virtual network interfaces, PDN data channels that are in one-to-one correspondence with the first VLAN interfaces are established. In an embodiment, the above-mentioned first VLAN interfaces are obtained by dividing the first physical network interface on the AP130. The AP130also creates at least a plurality of WAN interfaces according to the PDN channels established with the ODU100and the base station, where the WAN interfaces are in one-to-one correspondence with the first VLAN interfaces. 
In addition, the MODEM140may generate a plurality of virtual network interfaces, such that the ODU100may establish a PDN data channel according to one first VLAN interface, and one WAN interface and one virtual network interface corresponding to the first VLAN interface. Because the first VLAN interfaces, WAN interfaces and virtual network interfaces are set to be multiple in number and are in one-to-one correspondence, the ODU100may establish a plurality of PDN data channels. For all PDN data channels, data messages from corresponding second VLAN interfaces on the IDU200may be received through the first VLAN interfaces, then routed to the WAN interfaces and virtual network interfaces through the PDN data channels in sequence, and finally routed to the base station. It should be noted that the IDU200and the ODU100may be set independently or combined into a Customer Premise Equipment (CPE)300. First, the IDU200, as an indoor unit, is equivalent to a router, which can provide LAN or WLAN services for users, and can also provide a management server for the CPE300, providing services such as web page management, TR069 and the like. In order to implement multiple PDN connections, the IDU200is provided with a second PDN management module210and a plurality of second application service channels220. The second PDN management module210is configured for allocating at least two second VLAN interfaces to the second physical network interface according to application service requirements; monitoring whether the device dials successfully, applying for first IP addresses, and configuring routing policies for the second VLAN interfaces according to the applied first IP addresses. In addition, the plurality of second application service channels220are bound to the corresponding second VLAN interfaces divided from the second physical network interface by the second PDN management module210according to division results of the second VLAN interfaces and service requirements. Second, the ODU100, as an outdoor unit, is equivalent to a wireless Internet access module, which can perform 2G/3G/4G/5G wireless protocol processing, convert user data sent by the IDU200into radio waves and send the radio waves to a mobile communication base station, and then convert the radio waves received from the mobile communication base station into user data and forward the user data to the IDU200. The AP130is an application processor, which can operate some applications on the ODU100, such as TR069 and web server. The MODEM140is a modem, which can operate the 2G/3G/4G/5G wireless protocol stack, and provide services for the whole equipment to connect with the mobile communication base station. In order to implement multiple PDNs, the AP130is provided with a first PDN management module131and a plurality of first application service channels132. The first PDN management module131is configured for allocating at least two first VLAN interfaces to the first physical network interface according to application service requirements, monitoring whether the device dials successfully, allocating first IP addresses to the IDU200, and routing data messages from the IDU200to the MODEM140. 
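The one-to-one correspondence maintained by the first PDN management module131between a first VLAN interface, the WAN interface enumerated on the AP130and the virtual network interface generated by the MODEM140can be represented by a simple table, sketched below in Python. The tags and interface names are illustrative assumptions only.

```python
from dataclasses import dataclass

@dataclass
class PdnChannel:
    """One PDN data channel as described for S510/S520: the first VLAN
    interface it serves, the WAN interface enumerated on the AP, and the
    virtual network interface generated by the MODEM."""
    vlan_tag: int
    vlan_iface: str      # e.g. "eth1.10" (tag value assumed)
    wan_iface: str       # e.g. "rmnet_data0"
    modem_iface: str     # e.g. "modem_iface0"

# Hypothetical channel table kept by the first PDN management module.
PDN_CHANNELS = [
    PdnChannel(10, "eth1.10", "rmnet_data0", "modem_iface0"),
    PdnChannel(20, "eth1.20", "rmnet_data1", "modem_iface1"),
    PdnChannel(30, "eth1.30", "rmnet_data2", "modem_iface2"),
]

def channel_for_tag(tag):
    """Look up the PDN data channel that serves a given VLAN tag."""
    return next(c for c in PDN_CHANNELS if c.vlan_tag == tag)
```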
In addition, through the plurality of first application service channels132, the data messages sent through the second application service channels220may be routed to the MODEM140according to division results of the first VLAN interfaces and service requirements, and some applications on the ODU100, such as TR069 and web server are operated, such that a user can manage the ODU100. In addition, in order to implement multiple PDN connections, the MODEM140is provided with a network connection module141and a plurality of third application service channels142. The network connection module141is configured for establishing a PDN connection with the base station and acquiring a public network IP address. The plurality of third application service channels142are configured for sending the data messages sent through the first application service channels132to corresponding PDN channels and finally to the mobile communication base station. It should be noted that the network interfaces between the IDU200and the AP130are VLAN side network interfaces400virtualized from the RJ45 interfaces, and are used as data channels between the second application service channels220and the first application service channels132. It can be understood that the above-mentioned second VLAN interfaces are VLAN interfaces virtualized from the RJ45 interfaces at the IDU200side, and the above-mentioned first VLAN interfaces are VLAN interfaces virtualized from the RJ45 interfaces at the ODU100side. It should be noted that the network interfaces between the AP130and the MODEM140are WAN side network interfaces500enumerated after the ODU100successfully dials, and are used as data channels between the first application service channels132and the third application service channels142. It can be understood that the above-mentioned WAN interfaces are network interfaces enumerated at the AP130side and used for connecting to the MODEM140, and the above-mentioned virtual network interfaces are network interfaces generated by the MODEM140and used for connecting to the AP130, where the WAN interfaces of the AP130are in one-to-one correspondence with the virtual network interfaces of the MODEM140. In addition, referring toFIG.5andFIG.6, in an embodiment, S400includes, but is not limited to, a step S600. At S600, first private network IP addresses corresponding to the PDN data channels are allocated, according to the address allocation requests, to the IDU, such that data messages from the IDU are routed to corresponding first VLAN interfaces according to the first private network IP addresses. In an embodiment, the IDU allocates a plurality of second VLAN interfaces to the second physical network interface according to service requirements, where the second physical network interface of the IDU may refer to eth0 inFIG.5, and the plurality of second VLAN interfaces may refer to eth0.X, eth0.Y and eth0.Z inFIG.5. Meanwhile, the ODU allocates a plurality of first VLAN interfaces to the first physical network interface according to service requirements, where the first physical network interface of the ODU may refer to eth1 inFIG.5, and the plurality of first VLAN interfaces may refer to eth1.X, eth1.Y and eth1.Z inFIG.5. After the plurality of second VLAN interfaces and the plurality of first VLAN interfaces are virtualized, the ODU may allocate corresponding first private network IP addresses to the second VLAN interfaces of the IDU according to the address allocation requests corresponding to the PDN data channels from the IDU. 
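A minimal sketch of this address allocation on the ODU side, assuming a Linux-style implementation with dnsmasq acting as the DHCP server, is shown below. The gateway addresses, the configuration file path and the interface names are assumptions; the subnets mirror the example addressing shown inFIG.5.

```python
import subprocess

# VLAN interface -> /24 subnet prefix (values mirror the FIG.5 example).
VLAN_SUBNETS = {
    "eth1.10": "192.168.255",
    "eth1.20": "192.168.254",
    "eth1.30": "192.168.253",
}

def configure_private_addressing():
    lines = []
    for iface, net in VLAN_SUBNETS.items():
        # Assumed gateway address on the ODU side of each first VLAN interface.
        subprocess.run(f"ip addr add {net}.1/24 dev {iface}", shell=True, check=True)
        # One DHCP pool per VLAN, so each second VLAN interface on the IDU
        # obtains an address tied to exactly one PDN data channel.
        lines.append(f"interface={iface}")
        lines.append(f"dhcp-range={net}.100,{net}.200,12h")
    with open("/etc/dnsmasq.d/pdn-vlans.conf", "w") as f:
        f.write("\n".join(lines) + "\n")
```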
For example, as shown inFIG.5, the IP address of eth0.X may be configured as 192.168.255.100, the IP address of eth0.Y may be configured as 192.168.254.100, and the IP address of eth0.Z may be configured as 192.168.253.100. Then, after routing policies are configured for the second VLAN interfaces of the IDU, the data sent from the second VLAN interfaces of the IDU may be routed, according to the obtained first private network IP addresses, to corresponding first VLAN interfaces on the ODU. It is worth noting that the second VLAN interfaces of the IDU and the first VLAN interfaces of the ODU should be synchronously divided, and tags of the first and second VLAN interfaces should be consistent, where the tags may refer to X values of eth0.X and eth1.X, Y values of eth0.Y and eth1.Y, and Z values of eth0.Z and eth1.Z inFIG.5. In an embodiment, eth0.X of eth0 and eth1.X of eth1 are in the same network segment and have identical tags, so that eth0.x and eth1.x may communicate with each other, that is, the IDU may add a routing policy to corresponding eth0.x, to route a data message sent from eth0.x to eth1.x. It can be understood that the IDU requests first private network IP addresses by starting a Dynamic Host Configuration Protocol (DHCP)/Point to Point Protocol over Ethernet (PPPoE)/IP over Ethernet (IPOE) client. Then, the ODU allocates the first private network IP addresses to the IDU in response to address allocation requests from the IDU by starting a DHCP/PPPOE/IPOE Server. In addition, referring toFIG.7, in an embodiment, the method for implementing multiple PDNs between an IDU and an ODU further includes, but not limited to, following steps S710to S740. At S710, public network IP address allocation requests are sent to the base station. At S720, first public network IP addresses allocated by the base station according to the public network IP address allocation requests are acquired. At S730, the first public network IP addresses are configured to the WAN interfaces. At S740, routing policies are configured for the first VLAN interfaces and Source Network Address Translation (SNAT) rules are added to data messages from the first VLAN interfaces, such that data messages from the first VLAN interfaces are routed to corresponding WAN interfaces, where the SNAT rules are used to translate source network addresses of the data messages from the first VLAN interfaces into the first public network IP addresses. In an embodiment, the ODU may judge whether a PDN connection between the device and the base station has been established and send the public network IP address allocation requests to the base station by dialing, and the base station may allocate first public network IP addresses to the ODU according to the public network IP address allocation requests. Then, the AP in the ODU may create the above WAN interfaces, such as rmnet_data0, rmnet_data1 and rmnet_data2 inFIG.5, and configure the first public network IP addresses to the WAN interfaces. As shown inFIG.5, the IP address of rmnet_data0 may be configured as 1.1.1.2, the IP address of rmnet_data1 may be configured as 2.2.2.2, and the IP address of rmnet_data2 may be configured as 3.3.3.3. 
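As a non-limiting sketch of S730and S740, the routed mode can be configured on a Linux-style ODU with iproute2 and iptables as follows. The prefix lengths, routing table identifiers and next-hop addresses are assumptions made for the sketch; the interface names and public addresses follow the example above.

```python
import subprocess

def sh(cmd):
    subprocess.run(cmd, shell=True, check=True)

ROUTED_CHANNELS = [
    # (vlan iface, vlan subnet, wan iface, public ip, next hop, table id)
    ("eth1.10", "192.168.255.0/24", "rmnet_data0", "1.1.1.2", "1.1.1.1", 100),
    ("eth1.20", "192.168.254.0/24", "rmnet_data1", "2.2.2.2", "2.2.2.1", 101),
    ("eth1.30", "192.168.253.0/24", "rmnet_data2", "3.3.3.3", "3.3.3.1", 102),
]

def configure_routed_mode():
    for vlan, subnet, wan, pub_ip, gw, table in ROUTED_CHANNELS:
        sh(f"ip addr add {pub_ip}/30 dev {wan}")                  # S730 (prefix assumed)
        # Policy routing: traffic arriving from this VLAN uses its own table,
        # whose default route points at the matching WAN interface (S740).
        sh(f"ip rule add iif {vlan} lookup {table}")
        sh(f"ip route add default via {gw} dev {wan} table {table}")
        # SNAT: rewrite the private source address to the public one (S740).
        sh(f"iptables -t nat -A POSTROUTING -s {subnet} -o {wan} "
           f"-j SNAT --to-source {pub_ip}")
```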
Because the IP addresses of the first VLAN interfaces are private network IP addresses and the IP addresses of the WAN interfaces are public network IP addresses, in order to route the data messages from the first VLAN interfaces to corresponding WAN interfaces, the ODU may configure routing policies for the first VLAN interfaces and add SNAT rules to the data messages from the first VLAN interfaces, where configuring routing policies for the first VLAN interfaces enables the data messages sent from the first VLAN interface to be routed to the corresponding WAN interfaces, and adding SNAT rules to the data messages sent from the first VLAN interfaces enables source network addresses of the data messages from the first VLAN interfaces to be translated into the above-mentioned first public network IP addresses. Therefore, through the above operation, in an embodiment, a data message from eth1.x can be routed to rmnet_data0. In addition, referring toFIG.8, in an embodiment, the method for implementing multiple PDNs between an IDU and an ODU further includes, but not limited to, steps S810and S820. At S810, second IP addresses of the virtual network interfaces are calculated according to the first public network IP addresses. At S820, data messages from the WAN interfaces are routed to corresponding virtual network interfaces according to the first public network IP addresses and the second IP addresses. In an embodiment, after the ODU dials successfully, the MODEM may map a plurality of virtual network interfaces, such as modem_iface0, modem_iface1 and modem_iface2 shown inFIG.5. The network interfaces are not actual physical network interfaces, but are network interfaces virtualized when establishing PDNs. The second IP addresses of the virtual network interfaces mapped by the MODEM are calculated through the first public network IP addresses of the corresponding WAN interfaces according to a certain algorithm, which are equivalent to gateway addresses at the WAN side, so that the IP address of modem_iface0 is configured as 1.1.1.1, the IP address of modem_iface1 is configured as 2.2.2.1, and the IP address of modem_iface2 is configured as 3.3.3.1. Because the WAN interfaces and the corresponding virtual network interfaces are directly connected, and both the WAN interfaces and the corresponding virtual network interfaces have raw IPs, the two do not need an Address Resolution Protocol (ARP) process, so that the data messages from the WAN interfaces can be directly routed to the corresponding virtual network interfaces. Therefore, through the above operation, in an embodiment, a data message from rmnet_data0 can be directly routed to modem_iface0. In addition, referring toFIG.9andFIG.10, in an embodiment, S400includes, but not limited to, step S910, step S920, step S930and S940. At S910, public network IP address allocation requests are sent to the base station. At S920, second public network IP addresses allocated by the base station according to the public network IP address allocation requests are acquired. At S930, the first VLAN interfaces are bridged with corresponding WAN interfaces to obtain bridges. At S940, the second public network IP addresses corresponding to the PDN data channels are allocated, according to the address allocation requests, to the IDU, such that data messages from the IDU are routed to the virtual network interfaces corresponding to the bridges through the bridges according to the second public network IP addresses.
In an embodiment, the IDU allocates a plurality of second VLAN interfaces to the second physical network interface according to service requirements, where the second physical network interface of the IDU may refer to eth0 inFIG.9, and the plurality of second VLAN interfaces may refer to eth0.X, eth0.Y and eth0.Z inFIG.9. Meanwhile, the ODU allocates a plurality of first VLAN interfaces to the first physical network interface according to service requirements, where the first physical network interface of the ODU may refer to eth1 inFIG.9, and the plurality of first VLAN interfaces may refer to eth1.X, eth1.Y and eth1.Z inFIG.9. After a plurality of second VLAN interfaces and a plurality of first VLAN interfaces are virtualized, both the IDU and the ODU may judge whether the device has established a PDN connection and send public network IP address allocation requests to the base station by dialing, and the base station may allocate second public network IP addresses to the ODU according to the public network IP address allocation requests. Then, the AP in the ODU may create the above-mentioned WAN interfaces, such as rmnet_data0, rmnet_data1 and rmnet_data2 inFIG.5, and the ODU may bridge the first VLAN interfaces on the AP with the corresponding WAN interfaces to obtain bridges, such as bridge0, bridge1 and bridge2 inFIG.5. After a plurality of bridges are generated, the ODU may allocate the second public network IP addresses to the second VLAN interfaces of the IDU according to the address allocation requests corresponding to the PDN data channels from the IDU. For example, as shown inFIG.9, the IP address of eth0.X may be configured as 1.1.1.2, the IP address of eth0.Y may be configured as 2.2.2.2, and the IP address of eth0.Z may be configured as 3.3.3.3. In addition, after successful dialing, the MODEM may map the plurality of virtual network interfaces, such as modem_iface0, modem_iface1 and modem_iface2 inFIG.9. These network interfaces are not actual physical network interfaces, but are virtual network interfaces virtualized when establishing PDNs. IP addresses of the virtual network interfaces mapped by the MODEM are calculated through the corresponding second public network IP addresses according to a certain algorithm. In an embodiment, the IP address of modem_iface0 may be configured as 1.1.1.1, the IP address of modem_iface1 may be configured as 2.2.2.1, and the IP address of modem_iface2 may be configured as 3.3.3.1. Then, after routing policies are configured for the second VLAN interfaces of the IDU, the data sent from the second VLAN interfaces of the IDU may be routed, according to the obtained second public network IP addresses, to corresponding bridges on the ODU and directly routed to corresponding virtual network interfaces on the MODEM through the bridges, and then sent to the base station through the virtual network interfaces. Therefore, through the above operation, in an embodiment, a data message from eth0.X can be directly routed to modem_iface0 through bridge0. It can be understood that the IDU requests second public network IP addresses by starting a DHCP/PPPOE/IPOE client; and then, the ODU allocates the second public network IP addresses to the IDU in response to address allocation requests from the IDU by starting a DHCP/PPPOE/IPOE Server. It should be noted that, in the embodiment of the present disclosure, private network IP addresses may be allocated to the bridges of the ODU. In an embodiment, the IP address of bridge0 may be configured as 192.168.0.1. 
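A corresponding sketch of the bridged mode of S930, again assuming a Linux-style ODU whose VLAN and WAN interfaces carry Ethernet frames that can be enslaved to a kernel bridge (a raw-IP modem interface would need a different relay), is given below. Bridge names, tags and the management address follow the example above and are otherwise assumptions.

```python
import subprocess

def sh(cmd):
    subprocess.run(cmd, shell=True, check=True)

BRIDGED_CHANNELS = [
    # (bridge, first VLAN interface, WAN interface)
    ("bridge0", "eth1.10", "rmnet_data0"),
    ("bridge1", "eth1.20", "rmnet_data1"),
    ("bridge2", "eth1.30", "rmnet_data2"),
]

def configure_bridged_mode():
    for bridge, vlan, wan in BRIDGED_CHANNELS:
        sh(f"ip link add name {bridge} type bridge")
        sh(f"ip link set {vlan} master {bridge}")     # S930: join VLAN and WAN side
        sh(f"ip link set {wan} master {bridge}")
        sh(f"ip link set {bridge} up")
    # Optional private management address so the IDU can reach the ODU's own
    # services (e.g. its web UI) over the first bridge, as in the example above.
    sh("ip addr add 192.168.0.1/24 dev bridge0")
```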
If an application of the web server of the ODU is bound to bridge0, parameter configuration may be performed on the ODU by accessing a web UI of the ODU through 192.168.0.1 on the IDU. In addition, referring toFIG.11, in an embodiment, the method further includes, but not limited to, step S1010, step S1020and S1030. At S1010, second private network IP addresses are allocated to the WAN interfaces. At S1020, internal data messages from the ODU are acquired through the WAN interfaces. At S1030, SNAT rules are added to the internal data messages at the WAN interfaces, such that source network addresses of the internal data messages are translated into the second public network IP addresses and the internal data messages are then routed to the virtual network interfaces corresponding to the WAN interfaces. In an embodiment, in order to enable the applications and services inside the ODU to operate normally, a second private network IP address is allocated to the rmnet_data0 at the AP side of the ODU. For example, as shown inFIG.9, the IP address of rmnet_data0 is configured as 192.168.255.1, the IP address of rmnet_data1 is configured as 192.168.254.1, and the IP address of rmnet_data2 is configured as 192.168.253.1. Then, the WAN interfaces acquire the internal data messages of the ODU, and add SNAT rules to the internal data messages. In this way, when the internal data messages are sent from the WAN interfaces, source network addresses of the internal data messages can be translated into the second public network IP addresses and then routed to the virtual network interfaces of the MODEM, such that the internal services and applications of the ODU can be connected to the Internet normally. As shown inFIG.12,FIG.12is a flowchart of a method for implementing multiple PDNs between an IDU and an ODU provided by an embodiment of the present disclosure, which may be applied to an IDU. The method includes, but not limited to, step S1100, step S1200, step S1300and S1400. At S1100, at least two second VLAN interfaces are allocated to a second physical network interface, where the at least two second VLAN interfaces are in one-to-one correspondence with at least two PDN data channels established in an ODU, and the PDN data channels are configured for data interaction with a base station. At S1200, address allocation requests corresponding to the PDN data channels are sent to the ODU. At S1300, first IP addresses corresponding to the PDN data channels allocated by the ODU according to the address allocation requests are acquired. At S1400, data messages are routed to the corresponding PDN data channels through the second VLAN interfaces according to the first IP addresses. In an embodiment, the IDU is configured for running various application services and providing LAN or WLAN services for the downstream, to realize data processing and services. A second physical network interface on the IDU is configured for connecting with the ODU. The ODU is configured for 2G/3G/4G wireless protocol processing, and receives user data sent by the IDU through a first physical network interface corresponding to the second physical network interface. Then, the ODU converts the user data into radio waves and sends the radio waves to a mobile communication base station, and converts the radio waves received from the mobile base station into user data and then forwards the user data to the IDU through the first physical network interface corresponding to the second physical network interface.
In the embodiment of the present disclosure, in order to implement multiple PDN connections between the IDU and the ODU, the IDU may first virtualize its own second physical network interface into a plurality of second VLAN interfaces to perform multiple PDN connections with the ODU. In addition, the ODU may first virtualize its own first physical network interface into a plurality of first VLAN interfaces to perform multiple PDN connections with the IDU. Then, the ODU may also establish PDN data channels that are in one-to-one correspondence with the first VLAN interfaces. As the number of the first VLAN interfaces is set to be multiple, the ODU may establish multiple PDN data channels correspondingly. As the first VLAN interfaces are in one-to-one correspondence with the second VLAN interfaces, the PDN data channels are also in one-to-one correspondence with the second VLAN interfaces. After the second VLAN interfaces, the first VLAN interfaces and the PDN data channels are generated, the IDU may send address allocation requests to the ODU through the second VLAN interfaces. Then, the ODU may allocate first IP addresses corresponding to the PDN data channels to the second VLAN interfaces of the IDU according to the address allocation requests from the IDU, and then data messages from the second VLAN interfaces of the IDU may be routed to the corresponding PDN data channels in the ODU according to the corresponding first IP addresses and routed to the base station through the PDN data channels. Therefore, the embodiment of the present disclosure may implement multiple PDN connections between the IDU and the ODU with one physical network interface. It is worth noting that for the implementations and corresponding technical effects of the method for implementing multiple PDNs between an IDU and an ODU in the embodiment of the present disclosure, reference may be made to the embodiments of the above method for implementing multiple PDNs between an IDU and an ODU correspondingly. As shown inFIG.13,FIG.13is a flowchart of a method for implementing multiple PDNs between an IDU and an ODU provided by an embodiment of the present disclosure. S1300includes, but not limited to, a step S1510. Accordingly, S1400includes, but not limited to, a step S1520. At S1510, first private network IP addresses corresponding to the PDN data channels allocated by the ODU according to the address allocation requests are acquired through the second VLAN interfaces, where the first private network IP addresses are generated by the ODU. At S1520, data messages are routed to the corresponding PDN data channels through the second VLAN interfaces according to the first private network IP addresses and routing policies. It is worth noting that for the implementations and corresponding technical effects of the method for implementing multiple PDNs between an IDU and an ODU in the embodiment of the present disclosure, reference may be made to the embodiments of the above-mentioned method for implementing multiple PDNs between an IDU and an ODU correspondingly. As shown inFIG.14,FIG.14is a flowchart of a method for implementing multiple PDNs between an IDU and an ODU provided by an embodiment of the present disclosure. S1300includes, but not limited to, a step S1610. Accordingly, S1400includes, but not limited to, a step S1620.
At S1610, second public network IP addresses corresponding to the PDN data channels allocated by the ODU according to the address allocation requests are acquired through the second VLAN interfaces, where the second public network IP addresses are obtained by the ODU from the base station. At S1620, data messages are routed to the corresponding PDN data channels through the second VLAN interfaces according to the second public network IP addresses and the routing policies. It is worth noting that for the implementations and corresponding technical effects of the method for implementing multiple PDNs between an IDU and an ODU in the embodiment of the present disclosure, reference may be made to the embodiments of the above-mentioned method for implementing multiple PDNs between an IDU and an ODU correspondingly. Based on the above-mentioned method for implementing multiple PDNs between an IDU and an ODU, various embodiments of the ODU, IDU and a computer readable storage medium of the present disclosure are proposed below. In addition, an embodiment of the present disclosure provides an ODU. The ODU includes: a memory, a processor, and a computer program stored in the memory and executable by the processor. The processor and the memory may be connected by a bus or other means. It should be noted that the ODU in this embodiment may be applied to the system architecture platform in the embodiment shown inFIG.1. The ODU in this embodiment may form a part of the system architecture platform in the embodiment shown inFIG.1. The two both belong to the same inventive concept, and therefore have the same implementation principle and technical effect, which will not be described in detail herein. The non-transient software program and instructions required to implement the method for implementing multiple PDNs between an IDU and an ODU in the above embodiment are stored in the memory which, when executed by the processor, cause the processor to perform the method for implementing multiple PDNs between an IDU and an ODU in the above embodiment, for example, perform the above-described steps S100to S400inFIGS.2, S510to S520inFIG.4, S600inFIGS.6, S710to S740inFIGS.7, S810to S820inFIGS.8, S910to S940inFIG.10, and S1010to S1030inFIG.11. In addition, an embodiment of the present disclosure further provides an IDU. The IDU includes: a memory, a processor, and a computer program stored in the memory and executable by the processor. The processor and the memory may be connected by a bus or other means. It should be noted that the IDU in this embodiment may be applied to the system architecture platform in the embodiment shown inFIG.1. The IDU in this embodiment may form a part of the system architecture platform in the embodiment shown inFIG.1. The two both belong to the same inventive concept, and therefore have the same implementation principle and technical effect, which will not be described in detail herein. The non-transient software program and instructions required to implement the method for implementing multiple PDNs between an IDU and an ODU in the above embodiment are stored in the memory which, when executed by the processor, cause the processor to perform the method for implementing multiple PDNs between an IDU and an ODU in the above embodiment, for example, perform the steps S1100to S1400inFIGS.12, S1510to S1520inFIG.13, and S1610to S1620inFIG.14. 
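By way of illustration only, the IDU side of the method (S1100to S1400) might look as follows on a Linux-style IDU, assuming a busybox udhcpc DHCP client and iproute2. The service-to-tag mapping is an assumption made for the sketch; the tags simply have to match those configured on the ODU.

```python
import subprocess

def sh(cmd):
    subprocess.run(cmd, shell=True, check=True)

PHYS_IFACE = "eth0"          # second physical network interface (assumed name)
SERVICE_VLANS = {            # second application service channel -> VLAN tag
    "data": 10,
    "voip": 20,
    "tr069": 30,
}

def configure_idu():
    for service, tag in SERVICE_VLANS.items():
        vlan_if = f"{PHYS_IFACE}.{tag}"
        sh(f"ip link add link {PHYS_IFACE} name {vlan_if} type vlan id {tag}")  # S1100
        sh(f"ip link set {vlan_if} up")
        sh(f"udhcpc -i {vlan_if} -q")          # S1200/S1300: obtain the allocated address
    # S1400: ordinary user traffic follows the default route via the data VLAN;
    # services such as VoIP or TR069 bind their sockets to their own VLAN
    # interface (e.g. with SO_BINDTODEVICE) so their packets enter the matching
    # PDN data channel on the ODU.
    sh(f"ip route replace default dev {PHYS_IFACE}.{SERVICE_VLANS['data']}")
```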
The device embodiments described above are merely illustrative, and the units described as separate components may or may not be physically separated, that is, may be located in one place, or may be distributed onto multiple network elements. Some or all of the modules may be selected according to actual needs to achieve the purpose of the embodiment. In addition, an embodiment of the present disclosure further provides a computer-readable storage medium storing a computer executable instruction which, when executed by a processor, causes the processor to implement the above method for implementing multiple PDNs. For example, the computer executable instruction, when executed by a processor in the above ODU embodiments, causes the processor to implement the method for implementing multiple PDNs in the above embodiments, for example, implement the steps S100to S400inFIG.2, S510to S520inFIG.4, S600inFIG.6, S710to S740inFIG.7, S810to S820inFIG.8, S910to S940inFIG.10, and S1010to S1030inFIG.11. For another example, the computer executable instruction, when executed by a processor in the above IDU embodiments, causes the processor to implement the method for implementing multiple PDNs in the above embodiments, for example, implement the steps S1100to S1400inFIG.12, S1510to S1520inFIG.13, and S1610to S1620inFIG.14. An embodiment of the present invention includes: allocating, by an ODU, at least two first VLAN interfaces to a first physical network interface, and establishing PDN data channels that are in one-to-one correspondence with the first VLAN interfaces, where the PDN data channels are configured for data interaction with a base station; and acquiring, by the ODU, address allocation requests corresponding to the PDN data channels from an IDU, and allocating first IP addresses corresponding to the PDN data channels to the IDU according to the address allocation requests, such that data messages from the IDU are routed to the corresponding PDN data channels according to the first IP addresses. According to a scheme provided by the embodiment of the present disclosure, the ODU may divide its own first physical network interface into a plurality of first VLAN interfaces, and then establish PDN data channels that are in one-to-one correspondence with the first VLAN interfaces. Because the number of the first VLAN interfaces is multiple, a plurality of PDN data channels may be established. After the ODU allocates a plurality of first IP addresses that are in one-to-one correspondence with the PDN data channels to the IDU, data messages from the IDU may be routed to the corresponding PDN data channels in the ODU according to different first IP addresses, and then routed to the base station through the PDN data channels. Therefore, the embodiment of the present disclosure can implement multiple PDN connections between the IDU and the ODU with one physical network interface. As will be understood by those having ordinary skills in the art, all or some of the steps and systems in the method disclosed above may be implemented as software, firmware, hardware and appropriate combinations thereof. Some or all of the physical components may be implemented as software executed by a processor, such as a central processing unit, a digital signal processor or a microprocessor, or as hardware, or as an integrated circuit, such as an application specific integrated circuit.
Such software may be distributed on a computer-readable medium, and the computer-readable medium may include a computer storage medium (or a non-transitory medium) and a communication medium (or a transitory medium). As is well known to those having ordinary skills in the art, the term computer storage medium includes volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storing information such as computer readable instructions, data structures, program modules or other data. A computer storage medium may include RAM, ROM, EEPROM, flash memory or other memory technologies, CD-ROM, digital versatile disk (DVD) or other optical disk storage, magnetic cassette, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other media that can be used to store desired information and can be accessed by a computer. In addition, it is well known to those having ordinary skills in the art that the communication medium may generally include computer readable instructions, data structures, program modules or other data in a modulated data signal such as a carrier wave or other transmission mechanism, and may include any information delivery medium. The above is a detailed description of some implementations of the present disclosure, but the present disclosure is not limited thereto. Those having ordinary skills in the art can also make various equivalent modifications or substitutions without departing from the scope of the present disclosure, and these equivalent modifications or substitutions are included in the scope defined by the claims of the present disclosure. | 46,157 |
11943199 | DETAILED DESCRIPTION FIG.1is a block diagram illustrating an example environment100in which various embodiments of systems and methods for a computer network security manager device118may be implemented, according to one non-limiting embodiment. It is to be appreciated thatFIG.1illustrates just one example of a customer premises116environment and that the various embodiments discussed herein are not limited to use of such systems. Customer premises116can include a variety of communication systems and can use a variety of devices, including computers, peripheral devices, communication devices, media devices, mobile devices, home entertainment systems, receiving devices, home automation devices, home security devices and home appliances. All or some of such devices are represented by device A130, device B132and device C134, and may be network addressable and in operable communication with each other and/or other devices over various networks, such as the Internet108, via modem138, router136and security manager device118. For example, router136may be a wireless router that connects directly to modem138by a cable. This allows router136to receive information from, and transmit information to, the Internet108. The router136then creates and communicates with a local area network (LAN), such as a Wi-Fi (IEEE 802.11) network of the customer premises116, which may include device A130, device B132, device C134, security manager device118, mesh network(s), other LANs or networks, etc., using built-in antennas. As a result, device A130, device B132and device C134and security manager device118all have access to the Internet108. The security manager device118may manage network communications from and to device A130, device B132and device C134, including routing network communications between such devices (which may include mesh network devices), routing network communications being sent to and from such devices over a local network and/or over the Internet108via router136and modem138, and managing network security. Devices as described above which may be connected to security manager device118, such as device A130, device B132and/or device C134, may include, but are not limited to: computing devices, smart phones, tablets, cameras, smart home devices, motion sensors, light sensors, other sensors, locks, lights, thermostats, security devices, entertainment systems, devices that provide media by satellite, cable and/or Internet streaming services, home automation devices, appliances, gaming devices, toys, wearable devices, watches, televisions, other IoT devices, mesh network devices, nodes, etc. Furthermore, home automation service providers, such as, but not limited to, home security service and data service providers, provide their customers a multitude of home automation and/or security services. Such services may include remote monitoring of various home automation devices over telecommunication channels, the Internet108or other communication channels and may also include providing equipment and installation of equipment for the service provider and/or user to configure, manage and control the devices. According to one embodiment, an example of such equipment is the security manager device118. 
Examples of such home automation devices may include, but are not limited to, one or more of, or any combination of: a camera, a thermostat, a light fixture, a door sensor, a window sensor, a power outlet, a light switch, a doorbell, a doorbell sensor, a light bulb, a motion sensor, an electrical switch, an appliance switch, a window covering control device, an alarm, a dimmer switch and a door lock. Such devices as described above, for example, are represented by device A130, device B132and/or device C134. In various embodiments, there may be additional or fewer devices than that shown inFIG.1. Also, in some embodiments, the functionality of router136and modem138may be combined into one device. Monitoring and control of device A130, device B132and/or device C134, and other network services, may be provided by use of the security manager device118, which is communicatively coupled to Internet router136that provides access to the Internet108via modem138. Security manager device118interconnects wirelessly to one or more devices represented by device A130, device B132and/or device C134. In some embodiments, there may be a wired connection to a plurality of such devices via security manager device118. Security manager device118may receive various commands input by a user on the customer premises116and/or from a remote monitoring system over the Internet108, such as from a home automation service provider, home security service, satellite television service provider, cable TV service provider or other data service provider. These commands control the functions of security manager device118which in turn configures, controls and manages all inbound and outbound network communications of device A130, device B132and device C134. According to one embodiment, security manager device118connects to first wireless router136and then connects to a plurality of devices, such as device A130, device B132and device C134in the present example. The security manager device118then performs device agnostic activation of device A130, device B132and device C134to enable device A130, device B132and device C134to perform respective functions of each device. The security manager device118prevents device A130, device B132and device C134from connecting directly to the first wireless router136and only allows other devices on the Internet108to communicate with device A130, device B132and device C134according to specific firewall rules. In response to receiving an indication that the first wireless router136to which the security manager device118is connected is out of service or no longer exists, the security manager device118prevents other devices on the Internet108from being able to communicate with device A130, device B132and device C. Also, according to one embodiment, the security manager device118provides a service to manage migration from one Internet router, such as router136to another Internet router, such as a new router (not shown) that is to replace router136. The security manager device118provides Internet connectivity to device A130, device B132and device C134after the migration to the new router replacing router136without reconnection, reactivation or reconfiguration of those devices during the migration. 
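Referring back to the firewall-based gating described above, a minimal sketch, assuming a Linux-based security manager device118that uses iptables for its firewall and a simple reachability probe toward router136, is shown below. The interface names, addresses and allowed-service list are illustrative assumptions only and are not prescribed by the disclosure.

```python
import subprocess

def sh(cmd):
    subprocess.run(cmd, shell=True, check=True)

WAN_IF, LAN_IF = "wan0", "lan0"     # assumed uplink to router136 / device-side LAN
ROUTER_IP = "192.168.1.1"           # assumed address of router136

ALLOW_RULES = [
    # (device address, protocol, port) allowed to receive inbound traffic
    ("192.168.50.10", "tcp", 443),    # e.g. a camera's TLS stream
    ("192.168.50.11", "udp", 5683),   # e.g. a CoAP sensor
]

def apply_firewall(router_reachable: bool):
    sh("iptables -F FORWARD")
    sh("iptables -P FORWARD DROP")                               # default: isolate devices
    sh(f"iptables -A FORWARD -i {LAN_IF} -o {WAN_IF} -j ACCEPT") # outbound allowed
    sh("iptables -A FORWARD -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT")
    if router_reachable:
        for addr, proto, port in ALLOW_RULES:
            sh(f"iptables -A FORWARD -i {WAN_IF} -o {LAN_IF} "
               f"-d {addr} -p {proto} --dport {port} -j ACCEPT")
    # When the upstream router is out of service or no longer exists, no
    # inbound rules are added, so nothing on the Internet can reach the devices.

def router_is_reachable() -> bool:
    return subprocess.run(f"ping -c1 -W1 {ROUTER_IP}", shell=True,
                          capture_output=True).returncode == 0

apply_firewall(router_is_reachable())
```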
For example, despite that router136may become out of service during the migration, the security manager device118keeps each network connection from device A130, device B132and device C134to the security manager device118in a manner that is unaffected by router136being down or no longer existing, other than device A130, device B132and device C134experiencing a temporary Internet service interruption until the new router is in place. In one example embodiment, this is due to the fact that device A130, device B132and device C134are all communicating on a separate network created by security manager device118, which handles all Internet communications, including, for example, Hyper Text Transfer Protocol (HTTP) over Transmission Control Protocol/Internet Protocol (TCP/IP) packets, to and from device A130, device B132and device C134. In one embodiment, security manager device118creates a separate wireless network including device A130, device B132and device C134and the Internet Gateway of security manager device118is set to the IP address of router136. Thus, device A130, device B132and device C134may remain activated and configured to be connected to security manager device118, even during migration of router136to a new router. In one embodiment, the migration to the new router may include merely updating the Internet Gateway of security manager device118to the IP address of the new router, rather than individually reconnecting, reactivating and reconfiguring device A130, device B132and device C134to connect to new router. Device A130, device B132and device C134continue to communicate with security manager device118on a separate local area network created by security manager device118and do not need to know any network information or other configuration information about the new router before, during and after migration to the new router replacing router136. This results from device A130, device B132and device C134also not needing to know any network information or other configuration information about the previous router136due to their previous connection to the Internet via the same security manager device118. Such network management, security and other functions may be performed based on a set of conditions or rules implemented and/or stored by the security manager device118and/or remote monitoring system. In some embodiments, the connection provided by security manager device118between the security manager device118and device A130, device B132and device C, includes a wireless connection. This wireless connection may, for example, be a ZigBee® network connection based on the IEEE 802.15.4 specification, a Z-Wave® connection, a Wi-Fi connection based on the IEEE 802.11 specifications or a Bluetooth® connection, and/or another wireless connection based on protocols for communication among devices used for home automation, including those that use radio frequency (RF) for signaling and control. In some embodiments, different devices may have different types of wireless connections to the security manager device118. Often, such wireless connections involve a network pairing between the security manager device118and the various devices, such as device A130, device B132and device C. 
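To make the migration behavior described above concrete, the following is a minimal, illustrative sketch in Python of the idea that the managed devices stay attached to the security manager device's own local network while a router replacement only changes the manager's upstream Internet gateway address. The names (SecurityManager, migrate_to_router, the addresses) are hypothetical and do not appear in this disclosure.

from dataclasses import dataclass, field
from typing import Dict, Optional


@dataclass
class ManagedDevice:
    name: str
    lan_ip: str             # address on the security manager's separate LAN
    activated: bool = True  # device-agnostic activation state


@dataclass
class SecurityManager:
    lan_subnet: str
    wan_gateway_ip: Optional[str] = None          # IP address of the upstream router (or its replacement)
    devices: Dict[str, ManagedDevice] = field(default_factory=dict)

    def connect_device(self, name: str, lan_ip: str) -> None:
        # Devices join the manager's LAN; they never learn the upstream router's details.
        self.devices[name] = ManagedDevice(name=name, lan_ip=lan_ip)

    def migrate_to_router(self, new_gateway_ip: str) -> None:
        # The only state that changes during migration is the upstream gateway address.
        # Device entries (connection, activation, configuration) are untouched.
        self.wan_gateway_ip = new_gateway_ip

    def has_internet(self) -> bool:
        return self.wan_gateway_ip is not None


manager = SecurityManager(lan_subnet="192.168.50.0/24", wan_gateway_ip="192.168.1.1")
manager.connect_device("device_a", "192.168.50.10")
manager.connect_device("device_b", "192.168.50.11")

manager.wan_gateway_ip = None             # old router goes out of service
manager.migrate_to_router("192.168.2.1")  # new router: only the gateway changes
assert all(d.activated for d in manager.devices.values())

In this sketch the devices' registrations and activation state are unaffected by the gateway change, mirroring the description above of migration without reconnection, reactivation or reconfiguration.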
The connection provided by security manager device118between security manager device118and the various devices, such as device A130, device B132and device C, may also or instead include one or more wired networking interfaces such as, for example, 10-baseT specified in the IEEE 802.3 standard, 10/100 Ethernet, or Gigabit Ethernet (GbE or 1 GigE) as defined by the IEEE 802.3-2008 standard. The security manager device118may include, be part of, or be operably connected to devices such as a “smartphone,” “tablet device,” “television converter,” “receiver,” “set-top box,” “television,” “television receiver,” “television recording device,” “satellite set-top box,” “satellite receiver,” “cable set-top box,” “cable receiver,” “media player,” “Internet streaming device” “mesh network node.” and/or “television tuner.” The computer security manager device118may be any suitable device or electronic equipment that is operable to control, configure, provide network services to, provide network security to, and/or manage connected devices, such as device A130, device B132and device C. Further, the security manager device118may itself include user interface devices, such as buttons, switches and displays. In many applications, a remote-control device or mobile device (not shown) is operable to control the security manager device118, device A130, device B132and/or device C. Other examples of device A130, device B132and/or device C include, but are not limited to: a Network Addressable Storage (NAS) device, a tablet computer, a smart phone, a printer, a television (“TV”), a personal computer (“PC”), a sound system receiver, a digital video recorder (“DVR”), game system, a presentation device, or the like. Presentation devices may employ a display, one or more speakers (not shown), and/or other output devices to communicate video and/or audio content to a user. In many implementations, one or more presentation devices reside in or near a customer's premises116and are communicatively coupled, directly or indirectly, to the security manager device118. Further, the security manager device118and the presentation device may be integrated into a single device, such as a cellular telephone or other mobile device. Such a single device may have the functionality of the security manager device118described herein and the presentation device, or may even have additional functionality. Security manager device118may be, enable and/or create a communication system or networked system, to which device A, device B, device C, router136and/or a variety of other auxiliary devices (collectively referred to herein as endpoint devices) are connected. Non-limiting examples of such a networked system or communication system include, but are not limited to, an Ethernet system, twisted pair Ethernet system, an intranet, a local area network (“LAN”) system, short range wireless network (e.g., Bluetooth®), a personal area network (e.g., a ZigBee network based on the IEEE 802.15.4 specification), a Z-Wave® network, a Consumer Electronics Control (CEC) communication system or the like. One or more endpoint devices, such as IoT devices, PCs, data storage devices, TVs, game systems, sound system receivers, network attached storage (NAS) devices, tablet computers, smart phones, printers or the like, may be communicatively coupled to the security manager device118so that the plurality of endpoint devices are communicatively coupled together. 
Thus, such a network allows the interconnected endpoint devices, and the security manager device118, to communicate with each other directly and/or to other devices over the Internet108via router136. The above description of the customer premises116, and the various devices therein, is intended as a broad, non-limiting overview of an example environment in which various embodiments of systems and methods for a computer network security manager device may be implemented. The customer premises116and the various devices therein may contain additional or other devices, systems and/or media not specifically described herein. Example embodiments described herein provide applications, tools, data structures and other support to implement systems and methods for a computer network security manager device118. In the following description, numerous specific details are set forth, such as data formats, code sequences, and the like, in order to provide a thorough understanding of the described techniques. The embodiments described also can be practiced without some of the specific details described herein, or with other specific details, such as changes with respect to the ordering of the code flow, different code flows, and the like. Thus, the scope of the techniques and/or functions described are not limited by the particular order, selection, or decomposition of steps described with reference to any particular module, component, or routine. FIG.2is a block diagram illustrating elements of an example computer network security manager device118, according to one non-limiting embodiment. In one embodiment, security manager device118includes a computer network router configured to provide network routing and security services to network addressable devices operably connected to security manager device118, as shown inFIG.2. In some embodiments, the security manager device118is part of a presentation device, such as a smartphone, mobile device, other portable computing device, television and/or set-top box device. For example, components are shown of the security manager device118that may be incorporated in a specialized device (e.g., a smartphone, mobile device, other portable computing device, television, set-top box device, specialized network device, server device, or other specialized computing device) on which the systems and methods described herein may operate or be implemented, according to various embodiments described herein. While security manager device118configured as described herein is typically used to support the operation of the systems described herein, the system may be implemented using devices of various types and configurations and having various components which, when configured to perform the operations and processes described herein, are specialized non-generic devices. A hardware component such as a processor may also include programmable logic or circuitry that is temporarily configured by software to perform certain operations. For example, a hardware component may include software executed by a general-purpose processor or other programmable processor. Once configured by such software, hardware components become specific machines, or specific components of a machine, uniquely tailored to perform the configured functions and are no longer general-purpose processors. 
It will be appreciated that the decision to implement a hardware component mechanically, in dedicated and permanently configured circuitry, or in temporarily configured circuitry (e.g., configured by software) may be driven by cost and time considerations. Also, security manager device118may operate on an open platform system or closed platform system. In a closed platform system, an entity providing the security manager device118, such as the home automation or data service provider, has, via software and/or hardware security controls, control over all applications, content or media stored on the security manager device118, or otherwise restricts access to change the operation or configuration of the security manager device118. This is in contrast to an open platform, where end users and customers generally have unrestricted access to applications, content, configuration and operation of the computer network security manager device. In either case, security manager device118may be a device trusted by the other devices connected to the security manager device118, or have an increased trust level with respect to such devices, to facilitate the security manager device118providing the network and security functions described herein. In addition, in various embodiments, the security manager device118may comprise one or more distinct computing systems/devices and may span distributed locations. Furthermore, each block shown may represent one or more such blocks as appropriate to a specific embodiment or may be combined with other blocks. Also, the security manager device controller200of the security manager device118may be implemented in software, hardware, firmware, or in some combination to achieve the capabilities described herein. In the embodiment shown, security manager device118comprises a computer memory (“memory”)201, a display202, one or more Central Processing Units (“CPU”)203, Input/Output devices204(e.g., keyboard, mouse, RF or infrared receiver, light emitting diode (LED) panel or liquid crystal display (LCD), USB ports, other communication ports, and the like), other computer-readable media205, and network connections206. The operation rules stored in the operation rules store216and security manager device controller200portions are shown residing in memory201. In other embodiments, some portion of the contents, and some, or all, of the components of operation rules stored in the operation rules store216and security manager device controller200may be stored on the other computer-readable media205. The operation rules stored in the operation rules store216and security manager device controller200components of the security manager device118preferably execute on one or more CPUs203and facilitate the network, communication routing, router migration management, security services and other functionality as described herein. The security manager device controller200also facilitates communication with peripheral devices and remote systems via the I/O devices204and network connections206. For example, the security manager device controller200may also interact via the Internet108with other devices and systems, which may be a system of an entity providing the security manager device118, such as a home automation or data service provider or the like. 
According to one embodiment, security manager device controller 200 provides network security and network routing for device A 130, device B 132 and device C 134, and can also manage migration from one Internet router, such as router 136, to another Internet router, with the security manager device acting as an intermediate router. The device activation and protection module 217 performs device agnostic activation of device A 130, device B 132 and device C 134 to enable device A 130, device B 132 and device C 134 to perform respective functions of each device. For example, each of device A 130, device B 132 and device C 134 may be associated with a different application layer activation protocol unique to the device with respect to other devices. The device agnostic activation of device A 130, device B 132 and device C 134 may include, for each of those devices, at an application layer protocol for the device that is different than an application layer protocol of the other devices, enabling the device to perform a function of the device according to the application layer protocol for the device. This enables devices of different manufacturers and brands, which may provide different services and communicate according to various different standards and protocols, to all be activated by the device activation and protection module 217 and managed by the device manager module 234 after being activated. To maintain security and control over all network communications between device A 130, device B 132 and device C 134 and other devices on the Internet 108, the device activation and protection module 217 prevents device A 130, device B 132 and device C 134 from connecting directly to the first wireless router 136 and only allows other devices on the Internet 108 to communicate with device A 130, device B 132 and device C 134 according to specific firewall rules. Additionally, in some embodiments, the router service detection module 236 may receive an indication that router 136 to which the network security manager device 118 is connected is out of service or no longer exists. In response to receiving the indication that the wireless router 136 to which the security manager device 118 is connected is out of service or no longer exists, the device activation and protection module 217 prevents other devices on the Internet 108 from being able to communicate with device A 130, device B 132 and device C 134. For example, the device activation and protection module 217 may close all ports of device A 130, device B 132 and device C 134 for incoming network communications from devices on the Internet 108 other than the network security manager device 118. In some embodiments, the device activation and protection module 217 may drop or block all network communications to the plurality of devices from devices on the Internet other than the network security manager device 118. There may be various additional conditions in response to which the device activation and protection module 217 prevents other devices on the Internet 108 from being able to communicate with device A 130, device B 132 and device C 134, including, but not limited to: detected security threats via the Internet 108, detected intrusions via the Internet 108, computer virus detection, device malfunction detection, a number of failed device login attempts, etc.
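The per-device firewall matching and the lockdown response described above can be pictured with the following brief sketch. It is an assumption-laden illustration, with hypothetical Rule and DeviceFirewall names and placeholder ports, rather than the implementation of the device activation and protection module 217.

from dataclasses import dataclass
from typing import Dict, List


@dataclass(frozen=True)
class Rule:
    dest_port: int   # service/application port on the managed device
    protocol: str    # e.g. "tcp" or "udp"


class DeviceFirewall:
    def __init__(self) -> None:
        self.rules: Dict[str, List[Rule]] = {}  # per-device allow rules
        self.lockdown = False                   # True when the upstream router is down, a threat is detected, etc.

    def allow(self, device: str, dest_port: int, protocol: str = "tcp") -> None:
        self.rules.setdefault(device, []).append(Rule(dest_port, protocol))

    def permits_inbound(self, device: str, dest_port: int, protocol: str = "tcp") -> bool:
        if self.lockdown:
            return False  # all inbound Internet traffic to managed devices is dropped
        return Rule(dest_port, protocol) in self.rules.get(device, [])


fw = DeviceFirewall()
fw.allow("device_a", 443)          # device A's service is reachable on port 443 only
fw.allow("device_c", 8883, "tcp")  # device C accepts, for example, MQTT over TLS

assert fw.permits_inbound("device_a", 443)
assert not fw.permits_inbound("device_a", 22)

fw.lockdown = True                 # indication that the upstream router is out of service
assert not fw.permits_inbound("device_a", 443)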
In response to the router service detection module236receiving an indication that the wireless router136to which the network security manager device118is back in service or that a new wireless router is connected to the computer network security manager device118and is in service, the device activation and protection module217may again allow other devices on the Internet108to communicate with the plurality of devices according to firewall rules. Also, the device activation and protection module217may again allow other devices on the Internet to communicate with the plurality of devices according to firewall rules once a security threat or other condition is no longer detected to be present. In one embodiment, the firewall rules include firewall rules that are, for each device A130, device B132and device C, specific to individual services or applications of the device that are unique to the device with respect to other devices and that match network communications against firewall rules specific to the device. In some embodiments, the device activation and protection module217may also provide Internet connectivity to device A130, device B132and device C134after the migration to a new router replacing router136without reconnection, reactivation or reconfiguration of those devices during the migration. For example, despite that router136may become out of service during the migration, the device activation and protection module217keeps each network connection from device A130, device B132and device C134to the security manager device118in a manner that is unaffected by router136being down or no longer existing, other than device A130, device B132and device C134experiencing a temporary Internet service interruption until the new router is in place. In some embodiments, security manager device118first connects to router136before device A130, device B132and device C134are turned on or are otherwise active. Also, device A130, device B132and device C134may connect to security manager device118while router136is out of service or is turned off. Thus, device A130, device B132and device C134connect to the security manager device118and are configured by the device manager module234to be prevented from connecting to router136before any of those devices have an opportunity to connect to router136. Device connection module232, upon initial connection of device A130, device B132and device C134to security manager device118, may send a signal, message, command or otherwise cause device A130, device B132and device C134to not connect to, be pingable by, or otherwise be directly reachable by other devices or routers, unless and until allowed to do so by security manager device118. In one embodiment, this may be accomplished by security manager device118sending a signal, message, command or otherwise causing device A130, device B132and device C134to close all ports for incoming network requests and communications from devices or routers other than security manager device118, unless and until they are allowed to be opened by security manager device118. For example, upon initial connection to security manager device118, the device connection module232may send a signal, message, command or otherwise cause device A130, device B132and device C134to use security manager device118as the single access point to the Internet, and set the Internet Gateway of security manager device118to the IP address of router136. 
In some embodiments, device connection module232may disable automatic channel selection in one or both of security manager device118and router136and set specific communication channels on security manager device118and router136that do not conflict with each other. In various embodiments, continuing control by the security manager device118of network communications for device A130, device B132and device C134may be performed by the device manager module234at the physical, data link, network, transport, session, presentation, and/or application layer of the Open Systems Interconnection (OSI) network model. The device manager module234receives outgoing Internet network communications from device A130, device B132and device C134and routes the outgoing Internet network communications to router136via network connections206. The device manager module234also receives, from the modem138via router136, incoming Internet network communications addressed to the plurality of devices and routes the incoming Internet network communications to device A130, device B132and device C134. In some embodiments, the device manager module234may prevent the plurality of devices from connecting directly to router136(or any other router than security manager device118). During migration to new router that replaces router136, the router service detection module236may receive an indication that router136is out of service or no longer exists. This may be due to the router service detection module236losing connection with router136as indicated by a lack of acknowledgement in response to a TCP/IP packet, a request timed out response, an unknown host response, a destination host unreachable response, or other lack of response to a TCP/IP, HTTP or other network connection request or ping command. Despite router136being out of service or no longer existing, the device activation and protection module217keeps each network connection from device A130, device B132and device C134to the security manager device118in a manner that is unaffected by the router136being down or no longer existing (other than device A130, device B132and device C134experiencing a temporary Internet service interruption). The device activation and protection module217then connects to the new wireless router to replace router136that is out of service or no longer exists. The device activation and protection module217provides, via the connection to the new router, Internet connectivity to device A130, device B132and device C134connected to the security manager device118without reconnection, reactivation or reconfiguration of device A130, device B132and device C134to obtain the Internet connectivity. In particular, device A130, device B132and device C134may remain activated and configured to be connected to security manager device118, even during migration of router136to a new router. Thus, the migration to the new router may include the device activation and protection module217merely updating the Internet Gateway of security manager device118to the IP address of the new router, rather than individually reconnecting, reactivating and reconfiguring device A130, device B132and device C134to each connect to new router. Such network management, security and other functions may be performed based on a set of conditions or rules stored in operation rules216and/or in a remote storage system. 
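As one hedged illustration of how an out-of-service indication such as the one described above might be probed, the following sketch attempts a TCP connection to the router's address and treats a timeout or refusal as a loss of service. The address, port and timeout values are placeholders, not values taken from this disclosure.

import socket


def router_in_service(gateway_ip: str, port: int = 80, timeout_s: float = 2.0) -> bool:
    """Return True if a TCP connection to the gateway succeeds within the timeout."""
    try:
        with socket.create_connection((gateway_ip, port), timeout=timeout_s):
            return True
    except OSError:
        # Covers timeouts, "destination host unreachable", refused connections, etc.
        return False


if not router_in_service("192.168.1.1"):
    # Corresponds to the indication that the router is out of service or no longer
    # exists; the manager would then lock down inbound traffic and, once a new
    # router is reachable, simply update its Internet gateway address.
    print("upstream router unreachable")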
After migration to the new router and providing, via the connection to the new router, Internet connectivity to device A130, device B132and device C134without reconnection, reactivation or reconfiguration of those devices to obtain the Internet connectivity, device manager module234receives additional outgoing Internet network communications from device A130, device B132and device C134. Device manager module234then routes the additional outgoing Internet network communications to new router that is connected to the modem138that provides the new router access to the Internet108. Security manager device118may also provide an interactive user interface to manage the networked devices connected to it, such as device A130, device B132and device C, that is controlled by an interactive graphical user interface of a device that comprises or is in operable communication with the security manager device118via network connections206and/or an interface of a remote control device is in operable communication with the security manager device118via other I/O devices204(not shown). This interactive user interface may be communicated to and displayed on display202and/or a display of a device in operable communication with the security manager device118(e.g., on a monitor and/or on a display of a mobile device) to enable the user to configure, control and manage the network connections to such devices via the device manager module234of the security manager device118. The various rules of operations that implement the functionality of the security manager device controller200described herein and selectable options of the security manager device controller200may be stored in the operation rules store216and updated locally or remotely. Other code or programs230(e.g., routing or other network management software, and the like), and potentially other data repositories, such as other data store220, which may store other network routing and management data, such as routing tables, also reside in the memory201, and preferably execute on one or more CPUs203. Of note, one or more of the components inFIG.2may not be present in any specific implementation. For example, some embodiments may not provide other computer-readable media205or a display202. In some embodiments, the security manager device controller200includes an application program interface (“API”) that provides programmatic access to one or more functions of the security manager device controller200. Such an API may provide a programmatic interface to one or more functions of the security manager device controller200that may be invoked by one of the other programs230or some other module. In this manner, the API enables software, such as user interfaces, plug-ins and adapters to integrate functions of the security manager device controller200into desktop computer or mobile device applications, and the like. The API may be, in at least some embodiments, invoked or otherwise accessed via the security manager device controller200, or remote entities, to access various functions of the security manager device controller200. For example, a user may perform particular configurations of the security manager device118or remotely control the security manager device118via the API. In an example embodiment, components/modules of the security manager device controller200are implemented using standard programming techniques. 
For example, the operation rules stored in the operation rules store216and the various modules of the security manager device controller200may be implemented as a “native” executable running on the CPU203, along with one or more static or dynamic libraries. In other embodiments, the operation rules stored in the operation rules store216and the various modules of the security manager device controller200may be implemented as instructions processed by a virtual machine that executes as one of the other programs230. In general, a range of programming languages known in the art may be employed for implementing such example embodiments, including representative implementations of various programming language paradigms, including but not limited to, object-oriented (e.g., Java, C++, C#, Visual Basic.NET, Smalltalk, and the like), functional (e.g., ML, Lisp, Scheme, and the like), procedural (e.g., C, Pascal, Ada, Modula, and the like), scripting (e.g., Perl, Ruby, Python, JavaScript, VBScript, and the like), or declarative (e.g., SQL, Prolog, and the like). In a software or firmware implementation, instructions stored in a memory configure, when executed, one or more processors of the security manager device118to perform the functions of the security manager device controller200described herein. In one embodiment, instructions cause the CPU203or some other processor, such as an I/O controller/processor, to perform operations described herein and implement the functionality of the security manager device controller200described herein. Similarly, the CPU203or other processor may be configured to perform other operations such as to perform other network management, security and routing services. The embodiments described above may also use well-known or other synchronous or asynchronous client-server computing techniques. However, the various components may be implemented using more monolithic programming techniques as well; for example, as an executable running on a single CPU computer system, or alternatively decomposed using a variety of structuring techniques known in the art, including but not limited to, multiprogramming, multithreading, client-server, or peer-to-peer, and running on one or more computer systems each having one or more CPUs. Some embodiments may execute concurrently and asynchronously, and communicate using message passing techniques. Equivalent synchronous embodiments are also supported by a security manager device controller200implementation. Also, other functions could be implemented and/or performed by each component/module, and in different orders, and by different components/modules, yet still achieve the functions of the security manager device controller200. In addition, programming interfaces to the data stored as part of the security manager device controller200, can be available by standard mechanisms such as through C, C++, C#, and Java APIs; libraries for accessing files, databases, or other data repositories; scripting languages such as XML; or Web servers, FTP servers, or other types of servers providing access to stored data. The operation rules store216and other data store220may be implemented as one or more database systems, file systems, or any other technique for storing such information, or any combination of the above, including implementations using distributed computing techniques. Different configurations and locations of programs and data are contemplated for use with techniques described herein. 
A variety of distributed computing techniques are appropriate for implementing the components of the illustrated embodiments in a distributed manner including but not limited to TCP/IP sockets, remote procedure call (RPC), remote method invocation (RMI), HTTP, and Web Services (XML-RPC, JAX-RPC, SOAP, and the like). Other variations are possible. Other functionality could also be provided by each component/module, or existing functionality could be distributed amongst the components/modules in different ways, yet still achieve the functions of the security manager device controller200. Furthermore, in some embodiments, some or all of the components of the security manager device controller200may be implemented or provided in other manners, such as at least partially in firmware and/or hardware, including, but not limited to one or more application-specific integrated circuits (“ASICs”), standard integrated circuits, controllers (e.g., by executing appropriate instructions and including microcontrollers and/or embedded controllers), field-programmable gate arrays (“FPGAs”), complex programmable logic devices (“CPLDs”), and the like. Some or all of the system components and/or data structures may also be stored as contents (e.g., as executable or other machine-readable software instructions or structured data) on a computer-readable medium (e.g., as a hard disk; a memory; or other non-transitory computer-readable storage medium to be read by an appropriate drive or via an appropriate connection, such as a DVD, random access memory (RAM) or flash memory device) so as to enable or configure the computer-readable medium and/or one or more associated computing systems or devices to execute or otherwise use or provide the contents to perform at least some of the described techniques. A transitory computer-readable medium as used herein means a signal transmission itself (for example, a propagating electrical or electromagnetic signal itself) and not the hardware medium on which information is stored. Some or all of the system components and data structures may also be stored as data signals (e.g., by being encoded as part of a carrier wave or included as part of an analog or digital propagated signal) on a variety of computer-readable transmission mediums, which are then transmitted, including across wireless-based and wired/cable-based mediums, and may take a variety of forms (e.g., as part of a single or multiplexed analog signal, or as multiple discrete digital packets or frames). Such computer program products may also take other forms in other embodiments. Accordingly, embodiments of this disclosure may be practiced with other computer system configurations. FIG.3is a flow diagram of an example method300for a computer network security manager device, according to one non-limiting embodiment. At302, a security manager device, such as, for example, security manager device118shown inFIG.1, connects to a first wireless router, such as, for example, router136ofFIG.1. At304, the security manager device connecting to a plurality of devices. For example, the plurality of devices may be device A130, device B132and device C134ofFIG.1. At306, the security manager device performs device agnostic activation of the plurality of devices to enable the plurality of devices to perform respective functions of each device. At308, the security manager device prevents the plurality of devices from connecting directly to the first wireless router. 
At310, the security manager device allows other devices on the Internet to communicate with the plurality of devices according to firewall rules. At312, the security manager device receives an indication that the first wireless router to which the network security manager device is connected is out of service or no longer exists. At314, the security manager device, in response to receiving the indication that the first wireless router to which the network security manager device is connected is out of service or no longer exists, prevents other devices on the Internet from being able to communicate with the plurality of devices. FIG.4is a flow diagram of an example method400for a computer network security manager device upon Internet connectivity being restored, according to one non-limiting embodiment. At402, a security manager device such as, for example, security manager device118shown inFIG.1, receives an indication that a first wireless router to which the network security manager device is connected, such as, for example, router136ofFIG.1, is out of service or no longer exists. At404, the security manager device, after receiving the indication that the first wireless router to which the network security manager device is connected is out of service or no longer exists, receives an indication that the first wireless router to which the network security manager device is connected is back in service or that a new wireless router is connected to the computer network security manager device and is in service. At406, the security manager device, in response to the indication that the first wireless router to which the network security manager device is back in service or that a new wireless router is connected to the computer network security manager device and is in service to replace the first wireless router, allows other devices on the Internet to communicate with a plurality of devices connected to the computer network security manager according to firewall rules. For example, the plurality of devices may be device A130, device B132and device C134ofFIG.1. FIG.5is a flow diagram of an example method500for a computer network security manager device switching to a new router using the security manager device, according to one non-limiting embodiment. At502, a security manager device, such as, for example, security manager device118ofFIG.1, allows other devices on the Internet to communicate with a plurality of devices connected to the computer network security manager according to firewall rules. For example, the plurality of devices may be device A130, device B132and device C134ofFIG.1. At504, the security manager device receives an indication that a first wireless router, such as, for example, router136ofFIG.1, to which the network security manager device is connected is out of service or no longer exists. At506, the security manager device, in response to receiving the indication that the first wireless router to which the network security manager device is connected is out of service or no longer exists, prevents other devices on the Internet from being able to communicate with the plurality of devices. 
At508, the security manager device, despite that the first wireless router is out of service or no longer exists, keeps each network connection from the plurality of devices to the network security device manager in a manner that is unaffected by the first wireless router being down or no longer existing (other than the plurality of devices experiencing a temporary Internet service interruption). At510, the security manager device connects to a second wireless router to replace the first wireless router that is out of service or no longer exists. At512, the security manager device provides, via the connection to the second wireless router, Internet connectivity to the plurality of devices connected to the network security device manager without reconnection, reactivation or reconfiguration of the plurality of devices to obtain the Internet connectivity. As used herein, a “component” may refer to a device, physical entity or logic having boundaries defined by function or subroutine calls, branch points, application programming interfaces (APIs), or other technologies that provide for the partitioning or modularization of particular processing or control functions. Components may be combined via their interfaces with other components to carry out a machine process. A component may be a packaged functional hardware unit designed for use with other components and a part of a program that usually performs a particular function of related functions. Components may constitute either software components (e.g., code embodied on a machine-readable medium) or hardware components. Where a phrase similar to “at least one of A, B, or C,” “at least one of A, B, and C,” “one or more A, B, or C,” or “one or more of A, B, and C” is used, it is intended that the phrase be interpreted to mean that A alone may be present in an embodiment, B alone may be present in an embodiment, C alone may be present in an embodiment, or that any combination of the elements A, B and C may be present in a single embodiment; for example, A and B, A and C, B and C, or A and B and C. As used herein, the term “or” may be construed in either an inclusive or exclusive sense. Moreover, plural instances may be provided for resources, operations, or structures described herein as a single instance. Additionally, boundaries between various resources, operations, modules, engines, and data stores are somewhat arbitrary, and particular operations are illustrated in a context of specific illustrative configurations. Other allocations of functionality are envisioned and may fall within a scope of various embodiments of the present disclosure. In general, structures and functionality presented as separate resources in the example configurations may be implemented as a combined structure or resource. Similarly, structures and functionality presented as a single resource may be implemented as separate resources. These and other variations, modifications, additions, and improvements fall within a scope of embodiments of the present disclosure as represented by the appended claims. The specification and drawings are, accordingly, to be regarded in an illustrative rather than a restrictive sense. The various embodiments described above can be combined to provide further embodiments. These and other changes can be made to the embodiments in light of the above-detailed description. 
In general, in the following claims, the terms used should not be construed to limit the claims to the specific embodiments disclosed in the specification and the claims, but should be construed to include all possible embodiments along with the full scope of equivalents to which such claims are entitled. Accordingly, the claims are not limited by the disclosure. | 47,281 |
11943200 | DETAILED DESCRIPTION FIG. 1 shows an example system 100 for exchanging data using a virtual private network (VPN). The system 100 includes several client computer systems 102a-102c, a VPN server computer system 102d, and several internal computer systems 102e-102g that are communicatively coupled to one another through communications networks 104a and 104b. Further, the system 100 includes a VPN security system 150 having a computerized neural network 152 for detecting anomalous VPN sessions by the client computer systems 102a-102c. The communications networks 104a and 104b can be any communications networks through which data can be transferred and shared. For example, the communications networks 104a and 104b can be local area networks (LANs) or wide-area networks (WANs), such as the Internet. The communications networks 104a and 104b can be implemented using various networking interfaces, for instance wireless networking interfaces (such as Wi-Fi, Bluetooth, or infrared) or wired networking interfaces (such as Ethernet or serial connection). The communications networks 104a and 104b also can include combinations of more than one network, and can be implemented using one or more networking interfaces. The communications network 104a is a public communications network. As an example, the public communications network 104a can be the Internet. Further, the communications network 104b is a private communications network. For example, the private communications network 104b can be an internal LAN that is restricted for use to a limited subset of users, such as the employees of a particular company or organization. In some implementations, the private communications network 104b can be used to exchange sensitive information and/or provide functionality that is not intended to be accessible by the public. In some implementations, network traffic into and out of the private communications network 104b (for example, network traffic from the public communications network 104a) can be controlled by one or more firewalls, such that certain types of traffic are blocked from entering or leaving the private communications network 104b. In this example, the client computer systems 102a-102c are not directly connected to the private communications network 104b. However, the client computer systems 102a-102c can gain access to the private communications network 104b (and the internal computer systems 102e-102g that are connected to the private communications network 104b) by establishing respective VPN sessions with the VPN server computer system 102d using the public communications network 104a. As an example, each of the client computer systems 102a-102c can create a private network connection across one or more of the public network connections provided by the public communications network 104a, such that network traffic is "tunneled" between the private communications network 104b and the client computer systems 102a-102c. Data can be exchanged between the client computer systems 102a-102c and the private communications network 104b using the VPN sessions, as though the client computer systems 102a-102c were directly connected to the private communications network 104b. For example, using the VPN sessions, the client computer systems 102a-102c can transmit data intended for the private communications network 104b to the VPN server computer system 102d. Upon receipt of the data, the VPN server computer system 102d can route the data to the private communications network 104b (for example, to one or more of the internal computer systems 102e-102g).
As another example, the internal computer systems 102e-102g can transmit data intended for the client computer systems 102a-102c to the VPN server computer system 102d. Upon receipt of the data, the VPN server computer system 102d can route the data to the client computer systems 102a-102c using the VPN sessions. In some implementations, each of the VPN sessions can be encrypted, such that the data exchanged between the client computer systems 102a-102c and the private communications network 104b is not exposed to members of the public. Further, the VPN server computer system 102d can require that each of the client computer systems 102a-102c provide security credentials when establishing a VPN session. For example, the VPN server computer system 102d can require that each of the client computer systems 102a-102c provide a valid user name and password, a security certificate or token, or some other form of authentication, such that unauthorized users cannot access the private communications network 104b. However, in some implementations, malicious users may attempt to access the private communications network 104b using VPN sessions. For example, a malicious user may gain access to an authorized user's security credentials, and provide the security credentials to the VPN server computer system 102d in an attempt to establish a VPN session. As another example, a malicious user may attempt to compromise the private communications network 104b and/or the internal computer systems 102e-102g using a VPN session, such as by exploiting security vulnerabilities in the private communications network 104b and/or the internal computer systems 102e-102g. As another example, a malicious user may attempt to obtain sensitive information stored on the private communications network 104b and/or the internal computer systems 102e-102g. As another example, a malicious user may attempt to destroy information stored on the private communications network 104b and/or the internal computer systems 102e-102g. The VPN security system 150 is configured to detect anomalous VPN sessions, such as VPN sessions that are used by malicious users to perform malicious activities. For example, the VPN security system 150 can monitor each of the VPN sessions established by the client computer systems 102a-102c, and gather information regarding the characteristics of each of the VPN sessions. Further, using the computerized neural network 152, the VPN security system 150 can process the gathered information to determine the likelihood that the VPN sessions are associated with anomalous activity. In some implementations, the VPN security system 150 can automatically control VPN sessions based on the determination, such as automatically terminating VPN sessions that are likely to be associated with anomalous activity. In some implementations, the security system can present the processed data to a user to assist the user in manually controlling the VPN sessions. FIG. 2 shows example modules of the VPN security system 150. Each of the modules can be implemented by digital electronic circuitry, computer software, firmware, or hardware, or in combinations of one or more of them. As an example, some or all of the modules can be implemented using one or more computer systems (for example, the computer system 600 described with respect to FIG. 6). During operation, the VPN security system 150 obtains VPN session logs 202 regarding VPN sessions that are currently active or were previously active on the system 100.
For instance, the VPN session logs 202 can include information regarding the characteristics of each of the VPN sessions. Example information includes the time at which a VPN session began, a time at which a VPN session ended (if the VPN session has already ended), a name or identity of a user that initiated the VPN session, a network address associated with the VPN session (for example, an internet protocol (IP) address), and an amount of data transmitted during the VPN session. Further, the VPN security system 150 also obtains network security logs 204 regarding the communications networks of the system 100. For instance, the network security logs 204 can include information regarding attempts to access network resources on the communications networks, such as attempts made by the client computer systems during VPN sessions. As an example, according to a network security policy, a client computer system can be allowed to access network resources having certain network addresses (for example, IP addresses) and/or network ports, and can be prevented or blocked from accessing network resources having certain other network addresses and/or network ports. The network security logs 204 can indicate the time that each attempt occurred, the network resource for which access had been attempted, and the outcome of the attempt (for example, whether the attempt was successful or was blocked). In some implementations, the network security logs 204 can be retrieved from one or more firewalls or other security systems of the private communications network 104b. In some implementations, at least some of the VPN session logs 202 and network security logs 204 can be retrieved using one or more data buses 206. In some implementations, at least some of the VPN session logs 202 and network security logs 204 can be stored in a log management system 208 (for example, a database module having one or more data storage devices) for future retrieval. The VPN security system 150 also includes a VPN watchdog module 210 that determines a state or status of each of the VPN sessions based on the VPN session logs 202 and the network security logs 204. For example, using a query module 212, the VPN watchdog module 210 can periodically submit queries (for example, micro-batched queries) to the log management system 208, and obtain information regarding one or more of the VPN sessions that are currently active or were previously active on the system 100. Based on the information, the VPN watchdog module 210 can generate VPN session state data 214 indicating the current state or status of each of the VPN sessions. In some implementations, the VPN session state data 214 can indicate whether each VPN session is currently active (for example, connected and transmitting data) or inactive (for example, disconnected or terminated). In some implementations, the VPN session state data 214 can be stored in the form of a data or state table, and can be updated periodically by the VPN watchdog module 210 based on information retrieved from the log management system 208 and/or the data bus 206. In some implementations, the VPN watchdog module 210 can queue information regarding VPN sessions for further analysis by a session analyzer module 216. For example, for each of the VPN sessions on the system 100, the VPN watchdog module 210 can generate one or more data items that include information obtained from the VPN session logs 202 and network security logs 204 regarding that VPN session.
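The watchdog behavior described above, periodically querying the log management system 208, updating a session state table, and queueing ended sessions for analysis, can be sketched as follows. The field names (session_id, end_time) and the in-memory structures are illustrative assumptions, not the actual schema of the VPN session logs 202.

from collections import deque
from typing import Deque, Dict, Iterable

session_state: Dict[str, str] = {}     # session_id -> "active" / "inactive"
analysis_queue: Deque[dict] = deque()  # data items awaiting the session analyzer


def watchdog_pass(log_rows: Iterable[dict]) -> None:
    """One micro-batched pass over session log rows from the log management system."""
    for row in log_rows:
        sid = row["session_id"]
        ended = row.get("end_time") is not None
        new_state = "inactive" if ended else "active"
        previously = session_state.get(sid)
        session_state[sid] = new_state
        if ended and previously != "inactive":
            # Session just ended: hand its accumulated log data to the analyzer.
            analysis_queue.append(row)


watchdog_pass([
    {"session_id": "s-1", "user": "alice", "start_time": 1000, "end_time": None},
    {"session_id": "s-2", "user": "bob", "start_time": 900, "end_time": 1800},
])
assert session_state == {"s-1": "active", "s-2": "inactive"}
assert len(analysis_queue) == 1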
Further, the VPN watchdog module can transmit the data items to a queue module 218 (for example, a data buffer or data storage device) to await further analysis. In some implementations, the VPN watchdog module 210 can transmit data regarding a VPN session to the queue module 218 when the VPN session has ended (for example, when the VPN session state data 214 indicates that the VPN session is no longer active). In some implementations, the VPN watchdog module 210 can transmit data regarding a VPN session to the queue module 218 even while the VPN session is still active. The session analyzer module 216 retrieves the data items from the queue module 218 (for example, according to a particular priority or order), and processes the retrieved data items. In some implementations, the session analyzer module 216 can generate data records summarizing each of the VPN sessions. As an example, a data record for a VPN session can include one or more of the following:
An identity (for example, a user name) of the user that initiated the VPN session,
A network security policy associated with that user (for example, a "user policy"),
A VPN tunnel IP for the VPN session,
A start time of the VPN session,
An end time of the VPN session,
A duration of the VPN session,
An amount of data transferred during the VPN session (for example, a number of uploaded bytes and a number of downloaded bytes),
A list of unique combinations of destination network addresses and network ports to which connection attempts were allowed during the VPN session, and a number of allowed attempts to each of those network address and network port combinations, and
A list of unique combinations of destination network addresses and network ports to which connection attempts were blocked during the VPN session, and a number of blocked attempts to each of those network address and network port combinations.
Example lists of connections (with their associated network addresses and network ports) are shown below:
Allowed connection attempts to IP address:port combination, with count:
10.10.123.43:80, count=5
10.11.144.32:80, count=123
10.10.123.123:80, count=34
10.10.123.43:443, count=100
Blocked connection attempts to IP address:port combination, with count:
10.10.23.45:21, count=34
10.11.12.33:21, count=76
In this example, during a VPN session, five successful connection attempts were made to the destination IP address 10.10.123.43 over the network port 80, 123 successful connection attempts were made to the destination IP address 10.11.144.32 over the network port 80, and so forth. Further, during the VPN session, 34 connection attempts were blocked for the destination IP address 10.10.23.45 over the network port 21, and 76 connection attempts were blocked for the destination IP address 10.11.12.33 over the network port 21. In some implementations, the session analyzer module 216 can also retrieve data from the log management system 208 and/or data buses 206 using the query module 212 (including the VPN session logs 202 and the network security logs 204), and generate data records based on the retrieved data. The session analyzer module 216 transmits the data records to a feature selection module 220 for further processing. The feature selection module 220 sorts, correlates, and normalizes the data records regarding the VPN sessions according to one or more dimensions.
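A compact sketch of such a per-session data record is shown here, populated with the example allowed and blocked connection counts above; the names (SessionRecord, allowed, blocked, and so on) are hypothetical and chosen only for illustration.

from dataclasses import dataclass, field
from typing import Dict


@dataclass
class SessionRecord:
    user: str
    user_policy: str
    tunnel_ip: str
    start_time: float
    end_time: float
    uploaded_bytes: int
    downloaded_bytes: int
    allowed: Dict[str, int] = field(default_factory=dict)  # "ip:port" -> allowed attempt count
    blocked: Dict[str, int] = field(default_factory=dict)  # "ip:port" -> blocked attempt count

    @property
    def duration_s(self) -> float:
        return self.end_time - self.start_time


record = SessionRecord(
    user="alice", user_policy="regular", tunnel_ip="10.8.0.6",
    start_time=0.0, end_time=3600.0,
    uploaded_bytes=5_000_000, downloaded_bytes=42_000_000,
    allowed={"10.10.123.43:80": 5, "10.11.144.32:80": 123,
             "10.10.123.123:80": 34, "10.10.123.43:443": 100},
    blocked={"10.10.23.45:21": 34, "10.11.12.33:21": 76},
)

Records of roughly this shape are the input that the feature selection module 220 sorts, correlates and normalizes.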
For example, the feature selection module 220 can identify dimensions or "features" of data based on the data records output by the session analyzer module 216 and business logic data that includes information regarding the "user policies" associated with the users that initiated the VPN sessions. For instance, according to the business logic data, each user can be assigned a different set of permissions, depending on their role at a company or organization. For example, a "regular" user can be assigned a "regular" user policy with a first set of permissions, a system or network administrator can be assigned an "administrator" user policy with a second set of permissions, a manager can be assigned a "management" user policy with a third set of permissions, and so forth. In some implementations, the business logic data can indicate, for each user or group of users, the subnets of the private communications network 104b and associated network ports that the user or group of users has access to, the subnets of the private communications network 104b and associated network ports that the user or group of users does not have access to (for example, for which access is blocked by a firewall), the network ports that were utilized during a VPN session, and so forth. A subnet of the private communications network 104b can be, for example, a logical division, a portion, or a subset of the private communications network 104b, such as a range of IP addresses. In some implementations, the feature selection module 220 can select features pertaining to the network addresses and ports that a client computer is permitted to access during a VPN session (for example, in accordance with the permissions or user policy assigned to the user who initiated the VPN session). As an example, one of the features can include an indication of each subnet of the private communications network 104b that a user is permitted to access during a VPN session (for example, in accordance with the user policy associated with that user). Further, a feature can include an indication of the number of unique combinations of (i) destination network addresses (for example, IP addresses) in that subnet and (ii) network ports to which the user successfully attempted to connect during the VPN session. For instance, in the example list of connection attempts above, the feature for the subnet 10.10.123.0/24 can be three (for example, the user successfully connected to three unique combinations of IP addresses and port numbers in the subnet 10.10.123.0/24 during the VPN session). Further, in the example list of connection attempts above, the feature for the subnet 10.11.144.0/24 can be one (for example, the user successfully connected to one unique combination of IP address and port number in the subnet 10.11.144.0/24 during the VPN session). Further, in the example list of connection attempts above, the feature for the remaining subnets can be zero. As another example, one of the features can include, for each of the network ports, an indication of a number of unique destination network addresses to which the user successfully attempted to connect during the VPN session. For instance, in the example list of connection attempts above, the feature for the network port 80 can be three (for example, the user successfully connected to three unique IP addresses using the network port 80 during the VPN session).
Further, in the example list of connection attempts above, the feature for the network port 443 can be one (for example, the user successfully connected to one unique IP address using the network port 443 during the VPN session). Further, in the example list of connection attempts above, the feature for the remaining ports can be zero. As another example, a feature can include, for each network port, a percentage of successful connection attempts that occurred over that network port during a VPN session. For example, in the example list of connection attempts above, the feature for the network port 443 can be 38.16% (for example, 38.16% of the successful connection attempts occurred over the network port 443). Further, in the example list of connection attempts above, the feature for the network port 80 can be 61.84% (for example, 61.84% of the successful connection attempts occurred over the network port 80). Further, in the example list of connection attempts above, the feature for the remaining ports can be zero. As another example, a feature can include, for each network port, a number of unsuccessful attempts (for example, blocked attempts) that occurred over that network port during a VPN session. For example, in the example list of connection attempts above, the feature for the network port 21 can be 110 (for example, 110 unsuccessful connection attempts were made over the network port 21). Further, in the example list of connection attempts above, the feature for the remaining ports can be zero. As another example, a feature can include, for each network port, a number of unique network addresses to which the user unsuccessfully attempted to connect using that network port during the VPN session. For example, in the example list of connection attempts above, the feature for the network port 21 can be two (for example, the user unsuccessfully attempted to connect to two different IP addresses using the network port 21). Further, in the example list of connection attempts above, the feature for the remaining ports can be zero. In some implementations, the feature selection module220can select features pertaining to network traffic associated with a VPN session. As an example, a feature can include a number of unique network addresses and network ports to which the user successfully connected during a VPN session. As another example, a feature can include a percentage of blocked traffic during a VPN session (for example, a percentage of the total connection attempts that were blocked). As another example, a feature can include a number of allowed connection attempts during the VPN session. As another example, a feature can include a number of blocked connection attempts during the VPN session. As another example, a feature can include a rate at which connection attempts were allowed during the VPN session (for example, a number of allowed attempts per second). As another example, a feature can include a rate at which connection attempts were blocked during the VPN session (for example, a number of blocked attempts per second). As another example, a feature can include a rate at which data was uploaded from the client computer system to the VPN server during the VPN session. As another example, a feature can include a rate at which data was downloaded by the client computer system from the VPN server during the VPN session. As another example, a feature can include the amount of data that was uploaded from the client computer system to the VPN server during the VPN session. 
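As a concrete illustration of how several of the per-subnet, per-port, and traffic features described above might be derived from the allowed and blocked connection lists, consider the following minimal Python sketch. It reuses the hypothetical allowed/blocked mappings from the earlier example; the helper names and structure are assumptions:

```python
from collections import defaultdict
from ipaddress import ip_address, ip_network

allowed = {("10.10.123.43", 80): 5, ("10.11.144.32", 80): 123,
           ("10.10.123.123", 80): 34, ("10.10.123.43", 443): 100}
blocked = {("10.10.23.45", 21): 34, ("10.11.12.33", 21): 76}

def subnet_features(conns, subnets):
    """Number of unique (address, port) combinations per subnet."""
    feats = {}
    for subnet in subnets:
        net = ip_network(subnet)
        feats[str(net)] = sum(1 for (ip, _port) in conns if ip_address(ip) in net)
    return feats

def per_port_unique_addresses(conns):
    """Number of unique destination addresses per network port."""
    addrs = defaultdict(set)
    for ip, port in conns:
        addrs[port].add(ip)
    return {port: len(ips) for port, ips in addrs.items()}

def per_port_percentage(conn_counts):
    """Percentage of connection attempts that occurred over each network port."""
    total = sum(conn_counts.values())
    by_port = defaultdict(int)
    for (_ip, port), count in conn_counts.items():
        by_port[port] += count
    return {port: 100.0 * n / total for port, n in by_port.items()}

print(subnet_features(allowed, ["10.10.123.0/24", "10.11.144.0/24"]))  # {'10.10.123.0/24': 3, '10.11.144.0/24': 1}
print(per_port_unique_addresses(allowed))  # {80: 3, 443: 1}
print(per_port_percentage(allowed))        # {80: ~61.8, 443: ~38.2}
print(sum(blocked.values()))               # 110 blocked attempts, all over port 21
```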
As another example, a feature can include the amount of data that was downloaded by the client computer system from the VPN server during the VPN session. As another example, a feature can include a duration of the VPN session (for example, in seconds). In some implementations, the feature selection module220can collect historical information regarding several VPN sessions that were established over a period of time in the past (for example, in the previous one or more days, weeks, months, years, or some other time period). Further, the feature selection module220can periodically collect new information over time, and update the data records based on the newly collected information. At least some of the processed data records can be stored, for example, in a memory cache222and/or an intelligence database module224having one or more data storage devices for future retrieval. The VPN security system150also includes a vectorizer module226, a feature generator module228, a model handler module232, and a machine learning operational system234for training and utilizing the computerized neural network152. The vectorizer module226ingests the normalized data records provided by the feature selection module220and generates data vectors based on the data records. In some implementations, the vectorizer module226can retrieve at least some of the data records from the memory cache222and/or the intelligence database module224, and generate data vectors for the data records in real time or substantially real time. The vectorizer module226can generate data vectors representing the information contained in the data records using direct vectorization, meta-enhanced vectorization (for example, vectorization based on metadata), fuzzy vectorization, or a combination of vectorization techniques. For direct vectorization, the information contained in the data records is vectorized directly into vector data structures (for example, without the use of additional information or metadata). For meta-enhanced vectorization, metadata from known intelligence sources relating to or associated with known malicious activities or triggers can be incorporated into the features of the data records, and then vectorized into data vectors accordingly. For fuzzy vectorization, information contained in the data records can be vectorized into data vectors based on fuzzy searches that determine fuzzy associations with known malicious activities or behaviors. At least some of the data vectors can be stored, for example, in the memory cache222and/or an intelligence database module224for future retrieval. The feature generator module228ingests the data vectors from the vectorizer module226, and generates features for ingestion by the computerized neural network152to perform predictions. In some implementations, the feature generator module228can perform autoencoding to facilitate the generation of features. In some implementations, the output of the autoencoders can be used as a "final" set of features to train the computerized neural network152. In some implementations, the autoencoders themselves can be used for anomaly detection. Several types of autoencoders can be used to perform autoencoding, such as sparse autoencoders, denoising autoencoders, contractive autoencoders, and variational autoencoders. In a sparse autoencoder, a first set of features is "hidden" for training and a second set of features is "active" for training. 
Accordingly, the computerized neural network152can be trained based on a limited subset of features, rather than the entirety of the features. In a denoising autoencoder, data noise is removed from the data vectors, such as noise associated with corrupted data inputs or measurements that may negatively impact the training or use of the computerized neural network152. In a contractive autoencoder, a regularizer module is used to make the computerized neural network152more robust against variations in input data. In a variational autoencoder, a generative adversarial approach is performed by using a recognition model and a generative model that utilize the features of the data vectors to compute a directed probabilistic graphical model with loss and estimator. Any of these approaches can be performed to modify the input vectors from the vectorizer module226, or to utilize the trained autoencoders themselves as trained models. In some implementations, an automated and recursive technique can be used to find the best feature set and model pairs for optimal accuracy and precision. At least some of the generated features can be stored, for example, in the memory cache222and/or an intelligence database module224for future retrieval. Further, at least some of the generated features can be queued in a queue module230for further processing by a model handler module232. The model handler module232periodically or continuously queries the queue module230for one or more sets of features generated by the feature generator module228, and selects a computational model for generating and training the computerized neural network152based on the features. For example, the model handler module232can retrieve the sets of features from the queue module230, and analyze the features to select a computational model from among a pool of candidate computational models for generating the computerized neural network152. As an example, if the set of features is relatively small in size or relatively low in complexity, the model handler module232can select a computational model that does not rely on "deep learning" techniques, to reduce the expenditure of computational resources during the training process. As another example, if the set of features is relatively large in size or relatively complex, the model handler module232can select a computational model that utilizes deep learning techniques to better identify complex trends or correlations during the training process. Further, the configuration parameters can specify a desired accuracy and a set of variations that can be used to generate and train the computerized neural network152. The desired accuracy can be expressed, for example, as a number of false positives or false negatives, a rate of false positives or false negatives, a loss or residual value between predictions and "ground truth" data, a processing speed, a distance between anomalous and "normal" samples, a percentage of anomalous samples, or any other metric. The variations can include different sets of configurations to control the structure of the computerized neural network152and tunable "hyperparameters" that control the generating and training process. The configurations and hyperparameters can be adjusted to achieve a particular desired result or goal. 
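A rough sketch of the model selection logic described above might look like the following. The size and complexity thresholds, the candidate model names, and the hyperparameter values are purely illustrative assumptions:

```python
from dataclasses import dataclass

@dataclass
class ModelConfig:
    name: str
    hyperparameters: dict

def select_computational_model(feature_sets, max_simple_features=50, max_simple_samples=10_000):
    """Pick a candidate model based on the size/complexity of the feature sets.

    Small or simple feature sets map to a non-deep-learning model; larger or
    more complex feature sets map to a deep model. Thresholds are assumptions.
    """
    n_samples = len(feature_sets)
    n_features = len(feature_sets[0]) if feature_sets else 0

    if n_features <= max_simple_features and n_samples <= max_simple_samples:
        return ModelConfig(name="isolation_forest",
                           hyperparameters={"n_estimators": 100})
    return ModelConfig(name="deep_autoencoder",
                       hyperparameters={"hidden_layers": [64, 16, 64],
                                        "learning_rate": 1e-3,
                                        "epochs": 50})

# Example: 2,000 sessions, each described by 120 features -> deep model selected.
feature_sets = [[0.0] * 120 for _ in range(2_000)]
print(select_computational_model(feature_sets).name)  # deep_autoencoder
```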
For example, an administrator may adjust the configurations and hyperparameters to increase the number of nodes in the computerized neural network152if the calculated loss or residuals for the computerized neural network152are high (for example, indicating that the predictive value of the computerized neural network152is low). The machine learning operational system234uses the selected computational model to generate and train the computerized neural network152. As an example, the machine learning operational system234can train the computerized neural network152in accordance with the computational model based on the features generated by the feature generator module228, such that the computerized neural network152can recognize patterns or trends in input data that are indicative of an anomalous VPN session. An example training process is discussed in greater detail with reference toFIG.4. Further, the machine learning operational system234can receive data regarding new VPN sessions (for example, data from the VPN session logs202and the network security logs204regarding newly established VPN sessions), and input at least some of the data into the computerized neural network152for processing. In some implementations, the inputted data can have similar dimensions or "features" as the data that was used to generate and train the computerized neural network152, such that the computerized neural network152can be used to predict a particular result based on the inputted data. As an example, the inputted data can include one or more of the features described above. The output236of the computerized neural network152can indicate a security risk associated with the VPN session, such as a metric indicating a likelihood that the VPN session is associated with anomalous activity. The output236of the computerized neural network152can be stored, for example, in a memory cache238and/or a database module240having one or more data storage devices for future retrieval. Further, the VPN security system150can output information to a user that summarizes the generating, training, and use of the computerized neural network152. For example, an output module242can generate one or more alerts or reports244to a user summarizing the selection of a computational model, the performance of the selected computational model, and the set of configurations and hyperparameters that were used for the selected computational model. As another example, the output module242can generate one or more alerts or reports244that identify anomalous VPN sessions that were detected by the VPN security system150using the computerized neural network152. In some implementations, the alerts or reports244can be transmitted via email or using an application programming interface (API) for a third party system, such as a security incident and event management (SIEM) system or an incident response platform (IRP). As described above, in some implementations, a VPN security system150can perform one or more machine learning or artificial intelligence processes to identify patterns or trends in input data that are indicative of an anomalous VPN session, and to identify security risks based on those patterns or trends. For example, the computerized neural network152can be trained using historical data regarding previously established VPN sessions and/or synthetically generated data (for example, data regarding simulated VPN sessions). 
This training data can include information regarding the characteristics of each of these VPN sessions, and information regarding whether each of these VPN sessions had been associated with an anomalous activity (for example, an attempt by a malicious user to gain access to or otherwise compromise a private communication network). Accordingly, the VPN security system150can be trained to identify new security risks based on previously identified security risks or synthetically generated security risks. In some implementations, a machine learning process can be performed using one or more computerized neural networks152. A simplified example of a computerized neural network152is shown inFIG.3. The computerized neural network300includes several nodes302(often called "neurons") interconnected with one another by interconnections304. Further, the nodes302are arranged according to multiple layers, including an input layer306a, a hidden layer306b, and an output layer306c. The arrangement of the nodes302and the interconnections304between them represent a mathematical transformation of input data (for example, as received by the nodes of the input layer306a) into corresponding output data (for example, as output by the nodes of the output layer306c). In some implementations, the input data can represent one or more data points or "features" obtained by the VPN security system150, and the output data can represent one or more corresponding outcomes or decisions generated by the VPN security system150based on the input data. The nodes302of the input layer306areceive input values and output the received input values to respective nodes of the next layer of the computerized neural network300. In this example, the computerized neural network300includes several inputs i1, i2, i3, and i4, each of which receives a respective input value and outputs the received value to one or more of the nodes μx1, μx2, and μx3(for example, as indicated by the interconnections304). In some implementations, at least some of the information stored by the VPN security system150(for example, information regarding a particular VPN session) can be used as inputs for the nodes of the input layer306a. For example, at least some of the information stored by the VPN security system150can be expressed numerically (for example, assigned a numerical score or value), and input into the nodes of the input layer306a. Example inputs include information from the VPN session logs202, the network security logs204, the features selected by the feature selection module220, the data vectors generated by the vectorizer module226, and/or the features generated by the feature generator module228. The nodes of the hidden layer306breceive input values (for example, from the nodes of the input layer306aor nodes of other hidden layers), apply particular transformations to the received values, and output the transformed values to respective nodes of the next layer of the computerized neural network300(for example, as indicated by the interconnections304). In this example, the computerized neural network300includes several nodes μx1, μx2, and μx3, each of which receives respective input values from the nodes i1, i2, i3, and i4, applies a respective transformation to the received values, and outputs the transformed values to one or more of the nodes y1and y2. In some implementations, nodes of the hidden layer306bcan receive one or more input values, and transform the one or more received values according to a mathematical transfer function. 
As an example, the values that are received by a node can be used as input values in a particular transfer function, and the value that is output by the transfer function can be used as the output of the node. In some implementations, a transfer function can be a non-linear function. In some implementations, a transfer function can be a linear function. In some implementations, a transfer function can weight certain inputs differently than others, such that certain inputs have a greater influence on the output of the node than others. For example, in some implementations, a transfer function can weight each of the inputs by multiplying each of the inputs by a respective coefficient. Further, in some implementations, a transfer function can apply a bias to its output. For example, in some implementations, a transfer function can bias its output by a particular offset value. For instance, a transfer function of a particular node can be represented as:

Y = Σ_{i=1}^{n} (weight_i × input_i) + bias,

where weight_i is the weight that is applied to an input input_i, bias is a bias or offset value that is applied to the sum of the weighted inputs, and Y is the output of the node. The nodes of the output layer306creceive input values (for example, from the nodes of the hidden layer306b) and output the received values. In some implementations, nodes of the output layer306ccan also receive one or more input values, and transform the one or more received values according to a mathematical transfer function (for example, in a similar manner as the nodes of the hidden layer306b). As an example, the values that are received by a node can be used as input values in a particular transfer function, and the value that is output by the transfer function can be used as the output of the node. In some implementations, a transfer function can be a non-linear function. In some implementations, a transfer function can be a linear function. In this example, the computerized neural network300includes two output nodes y1and y2, each of which receives respective input values from the nodes μx1, μx2, and μx3, applies a respective transformation to the received values, and outputs the transformed values as outputs of the computerized neural network300. AlthoughFIG.3shows example nodes and example interconnections between them, this is merely an illustrative example. In practice, a computerized neural network can include any number of nodes that are interconnected according to any arrangement. Further, althoughFIG.3shows a computerized neural network300having a single hidden layer306b, in practice, a computerized neural network can include any number of hidden layers (for example, one, two, three, four, or more), or none at all. In some implementations, the computerized neural network152can be trained based on training data. An example process400for training a computerized neural network is shown inFIG.4. According to the process400, the VPN security system150receives training data (block402). For example, as described above, the training data can include historical or synthetic data regarding one or more VPN sessions. The data can include information regarding the characteristics of each of these VPN sessions. Further, the data can include information regarding whether each of these VPN sessions had been associated with an anomalous activity (for example, an attempt by a malicious user to gain access to or otherwise compromise a private communication network). 
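The weighted-sum transfer function above can be illustrated directly in code. The following minimal sketch implements one node and a tiny forward pass through the 4-3-2 layout of FIG. 3; the specific weights, biases, and the choice of a sigmoid non-linearity are illustrative assumptions:

```python
import math

def node_output(inputs, weights, bias, activation=lambda y: 1.0 / (1.0 + math.exp(-y))):
    """Y = sum(weight_i * input_i) + bias, optionally passed through an activation."""
    y = sum(w * x for w, x in zip(weights, inputs)) + bias
    return activation(y)

def forward(inputs, hidden_params, output_params):
    """One pass through a single hidden layer and an output layer."""
    hidden = [node_output(inputs, w, b) for w, b in hidden_params]
    return [node_output(hidden, w, b) for w, b in output_params]

# Four inputs (i1..i4), three hidden nodes, two outputs (y1, y2), as in FIG. 3.
inputs = [0.2, 0.7, 0.0, 1.0]
hidden_params = [([0.5, -0.3, 0.8, 0.1], 0.0),
                 ([-0.2, 0.4, 0.0, 0.6], 0.1),
                 ([0.9, 0.9, -0.5, 0.3], -0.2)]
output_params = [([1.0, -1.0, 0.5], 0.0),
                 ([-0.5, 0.7, 0.2], 0.1)]
print(forward(inputs, hidden_params, output_params))  # two values in (0, 1)
```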
This information can be used as the corresponding "ground truth" (for example, known outcomes given certain combinations of input data, or desired decisions by the VPN security system150given certain combinations of input data). In the event that the "ground truth" is not available, an autoencoder neural network can be trained to learn the normal users' VPN sessions. As an example, future vectorized VPN sessions can be inputted to the trained autoencoder. The difference between the input to the trained autoencoder and the autoencoder output can be calculated. The calculated value (for example, a deviation score) can represent how much the current VPN session being evaluated deviates from previous VPN sessions. VPN sessions with high deviation scores that exceed a threshold can be determined to be anomalous and can be terminated or reported for further analysis and investigation. The VPN security system150trains the neural network based on the training data (block404). For example, based on this training data, the VPN security system150can iteratively modify the arrangement of the nodes, the interconnections between the nodes, and the transfer functions of each of the nodes (for example, the weights, the biases, or other aspects of the transfer function) to increase the predictive value of the computerized neural network. For instance, the VPN security system150can iteratively perform these modifications, such that when the inputs of the training data are provided to the computerized neural network, the output of the computerized neural network better matches the "ground truth" indicated by the training data. After training the computerized neural network, the VPN security system150applies test data (also referred to as "validation data") to the trained neural network (block406). As an example, the VPN security system150can reserve a portion of the training data as test data, such that it is not used for training the computerized neural network in block404. After the computerized neural network has been trained in block404, the VPN security system150can apply the test data as inputs to the trained neural network, and determine how well the neural network predicts the security risk based on the test data. The VPN security system150can calculate an error between (i) the security risk determined by the neural network based on the test data, and (ii) the known security risk specified by the test data (block408). If the error is sufficiently high (for example, greater than a threshold error value), the VPN security system150can re-train the neural network (for example, by modifying the arrangement of the nodes, the interconnections between the nodes, and the transfer functions of one or more of the nodes) (block404). In some implementations, the VPN security system150can re-train the network by obtaining additional training data, and using the additional training data to re-train the neural network. If the error is sufficiently low (for example, less than or equal to the threshold error value), the VPN security system150can apply newly acquired sample data to the trained neural network (block410). The newly acquired sample data can include, for example, information obtained during an operation of the system100, such as when new VPN sessions are established by the client computer devices102a-102c. Accordingly, the VPN security system150can be trained to identify new security risks based on previously identified security risks or synthetically generated security risks. 
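The autoencoder-based deviation score described above can be sketched as follows. This minimal illustration uses scikit-learn's MLPRegressor as a stand-in autoencoder trained to reconstruct feature vectors of "normal" sessions, with the reconstruction error serving as the deviation score; the architecture, the simulated data, and the 99th-percentile threshold are assumptions:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Feature vectors for "normal" historical VPN sessions (e.g., the per-port and
# per-subnet features discussed earlier), simulated here for illustration.
normal_sessions = rng.normal(loc=0.0, scale=1.0, size=(500, 12))

# Train an autoencoder: the network learns to reconstruct its own input.
autoencoder = MLPRegressor(hidden_layer_sizes=(4,), activation="relu",
                           max_iter=5000, random_state=0)
autoencoder.fit(normal_sessions, normal_sessions)

def deviation_score(session_vector):
    """Reconstruction error: how far the session is from learned 'normal' behavior."""
    reconstruction = autoencoder.predict(session_vector.reshape(1, -1))[0]
    return float(np.mean((session_vector - reconstruction) ** 2))

# Assumed cutoff: the 99th percentile of deviation scores on normal sessions.
threshold = np.percentile([deviation_score(v) for v in normal_sessions], 99)

new_session = rng.normal(loc=6.0, scale=1.0, size=12)   # clearly atypical session
if deviation_score(new_session) > threshold:
    print("anomalous session: terminate or report for investigation")
else:
    print("session appears normal")
```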
As described above, in some implementations, the VPN security system150can be iteratively trained and re-trained with successive sets of training data (for example, additional sets of training data that are collected over time) to progressively improve its accuracy in identifying security risks. In some implementations, this training process can be performed automatically by the VPN security system150without manual user input. In some implementations, the output of a computerized neural network can be a security metric, the value of which represents a security risk associated with a particular VPN session. For example, the value of the security metric can represent a likelihood or probability that a particular VPN session is associated with a malicious activity. In some implementations, if the security metric for a particular VPN session is sufficiently high (for example, exceeds a threshold value), the VPN security system150can automatically terminate the VPN session or notify an administrator regarding the risk. Further, if the security metric for a particular VPN session is sufficiently low (for example, is less than or equal to the threshold value), the VPN security system150can allow the VPN session to continue. In some implementations, the security metric can be expressed as one or more numerical values. The value of the security metric can be determined based on one or more of the characteristics described in this disclosure. For example, when a VPN session is established, information regarding the characteristics of the VPN session can be used as inputs in a computerized neural network. Further, the output of the computerized neural network can be a numerical value that represents a security risk associated with the VPN session. As an example, a higher value can correspond to a higher security risk, whereas a lower value can correspond to a lower security risk. Based on the output, the VPN security system150can selectively allow the VPN session to continue, or terminate the VPN session. For example, if the security metric is higher than a particular threshold value (for example, indicating that the security risk is sufficiently high), the VPN security system150can selectively terminate the VPN session.

Example Processes

An example process500for detecting anomalous virtual private network sessions using machine learning is shown inFIG.5. In some implementations, the process500can be performed by the VPN security systems described in this disclosure (for example, the VPN security system150shown and described with reference toFIGS.1and2) using one or more processors (for example, using the processor or processors610shown inFIG.6). In the process500, one or more processors obtain first data indicating a plurality of properties of a first virtual private network (VPN) session by a computer system on a communications network (block502). In some implementations, the first data can be obtained by the one or more processors subsequent to the termination of the VPN session. 
The properties of the first VPN session include (i) for each of a plurality of first subnets of the communications network, a number of allowed connection attempts by the computer system to that first subnet during the first VPN session, (ii) for each of a plurality of second subnets of the communication network, a number of blocked connection attempts by the computer system to that second subnet during the first VPN session, (iii) for each of a plurality of first network ports, a number of allowed connection attempts by the computer system using that first network port during the first VPN session, and (iv) for each of a plurality of second network ports, a number of blocked connection attempts by the computer system using that second network port during the first VPN session. In some implementations, the first data can indicate additional properties of the first VPN session, either instead of or in addition to those described above. For example, the properties of the first VPN session can include a number of unique destinations for network traffic transmitted by the computer system during the first VPN session, where each destination is represented by a respective network address and a respective network port. As another example, the properties of the first VPN session can include a percentage of network traffic by the computer system that was blocked during the first VPN session. As another example, the properties of the first VPN session can include an amount of network traffic by the computer system that was allowed during the first VPN session, and an amount of network traffic by the computer system that was blocked during the first VPN session. As another example, the properties of the first VPN session can include a rate at which network traffic by the computer system was allowed during the first VPN session, and a rate at which network traffic by the computer system was blocked during the first VPN session. As another example, the properties of the first VPN session can include a time duration of the first VPN session. As another example, the properties of the first VPN session can include an upload transmission rate by the computer system during the first VPN session, and a download transmission rate by the computer system during the first VPN session. As another example, the properties of the first VPN session can include an amount of data uploaded by the computer system during the first VPN session, and an amount of data downloaded by the computer system during the first VPN session. A metric for the first VPN session is determined using a computerized neural network implemented by the one or more processors and based on the first data (block504). The metric represents an estimated likelihood that the first VPN session is associated with a malicious activity. In some implementations, the malicious activity can include accessing the communications network by an unauthorized user. The one or more processors control the first VPN session based on the metric (block506). In some implementations, controlling the first VPN session can include terminating the first VPN session. In some implementations, controlling the first VPN session can include generating a notification to a user indicating that the first VPN session is likely to be associated with the malicious activity. 
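Putting blocks 502-506 together, a compact sketch of the overall flow might look like the following. The property names, the toy risk model, and the terminate/notify helpers are hypothetical placeholders standing in for the trained computerized neural network and the session control actions:

```python
def terminate_session(session_id):
    print(f"terminating VPN session {session_id}")           # hypothetical helper

def notify_admin(session_id, metric):
    print(f"alert: session {session_id} risk={metric:.2f}")  # hypothetical helper

class ToyRiskModel:
    def predict(self, features):
        # Placeholder for the trained computerized neural network.
        return min(1.0, sum(features) / 1000.0)

def process_vpn_session(session_id, session_properties, risk_model, threshold=0.8):
    """Blocks 502-506: obtain properties, compute the metric, control the session."""
    # Block 502: first data indicating properties of the first VPN session
    # (allowed/blocked counts per subnet and per port, traffic volumes, duration, ...).
    features = [session_properties[name] for name in sorted(session_properties)]

    # Block 504: metric estimating the likelihood of malicious activity.
    metric = risk_model.predict(features)

    # Block 506: control the session based on the metric.
    if metric > threshold:
        terminate_session(session_id)
        notify_admin(session_id, metric)
    return metric

props = {"allowed_port_80": 162, "allowed_port_443": 100,
         "blocked_port_21": 110, "duration_seconds": 3600}
process_vpn_session("session-42", props, ToyRiskModel())
```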
In some implementations, the process500can also include training the computerized neural network based on second data indicating a plurality of properties of additional VPN sessions on the communications network. The second data can include, for each of the additional VPN sessions, an indication of whether that additional VPN session was associated with a malicious activity. In some implementations, the computerized neural network is selected from among a plurality of candidate computerized neural networks based on a data size of the second data. In some implementations, training the computerized neural network can include processing the second data using one or more autoencoders. Example autoencoders include a sparse autoencoder, a denoising autoencoder, a contractive autoencoder, and a variational autoencoder. In some implementations, training the computerized neural network can include generating one or more data vectors based on the second data. The computerized neural network can be trained based on the one or more data vectors. Additional details regarding the training of a computerized neural network are described, for example, with reference toFIGS.2-4.

Example Systems

Some implementations of the subject matter and operations described in this specification can be implemented in digital electronic circuitry, or in computer software, firmware, or hardware, including the structures disclosed in this specification and their structural equivalents, or in combinations of one or more of them. For example, in some implementations, one or more components of the system100and the modules of the VPN security system150can be implemented using digital electronic circuitry, or in computer software, firmware, or hardware, or in combinations of one or more of them. In another example, the process500shown inFIG.5can be implemented using digital electronic circuitry, or in computer software, firmware, or hardware, or in combinations of one or more of them. Some implementations described in this specification can be implemented as one or more groups or modules of digital electronic circuitry, computer software, firmware, or hardware, or in combinations of one or more of them. Although different modules can be used, each module need not be distinct, and multiple modules can be implemented on the same digital electronic circuitry, computer software, firmware, or hardware, or combination thereof. Some implementations described in this specification can be implemented as one or more computer programs, that is, one or more modules of computer program instructions, encoded on a computer storage medium for execution by, or to control the operation of, data processing apparatus. A computer storage medium can be, or can be included in, a computer-readable storage device, a computer-readable storage substrate, a random or serial access memory array or device, or a combination of one or more of them. Moreover, while a computer storage medium is not a propagated signal, a computer storage medium can be a source or destination of computer program instructions encoded in an artificially generated propagated signal. The computer storage medium can also be, or be included in, one or more separate physical components or media (for example, multiple CDs, disks, or other storage devices). 
The term “data processing apparatus” encompasses all kinds of apparatus, devices, and machines for processing data, including by way of example a programmable processor, a computer, a system on a chip, or multiple ones, or combinations, of the foregoing. The apparatus can include special purpose logic circuitry, for example, an FPGA (field programmable gate array) or an ASIC (application specific integrated circuit). The apparatus can also include, in addition to hardware, code that creates an execution environment for the computer program in question, for example, code that constitutes processor firmware, a protocol stack, a database management system, an operating system, a cross-platform runtime environment, a virtual machine, or a combination of one or more of them. The apparatus and execution environment can realize various different computing model infrastructures, such as web services, distributed computing and grid computing infrastructures. A computer program (also known as a program, software, software application, script, or code) can be written in any form of programming language, including compiled or interpreted languages, declarative or procedural languages. A computer program may, but need not, correspond to a file in a file system. A program can be stored in a portion of a file that holds other programs or data (for example, one or more scripts stored in a markup language document), in a single file dedicated to the program in question, or in multiple coordinated files (for example, files that store one or more modules, sub programs, or portions of code). A computer program can be deployed to be executed on one computer or on multiple computers that are located at one site or distributed across multiple sites and interconnected by a communication network. Some of the processes and logic flows described in this specification can be performed by one or more programmable processors executing one or more computer programs to perform actions by operating on input data and generating output. The processes and logic flows can also be performed by, and apparatus can also be implemented as, special purpose logic circuitry, for example, an FPGA (field programmable gate array) or an ASIC (application specific integrated circuit). Processors suitable for the execution of a computer program include, by way of example, both general and special purpose microprocessors, and processors of any kind of digital computer. Generally, a processor will receive instructions and data from a read only memory or a random access memory or both. A computer includes a processor for performing actions in accordance with instructions and one or more memory devices for storing instructions and data. A computer can also include, or be operatively coupled to receive data from or transfer data to, or both, one or more mass storage devices for storing data, for example, magnetic, magneto optical disks, or optical disks. However, a computer need not have such devices. Devices suitable for storing computer program instructions and data include all forms of non-volatile memory, media and memory devices, including by way of example semiconductor memory devices (for example, EPROM, EEPROM, AND flash memory devices), magnetic disks (for example, internal hard disks, and removable disks), magneto optical disks, and CD-ROM and DVD-ROM disks. The processor and the memory can be supplemented by, or incorporated in, special purpose logic circuitry. 
To provide for interaction with a user, operations can be implemented on a computer having a display device (for example, a monitor, or another type of display device) for displaying information to the user. The computer can also include a keyboard and a pointing device (for example, a mouse, a trackball, a tablet, a touch sensitive screen, or another type of pointing device) by which the user can provide input to the computer. Other kinds of devices can be used to provide for interaction with a user as well. For example, feedback provided to the user can be any form of sensory feedback, such as visual feedback, auditory feedback, or tactile feedback. Input from the user can be received in any form, including acoustic, speech, or tactile input. In addition, a computer can interact with a user by sending documents to and receiving documents from a device that is used by the user. For example, a computer can send webpages to a web browser on a user's client device in response to requests received from the web browser. A computer system can include a single computing device, or multiple computers that operate in proximity or generally remote from each other and typically interact through a communication network. Examples of communication networks include a local area network (“LAN”) and a wide area network (“WAN”), an inter-network (for example, the Internet), a network including a satellite link, and peer-to-peer networks (for example, ad hoc peer-to-peer networks). A relationship of client and server can arise by virtue of computer programs running on the respective computers and having a client-server relationship to each other. FIG.6shows an example computer system600that includes a processor610, a memory620, a storage device630and an input/output device640. Each of the components610,620,630and640can be interconnected, for example, by a system bus650. The processor610is capable of processing instructions for execution within the system600. In some implementations, the processor610is a single-threaded processor, a multi-threaded processor, or another type of processor. The processor610is capable of processing instructions stored in the memory620or on the storage device630. The memory620and the storage device630can store information within the system600. The input/output device640provides input/output operations for the system600. In some implementations, the input/output device640can include one or more of a network interface device, for example, an Ethernet card, a serial communication device, for example, an RS-232 port, or a wireless interface device, for example, an 802.11 card, a 3G wireless modem, a 4G wireless modem, or a 5G wireless modem, or both. In some implementations, the input/output device can include driver devices configured to receive input data and send output data to other input/output devices, for example, keyboard, printer and display devices660. In some implementations, mobile computing devices, mobile communication devices, and other devices can be used. While this specification contains many details, these should not be construed as limitations on the scope of what may be claimed, but rather as descriptions of features specific to particular examples. Certain features that are described in this specification in the context of separate implementations can also be combined. Conversely, various features that are described in the context of a single implementation can also be implemented in multiple embodiments separately or in any suitable sub-combination. 
A number of embodiments have been described. Nevertheless, various modifications can be made without departing from the spirit and scope of the invention. Accordingly, other embodiments are within the scope of the claims. | 58,369 |
11943201 | DETAILED DESCRIPTION For the purposes of promoting an understanding of the principles of the present disclosure, reference will now be made to the aspects illustrated in the drawings, and specific language may be used to describe the same. It will nevertheless be understood that no limitation of the scope of the disclosure is intended. Any alterations and further modifications to the described devices, instruments, methods, and any further application of the principles of the present disclosure are fully contemplated as would normally occur to one skilled in the art to which the disclosure relates. In particular, it is fully contemplated that the features, components, and/or steps described with respect to one aspect may be combined with the features, components, and/or steps described with respect to other aspects of the present disclosure. For the sake of brevity, however, the numerous iterations of these combinations may not be described separately. For simplicity, in some instances the same reference numbers are used throughout the drawings to refer to the same or like parts. FIG.1is an illustration of an example system100associated with an authentication procedure in a VPN, according to various aspects of the present disclosure. Example 100 shows an architectural depiction of components included in system100. In some aspects, the components may include a user device102capable of communicating with a VPN service provider (VSP) control infrastructure104and with one or more VPN servers120over a network114. The VSP control infrastructure104may be controlled by a VPN service provider and may include an application programming interface (API)106, a user database108, processing unit110, a server database112, and the one or more VPN servers120. As shown inFIG.1, the API106may be capable of communicating with the user database108and with the processing unit110. Additionally, the processing unit110may be capable of communicating with the server database, which may be capable of communicating with a testing module (not shown). The testing module may be capable of communicating with the one or more VPN servers120over the network114. The processing unit110may be capable of configuring and controlling operation of the one or more VPN servers120. As further shown inFIG.1, VPN server N120may be configured to communicate with an authentication server118over a network116. Other VPN servers, from among the one or more VPN servers120, may also be configured to communicate with the authentication server118in a similar and/or analogous manner. The processing unit110may be capable of configuring and controlling operation of the authentication server118. In some aspects, the network116may be similar to network114. The user device102may be a physical computing device capable of hosting a VPN application and of connecting to the network114. The user device102may be, for example, a laptop, a mobile phone, a tablet computer, a desktop computer, a smart device, a router, or the like. In some aspects, the user device102may include, for example, Internet-of-Things (IoT) devices such as VSP smart home appliances, smart home security systems, autonomous vehicles, smart health monitors, smart factory equipment, wireless inventory trackers, biometric cyber security scanners, or the like. The network114may be any digital telecommunication network that permits several nodes to share and access resources. 
In some aspects, the network114may include one or more of, for example, a local-area network (LAN), a wide-area network (WAN), a campus-area network (CAN), a metropolitan-area network (MAN), a home-area network (HAN), Internet, Intranet, Extranet, and Internetwork. The VSP control infrastructure104may include a combination of hardware and software components that enable provision of VPN services to the user device102. The VSP control infrastructure104may interface with (the VPN application on) the user device102via the API106, which may include one or more endpoints to a defined request-response message system. In some aspects, the API106may be configured to receive, via the network114, a connection request from the user device102to establish a VPN connection with a VPN server120. The connection request may include an authentication request to authenticate the user device102and/or a request for an IP address of an optimal VPN server for establishment of the VPN connection therewith. In some aspects, an optimal VPN server may be a single VPN server120or a combination of one or more VPN servers120. The API106may receive the authentication request and the request for an IP address of an optimal VPN server in a single connection request. In some aspects, the API106may receive the authentication request and the request for an IP address of an optimal VPN server in separate connection requests. The API106may further be configured to handle the connection request by mediating the authentication request. For instance, the API106may receive from the user device102credentials including, for example, a unique combination of a user ID and password for purposes of authenticating the user device102. In another example, the credentials may include a unique validation code known to an authentic user. The API106may provide the received credentials to the user database108for verification. The user database108may include a structured repository of valid credentials belonging to authentic users. In one example, the structured repository may include one or more tables containing valid unique combinations of user IDs and passwords belonging to authentic users. In another example, the structured repository may include one or more tables containing valid unique validation codes associated with authentic users. The VPN service provider may add, delete, and/or modify such valid unique combinations of user IDs and passwords from the structured repository. Based at least in part on receiving the credentials from the API106, the user database108and a processor (e.g., the processing unit110or another local or remote processor) may verify the received credentials by matching the received credentials with the valid credentials stored in the structured repository. In some aspects, the user database108and the processor may authenticate the user device102when the received credentials match at least one of the valid credentials. In this case, the VPN service provider may enable the user device102to obtain VPN services. When the received credentials fail to match at least one of the valid credentials, the user database108and the processor may fail to authenticate the user device102. In this case, the VPN service provider may decline to provide VPN services to the user device102. When the user device102is authenticated, the user device102may initiate a VPN connection and may transmit to the API106a request for an IP address of an optimal VPN server. 
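A minimal sketch of the credential check against the structured repository described above might look like the following. The table layout and the use of salted password hashes are assumptions; the disclosure only requires matching received credentials against stored valid credentials:

```python
import hashlib
import hmac
import os
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE valid_credentials (user_id TEXT PRIMARY KEY, salt BLOB, pw_hash BLOB)")

def add_user(user_id: str, password: str):
    salt = os.urandom(16)
    pw_hash = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    db.execute("INSERT INTO valid_credentials VALUES (?, ?, ?)", (user_id, salt, pw_hash))

def authenticate(user_id: str, password: str) -> bool:
    """Return True only if the received credentials match a stored valid credential."""
    row = db.execute("SELECT salt, pw_hash FROM valid_credentials WHERE user_id = ?",
                     (user_id,)).fetchone()
    if row is None:
        return False
    salt, stored_hash = row
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return hmac.compare_digest(candidate, stored_hash)

add_user("jdoe", "correct horse battery staple")
print(authenticate("jdoe", "correct horse battery staple"))  # True -> VPN services enabled
print(authenticate("jdoe", "wrong password"))                # False -> services declined
```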
The processing unit110included in the VSP control infrastructure may be configured to determine/identify a single VPN server120as the optimal server or a list of VPN servers. The processing unit110may utilize the API106to transmit the IP address of the optimal server or IP addresses of the VPN servers120included in the list to the user device102. In the case where the list of IP addresses of the VPN servers120is provided, the user device102may have an option to select a single VPN server120from among the listed VPN servers as the optimal server120. In some aspects, the processing unit110may be a logical unit including a scoring engine. The processing unit110may include a logical component configured to perform complex operations to compute numerical weights related to various factors associated with the VPN servers120. The scoring engine may likewise include a logical component configured to perform arithmetical and logical operations to compute a server penalty score for one or more of the VPN servers120. In some aspects, based at least in part on server penalty scores calculated utilizing the complex operations and/or the arithmetical and logical operations, the processing unit110may determine an optimal VPN server. In one example, the processing unit110may determine the VPN server120with the lowest server penalty score as the optimal VPN server. In another example, the processing unit110may determine the list of optimal VPN servers by including, for example, three (or any other number) VPN servers120with the three lowest server penalty scores. The user device102may transmit to the optimal VPN server an initiation request to establish a VPN connection (e.g., an encrypted tunnel) with the optimal VPN server. The optimal VPN server with which the user device establishes the encrypted tunnel may be referred to as a primary VPN server or an entry VPN server. Based at least in part on receiving the initiation request, the optimal VPN server may conduct a VPN authentication with the authentication server118to authenticate the user device102as a device that may receive the VPN services from the optimal VPN server. When the VPN authentication is successful, the optimal VPN server may proceed to provide the VPN services to the user device102. Alternatively, when the VPN authentication fails, the optimal VPN server may refrain from providing the VPN services to the user device102and/or may communicate with the user device102to obtain additional information to authenticate the user device102. In some aspects, a VPN server120may include a piece of physical or virtual computer hardware and/or software capable of securely communicating with (the VPN application on) the user device102for provision of VPN services. Similarly, the authentication server118may include a piece of physical or virtual computer hardware and/or software capable of securely communicating with one or more VPN servers120for provision of authentication services. 
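The penalty-score-based selection performed by the scoring engine might be sketched as follows. The penalty factors, their weights, and the candidate server data are illustrative assumptions:

```python
PENALTY_WEIGHTS = {"load": 0.6, "latency_ms": 0.3, "distance_km": 0.1}  # assumed factors

def server_penalty_score(server):
    """Weighted penalty score for a VPN server: lower is better."""
    return sum(weight * server[factor] for factor, weight in PENALTY_WEIGHTS.items())

def pick_optimal_servers(servers, count=1):
    """Return the `count` candidate servers with the lowest penalty scores."""
    return sorted(servers, key=server_penalty_score)[:count]

candidate_servers = [
    {"ip": "203.0.113.10", "load": 0.7, "latency_ms": 40, "distance_km": 1200},
    {"ip": "203.0.113.20", "load": 0.3, "latency_ms": 25, "distance_km": 300},
    {"ip": "203.0.113.30", "load": 0.9, "latency_ms": 80, "distance_km": 5000},
]
print(pick_optimal_servers(candidate_servers)[0]["ip"])  # 203.0.113.20
```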
For instance, the one or more components of the set of components may include or may be included in a controller/processor, a memory, or a combination thereof. In some aspects, the one or more of the components included in the VSP control infrastructure104may be separate and distinct from each other. Alternatively, in some aspects, one or more of the components included in the VSP control infrastructure104may be combined with one or more other components included in the VSP control infrastructure104. In some aspects, the one or more of the components included in the VSP control infrastructure104may be local with respect to each other. Alternatively, in some aspects, one or more of the components included in the VSP control infrastructure104may be located remotely with respect to one or more other components included in the VSP control infrastructure104. Additionally, or alternatively, one or more of the components included in the VSP control infrastructure104may be implemented at least in part as software stored in a memory. For example, a component (or a portion of a component) may be implemented as instructions or code stored in a non-transitory computer-readable medium and executable by a controller or a processor to perform the functions or operations of the component. Additionally, or alternatively, a set of (one or more) components shown inFIG.1may be configured to perform one or more functions described as being performed by another set of components shown inFIG.1. As indicated above,FIG.1is provided as an example. Other examples may differ from what is described with regard toFIG.1. A user device may request VPN services from a VSP control infrastructure. To request the VPN services, the user device may transmit, utilizing an installed client application, a connection request and/or a user authentication request to an API associated with the VSP control infrastructure. Thereafter, the user device may undergo a user authentication process involving the API and a database associated with the VSP control infrastructure. Once authenticated, the VSP control infrastructure may determine a VPN server for providing the VPN services to the user device. The user device may utilize the client application to transmit an initiation request for establishing a VPN connection with the VPN server. Based at least in part on receiving the initiation request, the VPN server may communicate with the user device to establish the VPN connection and provide the VPN services. Prior to communicating with the user device, the VPN server may conduct a VPN authentication with an authentication server. In an example, the VPN server may communicate with the authentication server to authenticate credentials associated with the user device. In another example, the VPN server may communicate with the authentication server to authenticate, for example, a VPN protocol to be utilized during the VPN connection with the user device. To conduct the VPN authentication, the VPN server and the authentication server may communicate data utilizing a remote authentication dial-in user service (RADIUS) protocol. Such data may include private information associated with the user device including, for example, a username and password, an Internet protocol (IP) address, an identity of a user of the user device, a location of the user device, or the like. 
In some cases, the private information associated with the user device may become compromised during the communication between the VPN server and the authentication server. For instance, the communication between the VPN server and the authentication server may be unencrypted and take place over the open Internet. Even when encryption is used, only limited information (e.g., the password) may be encrypted. On the open Internet, the communication may be monitored and/or intercepted by a malicious third party. Such monitoring and/or interception may allow the malicious third party to discover and track the private information associated with the user. As a result, the private information associated with the user device may become compromised. Also, the communication between the VPN server and the authentication server may result in inefficient utilization of resources. For instance, the communication may involve establishment and maintenance of a constant transmission control protocol (TCP) session between the VPN server and the authentication server for conducting VPN authentications associated with a plurality of user devices requesting VPN services from the VPN server. Alternatively, the VPN server and the authentication server may establish respective TCP sessions to conduct VPN authentications for every user device requesting the VPN services. In either situation, the VPN server and the authentication server may have to inefficiently expend a threshold amount of resources (e.g., computational resources, network bandwidth, management resources, processing resources, memory resources, power consumption, or the like) that may otherwise be utilized for performing more suitable tasks associated with providing the VPN services. Also, communicating utilizing the TCP sessions may introduce a delay because the authentication server may be responsible for participating in VPN authentications with respect to a plurality of VPN servers serving a plurality of user devices, and may take a threshold amount of time to communicate with the VPN server. For instance, the authentication server may take a threshold amount of time to transmit information to the VPN server and/or to respond to information received from the VPN server. As a result, a delay may be introduced in providing the VPN services to the user device. Various aspects of systems and techniques discussed in the present disclosure provide an authentication procedure in a VPN. In some aspects, a VSP control infrastructure may configure a VPN server and/or an authentication server to utilize the authentication procedure to conduct VPN authentication for authenticating a user device requesting VPN services. The authentication procedure may include utilizing predetermined encryption and decryption algorithms to conduct the VPN authentication. In an example, the VPN server may utilize a predetermined encryption algorithm to encrypt an entire initial authentication packet to determine an encrypted authentication packet to be transmitted to the authentication server. In an example, the VPN server may encrypt a plurality of fields included in the initial authentication packet, one or more of the plurality of fields including private information associated with the user device. The authentication server may utilize a predetermined decryption algorithm to decrypt the encrypted portion of the encrypted authentication packet. 
Further, the authentication server may analyze the decrypted data to determine a result of the VPN authentication associated with the user device. In this way, by utilizing the authentication procedure discussed herein, the VPN server and the authentication server may deter monitoring and tracking of the private information by a malicious third party, thereby mitigating instances of the private information associated with the user device becoming compromised. Also, by utilizing the authentication procedure, the VPN server and the authentication server may avoid having to establish and maintain a constant TCP connection or respective TCP connections, thereby enabling efficient utilization of server resources (e.g., computational resources, network bandwidth, management resources, processing resources, memory resources, power consumption, or the like) for performing more suitable tasks associated with providing the VPN services. Further, the authentication procedure may enable the authentication server to speedily communicate with the VPN server, thereby mitigating any delay in providing the VPN services to the user device. In some aspects, a system associated with a VPN environment may include a VPN server configured to: determine an encrypted authentication packet based at least in part on utilizing an encryption key and a nonce to encrypt an initial authentication packet; and transmit, to an authentication server, the encrypted authentication packet to enable VPN authentication of a device requesting VPN services from the VPN server; and the authentication server configured to: determine a response regarding the VPN authentication based at least in part on decrypting the initial authentication packet utilizing a decryption key and the nonce; and transmit, to the VPN server, the response regarding the VPN authentication. FIG.2is an illustration of an example flow200associated with providing an authentication procedure in a VPN, according to various aspects of the present disclosure.FIG.2shows a VPN server120in communication with an authentication server118. In some aspects, the communication may be related to conducting a VPN authentication (e.g., an authentication process) for authenticating a user device (e.g., user device102) requesting VPN services from the VPN server120. The VPN server120and the authentication server118may communicate over a network (e.g., network116). In some aspects, a VSP control infrastructure104may configure the VPN server120and/or the authentication server118to conduct the VPN authentication based at least in part on a RADIUS protocol. In an example, the VSP control infrastructure104may configure the VPN server120and/or the authentication server118to communicate messages similar to RADIUS messages to conduct the VPN authentication. Also, the VSP control infrastructure104may configure predetermined encryption and decryption algorithms to be utilized by the VPN server120and/or the authentication server118to conduct the VPN authentication. Further, the VSP control infrastructure104may determine a symmetric key to be utilized by the VPN server120and/or the authentication server118to encrypt and decrypt data while utilizing the predetermined encryption and decryption algorithms. In some aspects, the symmetric key may be a 256-bit cryptographic key. 
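As a concrete illustration of the shared key material described above, the following is a minimal sketch, in Python, of generating a 256-bit symmetric key suitable for the predetermined encryption and decryption algorithms. The use of the cryptography package and the provision_symmetric_key name are illustrative assumptions; the disclosure does not prescribe how the VSP control infrastructure104generates or distributes the key.

```python
# Minimal sketch (assumption): the VSP control infrastructure generates one
# 256-bit symmetric key to be configured on both the VPN server and the
# authentication server. The distribution mechanism is out of scope here.
from cryptography.hazmat.primitives.ciphers.aead import ChaCha20Poly1305

def provision_symmetric_key() -> bytes:
    # generate_key() returns 32 random bytes, i.e., a 256-bit key
    return ChaCha20Poly1305.generate_key()

shared_key = provision_symmetric_key()
# The same key would then be configured on the VPN server and the authentication server.
```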
Additionally, the VSP control infrastructure104may determine newly defined and/or newly introduced values of bits included in fields of communicated authentication packets to enable the VPN server120and the authentication server118to indicate information to each other. As shown by reference numeral210, based at least in part on receiving an initiation request from a user device, the VPN server120may determine an encrypted authentication packet to be transmitted to the authentication server118for the VPN authentication. In some aspects, the VPN server120may encrypt an entirety of an initial authentication packet (e.g., a standard RADIUS packet) to determine the encrypted authentication packet. In an example, the VPN server120may encrypt all fields included in the initial authentication packet to determine the encrypted authentication packet. The initial authentication packet may include a packet as shown inFIG.3A. Such an initial authentication packet may comprise a plurality of fields including, for example, a code field (e.g., Code) starting at byte1, an identifier field (e.g., Identifier) starting at byte2, a data length field (e.g., Data Length) starting at byte3, an authenticator field (e.g., Authenticator) starting at byte5, and a payload field (e.g., Payload) starting at byte21. The code field may include bits, the values of which indicate a type associated with the initial authentication packet. Examples of types associated with initial authentication packets may include an access-request packet, an access-accept packet, an access-reject packet, an accounting-request packet, an accounting-response packet, and an access-challenge packet. The identifier field may include bits, the values of which indicate an identifier for matching responses received from the authentication server118with requests transmitted by the VPN server120in the form of encrypted authentication packets. The data length field may include bits, the values of which indicate a length of the initial authentication packet in bits and/or bytes. The authenticator field may include bits, the values of which indicate information that may be used to validate responses from the authentication server118. The payload field may include attribute value pairs (e.g., AVPs) carrying data associated with conducting the VPN authentication. Such data may include, for example, private information associated with the user device including, for example, account information associated with the user device such as a username and password, an Internet protocol (IP) address, an identity of a user of the user device, a location of the user device, or the like. In some aspects, as shown inFIG.3B, the determined encrypted authentication packet may include, for example, a crypted code field (e.g., Crypted Code) starting at byte1, a reserved field (e.g., Reserved) starting at byte2, a new data length field (e.g., New Data Length) starting at byte3, a nonce field (e.g., Nonce) starting at byte5, an authentication tag field (e.g., Authentication Tag) starting at byte17, and a crypted payload field (e.g., Crypted Payload) starting at byte33. To determine the encrypted authentication packet, the VPN server120may determine the crypted code field. In some aspects, the crypted code field may include bits having newly defined and/or newly introduced values, which indicate to the authentication server118the type associated with the encrypted authentication packet and that at least a portion of the encrypted authentication packet is encrypted. 
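The byte positions above imply fixed field widths (a 1-byte code, a 1-byte identifier or reserved byte, a 2-byte length, a 16-byte authenticator or a 12-byte nonce plus a 16-byte authentication tag, followed by a variable-length payload). The sketch below, a minimal Python illustration, encodes both layouts; the function names and the big-endian 2-byte length encoding are assumptions consistent with common RADIUS conventions rather than requirements stated in the disclosure.

```python
# Minimal sketch of the two packet layouts of FIG.3A and FIG.3B (byte positions
# in the text are 1-based); field widths are inferred from those positions.
import struct

def build_initial_packet(code: int, identifier: int, authenticator: bytes, payload: bytes) -> bytes:
    """Code (byte 1) | Identifier (byte 2) | Data Length (bytes 3-4) |
    Authenticator (bytes 5-20) | Payload (byte 21 onward)."""
    assert len(authenticator) == 16
    data_length = 1 + 1 + 2 + 16 + len(payload)        # length of the whole packet in bytes
    return struct.pack("!BBH", code, identifier, data_length) + authenticator + payload

def build_encrypted_packet(crypted_code: int, nonce: bytes, auth_tag: bytes, crypted_payload: bytes) -> bytes:
    """Crypted Code (byte 1) | Reserved (byte 2) | New Data Length (bytes 3-4) |
    Nonce (bytes 5-16) | Authentication Tag (bytes 17-32) | Crypted Payload (byte 33 onward)."""
    assert len(nonce) == 12 and len(auth_tag) == 16
    new_data_length = 1 + 1 + 2 + 12 + 16 + len(crypted_payload)   # sum of all field lengths
    return struct.pack("!BBH", crypted_code, 0x00, new_data_length) + nonce + auth_tag + crypted_payload
```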
In some aspects, the newly defined and/or newly introduced values may indicate that the portion of the encrypted authentication packet that has been encrypted includes an entire initial authentication packet in encrypted form. The VPN server120may encrypt all types of packets. For instance, when the VPN server120encrypts the initial authentication packet, which may include, for example, the access-request packet, the crypted code field (e.g., Access-Request-Crypted) may include bits having newly defined and/or newly introduced values, which indicate that the encrypted authentication packet is an access-request packet with a portion of the access-request packet including encrypted data. In this case, the portion of the access-request packet may include the crypted payload field and the encrypted data may include the encrypted initial authentication packet. Similarly, when the VPN server120encrypts the initial authentication packet, which may include, for example, the accounting-request packet, the crypted code field (e.g., Accounting-Request-Crypted) may include bits having newly defined and/or newly introduced values, which indicate that the encrypted authentication packet is an accounting-request packet with a portion of the accounting-request packet including encrypted data. In this case, the portion of the accounting-request packet may include the crypted payload field and the encrypted data may include the encrypted initial authentication packet. The VPN server120may also determine a nonce by determining a random alphanumeric string. In some aspects, the random alphanumeric string may be, for example, 96 bits long and may be unique to the encrypted authentication packet. The VPN server120may determine a different nonce for each determined encrypted authentication packet. The VPN server120may receive the predetermined symmetric key from the VSP control infrastructure104to be utilized for encrypting at least the portion of the initial authentication packet. In some aspects, the VPN server120may predetermine the symmetric key and share the predetermined symmetric key with the VSP control infrastructure104and/or the authentication server118. Further, the VPN server120may determine the crypted payload field. In some aspects, the crypted payload field may include the encrypted initial authentication packet. To determine the crypted payload field, the VPN server120may determine that a plurality of fields included in the initial authentication packet are to be encrypted. In some aspects, the plurality of fields may include all fields (e.g., shown inFIG.3A) included in the initial authentication packet. In some aspects, the plurality of fields may include, for example, the payload field, which may include private information associated with the user device102. Based at least in part on determining the nonce, the symmetric key (e.g., encryption key), and/or the plurality of fields, the VPN server120may determine the encrypted data. In some aspects, the VPN server120may input the nonce, the symmetric key, and/or the plurality of fields into a suitable encryption algorithm (e.g., ChaCha20_Poly1305 encryption algorithm) executed by the VPN server120. The encryption algorithm may utilize the nonce and/or the symmetric key to encrypt the plurality of fields. The output of the encryption algorithm may include the encrypted data, which the VPN server120may include in the crypted payload field. The output of the encryption algorithm may also include an authentication tag. 
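The encryption step just described can be sketched with the ChaCha20-Poly1305 implementation in the widely used cryptography package, which accepts a 32-byte (256-bit) key and a 12-byte (96-bit) nonce and returns the ciphertext with a 16-byte authentication tag appended. The function name and the decision to split the tag off into its own return value are illustrative assumptions, not details specified by the disclosure.

```python
# Minimal sketch (assumption): encrypt the entire initial authentication packet
# with ChaCha20_Poly1305, producing the crypted payload and the authentication tag.
import os
from cryptography.hazmat.primitives.ciphers.aead import ChaCha20Poly1305

def encrypt_initial_packet(symmetric_key: bytes, initial_packet: bytes) -> tuple[bytes, bytes, bytes]:
    nonce = os.urandom(12)                        # 96-bit nonce, unique per encrypted packet
    aead = ChaCha20Poly1305(symmetric_key)        # 256-bit (32-byte) symmetric key
    sealed = aead.encrypt(nonce, initial_packet, None)
    crypted_payload, auth_tag = sealed[:-16], sealed[-16:]   # the library appends the 16-byte tag
    return nonce, auth_tag, crypted_payload
```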
In some aspects, the authentication tag may be a random string of alphanumeric characters that may enable the authentication server118to determine whether the encrypted authentication packet has been tampered with during communication between the VPN server120and the authentication server118. In some aspects, the VPN server120may include the nonce in the nonce field and the authentication tag in the authentication tag field, as shown inFIG.3B. Based at least in part on determining the crypted payload field, the VPN server120may determine the new data length field. In an example, the new data length field may include bits, the newly defined and/or newly introduced values of which indicate a length associated with the encrypted authentication packet. To determine the new data length, the VPN server120may calculate a length associated with the encrypted authentication packet in bits and/or bytes. In an example, the new data length may include a sum of a length associated with the crypted code field, a length associated with the reserved field, a length associated with the new data length field, a length associated with the nonce field, a length associated with the authentication tag field, and a length associated with the crypted payload field. The calculated length of the encrypted authentication packet may be indicated by the newly defined and/or newly introduced value of bits included in the new data length field. In some aspects, the reserved field may be placed to start at byte2and may be one byte long to enable the new data length field to be placed to start at byte3. This may allow the authentication procedure to improve compatibility with the RADIUS protocol. Based at least in part on determining the crypted code field, the reserved field, the new data length field, the nonce field, the authentication tag field, and the crypted payload field, as discussed above, the VPN server120may determine the encrypted authentication packet, as shown inFIG.3B. Based at least in part on determining the encrypted authentication packet, as shown by reference numeral220, the VPN server120may transmit, and the authentication server118may receive, the encrypted authentication packet. As shown by reference numeral230, the authentication server118may decrypt (the encrypted portion of) the encrypted authentication packet. For instance, based at least in part on receiving the encrypted authentication packet, the authentication server118may analyze the bits included in the crypted code field. In some aspects, the authentication server118may determine that a portion of the encrypted authentication packet is encrypted based at least in part on the newly defined and/or newly introduced values of the bits included in the crypted code field. As a result, the authentication server118may determine that the encrypted portion (e.g., the crypted payload field) in the encrypted authentication packet is to be decrypted. Also, based at least in part on the newly defined and/or newly introduced values of the bits included in the crypted code field, the authentication server118may determine that the encrypted portion includes an entire initial authentication packet in encrypted form. Further, the authentication server may determine a type of the encrypted authentication packet (e.g., Access-Request-Crypted, Accounting-Request-Crypted) based at least in part on the newly defined and/or newly introduced values of the bits included in the crypted code field. 
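Before turning to decryption in more detail, the VPN-server-side assembly described above can be tied together as follows. This minimal sketch reuses the encrypt_initial_packet and build_encrypted_packet helpers from the earlier sketches; the ACCESS_REQUEST_CRYPTED value is hypothetical, since the disclosure does not assign numeric values to the crypted codes.

```python
# Minimal sketch (assumption): assemble the encrypted authentication packet from
# the encryption output. Reuses encrypt_initial_packet() and build_encrypted_packet()
# from the sketches above.
ACCESS_REQUEST_CRYPTED = 0xA0   # hypothetical crypted code value

def determine_encrypted_authentication_packet(symmetric_key: bytes, initial_packet: bytes) -> bytes:
    nonce, auth_tag, crypted_payload = encrypt_initial_packet(symmetric_key, initial_packet)
    # build_encrypted_packet() writes the reserved byte at byte 2 and computes the new
    # data length as the sum of the header field lengths and the crypted payload length.
    return build_encrypted_packet(ACCESS_REQUEST_CRYPTED, nonce, auth_tag, crypted_payload)
```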
To decrypt the encrypted portion, the authentication server118may receive the predetermined symmetric key from the VSP control infrastructure104or may retrieve the predetermined symmetric key from, for example, a memory associated with the authentication server118. The authentication server118may determine the nonce based at least in part on the nonce included in the nonce field of the encrypted authentication packet. In some aspects, the authentication server118may utilize the nonce included in the nonce field to decrypt the plurality of encrypted fields. The authentication server118may also determine the authentication tag based at least in part on the authentication tag included in the authentication tag field of the encrypted authentication packet. Further, the authentication server118may determine the length associated with the encrypted authentication packet based at least in part on the newly defined and/or newly introduced values of bits included in the new data length field. In some aspects, the length associated with the encrypted authentication packet may indicate the length associated with all fields (e.g., sum of lengths associated with all fields) included in the encrypted authentication packet. The authentication server118may determine (e.g., locate) the crypted payload in the encrypted authentication packet based at least in part on determining a length associated with the crypted payload field. To determine a length associated with the crypted payload field, the authentication server118may subtract the length associated with the crypted code field, the length associated with the reserved field, the length associated with the new data length field, the length associated with the nonce field, and the length associated with the authentication tag field from the length associated with the encrypted authentication packet. The authentication server118may extract the encrypted payload based at least in part on starting at the predetermined byte (e.g., byte33) at which the crypted payload is included in the encrypted authentication packet for the determined length of the crypted payload. In this way, the authentication server118may determine the symmetric key, the nonce, the authentication tag, and the crypted payload. In some aspects, the authentication server118may input the symmetric key, the nonce, the crypted payload, and the authentication tag into a suitable decryption algorithm (e.g., ChaCha20_Poly1305 decryption algorithm) executed by the authentication server118. The decryption algorithm may analyze the authentication tag to determine whether the encrypted authentication packet was tampered with during communication between the VPN server120and the authentication server118. When the decryption algorithm determines that the encrypted authentication packet was tampered with, the decryption algorithm outputs a result indicating the same to the authentication server118. In this case, as shown by reference numeral240, the authentication server118may determine a response by determining an encrypted response packet (e.g., Access-Reject-Crypted, Accounting-Reject-Crypted) indicating that the VPN authentication has failed. In some aspects, the authentication server118may determine the encrypted response packet in a similar way as the VPN server120determined the encrypted authentication packet. 
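The parsing and decryption steps described in this passage can be sketched as follows; the crypted payload length is recovered by subtracting the fixed header field lengths (1 + 1 + 2 + 12 + 16 = 32 bytes) from the new data length, and the Poly1305 tag verification is what detects tampering. Names are illustrative assumptions.

```python
# Minimal sketch (assumption): parse the encrypted authentication packet and decrypt
# the crypted payload, returning the initial authentication packet, or None if the
# authentication tag check fails (i.e., the packet was tampered with in transit).
import struct
from cryptography.exceptions import InvalidTag
from cryptography.hazmat.primitives.ciphers.aead import ChaCha20Poly1305

HEADER_LEN = 1 + 1 + 2 + 12 + 16    # crypted code + reserved + new data length + nonce + tag

def decrypt_encrypted_packet(symmetric_key: bytes, packet: bytes) -> bytes | None:
    crypted_code, _reserved, new_data_length = struct.unpack("!BBH", packet[:4])
    nonce, auth_tag = packet[4:16], packet[16:32]
    payload_len = new_data_length - HEADER_LEN        # subtract the lengths of the other fields
    crypted_payload = packet[32:32 + payload_len]     # crypted payload starts at byte 33
    aead = ChaCha20Poly1305(symmetric_key)
    try:
        return aead.decrypt(nonce, crypted_payload + auth_tag, None)
    except InvalidTag:
        return None
```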
In an example, similar to the encrypted authentication packet, the encrypted response packet may include a crypted code field, a reserved field, a new data length field, a nonce field, an authentication tag field, and a crypted payload field. The authentication server118may determine values of bits included in such fields in a similar way as discussed above with respect to the VPN server120determining values of bits included in fields of the encrypted authentication packet. As shown by reference numeral250, the authentication server118may transmit the encrypted response packet to the VPN server120. Alternatively, when the decryption algorithm determines that the encrypted authentication packet was not tampered with, the decryption algorithm may process the symmetric key, the nonce, and the crypted payload to output a decrypted payload (e.g., the decrypted initial authentication packet). In an example, the decryption algorithm may output the initial authentication packet in decrypted form. Based at least in part on analyzing and/or processing the initial authentication packet, as shown by reference numeral240, the authentication server118may determine a response (e.g., encrypted response packet). In some aspects, as shown by reference numeral250, the authentication server118may transmit the response to accept an access request or an accounting request from the VPN server120(e.g., Access-Accept-Crypted, Accounting-Accept-Crypted), or may reject the access request or the accounting request from the VPN server120(e.g., Access-Reject-Crypted, Accounting-Reject-Crypted), or may challenge (e.g., request additional information from the VPN server120) the access request or the accounting request from the VPN server120(e.g., Access-Challenge-Crypted, Accounting-Challenge-Crypted), or may respond to the accounting request with accounting response information (e.g., Accounting-Response-Crypted). In some aspects, the authentication server118(e.g., decryption algorithm) may analyze the authentication tag and process the symmetric key, the nonce and the encrypted payload sequentially, as described above. In some aspects, the authentication server118(e.g., decryption algorithm) may analyze the authentication tag and process the symmetric key, the nonce and the encrypted payload simultaneously. In this way, by utilizing the authentication procedure discussed herein, a VPN server and an authentication server may deter monitoring and tracking of private information by a malicious third party, thereby mitigating instances of the private information associated with a user device becoming compromised. Also, by utilizing the authentication procedure, the VPN server and the authentication server may avoid having to establish and maintain a constant TCP connection or respective TCP connections, thereby enabling efficient utilization of server resources (e.g., computational resources, network bandwidth, management resources, processing resources, memory resources, power consumption, or the like) for performing more suitable tasks associated with providing the VPN services. As a result, the authentication procedure may enable the authentication server to speedily communicate with the VPN server, thereby mitigating any delay in providing the VPN services to the user device. 
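Returning to the response determination described earlier in this passage, the sketch below shows one simplified way the authentication server118might choose a response type after attempting decryption. The numeric constants and the decision inputs (credentials_valid, needs_more_info) are hypothetical placeholders for illustration only; the disclosure defines neither the numeric code values nor the criteria applied to the decrypted packet.

```python
# Minimal sketch (assumption): choose a crypted response code after decryption.
ACCESS_ACCEPT_CRYPTED = 0xA1      # hypothetical values; the disclosure does not
ACCESS_REJECT_CRYPTED = 0xA2      # assign numeric codes to these packet types
ACCESS_CHALLENGE_CRYPTED = 0xA3

def choose_response_code(decrypted_packet: bytes | None,
                         credentials_valid: bool,
                         needs_more_info: bool) -> int:
    if decrypted_packet is None:          # tag verification failed: VPN authentication fails
        return ACCESS_REJECT_CRYPTED
    if needs_more_info:                   # request additional information from the VPN server
        return ACCESS_CHALLENGE_CRYPTED
    return ACCESS_ACCEPT_CRYPTED if credentials_valid else ACCESS_REJECT_CRYPTED
```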
Although the authentication process is described as being a VPN authentication process taking place between a VPN server and an authentication server in a VPN environment, the present disclosure contemplates the authentication process to include any authentication process taking place between two devices in any environment. As indicated above,FIGS.2and3A-B are provided as examples. Other examples may differ from what is described with regard toFIGS.2and3A-B. FIG.4is an illustration of an example process400associated with an authentication procedure in a VPN, according to various aspects of the present disclosure. In some aspects, the process400may be performed by a processor/controller (e.g., processor920) associated with a VPN server (e.g., VPN server120) and/or a processor/controller (e.g., processor920) associated with an authentication server (e.g., authentication server118). As shown by reference numeral410, process400includes determining, by a VPN server, an encrypted authentication packet based at least in part on utilizing an encryption key and a nonce to encrypt an initial authentication packet. For instance, the VPN server may utilize the associated processor/controller to determine an encrypted authentication packet based at least in part on utilizing an encryption key and a nonce to encrypt an initial authentication packet, as discussed elsewhere herein. As shown by reference numeral420, process400includes transmitting, by the VPN server to an authentication server, the encrypted authentication packet to enable VPN authentication of a device requesting VPN services from the VPN server. For instance, the VPN server may utilize the communication interface (e.g., communication interface970) and the associated processor/controller to transmit, to an authentication server, the encrypted authentication packet to enable VPN authentication of a device requesting VPN services from the VPN server, as discussed elsewhere herein. As shown by reference numeral430, process400includes determining, by the authentication server, a response regarding the VPN authentication based at least in part on decrypting the initial authentication packet utilizing a decryption key and the nonce. For instance, the authentication server may utilize the associated processor/controller to determine a response regarding the VPN authentication based at least in part on decrypting the initial authentication packet utilizing a decryption key and the nonce, as discussed elsewhere herein. As shown by reference numeral440, process400includes transmitting, by the authentication server to the VPN server, the response regarding the VPN authentication. For instance, the authentication server may utilize an associated communication interface (e.g., communication interface970) and the associated processor/controller to transmit, to the VPN server, the response regarding the VPN authentication, as discussed elsewhere herein. Process400may include additional aspects, such as any single aspect or any combination of aspects described below and/or in connection with one or more other processes described elsewhere herein. In a first aspect, in process400, the encrypted authentication packet includes a crypted code field indicating that a portion of the encrypted authentication packet includes the initial authentication packet in encrypted form. 
In a second aspect, alone or in combination with the first aspect, in process400, the encrypted authentication packet includes an authentication tag to enable the authentication server to determine whether the encrypted authentication packet is tampered. In a third aspect, alone or in combination with the first through second aspects, in process400, the encrypted authentication packet includes a nonce field indicating the nonce to enable the authentication server to decrypt the initial authentication packet. In a fourth aspect, alone or in combination with the first through third aspects, in process400, determining the encrypted authentication packet includes encrypting an entirety of the initial authentication packet. In a fifth aspect, alone or in combination with the first through fourth aspects, in process400, the initial authentication packet includes a payload field including information associated with the device requesting the VPN services from the VPN server and an authenticator field including information associated with validating the response from the authentication server. In a sixth aspect, alone or in combination with the first through fifth aspects, in process400, the encrypted authentication packet includes a data length field indicating a length associated with the encrypted authentication packet. AlthoughFIG.4shows example blocks of the process, in some aspects, the process may include additional blocks, fewer blocks, different blocks, or differently arranged blocks than those depicted inFIG.4. Additionally, or alternatively, two or more of the blocks of the process may be performed in parallel. As indicated above,FIG.4is provided as an example. Other examples may differ from what is described with regard toFIG.4. FIG.5is an illustration of an example process500associated with an authentication procedure in a VPN, according to various aspects of the present disclosure. In some aspects, the process500may be performed by a processor/controller (e.g., processor920) associated with a VPN server (e.g., VPN server120). As shown by reference numeral510, process500includes determining, by a first server, an encrypted authentication packet, the determining including, determining a crypted code field to indicate a type associated with the encryption authentication packet and that at least a portion of the encryption authentication packet is encrypted, and determining a crypted payload based at least in part on encrypting an initial authentication packet. For instance, the VPN server may utilize the associated processor/controller to determine an encrypted authentication packet, the determining including, determining a crypted code field to indicate a type associated with the encryption authentication packet and that at least a portion of the encryption authentication packet is encrypted, and determining a crypted payload based at least in part on encrypting an initial authentication packet, as discussed elsewhere herein. As shown by reference numeral520, process500includes transmitting, by the first server to a second server, the encrypted authentication packet to enable the first server and the second server to conduct an authentication process. For instance, the VPN server may utilize an associated communication interface (e.g., communication interface970) and the associated memory/processor to transmit, to a second server, the encrypted authentication packet to enable the first server and the second server to conduct an authentication process, as discussed elsewhere herein. 
Process500may include additional aspects, such as any single aspect or any combination of aspects described below and/or in connection with one or more other processes described elsewhere herein. In a first aspect, in process500, determining the encrypted authentication packet includes determining a data length field indicating a length associated with the encrypted authentication packet, the length to be utilized by the second server to determine the crypted payload. In a second aspect, alone or in combination with the first aspect, in process500, determining the encrypted authentication packet includes determining an authentication tag field including information to enable the second server to determine whether the encrypted authentication packet is tampered. In a third aspect, alone or in combination with the first through second aspects, in process500, determining the encrypted authentication packet includes determining a nonce field indicating a nonce utilized to encrypt the initial authentication packet. In a fourth aspect, alone or in combination with the first through third aspects, in process500, determining the encrypted authentication packet includes placing a reserved field within the encrypted authentication packet to determine placement of a data length field within the encrypted authentication packet. In a fifth aspect, alone or in combination with the first through fourth aspects, in process500, determining the crypted payload includes encrypting the initial authentication packet based at least in part on utilizing a symmetric encryption key and a nonce. In a sixth aspect, alone or in combination with the first through fifth aspects, in process500, the initial authentication packet includes a payload field including information associated with a device requesting a service from the first server and an authenticator field including information associated with validating a response to be received from the second server. AlthoughFIG.5shows example blocks of the process, in some aspects, the process may include additional blocks, fewer blocks, different blocks, or differently arranged blocks than those depicted inFIG.5. Additionally, or alternatively, two or more of the blocks of the process may be performed in parallel. As indicated above,FIG.5is provided as an example. Other examples may differ from what is described with regard toFIG.5. FIG.6is an illustration of an example process600associated with an authentication procedure in a VPN, according to various aspects of the present disclosure. In some aspects, the process600may be performed by a processor/controller (e.g., processing unit110and/or processor920) associated with a VSP control infrastructure (e.g., VSP control infrastructure104). As shown by reference numeral610, process600includes configuring a first server to determine an encrypted authentication packet, the configuring including, configuring the first server to determine a crypted code field to indicate a type associated with the encryption authentication packet and that at least a portion of the encryption authentication packet is encrypted, and configuring the first server to determine a crypted payload based at least in part on encrypting an initial authentication packet. 
For instance, the VSP control infrastructure may utilize the associated processor/controller to configure a first server to determine an encrypted authentication packet, the configuring including, configuring the first server to determine a crypted code field to indicate a type associated with the encryption authentication packet and that at least a portion of the encryption authentication packet is encrypted, and configuring the first server to determine a crypted payload based at least in part on encrypting an initial authentication packet, as discussed elsewhere herein. As shown by reference numeral620, process600includes configuring the first server to transmit, to a second server, the encrypted authentication packet to enable the first server and the second server to conduct an authentication process. For instance, the VSP control infrastructure may utilize the associated processor/controller to configure the first server to transmit, to a second server, the encrypted authentication packet to enable the first server and the second server to conduct an authentication process, as discussed elsewhere herein. Process600may include additional aspects, such as any single aspect or any combination of aspects described below and/or in connection with one or more other processes described elsewhere herein. In a first aspect, in process600, configuring the first server to determine the encrypted authentication packet includes configuring the first server to determine a data length field indicating a length associated with the encrypted authentication packet, the length to be utilized by the second server to determine the crypted payload. In a second aspect, alone or in combination with the first aspect, in process600, configuring the first server to determine the encrypted authentication packet includes configuring the first server to determine an authentication tag field including information to enable the second server to determine whether the encrypted authentication packet is tampered. In a third aspect, alone or in combination with the first through second aspects, in process600, configuring the first server to determine the encrypted authentication packet includes configuring the first server to determine a nonce field indicating a nonce utilized to encrypt the initial authentication packet. In a fourth aspect, alone or in combination with the first through third aspects, in process600, configuring the first server to determine the encrypted authentication packet includes configuring the first server to place a reserved field within the encrypted authentication packet to determine placement of a data length field within the encrypted authentication packet. In a fifth aspect, alone or in combination with the first through fourth aspects, in process600, configuring the first server to determine the crypted payload includes configuring the first server to encrypt the initial authentication packet based at least in part on utilizing a symmetric encryption key and a nonce. In a sixth aspect, alone or in combination with the first through fifth aspects, in process600, the initial authentication packet includes a payload field including information associated with a device requesting a service from the first server and an authenticator field including information associated with validating a response to be received from the second server. 
AlthoughFIG.6shows example blocks of the process, in some aspects, the process may include additional blocks, fewer blocks, different blocks, or differently arranged blocks than those depicted inFIG.6. Additionally, or alternatively, two or more of the blocks of the process may be performed in parallel. As indicated above,FIG.6is provided as an example. Other examples may differ from what is described with regard toFIG.6. FIG.7is an illustration of an example process700associated with an authentication procedure in a VPN, according to various aspects of the present disclosure. In some aspects, the process700may be performed by a processor/controller (e.g., processor920) associated with an authentication server (e.g., authentication server118). As shown by reference numeral710, process700includes receiving, by a first server from a second server, an encrypted authentication packet to enable the first server and the second server to conduct an authentication process, the encrypted authentication packet including a crypted code field indicating that a portion of the encrypted authentication packet is encrypted and a crypted payload including an encrypted initial authentication packet. For instance, the authentication server may utilize an associated communication interface (e.g., communication interface970) and the associated processor/controller to receive, from a second server, an encrypted authentication packet to enable the first server and the second server to conduct an authentication process, the encrypted authentication packet including a crypted code field indicating that a portion of the encrypted authentication packet is encrypted and a crypted payload including an encrypted initial authentication packet, as discussed elsewhere herein. As shown by reference numeral720, process700includes transmitting, by the first server to the second server, a response based at least in part on determining that the portion of the encrypted authentication packet is encrypted and on decrypting the encrypted initial authentication packet. For instance, the authentication server may utilize the associated communication interface and the associated processor/controller to transmit, to the second server, a response based at least in part on determining that the portion of the encrypted authentication packet is encrypted and on decrypting the encrypted initial authentication packet, as discussed elsewhere herein. Process700may include additional aspects, such as any single aspect or any combination of aspects described below and/or in connection with one or more other processes described elsewhere herein. In a first aspect, process700may include determining, by the first server, the crypted payload based at least in part on determining a data length associated with the encrypted authentication packet. In a second aspect, alone or in combination with the first aspect, process700may include determining, by the first server, whether the encrypted authentication packet is tampered based at least in part on determining information indicated by an authentication tag included in the encrypted authentication packet. In a third aspect, alone or in combination with the first through second aspects, process700may include determining, by the first server, a nonce indicated in a nonce field included in the encrypted authentication packet, the nonce to be utilized in decrypting the encrypted initial authentication packet. 
In a fourth aspect, alone or in combination with the first through third aspects, in process700, decrypting the encrypted initial authentication packet includes decrypting the encrypted initial authentication packet based at least in part on utilizing a decryption key and a nonce. In a fifth aspect, alone or in combination with the first through fourth aspects, in process700, the response includes a response authentication packet including a portion that is encrypted. In a sixth aspect, alone or in combination with the first through fifth aspects, in process700, the response includes a response authentication packet including a reserved field placed in the response authentication packet to determine placement of a data length field within the response authentication packet. AlthoughFIG.7shows example blocks of the process, in some aspects, the process may include additional blocks, fewer blocks, different blocks, or differently arranged blocks than those depicted inFIG.7. Additionally, or alternatively, two or more of the blocks of the process may be performed in parallel. As indicated above,FIG.7is provided as an example. Other examples may differ from what is described with regard toFIG.7. FIG.8is an illustration of an example process800associated with an authentication procedure in a VPN, according to various aspects of the present disclosure. In some aspects, the process800may be performed by a processor/controller (e.g., processing unit110and/or processor920) associated with a VSP control infrastructure (e.g., VSP control infrastructure104). As shown by reference numeral810, process800includes configuring a first server to receive, from a second server, an encrypted authentication packet to enable the first server and the second server to conduct an authentication process, the encrypted authentication packet including a crypted code field indicating that a portion of the encrypted authentication packet is encrypted and a crypted payload including an encrypted initial authentication packet. For instance, the VSP control infrastructure may utilize the associated processor/controller to configure a first server to receive, from a second server, an encrypted authentication packet to enable the first server and the second server to conduct an authentication process, the encrypted authentication packet including a crypted code field indicating that a portion of the encrypted authentication packet is encrypted and a crypted payload including an encrypted initial authentication packet, as discussed elsewhere herein. As shown by reference numeral820, process800includes configuring the first server to transmit, to the second server, a response based at least in part on determining that the portion of the encrypted authentication packet is encrypted and on decrypting the encrypted initial authentication packet. For instance, the VSP control infrastructure may utilize the associated processor/controller to configure the first server to transmit, to the second server, a response based at least in part on determining that the portion of the encrypted authentication packet is encrypted and on decrypting the encrypted initial authentication packet, as discussed elsewhere herein. Process800may include additional aspects, such as any single aspect or any combination of aspects described below and/or in connection with one or more other processes described elsewhere herein. 
In a first aspect, process800may include configuring the first server to determine the crypted payload based at least in part on determining a data length associated with the encrypted authentication packet. In a second aspect, alone or in combination with the first aspect, process800may include configuring the first server to determine whether the encrypted authentication packet is tampered based at least in part on determining information indicated by an authentication tag included in the encrypted authentication packet. In a third aspect, alone or in combination with the first through second aspects, process800may include configuring the first server to determine a nonce indicated in a nonce field included in the encrypted authentication packet, the nonce to be utilized in decrypting the encrypted initial authentication packet. In a fourth aspect, alone or in combination with the first through third aspects, in process800, configuring the first server to decrypt the encrypted initial authentication packet includes configuring the first server to decrypt the encrypted initial authentication packet based at least in part on utilizing a decryption key and a nonce. In a fifth aspect, alone or in combination with the first through fourth aspects, in process800, the response includes a response authentication packet including a portion that is encrypted. In a sixth aspect, alone or in combination with the first through fifth aspects, in process800, the response includes a response authentication packet including a reserved field placed in the response authentication packet to determine placement of a data length field within the response authentication packet. AlthoughFIG.8shows example blocks of the process, in some aspects, the process may include additional blocks, fewer blocks, different blocks, or differently arranged blocks than those depicted inFIG.8. Additionally, or alternatively, two or more of the blocks of the process may be performed in parallel. As indicated above,FIG.8is provided as an example. Other examples may differ from what is described with regard toFIG.8. FIG.9is an illustration of example devices900, according to various aspects of the present disclosure. In some aspects, the example devices900may form part of or implement the systems, environments, infrastructures, components, or the like described elsewhere herein (e.g.,FIG.1and/orFIG.2) and may be used to perform the processes described with respect toFIGS.3and4. The example devices900may include a universal bus910communicatively coupling a processor920, a memory930, a storage component940, an input component950, an output component960, and a communication interface970. Bus910may include a component that permits communication among multiple components of a device900. Processor920may be implemented in hardware, firmware, and/or a combination of hardware and software. Processor920may take the form of a central processing unit (CPU), a graphics processing unit (GPU), an accelerated processing unit (APU), a microprocessor, a microcontroller, a digital signal processor (DSP), a field-programmable gate array (FPGA), an application-specific integrated circuit (ASIC), or another type of processing component. In some aspects, processor920may include one or more processors capable of being programmed to perform a function. 
Memory930may include a random access memory (RAM), a read only memory (ROM), and/or another type of dynamic or static storage device (e.g., a flash memory, a magnetic memory, and/or an optical memory) that stores information and/or instructions for use by processor920. Storage component940may store information and/or software related to the operation and use of a device900. For example, storage component940may include a hard disk (e.g., a magnetic disk, an optical disk, and/or a magneto-optic disk), a solid state drive (SSD), a compact disc (CD), a digital versatile disc (DVD), a floppy disk, a cartridge, a magnetic tape, and/or another type of non-transitory computer-readable medium, along with a corresponding drive. Input component950may include a component that permits a device900to receive information, such as via user input (e.g., a touch screen display, a keyboard, a keypad, a mouse, a button, a switch, and/or a microphone). Additionally, or alternatively, input component950may include a component for determining location (e.g., a global positioning system (GPS) component) and/or a sensor (e.g., an accelerometer, a gyroscope, an actuator, another type of positional or environmental sensor, and/or the like). Output component960may include a component that provides output information from device900(via, for example, a display, a speaker, a haptic feedback component, an audio or visual indicator, and/or the like). Communication interface970may include a transceiver-like component (e.g., a transceiver, a separate receiver, a separate transmitter, and/or the like) that enables a device900to communicate with other devices, such as via a wired connection, a wireless connection, or a combination of wired and wireless connections. Communication interface970may permit device900to receive information from another device and/or provide information to another device. For example, communication interface970may include an Ethernet interface, an optical interface, a coaxial interface, an infrared interface, a radio frequency (RF) interface, a universal serial bus (USB) interface, a Wi-Fi interface, a cellular network interface, and/or the like. A device900may perform one or more processes described elsewhere herein. A device900may perform these processes based on processor920executing software instructions stored by a non-transitory computer-readable medium, such as memory930and/or storage component940. As used herein, the term “computer-readable medium” may refer to a non-transitory memory device. A memory device may include memory space within a single physical storage device or memory space spread across multiple physical storage devices. Software instructions may be read into memory930and/or storage component940from another computer-readable medium or from another device via communication interface970. When executed, software instructions stored in memory930and/or storage component940may cause processor920to perform one or more processes described elsewhere herein. Additionally, or alternatively, hardware circuitry may be used in place of or in combination with software instructions to perform one or more processes described elsewhere herein. Thus, implementations described herein are not limited to any specific combination of hardware circuitry and software. The quantity and arrangement of components shown inFIG.9are provided as an example. 
In practice, a device900may include additional components, fewer components, different components, or differently arranged components than those shown inFIG.9. Additionally, or alternatively, a set of components (e.g., one or more components) of a device900may perform one or more functions described as being performed by another set of components of a device900. As indicated above,FIG.9is provided as an example. Other examples may differ from what is described with regard toFIG.9. Persons of ordinary skill in the art will appreciate that the aspects encompassed by the present disclosure are not limited to the particular exemplary aspects described herein. In that regard, although illustrative aspects have been shown and described, a wide range of modification, change, and substitution is contemplated in the foregoing disclosure. It is understood that such variations may be made to the aspects without departing from the scope of the present disclosure. Accordingly, it is appropriate that the appended claims be construed broadly and in a manner consistent with the present disclosure. The foregoing disclosure provides illustration and description, but is not intended to be exhaustive or to limit the aspects to the precise form disclosed. Modifications and variations may be made in light of the above disclosure or may be acquired from practice of the aspects. As used herein, the term “component” is intended to be broadly construed as hardware, firmware, or a combination of hardware and software. As used herein, a processor is implemented in hardware, firmware, or a combination of hardware and software. As used herein, satisfying a threshold may, depending on the context, refer to a value being greater than the threshold, greater than or equal to the threshold, less than the threshold, less than or equal to the threshold, equal to the threshold, or not equal to the threshold, among other examples, or combinations thereof. It will be apparent that systems or methods described herein may be implemented in different forms of hardware, firmware, or a combination of hardware and software. The actual specialized control hardware or software code used to implement these systems or methods is not limiting of the aspects. Thus, the operation and behavior of the systems or methods were described herein without reference to specific software code—it being understood that software and hardware can be designed to implement the systems or methods based, at least in part, on the description herein. Even though particular combinations of features are recited in the claims or disclosed in the specification, these combinations are not intended to limit the disclosure of various aspects. In fact, many of these features may be combined in ways not specifically recited in the claims or disclosed in the specification. Although each dependent claim listed below may directly depend on only one claim, the disclosure of various aspects includes each dependent claim in combination with every other claim in the claim set. A phrase referring to “at least one of” a list of items refers to any combination of those items, including single members. As an example, “at least one of: a, b, or c” is intended to cover a, b, c, a-b, a-c, b-c, and a-b-c, as well as any combination with multiples of the same element (for example, a-a, a-a-a, a-a-b, a-a-c, a-b-b, a-c-c, b-b, b-b-b, b-b-c, c-c, and c-c-c or any other ordering of a, b, and c). 
No element, act, or instruction used herein should be construed as critical or essential unless explicitly described as such. Also, as used herein, the articles “a” and “an” are intended to include one or more items, and may be used interchangeably with “one or more.” Further, as used herein, the article “the” is intended to include one or more items referenced in connection with the article “the” and may be used interchangeably with “the one or more.” Furthermore, as used herein, the term “set” is intended to include one or more items (e.g., related items, unrelated items, a combination of related and unrelated items, etc.), and may be used interchangeably with “one or more.” Where only one item is intended, the phrase “only one” or similar language is used. Also, as used herein, the terms “has,” “have,” “having,” or the like are intended to be open-ended terms. Further, the phrase “based on” is intended to mean “based, at least in part, on” unless explicitly stated otherwise. Also, as used herein, the term “or” is intended to be inclusive when used in a series and may be used interchangeably with “and/or,” unless explicitly stated otherwise (e.g., if used in combination with “either” or “only one of”). | 69,705 |
11943202 | DETAILED DESCRIPTION For the purposes of promoting an understanding of the principles of the present disclosure, reference will now be made to the aspects illustrated in the drawings, and specific language may be used to describe the same. It will nevertheless be understood that no limitation of the scope of the disclosure is intended. Any alterations and further modifications to the described devices, instruments, methods, and any further application of the principles of the present disclosure are fully contemplated as would normally occur to one skilled in the art to which the disclosure relates. In particular, it is fully contemplated that the features, components, and/or steps described with respect to one aspect may be combined with the features, components, and/or steps described with respect to other aspects of the present disclosure. For the sake of brevity, however, the numerous iterations of these combinations may not be described separately. For simplicity, in some instances the same reference numbers are used throughout the drawings to refer to the same or like parts. FIG.1is an illustration of an example system100associated with utilization of multiple exit IP addresses in a VPN environment, according to various aspects of the present disclosure. Example100shows an architectural depiction of components included in system100. In some aspects, the components may include a user device102capable of communicating with a VPN service provider (VSP) control infrastructure104and with one or more VPN servers120over a network114. The VSP control infrastructure104may be controlled by a VPN service provider and may include an application programming interface (API)106, a user database108, a processing unit110, a server database112, and the one or more VPN servers120. As shown inFIG.1, the API106may be capable of communicating with the user database108and with the processing unit110. Additionally, the processing unit110may be capable of communicating with the server database, which may be capable of communicating with a testing module (not shown). The testing module may be capable of communicating with the one or more VPN servers120over the network114. The processing unit110may be capable of configuring and controlling operation of the one or more VPN servers120. The VPN servers120may be configured to communicate with one or more host devices to, for example, request and retrieve data of interest. Further, as shown inFIG.1, the VPN servers120may be configured to communicate with one or more secondary servers118for routing of requests for data of interest received from the user device102. The VPN servers120may also be configured to communicate with an authentication server (not shown) over the network. The processing unit110may be capable of configuring and controlling operation of the authentication server. In some aspects, the network116may be similar to network114. The user device102may be a physical computing device capable of hosting a VPN application and of connecting to the network114. The user device102may be, for example, a laptop, a mobile phone, a tablet computer, a desktop computer, a smart device, a router, or the like. In some aspects, the user device102may include, for example, Internet-of-Things (IoT) devices such as smart home appliances, smart home security systems, autonomous vehicles, smart health monitors, smart factory equipment, wireless inventory trackers, biometric cyber security scanners, or the like. 
The network114and/or the network116may be any digital telecommunication network that permits several nodes to share and access resources. In some aspects, the network114and/or the network116may include one or more of, for example, a local-area network (LAN), a wide-area network (WAN), a campus-area network (CAN), a metropolitan-area network (MAN), a home-area network (HAN), Internet, Intranet, Extranet, and Internetwork. The VSP control infrastructure104may include a combination of hardware and software components that enable provision of VPN services to the user device102. The VSP control infrastructure104may interface with (the VPN application on) the user device102via the API106, which may include one or more endpoints to a defined request-response message system. In some aspects, the API106may be configured to receive, via the network114, a connection request from the user device102to establish a VPN connection with a VPN server120. The connection request may include an authentication request to authenticate the user device102and/or a request for an IP address of an optimal VPN server for establishment of the VPN connection therewith. In some aspects, an optimal VPN server may be a single VPN server120or a combination of one or more VPN servers120. The API106may receive the authentication request and the request for an IP address of an optimal VPN server in a single connection request. In some aspects, the API106may receive the authentication request and the request for an IP address of an optimal VPN server in separate connection requests. The API106may further be configured to handle the connection request by mediating the authentication request. For instance, the API106may receive from the user device102credentials including, for example, a unique combination of a user ID and password for purposes of authenticating the user device102. In another example, the credentials may include a unique validation code known to an authentic user. The API106may provide the received credentials to the user database108for verification. The user database108may include a structured repository of valid credentials belonging to authentic users. In one example, the structured repository may include one or more tables containing valid unique combinations of user IDs and passwords belonging to authentic users. In another example, the structured repository may include one or more tables containing valid unique validation codes associated with authentic users. The VPN service provider may add, delete, and/or modify such valid unique combinations of user IDs and passwords from the structured repository. Based at least in part on receiving the credentials from the API106, the user database108and a processor (e.g., the processing unit110or another local or remote processor) may verify the received credentials by matching the received credentials with the valid credentials stored in the structured repository. In some aspects, the user database108and the processor may authenticate the user device102when the received credentials match at least one of the valid credentials. In this case, the VPN service provider may enable the user device102to obtain VPN services. When the received credentials fail to match at least one of the valid credentials, the user database108and the processor may fail to authenticate the user device102. In this case, the VPN service provider may decline to provide VPN services to the user device102. 
When the user device102is authenticated, the user device102may initiate a VPN connection and may transmit to the API106a request for an IP address of an optimal VPN server. The processing unit110included in the VSP control infrastructure may be configured to determine/identify a single VPN server120as the optimal VPN server or a list of optimal VPN servers. The processing unit110may utilize the API106to transmit, to the user device102, the IP address of the optimal VPN server or the IP addresses of the VPN servers120included in the list. When a list of VPN servers120is provided, the user device102may have an option to select a single VPN server120from among the listed VPN servers as the optimal VPN server. In some aspects, the processing unit110may be a logical unit including a scoring engine. The processing unit110may include a logical component configured to perform complex operations to compute numerical weights related to various factors associated with the VPN servers120. The scoring engine may likewise include a logical component configured to perform arithmetical and logical operations to compute a server penalty score for one or more of the VPN servers120. In some aspects, based at least in part on server penalty scores calculated utilizing the complex operations and/or the arithmetical and logical operations, the processing unit110may determine an optimal VPN server. In one example, the processing unit110may determine the VPN server120with the lowest server penalty score as the optimal VPN server. In another example, the processing unit110may determine the list of optimal VPN servers by including, for example, three (or any other number) VPN servers120with the three lowest server penalty scores. The user device102may transmit to the optimal VPN server an initiation request to establish a VPN connection (e.g., an encrypted tunnel) with the optimal VPN server. The optimal VPN server with which the user device establishes the encrypted tunnel may be referred to as a primary VPN server or an entry VPN server. Based at least in part on receiving the initiation request, the optimal VPN server may conduct a VPN authentication with the authentication server to authenticate the user device102as a device that may receive the VPN services from the optimal VPN server. When the VPN authentication is successful, the optimal VPN server may proceed to provide the VPN services to the user device102. Alternatively, when the VPN authentication fails, the optimal VPN server may refrain from providing the VPN services to the user device102and/or may communicate with the user device102to obtain additional information to authenticate the user device102. In some aspects, a VPN server120may include a piece of physical or virtual computer hardware and/or software capable of securely communicating with (the VPN application on) the user device102for provision of VPN services. Similarly, the authentication server may include a piece of physical or virtual computer hardware and/or software capable of securely communicating with one or more VPN servers120for provision of authentication services. The one or more host devices122may include a type of server that hosts or houses websites and/or related data, applications, and/or services. The one or more host devices122may be a remotely accessible Internet server with complete Web server functionality and resources.
In some aspects, the one or more host devices122may be referred to as a Web hosting server. In some aspects, the one or more secondary servers118may include one or more VPN servers120. In some aspects, the one or more secondary servers118may include one or more servers configured and/or programmed with available exit IP addresses. The one or more secondary servers may include a processor communicatively coupled with, among other things, a volatile memory, a non-volatile memory, and a communication interface to enable a network connection. Further, the one or more secondary servers118may be configured and/or programmed to enable secure connections involving encryption and decryption of data. One or more components (e.g., API106, user database108, processing unit110, server database112, secondary server118, and/or VPN server120) included in the VSP control infrastructure104and/or components (e.g., processing unit, memory, communication interface, etc.) included in the user device102may further be associated with a controller/processor, a memory, a communication interface, or a combination thereof (e.g.,FIG.7). For instance, the one or more components may include or may be included in a controller/processor, a memory, or a combination thereof. In some aspects, the one or more components may be separate and distinct from each other. Alternatively, in some aspects, one or more components may be combined with another one of the one or more components. In some aspects, the one or more components may be local with respect to each other. Alternatively, in some aspects, the one or more components may be located remotely with respect to another one of the one or more components. Additionally, or alternatively, the one or more components may be implemented at least in part as software stored in a memory. For example, a component (or a portion of a component) may be implemented as instructions or code stored in a non-transitory computer-readable medium and executable by a hardware controller or a hardware processor to perform the functions or operations of the component. Additionally, or alternatively, the one or more components may be configured to perform one or more functions described as being performed by another one of the one or more components. As indicated above,FIG.1is provided as an example. Other examples may differ from what is described with regard toFIG.1. A user device may seek to obtain VPN services from a VSP control infrastructure. Based at least in part on authentication of the user device, the VSP control infrastructure may select a VPN server to provide the VPN services to the user device. In an example, the VSP control infrastructure may provide the user device with an entry IP address associated with the VPN server. The user device may utilize the entry IP address to communicate and establish a VPN connection (e.g., encrypted tunnel) with the VPN server. Based at least in part on the VPN connection being established, the VSP control infrastructure and/or the VPN server may assign an exit IP address to the user device. During the established VPN connection (e.g., while the given VPN connection remains established), the VPN server may utilize the entry IP address and the assigned exit IP address to process requests received from the user device. 
For instance, during the established VPN connection, the VPN server may utilize the entry IP address to receive a request from the user device for retrieving data of interest from a host device and may utilize the exit IP address to retrieve the data of interest from the host device. Utilizing the exit IP address may include the VPN server utilizing the exit IP address to communicate (e.g., transmit and/or receive communications) with the host device to retrieve the data of interest. Further, the VPN server may utilize a correlation between the exit IP address and the entry IP address to transmit the retrieved data of interest to the user device. In some cases, the host device may examine the exit IP address and determine that the exit IP address is an IP address associated with a commercial entity (e.g., the VPN server). For instance, the host device may determine that the exit IP address is an IP address assigned for commercial use by the commercial entity. In an example, the host device may determine that the exit IP address is being utilized to mask information (e.g., an identity) regarding the user device, thereby purposefully hiding such information from the host device. As a result, the host device may decline to provide the data of interest requested via utilization of the exit IP address. Further, the host device may temporarily or permanently discard all communication received via utilization of the exit IP address. In this case, the VPN server may be unable to retrieve data of interest from the host device for any user device. To receive the data of interest from the host device, the user device may terminate the established VPN connection with the VPN server and establish a new VPN connection with a new VPN server. The new VPN server may utilize a new exit IP address to again request the data of interest from the host device. Terminating the established VPN connection, establishing the new VPN connection with the new VPN server, and again requesting the data of interest utilizing the new exit IP address may inefficiently consume user device resources (e.g., processing resources, memory resources, power consumption resources, battery life, or the like) and VPN resources (computational resources, network bandwidth, management resources, processing resources, memory resources, or the like) that may otherwise be used to perform suitable tasks associated with the VPN. Various aspects of systems and techniques discussed in the present disclosure enable utilization of multiple exit IP addresses in a VPN. In some aspects, a user device may establish a VPN connection (e.g., encrypted tunnel) with a primary VPN server configured by a VSP control infrastructure to provide VPN services to the user device. During the established VPN connection, the techniques discussed herein may enable the primary VPN server (and/or an associated VSP control infrastructure) to assign, to the user device, an exit IP address that may be used to process a data request received from the user device for data of interest. Further, during the established VPN connection, when the primary VPN server determines that the assigned exit IP address is blocked, the techniques may enable the primary VPN server to assign and utilize a new exit IP address to process the data request. In an example, based at least in part on determining that the assigned exit IP address is blocked, the primary VPN server may utilize the new exit IP address to request and receive the data of interest from a host device. 
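The behavior summarized above may be sketched, at a high level, as follows. This is a minimal illustration only: the probe callable, the blocked-response heuristic (no response or an explicit refusal), and the helper names are assumptions introduced for the sketch and do not correspond to any specific implementation recited in the present disclosure.

```python
# Minimal sketch of the fallback described above; the probe interface and
# the blocked-response heuristic are illustrative assumptions.
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class QueryResult:
    status: Optional[int]   # None models "no response received"
    body: bytes = b""

def looks_blocked(result: QueryResult) -> bool:
    """Treat no response, or an explicit refusal, as the host blocking the exit IP."""
    return result.status is None or result.status in (403, 429)

def process_data_request(
    query_via: Callable[[str, str], QueryResult],  # (exit_ip, url) -> QueryResult
    first_exit_ip: str,
    second_exit_ip: str,
    url: str,
) -> QueryResult:
    """Serve one data request during a single established VPN connection."""
    result = query_via(first_exit_ip, url)
    if not looks_blocked(result):
        return result  # the first exit IP is still usable for this host
    # The host appears to block the first exit IP: retry with a different
    # exit IP (e.g., one held by a secondary server) without tearing down
    # the established VPN connection with the user device.
    return query_via(second_exit_ip, url)
```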
In some aspects, the new exit IP address may be associated with a secondary server such as, for example, a different VPN server and/or a relay server. In this case, the primary VPN server may establish a secure connection with the secondary server, and route the data request for the data of interest to the host device via the secondary server. The secondary server may utilize the new exit IP address to retrieve the data of interest from the host device and transmit the retrieved data of interest to the primary VPN server, which, in turn, may transmit the received data of interest to the user device. In some aspects, the secondary server may utilize another secondary server to retrieve the data of interest in a similar and/or analogous manner as the primary VPN server. In this case, the secondary server may establish another secure connection with the other secondary server, and route the data request for the data of interest to the host device via the other secondary server. The other secondary server may utilize another new exit IP address to retrieve the data of interest from the host device and transmit the retrieved data of interest to the secondary server, which, in turn, may transmit the received data of interest to the primary VPN server. As such, the primary VPN server may utilize multiple exit IP addresses during the established VPN connection to provide the data of interest to the user device. In this way, the VSP control infrastructure and/or the primary VPN server may enable the user device to receive the data of interest without, among other things, terminating the established VPN connection, establishing the new VPN connection with the new VPN server, and requesting the data of interest utilizing the new exit IP address associated with the new VPN server. As a result, the VSP control infrastructure and/or the primary VPN server may enable efficient utilization of user device resources (e.g., processing resources, memory resources, power consumption resources, battery life, or the like) and VPN resources (computational resources, network bandwidth, management resources, processing resources, memory resources, or the like) for performing suitable tasks associated with the VPN. In some aspects, a processor (e.g., processor720) associated with the VPN server may receive, from a user device during an established VPN connection between the VPN server and the user device, a data request for the VPN server to retrieve data of interest from a host device; utilize, during the established VPN connection, a first exit internet protocol (IP) address to transmit a query to the host device for retrieving the data of interest; determine, based at least in part on transmitting the query, that the first exit IP address is blocked by the host device; and transmit, during the established VPN connection and based at least in part on determining that the first exit IP address is blocked, the data request to a secondary server to enable retransmission of the query to the host device by utilizing a second exit IP address, different from the first exit IP address. In some aspects, the VPN server may proactively, and prior to receiving a data request from the user device, utilize the first exit IP address to transmit periodic queries to determine whether the host device has blocked the first exit IP address.
When the VPN server determines that the host device has blocked the first exit IP address, the VPN server may determine that a second exit IP address available to the VPN server or a secondary server utilizing a second exit IP address, different from the first exit IP address, is to be utilized to retrieve data of interest from the host device. Utilizing an exit IP address by a server (e.g., primary VPN server, secondary server, etc.) may be associated with the server selecting and/or assigning the exit IP address to a user device during an established VPN connection between the user device and the primary VPN server. The exit IP address may be selected from a pool of exit IP addresses available to the server. In some aspects, a first exit IP address may be assigned to a user device to be utilized by the primary VPN server to process a data request. Based at least in part on determining that the first exit IP address is blocked, the server may utilize a new, second exit IP address to process the data request. In some aspects, processing a data request may involve requesting and retrieving, from a host device, data of interest associated with or requested via the data request. In some aspects, the second exit IP address may be associated with a secondary server. In some aspects, the secondary server may utilize another secondary server to retrieve the data of interest in a similar and/or analogous manner as the primary VPN server. In this case, the secondary server may establish another secure connection with the other secondary server, and route the data request for the data of interest to the host device via the other secondary server. The other secondary server may utilize another new exit IP address to retrieve the data of interest from the host device and transmit the retrieved data of interest to the secondary server, which, in turn, may transmit the received data of interest to the primary VPN server. FIG.2is an illustration of an example flow200associated with utilization of multiple exit IP addresses in a VPN, according to various aspects of the present disclosure. Example flow200includes a user device102in communication with a VPN server120. In some aspects, the user device102may communicate with the VPN server120over a network (e.g., network114). In some aspects, the VPN server120may be a primary VPN server. The user device102may be in communication with the VPN server120based at least in part on utilizing an entry IP address to establish a VPN connection with the VPN server120. In some aspects, the established VPN connection may use a VPN protocol such as, for example, Wireguard, IKEv2, OpenVPN, or the like. Based at least in part on the VPN connection being established, the VPN server120may assign an exit IP address (e.g., first exit IP address) to the user device102. In some aspects, the exit IP address may be selected from among a plurality of exit IP addresses included in a pool of exit IP addresses available to the VPN server120. In some aspects, the exit IP address may be randomly selected or sequentially selected from among the plurality of exit IP addresses included in the pool of exit IP addresses. 
Randomly selecting or sequentially selecting an exit IP address may include selecting an exit IP address according to, for example, an inverse sequential order, a random sequential (random but higher) order, a random inverse (random but lower) order, a random non-sequential (random but not next) order, a two-step (random and then next) order, a random including current exit IP address order, a sequential discrete (at least n+2 steps, with n being an integer) order, and/or a random lower bound (random but only within an upper half, upper quartile, etc.) order. In some aspects, the VPN server120may utilize an nftables network filter to assign exit IP addresses. In some aspects, the VSP control infrastructure104(e.g., processing unit110) may program the nftables with respect to assigning of exit IP addresses. For instance, the VSP control infrastructure104may configure the VPN server to select and/or assign exit IP addresses randomly or to select and/or assign exit IP addresses sequentially. Based at least in part on assigning the exit IP address to the user device102, the VPN server120may store a correlation between the entry IP address and the exit IP address (that is assigned to the user device102) in a connection tracking table. During the established VPN connection, the VPN server120may receive a plurality of data requests from the user device102. For instance, as shown by reference numeral205, the user device102may utilize the entry IP address to transmit a data request to the VPN server120. In some aspects, the data request may include a request for the VPN server120to retrieve and provide data of interest to the user device102. In an example, the user device may transmit the data request by utilizing a client application configured and provided by the VSP control infrastructure or a browser installed on the user device102. The data request may be associated with initiating a connection with a website on the Internet, and may request the VPN server120to retrieve and provide data of interest from a host device (e.g., host device122) that is hosting the website. Based at least in part on receiving the data request, as shown by reference numeral210, the VPN server120may process the data request by utilizing the assigned exit IP address. To process the data request, the VPN server120may open a first communication socket between the VPN server120and the host device on the open Internet. Further, the VPN server120may utilize the assigned exit IP address to transmit a query to the host device for the purpose of retrieving the data of interest. As shown by reference numeral215, the VPN server120may determine that the assigned exit IP address is blocked by the host device. In an example, based at least in part on transmitting the query to the host device, the VPN server120may receive a response from the host device indicating that the host device declines to provide the data of interest due to the assigned exit IP address being associated with commercial use by the VPN server. In another example, based at least in part on transmitting the query to the host device, the VPN server120may receive a response from the host device indicating that the host device has blacklisted the assigned exit IP address such that communications transmitted utilizing the assigned exit IP address are discarded by the host device. In yet another example, the VPN server120may fail to receive any response from the host device, indicating that the host device has discarded the query from the VPN server.
In yet another example, based at least in part on transmitting the query to the host device, the VPN server120may receive a response from the host device indicating that only a portion of the data of interest may be received utilizing the assigned exit IP address. As a result, the VPN server120may determine that the assigned exit IP address is blocked such that the data of interest may not be retrieved from the host device by utilizing the assigned exit IP address. In this case, the VPN server120may determine and store, in a local memory, a negative correlation between the assigned exit IP address and the host device to indicate that the assigned exit IP address is not to be utilized for retrieving at least part of the requested information from the host device. As shown by reference numeral220, based at least in part on determining that the assigned exit IP address is blocked by the host device, the VPN server120may automatically, and in real time, suspend utilization of the assigned exit IP address and determine that a new exit IP address is to be utilized to retrieve the data of interest from the host device. In some aspects, the VPN server120may suspend utilization of the assigned exit IP address with respect to only the host device. In other words, the VPN server may continue to utilize the assigned exit IP address to retrieve data of interest for the user device102(or for another user device) from another host device. In some aspects, the VPN server120and/or the VSP control infrastructure may store information regarding suspended utilization of the assigned exit IP address in a memory (e.g., server database112). Further, during suspension of utilization of the assigned exit IP address, the VPN server120may periodically (e.g., every 30 seconds, every 60 seconds, every 3 minutes, every five minutes, every 10 minutes, every 30 minutes, every 60 minutes, etc.) transmit a query to the host device to determine whether the host device has ceased to block the assigned exit IP address. In an example, when the VPN server120receives an expected reply from the host device in response to a transmitted query, the VPN server120may determine that the host device has ceased to block the assigned exit IP address. In this case, the VPN server120may end suspension of utilization of the assigned exit IP address. In other words, the VPN server120may utilize the assigned exit IP address to retrieve data of interest for the user device102from the host device. In this case, the VPN server120may discard from the associated memory the negative correlation between the exit IP address and the host device. In some aspects, the VPN server120may proactively, and prior to receiving a data request from the user device102, utilize the assigned exit IP address to transmit periodic queries to determine whether the host device has blocked the assigned exit IP address. When the VPN server120determines that the host device has blocked the assigned exit IP address, the VPN server120may determine that a secondary server utilizing a second exit IP address, different from the assigned exit IP address, is to be utilized to retrieve data of interest from the host device. During suspension of utilization of the assigned exit IP address, the VPN server120may utilize one or more new exit IP addresses (e.g., one at a time) to retrieve the data of interest from the host device.
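One possible way to keep track of the per-host suspension and periodic re-checking described above is sketched below. The probe callable, the probe interval value, and the bookkeeping structures (a set of negative correlations keyed by exit IP address and host) are assumptions made for this illustration and are not recited in the present disclosure.

```python
# Sketch of per-host exit-IP suspension with periodic re-probing.
# The probe callable, interval, and bookkeeping structures are assumptions.
import time
from typing import Callable, Dict, Set, Tuple

# Negative correlations: (exit_ip, host) pairs for which the exit IP is suspended.
suspended: Set[Tuple[str, str]] = set()
last_probe: Dict[Tuple[str, str], float] = {}
PROBE_INTERVAL_S = 60.0  # e.g., re-check every 60 seconds

def suspend(exit_ip: str, host: str) -> None:
    """Record that exit_ip is not to be used for this host; other hosts are unaffected."""
    suspended.add((exit_ip, host))

def maybe_reprobe(exit_ip: str, host: str, probe: Callable[[str, str], bool]) -> bool:
    """Periodically re-check a suspended exit IP and lift the suspension when the
    host answers the probe as expected. Returns True when the exit IP is usable."""
    key = (exit_ip, host)
    if key not in suspended:
        return True
    now = time.monotonic()
    if now - last_probe.get(key, 0.0) < PROBE_INTERVAL_S:
        return False  # still within the probe interval; keep the suspension
    last_probe[key] = now
    if probe(exit_ip, host):  # expected reply: the host no longer blocks this exit IP
        suspended.discard(key)  # discard the negative correlation
        return True
    return False
```

Note that the suspension is keyed by the pair of exit IP address and host, so the same exit IP address remains available for other host devices, consistent with the description above.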
In some aspects, during the established VPN connection between the VPN server120and the user device102, the VPN server120may communicate with a secondary server118to establish a connection between the VPN server120and the secondary server118. To enable the VPN server120to communicate with the secondary server118, the VSP control infrastructure may configure the VPN server120with communication information associated with the secondary server118. Alternatively, based at least in part on determining that the assigned exit IP address is blocked, the VPN server120may transmit to the VSP control infrastructure a request to receive the communication information associated with a secondary server118. The VPN server120may utilize the communication information to initiate communication with the secondary server118. In some aspects, the secondary server118may be another VPN server and/or a relay server configured and maintained by the VSP control infrastructure. Further, the VSP control infrastructure may select the secondary server118to be utilized by the VPN server120to retrieve the data of interest from the host device based at least in part on the secondary server118being optimal for retrieving the data of interest. In an example, the secondary server118may be optimal because the secondary server118may be reserved for the purpose of retrieving data of interest by the VPN server120. In another example, the secondary server118may be optimal because the secondary server118may be located geographically/physically closer (and therefore able to provide speedier service) to the user device102and/or the VPN server120. In yet another example, the secondary server118may be optimal because the secondary server118may currently have a highest available bandwidth to retrieve the data of interest. In yet another example, the secondary server118may be optimal because the secondary server118may be located geographically/physically closer to an international Internet exchange hub (and therefore able to provide speedier service). In some aspects the connection between the VPN server120and the secondary server118may include a secure connection (e.g., encrypted tunnel) such that data communicated between the VPN server120and the secondary server118may be encrypted and/or decrypted based at least in part on utilizing a negotiated cryptographic key. The cryptographic key may be a symmetric key determined based at least in part on combination of a public key associated with the VPN server120, a public key associated with the secondary server118, and/or a randomly generated value. Based at least in part on establishing a connection with the secondary server118, the VPN server120may modify a configuration file and/or a configuration database related to a domain name system (DNS) server included in and/or associated with the VPN server120. In some aspects, the modification to the configuration file and/or the configuration database may include an association between the host device that blocked the assigned exit IP address and the secondary server118. In an example, the VPN server120may modify the configuration file and/or the configuration database such that, when the VPN server120receives a data request for retrieving data of interest from the host device that blocked the assigned exit IP address, the DNS server returns the communication information (e.g., IP address) associated with the secondary server118rather than communication information associated with the host device. 
As a result, when the VPN server120receives a data request for retrieving data of interest from the host device that blocked the assigned exit IP address, the data request may be automatically, and in real time, encrypted and transmitted to the secondary server118via the secure connection between the VPN server120and the secondary server118. As shown by reference numeral230, the VPN server120may again process the data request by transmitting, during the established VPN connection between the VPN server120and the user device102, an encrypted message to the secondary server118via the secure connection. The encrypted message may include the data request identifying the data of interest to be retrieved and/or the communication information associated with the host device indicating that the data of interest is to be retrieved from the host device. In some aspects, the encrypted message may include an unsecure IP packet. In this case, the VPN server120may include the communication information in a request line included in the unsecure IP packet. The data request may be included in a payload of the unsecure IP packet. In some aspects, the message may include a secure IP packet. In this case, the VPN server120may configure the secure IP packet to include a server name indication (SNI) header, and may include the communication information within the SNI header. The data request may be included in a payload of the secure IP packet. Based at least in part on receiving the encrypted message, the secondary server118may decrypt the encrypted message to determine the IP packet. In some aspects, the secondary server118may analyze the payload of the IP packet to determine the data request identifying the data of interest. In some aspects, the secondary server118may determine whether the message includes the unsecure IP packet or a secure IP packet. When the secondary server118determines that the message includes an unsecure IP packet, the secondary server118may analyze the request line to determine the communication information associated with the host device. Alternatively, when the secondary server118determines that the message includes a secure IP packet, the secondary server118may analyze the SNI header to determine the communication information associated with the host device. Further, the secondary server118may assign one or more (e.g., one at a time) new exit IP addresses (e.g., second exit IP address) for retrieving the data of interest from the host device. The new exit IP address may be selected from a pool of exit IP addresses available to the secondary server118. In some aspects, the secondary server118may randomly or sequentially select the new exit IP address from the pool of exit IP addresses, in a similar and/or analogous manner as discussed elsewhere herein. Based at least in part on determining the data of interest and/or the communication information associated with the host device and/or the new exit IP address, the secondary server118may utilize the new exit IP address to transmit a query to the host device for the purpose of retrieving the data of interest. Because the new exit IP address associated with the secondary server118may not be blocked by the host device, the host device may provide the data of interest to the secondary server118. Based at least in part on receiving the data of interest from the host device, the secondary server118may transmit the data of interest to the VPN server120. 
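A rough sketch of how a secondary server might unpack the forwarded message and assign a new exit IP address, consistent with the description above, is shown below. The message layout (request line versus SNI header), field names, and the random selection from the exit IP pool are illustrative assumptions only and are not recited in the present disclosure.

```python
# Illustrative handling of a forwarded data request at a secondary server.
# The message layout, field names, and random exit-IP choice are assumptions.
import random
from dataclasses import dataclass
from typing import Callable, List, Optional

@dataclass
class ForwardedMessage:
    secure: bool                       # True: "secure IP packet" carrying an SNI header
    sni_host: Optional[str]            # host communication info from the SNI header
    request_line_host: Optional[str]   # host communication info from the request line
    payload: bytes                     # the data request itself

def handle_forwarded(msg: ForwardedMessage,
                     exit_ip_pool: List[str],
                     fetch: Callable[[str, str, bytes], bytes]) -> bytes:
    """Determine the target host, assign a new exit IP, and retrieve the data."""
    host = msg.sni_host if msg.secure else msg.request_line_host
    if host is None:
        raise ValueError("forwarded message carries no host communication information")
    new_exit_ip = random.choice(exit_ip_pool)  # selection could also be sequential
    # fetch(exit_ip, host, payload) stands in for the actual retrieval of the
    # data of interest from the host device by utilizing the new exit IP.
    return fetch(new_exit_ip, host, msg.payload)
```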
In some aspects, the secondary server118may encrypt the data of interest prior to transmitting the data of interest to the VPN server120via the secure connection between the VPN server120and the secondary server118. In some aspects, in association with the data of interest, the secondary server118may transmit information identifying the new exit IP address utilized to retrieve the data of interest from the host device. Based at least in part on receiving the information identifying the new exit IP address, the VPN server120may add a positive correlation between the new exit IP address and the host device to indicate that the new exit IP address, associated with the secondary server118, may be utilized and/or is available to be utilized to retrieve information (e.g., data of interest, etc.) from the host device. Additionally, the VPN server120may add a correlation between the new exit IP address associated with the secondary server118and the entry IP address utilized by the user device102to transmit the data request. In some aspects, the correlation between the entry IP address and the new exit IP address may be privately or internally stored within the VPN server120in, for example, the connection tracking table. Based at least in part on receiving the data of interest, the VPN server120may inspect the connection tracking table to determine the routing of the received data of interest. In this case, the correlation between the entry IP address and the new exit IP address may indicate that the data of interest, that was retrieved by utilizing the new exit IP address, is to be routed to the user device102, which transmitted the data request utilizing the entry IP address of the VPN server120. Further, based at least in part on a correlation between the entry IP address and the new exit IP address, as shown by reference numeral235, the VPN server120may transmit the received data of interest to the user device102. In some aspects, the VPN server120may receive a second data request from the user device102during the established VPN connection. When the second data request is associated with retrieving data of interest from the host device, and utilization of the assigned exit IP address is suspended, the VPN server120may again utilize the new exit IP address associated with the secondary server118to retrieve the data of interest from the host device, as discussed above. Alternatively, when the second data request is associated with retrieving data of interest from another host device, the VPN server120may utilize the assigned exit IP address to retrieve the data of interest from the other host device. In this way, by utilizing the new exit IP address when utilization of the assigned exit IP address is suspended, the VSP control infrastructure and/or the VPN server may enable the user device to receive the data of interest without, among other things, terminating the established VPN connection, establishing the new VPN connection with the new VPN server, and requesting the data of interest utilizing the new exit IP address associated with the new VPN server. As a result, the VSP control infrastructure and/or the VPN server may enable efficient utilization of user device resources (e.g., processing resources, memory resources, power consumption resources, battery life, or the like) and VPN resources (computational resources, network bandwidth, management resources, processing resources, memory resources, or the like) for performing suitable tasks associated with the VPN.
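The connection-tracking behavior described in connection withFIG.2, in which an entry IP address is correlated with whichever exit IP address actually served the request so that retrieved data can be routed back to the correct user device, might look roughly as follows. The table shape and lookup direction are assumptions introduced for this sketch and are not recited in the present disclosure.

```python
# Sketch of a connection tracking table correlating entry and exit IP addresses.
# The table shape and lookup direction are assumptions for illustration.
from typing import Dict, Optional

# entry IP (identifying the user device's tunnel) -> exit IP currently in use
connection_tracking: Dict[str, str] = {}

def record_correlation(entry_ip: str, exit_ip: str) -> None:
    """Store (or update) which exit IP is serving requests for this entry IP."""
    connection_tracking[entry_ip] = exit_ip

def entry_for_exit(exit_ip: str) -> Optional[str]:
    """Reverse lookup: which entry IP should receive data retrieved via exit_ip."""
    for entry_ip, current_exit_ip in connection_tracking.items():
        if current_exit_ip == exit_ip:
            return entry_ip
    return None
```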
As indicated above,FIG.2is provided as an example. Other examples may differ from what is described with regard toFIG.2. FIG.3is an illustration of an example process300associated with utilization of multiple exit IP addresses in a VPN environment, according to various aspects of the present disclosure. In some aspects, the process300may be performed by an associated memory (e.g., memory730) and/or an associated processor (e.g., processor720) related to a VPN server (e.g., VPN server120) configured by an associated VSP control infrastructure. As shown by reference numeral310, process300includes receiving, at a VPN server from a user device during an established VPN connection between the VPN server and the user device, a data request for the VPN server to retrieve data of interest from a host device. For instance, the VPN server may utilize an associated communication interface (e.g., communication interface770) along with the associated memory and/or processor to receive, at a VPN server from a user device during an established VPN connection between the VPN server and the user device, a data request for the VPN server to retrieve data of interest from a host device, as discussed elsewhere herein. As shown by reference numeral320, process300includes utilizing, by the VPN server, a first exit internet protocol (IP) address to transmit a query for retrieving the data of interest to the host device during the established VPN connection. For instance, the VPN server may utilize the associated memory and/or processor to utilize a first exit internet protocol (IP) address to transmit a query for retrieving the data of interest to the host device during the established VPN connection, as discussed elsewhere herein. As shown by reference numeral330, process300includes determining, by the VPN server based at least in part on transmitting the query, that the first exit IP address is blocked by the host device. For instance, the VPN server may utilize the associated memory and/or processor to determine, based at least in part on transmitting the query, that the first exit IP address is blocked by the host device, as discussed elsewhere herein. As shown by reference numeral340, process300includes transmitting, by the VPN server during the established VPN connection and based at least in part on determining that the first exit IP address is blocked, the data request to a secondary server to enable retransmission of the query to the host device by utilizing a second exit IP address, different from the first exit IP address. For instance, the VPN server may utilize the associated communication interface, memory, and/or processor to transmit, during the established VPN connection and based at least in part on determining that the first exit IP address is blocked, the data request to a secondary server to enable retransmission of the query to the host device by utilizing a second exit IP address, different from the first exit IP address, as discussed elsewhere herein. Process300may include additional aspects, such as any single aspect or any combination of aspects described below and/or in connection with one or more other processes described elsewhere herein. In a first aspect, process300may include establishing, by the VPN server based at least in part on determining that the first exit IP address is blocked, a secure connection between the VPN server and the secondary server. 
In a second aspect, alone or in combination with the first aspect, process300may include modifying, by the VPN server based at least in part on determining that the first exit IP address is blocked, a configuration of an associated domain name system (DNS) server such that the DNS server returns communication information associated with the secondary server when information is to be retrieved from the host device. In a third aspect, alone or in combination with the first through second aspects, process300may include transmitting, by the VPN server to the secondary server based at least in part on determining that the first exit IP address is blocked, communication information associated with the host device to enable retransmission of the query to the host device. In a fourth aspect, alone or in combination with the first through third aspects, process300may include receiving, by the VPN server from the secondary server during the established VPN connection, the data of interest retrieved from the host device based at least in part on utilization of the second exit IP address; and transmitting, by the VPN server during the established VPN connection, the data of interest to the user device. In a fifth aspect, alone or in combination with the first through fourth aspects, process300may include suspending, by the VPN server based at least in part on determining that the first exit IP address is blocked by the host device, utilization of the first exit IP address for retrieving information from the host device. In a sixth aspect, alone or in combination with the first through fifth aspects, process300may include transmitting, by the VPN server, a query to the host device during a suspension of the first exit IP address to determine whether the first exit IP address is currently blocked by the host device. AlthoughFIG.3shows example blocks of the process, in some aspects, the process may include additional blocks, fewer blocks, different blocks, or differently arranged blocks than those depicted inFIG.3. Additionally, or alternatively, two or more of the blocks of the process may be performed in parallel. As indicated above,FIG.3is provided as an example. Other examples may differ from what is described with regard toFIG.3. FIG.4is an illustration of an example process400associated with utilization of multiple exit IP addresses in a VPN environment, according to various aspects of the present disclosure. In some aspects, the process400may be performed by an associated memory (e.g., memory730) and/or an associated processor (e.g., processor720) related to a VPN server (e.g., VPN server120) configured by an associated VSP control infrastructure. As shown by reference numeral410, process400includes receiving, at a VPN server from a user device during an established VPN connection between the VPN server and the user device, a data request for the VPN server to retrieve data of interest from a host device. For instance, the VPN server may utilize an associated communication interface (e.g., communication interface770) along with the associated memory and/or processor to receive, from a user device during an established VPN connection between the VPN server and the user device, a data request for the VPN server to retrieve data of interest from a host device, as discussed elsewhere herein.
As shown by reference numeral420, process400includes transmitting, by the VPN server to the host device during the established VPN connection, a query to retrieve the data of interest based at least in part on utilizing a first exit IP address. For instance, the VPN server may utilize the associated communication interface, memory, and processor to transmit, to the host device during the established VPN connection, a query to retrieve the data of interest based at least in part on utilizing a first exit IP address, as discussed elsewhere herein. As shown by reference numeral430, process400includes determining, by the VPN server during the established VPN connection and based at least in part on transmitting the query, that the first exit IP address is blocked by the host device. For instance, the VPN server may utilize the associated memory and processor to determine, during the established VPN connection and based at least in part on transmitting the query, that the first exit IP address is blocked by the host device, as discussed elsewhere herein. As shown by reference numeral440, process400includes retrieving, by the VPN server during the established VPN connection and based at least in part on determining that the first exit IP address is blocked by the host device, the data of interest based at least in part on utilizing a second exit IP address, different from the first exit IP address. For instance, the VPN server may utilize the associated communication interface, memory, and processor to retrieve, during the established VPN connection and based at least in part on determining that the first exit IP address is blocked by the host device, the data of interest based at least in part on utilizing a second exit IP address, different from the first exit IP address, as discussed elsewhere herein. As shown by reference numeral450, process400includes transmitting, by the VPN server to the user device during the established VPN connection, the data of interest that is retrieved based at least in part on utilizing the second exit IP address. For instance, the VPN server may utilize the associated communication interface, memory, and processor to transmit, to the user device during the established VPN connection, the data of interest that is retrieved based at least in part on utilizing the second exit IP address, as discussed elsewhere herein. Process400may include additional aspects, such as any single aspect or any combination of aspects described below and/or in connection with one or more other processes described elsewhere herein. In a first aspect, wherein, process400may include determining, by the VPN server, that the first exit IP address is blocked by the host device based at least in part on the host device declining to provide the data of interest. In a second aspect, alone or in combination with the first aspect, process400may include determining, by the VPN server, that the first exit IP address is blocked by the host device based at least in part on determining that the query transmitted by utilizing the first exit IP address is discarded by the host device. In a third aspect, alone or in combination with the first through second aspects, process400may include automatically suspending, by the VPN server, utilization of the first exit IP address to retrieve information from the host device based at least in part on determining that the first exit IP address is blocked by the host device.
In a fourth aspect, alone or in combination with the first through third aspects, process400may include transmitting, during the established VPN connection by the VPN server based at least in part on receiving another data request for the VPN server to retrieve other data of interest from another host device, another query to retrieve the other data of interest from the other host device based at least in part on utilizing the first exit IP address. In a fifth aspect, alone or in combination with the first through fourth aspects, process400includes updating, by the VPN server based at least in part on retrieving the data of interest by utilizing the second exit IP address, a local connection tracking table to include a positive correlation between the second exit IP address and the host device to indicate that the second exit IP address is available for retrieving information from the host device. In a sixth aspect, alone or in combination with the first through fifth aspects, process400may include modifying, by the VPN server based at least in part on determining that the first exit IP address is blocked, a configuration of an associated domain name system (DNS) server such that the DNS server returns communication information associated with the secondary server when information is to be retrieved from the host device. AlthoughFIG.4shows example blocks of the process, in some aspects, the process may include additional blocks, fewer blocks, different blocks, or differently arranged blocks than those depicted inFIG.4. Additionally, or alternatively, two or more of the blocks of the process may be performed in parallel. As indicated above,FIG.4is provided as an example. Other examples may differ from what is described with regard toFIG.4. FIG.5is an illustration of an example process500associated with utilization of multiple exit IP addresses in a VPN environment, according to various aspects of the present disclosure. In some aspects, the process500may be performed by an associated memory (e.g., memory730) and/or an associated processor (e.g., processing unit110, processor720) related to a VSP control infrastructure configured to configure an associated VPN server (e.g., VPN server120). As shown by reference numeral510, process500includes configuring a VPN server to receive, from a user device during an established VPN connection between the VPN server and the user device, a data request for the VPN server to retrieve data of interest from a host device. For instance, the VSP control infrastructure may utilize the associated memory and/or processor to configure a VPN server to receive, from a user device during an established VPN connection between the VPN server and the user device, a data request for the VPN server to retrieve data of interest from a host device, as discussed elsewhere herein. As shown by reference numeral520, process500includes configuring the VPN server to utilize, during the established VPN connection, a first exit internet protocol (IP) address to transmit a query to the host device for retrieving the data of interest. For instance, the VSP control infrastructure may utilize the associated memory and/or processor to configure the VPN server to utilize, during the established VPN connection, a first exit internet protocol (IP) address to transmit a query to the host device for retrieving the data of interest, as discussed elsewhere herein. 
As shown by reference numeral530, process500includes configuring the VPN server to determine, based at least in part on transmitting the query, that the first exit IP address is blocked by the host device. For instance, the VSP control infrastructure may utilize the associated memory and/or processor to configure the VPN server to determine, based at least in part on transmitting the query, that the first exit IP address is blocked by the host device, as discussed elsewhere herein. As shown by reference numeral540, process500includes configuring the VPN server to transmit, during the established VPN connection and based at least in part on determining that the first exit IP address is blocked, the data request to a secondary server to enable retransmission of the query to the host device by utilizing a second exit IP address, different from the first exit IP address. For instance, the VSP control infrastructure may utilize the associated memory and/or processor to configure the VPN server to transmit, during the established VPN connection and based at least in part on determining that the first exit IP address is blocked, the data request to a secondary server to enable retransmission of the query to the host device by utilizing a second exit IP address, different from the first exit IP address, as discussed elsewhere herein. Process500may include additional aspects, such as any single aspect or any combination of aspects described below and/or in connection with one or more other processes described elsewhere herein. In a first aspect, wherein, process500may include configuring a VPN server to establish, based at least in part on determining that the first exit IP address is blocked, a secure connection between the VPN server and the secondary server. In a second aspect, alone or in combination with the first aspect, process500may include configuring a VPN server to modify, based at least in part on determining that the first exit IP address is blocked, a configuration of an associated domain name system (DNS) server such that the DNS server returns communication information associated with the secondary server when information is to be retrieved from the host device. In a third aspect, alone or in combination with the first through second aspects, process500may include configuring a VPN server to transmit, to the secondary server based at least in part on determining that the first exit IP address is blocked, communication information associated with the host device to enable retransmission of the query to the host device. In a fourth aspect, alone or in combination with the first through third aspects, process500may include configuring a VPN server to receive, from the secondary server during the established VPN connection, the data of interest retrieved from the host device based at least in part on utilization of the second exit IP address; and configuring a VPN server to transmit, during the established VPN connection, the data of interest to the user device. In a fifth aspect, alone or in combination with the first through fourth aspects, process500may include configuring a VPN server to suspend, based at least in part on determining that the first exit IP address is blocked by the host device, utilization of the first exit IP address for retrieving information from the host device.
In a sixth aspect, alone or in combination with the first through fifth aspects, process500may include configuring a VPN server to transmit a query to the host device during a suspension of the first exit IP address to determine whether the first exit IP address is currently blocked by the host device. AlthoughFIG.5shows example blocks of the process, in some aspects, the process may include additional blocks, fewer blocks, different blocks, or differently arranged blocks than those depicted inFIG.5. Additionally, or alternatively, two or more of the blocks of the process may be performed in parallel. As indicated above,FIG.5is provided as an example. Other examples may differ from what is described with regard toFIG.5. FIG.6is an illustration of an example process600associated with utilization of multiple exit IP addresses in a VPN environment, according to various aspects of the present disclosure. In some aspects, the process600may be performed by an associated memory (e.g., memory730) and/or an associated processor (e.g., processing unit110, processor720) related to a VSP control infrastructure configured to configure an associated VPN server (e.g., VPN server120). As shown by reference numeral610, process600includes configuring a VPN server to receive, from a user device during an established VPN connection between the VPN server and the user device, a data request for the VPN server to retrieve data of interest from a host device. For instance, the VSP control infrastructure may utilize the associated memory and/or processor to configure a VPN server to receive, from a user device during an established VPN connection between the VPN server and the user device, a data request for the VPN server to retrieve data of interest from a host device, as discussed elsewhere herein. As shown by reference numeral620, process600includes configuring the VPN server to transmit, to the host device during the established VPN connection, a query to retrieve the data of interest based at least in part on utilizing a first exit IP address. For instance, the VSP control infrastructure may utilize the associated memory and/or processor to configure the VPN server to transmit, to the host device during the established VPN connection, a query to retrieve the data of interest based at least in part on utilizing a first exit IP address, as discussed elsewhere herein. As shown by reference numeral630, process600includes configuring the VPN server to determine, during the established VPN connection and based at least in part on transmitting the query, that the first exit IP address is blocked by the host device. For instance, the VSP control infrastructure may utilize the associated memory and/or processor to configure the VPN server to determine, during the established VPN connection and based at least in part on transmitting the query, that the first exit IP address is blocked by the host device, as discussed elsewhere herein. As shown by reference numeral640, process600includes configuring the VPN server to retrieve, during the established VPN connection and based at least in part on determining that the first exit IP address is blocked by the host device, the data of interest based at least in part on utilizing a second exit IP address, different from the first exit IP address. 
For instance, the VSP control infrastructure may utilize the associated memory and/or processor to configure the VPN server to retrieve, during the established VPN connection and based at least in part on determining that the first exit IP address is blocked by the host device, the data of interest based at least in part on utilizing a second exit IP address, different from the first exit IP address, as discussed elsewhere herein. As shown by reference numeral650, process600includes configuring the VPN server to transmit, to the user device during the established VPN connection, the data of interest that is retrieved based at least in part on utilizing the second exit IP address. For instance, the VSP control infrastructure may utilize the associated memory and/or processor to configure the VPN server to transmit, to the user device during the established VPN connection, the data of interest that is retrieved based at least in part on utilizing the second exit IP address, as discussed elsewhere herein. Process600may include additional aspects, such as any single aspect or any combination of aspects described below and/or in connection with one or more other processes described elsewhere herein. In a first aspect, wherein, process600may include configuring the VPN server to determine that the first exit IP address is blocked by the host device based at least in part on the host device declining to provide the data of interest. In a second aspect, alone or in combination with the first aspect, process600may include configuring the VPN server to determine that the first exit IP address is blocked by the host device based at least in part on determining that the query transmitted by utilizing the first exit IP address is discarded by the host device. In a third aspect, alone or in combination with the first through second aspects, process600may include configuring the VPN server to automatically suspend utilization of the first exit IP address to retrieve information from the host device based at least in part on determining that the first exit IP address is blocked by the host device. In a fourth aspect, alone or in combination with the first through third aspects, process600may include configuring the VPN server to transmit, during the established VPN connection and based at least in part on receiving another data request for the VPN server to retrieve other data of interest from another host device, another query to retrieve the other data of interest from the other host device based at least in part on utilizing the first exit IP address. In a fifth aspect, alone or in combination with the first through fourth aspects, process600may include configuring the VPN server to update, based at least in part on retrieving the data of interest by utilizing the second exit IP address, a local connection tracking table to include a positive correlation between the second exit IP address and the host device to indicate that the second exit IP address is available for retrieving information from the host device. In a sixth aspect, alone or in combination with the first through fifth aspects, process600may include configuring the VPN server to modify, based at least in part on determining that the first exit IP address is blocked, a configuration of an associated domain name system (DNS) server such that the DNS server returns communication information associated with the secondary server when information is to be retrieved from the host device. 
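As a rough illustration of the DNS configuration modification recited in several of the processes above, the sketch below overrides name resolution for a host that has blocked the first exit IP address so that lookups return the secondary server's address. The resolver interface, the stand-in addresses (drawn from reserved documentation ranges), and the data shapes are assumptions introduced for this sketch and are not recited in the present disclosure.

```python
# Sketch of the DNS override described above: when a host has blocked the
# first exit IP, lookups for that host return the secondary server's address.
# The resolver interface and stand-in addresses are assumptions.
from typing import Callable, Dict

class OverridingResolver:
    def __init__(self, upstream_lookup: Callable[[str], str], overrides: Dict[str, str]):
        self._upstream = upstream_lookup   # hostname -> IP (normal resolution)
        self._overrides = overrides        # blocked host -> secondary server IP

    def resolve(self, hostname: str) -> str:
        # Return the secondary server's address for hosts that blocked the exit IP;
        # fall back to normal resolution for every other hostname.
        if hostname in self._overrides:
            return self._overrides[hostname]
        return self._upstream(hostname)

# Example: after "blocked.example.com" blocks the first exit IP, requests for
# it are routed to a (hypothetical) secondary server at 198.51.100.7.
resolver = OverridingResolver(
    upstream_lookup=lambda hostname: "203.0.113.10",  # stand-in for real DNS
    overrides={"blocked.example.com": "198.51.100.7"},
)
assert resolver.resolve("blocked.example.com") == "198.51.100.7"
```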
AlthoughFIG.6shows example blocks of the process, in some aspects, the process may include additional blocks, fewer blocks, different blocks, or differently arranged blocks than those depicted inFIG.6. Additionally, or alternatively, two or more of the blocks of the process may be performed in parallel. As indicated above,FIG.6is provided as an example. Other examples may differ from what is described with regard toFIG.6. FIG.7is an illustration of example devices700, according to various aspects of the present disclosure. In some aspects, the example devices700may form part of or implement the systems, environments, infrastructures, components, devices or the like described elsewhere herein (e.g., VPN server, the VSP control infrastructure, etc.) and may be utilized for performing the example processes described elsewhere herein. The example devices700may include a universal bus710communicatively coupling a processor720, a memory730, a storage component740, an input component750, an output component760, and a communication interface770. Bus710may include a component that permits communication among multiple components of a device700. Processor720may be implemented in hardware, firmware, and/or a combination of hardware and software. Processor720may take the form of a central processing unit (CPU), a graphics processing unit (GPU), an accelerated processing unit (APU), a microprocessor, a microcontroller, a digital signal processor (DSP), a field-programmable gate array (FPGA), an application-specific integrated circuit (ASIC), or another type of processing component. In some aspects, processor720may include one or more processors capable of being programmed to perform a function. Memory730may include a random access memory (RAM), a read only memory (ROM), and/or another type of dynamic or static storage device (e.g., a flash memory, a magnetic memory, and/or an optical memory) that stores information and/or instructions for use by processor720. Storage component740may store information and/or software related to the operation and use of a device700. For example, storage component740may include a hard disk (e.g., a magnetic disk, an optical disk, and/or a magneto-optic disk), a solid state drive (SSD), a compact disc (CD), a digital versatile disc (DVD), a floppy disk, a cartridge, a magnetic tape, and/or another type of non-transitory computer-readable medium, along with a corresponding drive. Input component750may include a component that permits a device700to receive information, such as via user input (e.g., a touch screen display, a keyboard, a keypad, a mouse, a button, a switch, and/or a microphone). Additionally, or alternatively, input component750may include a component for determining location (e.g., a global positioning system (GPS) component) and/or a sensor (e.g., an accelerometer, a gyroscope, an actuator, another type of positional or environmental sensor, and/or the like). Output component760may include a component that provides output information from device700(via, for example, a display, a speaker, a haptic feedback component, an audio or visual indicator, and/or the like). Communication interface770may include a transceiver-like component (e.g., a transceiver, a separate receiver, a separate transmitter, and/or the like) that enables a device700to communicate with other devices, such as via a wired connection, a wireless connection, or a combination of wired and wireless connections. 
Communication interface770may permit device700to receive information from another device and/or provide information to another device. For example, communication interface770may include an Ethernet interface, an optical interface, a coaxial interface, an infrared interface, a radio frequency (RF) interface, a universal serial bus (USB) interface, a Wi-Fi interface, a cellular network interface, and/or the like. A device700may perform one or more processes described elsewhere herein. A device700may perform these processes based on processor720executing software instructions stored by a non-transitory computer-readable medium, such as memory730and/or storage component740. As used herein, the term “computer-readable medium” may refer to a non-transitory memory device. A memory device may include memory space within a single physical storage device or memory space spread across multiple physical storage devices. Software instructions may be read into memory730and/or storage component740from another computer-readable medium or from another device via communication interface770. When executed, software instructions stored in memory730and/or storage component740may cause processor720to perform one or more processes described elsewhere herein. Additionally, or alternatively, hardware circuitry may be used in place of or in combination with software instructions to perform one or more processes described elsewhere herein. Thus, implementations described herein are not limited to any specific combination of hardware circuitry and software. The quantity and arrangement of components shown inFIG.7are provided as an example. In practice, a device700may include additional components, fewer components, different components, or differently arranged components than those shown inFIG.7. Additionally, or alternatively, a set of components (e.g., one or more components) of a device700may perform one or more functions described as being performed by another set of components of a device700. As indicated above,FIG.7is provided as an example. Other examples may differ from what is described with regard toFIG.7. Persons of ordinary skill in the art will appreciate that the aspects encompassed by the present disclosure are not limited to the particular exemplary aspects described herein. In that regard, although illustrative aspects have been shown and described, a wide range of modification, change, and substitution is contemplated in the foregoing disclosure. It is understood that such variations may be made to the aspects without departing from the scope of the present disclosure. Accordingly, it is appropriate that the appended claims be construed broadly and in a manner consistent with the present disclosure. The foregoing disclosure provides illustration and description, but is not intended to be exhaustive or to limit the aspects to the precise form disclosed. Modifications and variations may be made in light of the above disclosure or may be acquired from practice of the aspects. As used herein, the term “component” is intended to be broadly construed as hardware, firmware, or a combination of hardware and software. As used herein, a processor is implemented in hardware, firmware, or a combination of hardware and software. 
As used herein, satisfying a threshold may, depending on the context, refer to a value being greater than the threshold, greater than or equal to the threshold, less than the threshold, less than or equal to the threshold, equal to the threshold, or not equal to the threshold, among other examples, or combinations thereof. It will be apparent that systems or methods described herein may be implemented in different forms of hardware, firmware, or a combination of hardware and software. The actual specialized control hardware or software code used to implement these systems or methods is not limiting of the aspects. Thus, the operation and behavior of the systems or methods were described herein without reference to specific software code—it being understood that software and hardware can be designed to implement the systems or methods based, at least in part, on the description herein. Even though particular combinations of features are recited in the claims or disclosed in the specification, these combinations are not intended to limit the disclosure of various aspects. In fact, many of these features may be combined in ways not specifically recited in the claims or disclosed in the specification. Although each dependent claim listed below may directly depend on only one claim, the disclosure of various aspects includes each dependent claim in combination with every other claim in the claim set. A phrase referring to "at least one of" a list of items refers to any combination of those items, including single members. As an example, "at least one of: a, b, or c" is intended to cover a, b, c, a-b, a-c, b-c, and a-b-c, as well as any combination with multiples of the same element (for example, a-a, a-a-a, a-a-b, a-a-c, a-b-b, a-c-c, b-b, b-b-b, b-b-c, c-c, and c-c-c or any other ordering of a, b, and c). No element, act, or instruction used herein should be construed as critical or essential unless explicitly described as such. Also, as used herein, the articles "a" and "an" are intended to include one or more items, and may be used interchangeably with "one or more." Further, as used herein, the article "the" is intended to include one or more items referenced in connection with the article "the" and may be used interchangeably with "the one or more." Furthermore, as used herein, the term "set" is intended to include one or more items (e.g., related items, unrelated items, a combination of related and unrelated items, etc.), and may be used interchangeably with "one or more." Where only one item is intended, the phrase "only one" or similar language is used. Also, as used herein, the terms "has," "have," "having," or the like are intended to be open-ended terms. Further, the phrase "based on" is intended to mean "based, at least in part, on" unless explicitly stated otherwise. Also, as used herein, the term "or" is intended to be inclusive when used in a series and may be used interchangeably with "and/or," unless explicitly stated otherwise (e.g., if used in combination with "either" or "only one of").
11943203 | DETAILED DESCRIPTION The description that follows includes systems, methods, techniques, instruction sequences, and computing machine program products that embody illustrative embodiments of the disclosure. In the following description, for the purposes of explanation, numerous specific details are set forth in order to provide an understanding of various embodiments of the inventive subject matter. It will be evident, however, to those skilled in the art, that embodiments of the inventive subject matter may be practiced without these specific details. In general, well-known instruction instances, protocols, structures, and techniques are not necessarily shown in detail. As discussed, it can be difficult to securely manage database traffic sent and received between database systems. An example networked database system includes a virtual private cloud deployment that uses cloud data storage devices and cloud compute resources dedicated to that deployment. Different deployments can be linked, and channels can be set up to send and receive data between the deployments. For example, deployment_A can be a deployment (e.g., a database management system (DBMS) running within an Amazon Web Services® (AWS) Virtual Private Cloud (VPC)) at a first region such as San Francisco, and deployment_B can be another deployment (e.g., another DBMS in different AWS VPC) at a second region, such as New York City. Deployment_A and deployment_B can create a link over which a stream of data, such as replication traffic, is sent between the deployments. For example, replication traffic of a primary database in deployment_A can be replicated to a secondary database located in deployment_B. While it may be possible to replicate the traffic from deployment_A to deployment_B it can still be difficult to ensure that the data takes a certain path or stays within a certain region while in transit between the two deployments. For instance, a database administrator may require that none of its data in its databases ever be transferred over the open Internet. Further, to comply with data governance laws, the database administrator may seek to configure their databases such that all data in the database network stays within a certain region. For example, the database administrator may seek to ensure that all data transferred between deployment_A and deployment_B remain within a given country (e.g., USA) and additionally the data may never be transferred over the open Internet (e.g., encrypted in TLS traffic over the Internet) while in the given country. Additionally, many VPCs are not configured for replication between the different VPCs and may charge egress export fees (e.g., egress fees) even though the traffic is being replicated to another deployment of the same VPC provider. Further difficulty arises when sending data between different types of database deployments securely. For example, if deployment_A is a VPC from a first provider (e.g., AWS VPC) and deployment_B is a VPC from second different provider (e.g., Google Private Cloud (GPC)), the different providers may have different and potentially incongruent security mechanisms. For instance, deployment_B may implement a hardware security module (HSM) that does not enable importing or exporting of encryption keys, thereby greatly increasing the difficulty and practicality of transferring data between the deployments. 
Additionally, even when the different deployments have congruent security mechanisms (e.g., each deployment has an HSM that enables import/export of keys), managing the keys as the number of replicated databases increases to enterprise levels (e.g., hundreds of thousands of database customers at the different deployments, where each replicates data to other databases in other deployments) is very difficult to implement in a secure manner that scales with network growth. To address these issues, a replication manager and channel manager can be implemented in a deployment to encrypt the traffic in an approach that is agnostic to various configurations of HSMs and VPCs, and further to transfer the traffic between deployments using nodes of a private network that are external to the deployments. For example, the private network can be a virtual private network (VPN) that implements VPN nodes (e.g., AT&T® NetBond® nodes, a VPN server/node at a first location and another VPN server/node at a second location) to transfer traffic within the virtual private network. When one or more databases in deployment_A send data to another database in deployment_B, e.g., replication traffic, the channel manager can implement a cloud connection (e.g., hosted connections provided by the given VPC provider such as AWS Direct Connect®, or a physical connection such as an Ethernet port) to send data from deployment_A to a node of the virtual private network. Each of the nodes of the virtual private network can be set up and positioned within a given region (e.g., in a country, or avoiding/excluding a specified country), thereby ensuring the data is not transferred outside the region and not exposed or otherwise transferred over the open Internet. The traffic continues over the VPN nodes to the destination database in deployment_B. In some example embodiments, the VPN node nearest deployment_B then imports the traffic into deployment_B using a cloud connection provided by deployment_B (e.g., a hosted connection of the cloud, such as AWS Direct Connect; a direct port connection such as Azure Express Route®; a physical Ethernet cord connecting the VPN node to hardware of deployment_B, etc.). Additionally, and in accordance with some example embodiments, the traffic is encrypted using internal message keys to efficiently transfer the traffic between the databases at different deployments. In some example embodiments, a replication manager can generate the messages and keys at the database application level, without requiring changes to a given VPC, HSM, or VPN node transfer network. For example, in some example embodiments, the traffic is sent in a sequence of messages using a pre-configured key encryption structure. In some example embodiments, in each message, the data is encrypted by a symmetric key (e.g., a data encryption key (DEK) unique to that message). The data encryption key for the given message can be further encrypted by a wrapping replication key (WRK), which can be another symmetric key generated by the sending deployment (e.g., periodically generated by an HSM in deployment_A). In some example embodiments, the WRK is then encrypted by a key from a keypair, such as the public key of the destination deployment. In some example embodiments, the encrypted WRK to access a DEK in a given message is also stored in the given message. In other example embodiments, the WRKs are staggered between messages such that a given message's DEK is encrypted using a previously sent WRK (e.g., a WRK sent in a previously received message).
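The key layering just described can be sketched, purely for illustration, as follows; this is not the disclosed implementation, it relies on the third-party Python 'cryptography' package, and the function names and the choice of AES-GCM and RSA-OAEP are assumptions. The sketch shows the embodiment in which the wrapped WRK travels inside the same message that it protects.

import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

OAEP = padding.OAEP(mgf=padding.MGF1(hashes.SHA256()), algorithm=hashes.SHA256(), label=None)

def build_message(data: bytes, wrk: bytes, dest_public_key) -> dict:
    """Encrypt one replication message: a per-message DEK protects the data,
    the WRK wraps the DEK, and the destination's public key wraps the WRK."""
    dek = AESGCM.generate_key(bit_length=256)
    dek_nonce, data_nonce = os.urandom(12), os.urandom(12)
    return {
        "ciphertext": AESGCM(dek).encrypt(data_nonce, data, None),
        "data_nonce": data_nonce,
        "wrapped_dek": AESGCM(wrk).encrypt(dek_nonce, dek, None),
        "dek_nonce": dek_nonce,
        "wrapped_wrk": dest_public_key.encrypt(wrk, OAEP),
    }

def open_message(msg: dict, dest_private_key) -> bytes:
    """Reverse the layering at the destination deployment."""
    wrk = dest_private_key.decrypt(msg["wrapped_wrk"], OAEP)
    dek = AESGCM(wrk).decrypt(msg["dek_nonce"], msg["wrapped_dek"], None)
    return AESGCM(dek).decrypt(msg["data_nonce"], msg["ciphertext"], None)

# Example: the destination's replication keypair and an HSM-style WRK are simulated here.
priv = rsa.generate_private_key(public_exponent=65537, key_size=2048)
wrk = AESGCM.generate_key(bit_length=256)                 # stands in for an HSM-generated WRK
msg = build_message(b"replication batch 42", wrk, priv.public_key())
assert open_message(msg, priv) == b"replication batch 42"

Because each DEK is unique to its message, compromising one message does not expose the data of the others, while rotating the WRK limits how many DEKs any single wrapping key protects.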
Further, in some example embodiments, the WRKs are rotated based on time expiration periods or randomly to increase security of the data. In this way, the replication manager and channel manager of the database systems (e.g., database applications running on VPNs) can efficiently and securely transmit data between different clouds at the applications level over specific paths even where the cloud systems are incongruent or cannot be customized. FIG.1illustrates an example shared data processing platform100in which a network-based data warehouse system102implements database stream tracking (e.g., view streams), in accordance with some embodiments of the present disclosure. To avoid obscuring the inventive subject matter with unnecessary detail, various functional components that are not germane to conveying an understanding of the inventive subject matter have been omitted from the figures. However, a skilled artisan will readily recognize that various additional functional components may be included as part of the shared data processing platform100to facilitate additional functionality that is not specifically described herein. As shown, the shared data processing platform100comprises the network-based data warehouse system102, a cloud computing storage platform104(e.g., a storage platform, an AWS® service such as S3, Microsoft Azure®, or Google Cloud Services®), and a remote computing device106. The network-based data warehouse system102is a network-based system used for storing and accessing data (e.g., internally storing data, accessing external remotely located data) in an integrated manner, and reporting and analysis of the integrated data from the one or more disparate sources (e.g., the cloud computing storage platform104). The cloud computing storage platform104comprises a plurality of computing machines and provides on-demand computer system resources such as data storage and computing power to the network-based data warehouse system102. The remote computing device106(e.g., a user device such as a laptop computer) comprises one or more computing machines (e.g., a user device such as a laptop computer) that execute a remote software component108(e.g., browser accessed cloud service) to provide additional functionality to users of the network-based data warehouse system102. The remote software component108comprises a set of machine-readable instructions (e.g., code) that, when executed by the remote computing device106, cause the remote computing device106to provide certain functionality. The remote software component108may operate on input data and generates result data based on processing, analyzing, or otherwise transforming the input data. As an example, the remote software component108can be a data provider or data consumer that enables database tracking procedures, such as streams on shared tables and views, as discussed in further detail below. The network-based data warehouse system102comprises an access management system110, a compute service manager112, an execution platform114, and a database116. The access management system110enables administrative users to manage access to resources and services provided by the network-based data warehouse system102. Administrative users can create and manage users, roles, and groups, and use permissions to allow or deny access to resources and services. 
The access management system110can store share data that securely manages shared access to the storage resources of the cloud computing storage platform104amongst different users of the network-based data warehouse system102, as discussed in further detail below. The compute service manager112coordinates and manages operations of the network-based data warehouse system102. The compute service manager112also performs query optimization and compilation as well as managing clusters of computing services that provide compute resources (e.g., virtual warehouses, virtual machines, EC2 clusters). The compute service manager112can support any number of client accounts such as end users providing data storage and retrieval requests, system administrators managing the systems and methods described herein, and other components/devices that interact with compute service manager112. The compute service manager112is also coupled to database116, which is associated with the entirety of data stored on the shared data processing platform100. The database116stores data pertaining to various functions and aspects associated with the network-based data warehouse system102and its users. For example, data to be tracked via streams can be stored and accessed on the cloud computing storage platform104(e.g., on S3) or stored and accessed on the database116that is local to the network-based data warehouse system102, according to some example embodiments. In some embodiments, database116includes a summary of data stored in remote data storage systems as well as data available from one or more local caches. Additionally, database116may include information regarding how data is organized in the remote data storage systems and the local caches. Database116allows systems and services to determine whether a piece of data needs to be accessed without loading or accessing the actual data from a storage device. The compute service manager112is further coupled to an execution platform114, which provides multiple computing resources (e.g., virtual warehouses) that execute various data storage and data retrieval tasks, as discussed in greater detail below. Execution platform114is coupled to multiple data storage devices124-1to124-nthat are part of a cloud computing storage platform104. In some embodiments, data storage devices124-1to124-nare cloud-based storage devices located in one or more geographic locations. For example, data storage devices124-1to124-nmay be part of a public cloud infrastructure or a private cloud infrastructure. Data storage devices124-1to124-nmay be hard disk drives (HDDs), solid state drives (SSDs), storage clusters, Amazon S3 storage systems or any other data storage technology. Additionally, cloud computing storage platform104may include distributed file systems (such as Hadoop Distributed File Systems (HDFS)), object storage systems, and the like. The execution platform114comprises a plurality of compute nodes (e.g., virtual warehouses). A set of processes on a compute node executes a query plan compiled by the compute service manager112. 
The set of processes can include: a first process to execute the query plan; a second process to monitor and delete micro-partition files using a least recently used (LRU) policy, and implement an out of memory (00M) error mitigation process; a third process that extracts health information from process logs and status information to send back to the compute service manager112; a fourth process to establish communication with the compute service manager112after a system boot; and a fifth process to handle all communication with a compute cluster for a given job provided by the compute service manager112and to communicate information back to the compute service manager112and other compute nodes of the execution platform114. The cloud computing storage platform104also comprises an access management system118and a web proxy120. As with the access management system110, the access management system118allows users to create and manage users, roles, and groups, and use permissions to allow or deny access to cloud services and resources. The access management system110of the network-based data warehouse system102and the access management system118of the cloud computing storage platform104can communicate and share information so as to enable access and management of resources and services shared by users of both the network-based data warehouse system102and the cloud computing storage platform104. The web proxy120handles tasks involved in accepting and processing concurrent API calls, including traffic management, authorization and access control, monitoring, and API version management. The web proxy120provides HTTP proxy service for creating, publishing, maintaining, securing, and monitoring APIs (e.g., REST APIs). In some embodiments, communication links between elements of the shared data processing platform100are implemented via one or more data communication networks. These data communication networks may utilize any communication protocol and any type of communication medium. In some embodiments, the data communication networks are a combination of two or more data communication networks (or sub-networks) coupled to one another. In alternate embodiments, these communication links are implemented using any type of communication medium and any communication protocol. As shown inFIG.1, data storage devices124-1to124-N are decoupled from the computing resources associated with the execution platform114. That is, new virtual warehouses can be created and terminated in the execution platform114and additional data storage devices can be created and terminated on the cloud computing storage platform104in an independent manner. This architecture supports dynamic changes to the network-based data warehouse system102based on the changing data storage/retrieval needs as well as the changing needs of the users and systems accessing the shared data processing platform100. The support of dynamic changes allows network-based data warehouse system102to scale quickly in response to changing demands on the systems and components within network-based data warehouse system102. The decoupling of the computing resources from the data storage devices124supports the storage of large amounts of data without requiring a corresponding large amount of computing resources. Similarly, this decoupling of resources supports a significant increase in the computing resources utilized at a particular time without requiring a corresponding increase in the available data storage resources. 
Additionally, the decoupling of resources enables different accounts to handle creating additional compute resources to process data shared by other users without affecting the other users' systems. For instance, a data provider may have three compute resources and share data with a data consumer, and the data consumer may generate new compute resources to execute queries against the shared data, where the new compute resources are managed by the data consumer and do not affect or interact with the compute resources of the data provider. Compute service manager112, database116, execution platform114, cloud computing storage platform104, and remote computing device106are shown inFIG.1as individual components. However, each of compute service manager112, database116, execution platform114, cloud computing storage platform104, and remote computing device106may be implemented as a distributed system (e.g., distributed across multiple systems/platforms at multiple geographic locations) connected by APIs and access information (e.g., tokens, login data). Additionally, each of compute service manager112, database116, execution platform114, and cloud computing storage platform104can be scaled up or down (independently of one another) depending on changes to the requests received and the changing needs of shared data processing platform100. Thus, in the described embodiments, the network-based data warehouse system102is dynamic and supports regular changes to meet the current data processing needs. During typical operation, the network-based data warehouse system102processes multiple jobs (e.g., queries) determined by the compute service manager112. These jobs are scheduled and managed by the compute service manager112to determine when and how to execute the job. For example, the compute service manager112may divide the job into multiple discrete tasks and may determine what data is needed to execute each of the multiple discrete tasks. The compute service manager112may assign each of the multiple discrete tasks to one or more nodes of the execution platform114to process the task. The compute service manager112may determine what data is needed to process a task and further determine which nodes within the execution platform114are best suited to process the task. Some nodes may have already cached the data needed to process the task (due to the nodes having recently downloaded the data from the cloud computing storage platform104for a previous job) and, therefore, may be a good candidate for processing the task. Metadata stored in the database116assists the compute service manager112in determining which nodes in the execution platform114have already cached at least a portion of the data needed to process the task. One or more nodes in the execution platform114process the task using data cached by the nodes and data retrieved from the cloud computing storage platform104. It is desirable to retrieve as much data as possible from caches within the execution platform114because the retrieval speed is typically much faster than retrieving data from the cloud computing storage platform104. As shown inFIG.1, the shared data processing platform100separates the execution platform114from the cloud computing storage platform104. In this arrangement, the processing resources and cache resources in the execution platform114operate independently of the data storage devices124-1to124-nin the cloud computing storage platform104. 
Thus, the computing resources and cache resources are not restricted to specific data storage devices124-1to124-n. Instead, all computing resources and all cache resources may retrieve data from, and store data to, any of the data storage resources in the cloud computing storage platform104. FIG.2is a block diagram illustrating components of the compute service manager112, in accordance with some embodiments of the present disclosure. As shown inFIG.2, a request processing service202manages received data storage requests and data retrieval requests (e.g., jobs to be performed on database data). For example, the request processing service202may determine the data to process a received query (e.g., a data storage request or data retrieval request). The data may be stored in a cache within the execution platform114or in a data storage device in cloud computing storage platform104. A management console service204supports access to various systems and processes by administrators and other system managers. Additionally, the management console service204may receive a request to execute a job and monitor the workload on the system. The replication manager225manages transmission of database data, such as replicating database data to one or more secondary databases, according to some example embodiments. The channel manager227is configured to send and receive data through a private channel, such as a virtual private network, according to some example embodiments. The compute service manager112also includes a job compiler206, a job optimizer208, and a job executor210. The job compiler206parses a job into multiple discrete tasks and generates the execution code for each of the multiple discrete tasks. The job optimizer208determines the best method to execute the multiple discrete tasks based on the data that needs to be processed. The job optimizer208also handles various data pruning operations and other data optimization techniques to improve the speed and efficiency of executing the job. The job executor210executes the execution code for jobs received from a queue or determined by the compute service manager112. A job scheduler and coordinator212sends received jobs to the appropriate services or systems for compilation, optimization, and dispatch to the execution platform114. For example, jobs may be prioritized and processed in that prioritized order. In an embodiment, the job scheduler and coordinator212determines a priority for internal jobs that are scheduled by the compute service manager112with other “outside” jobs such as user queries that may be scheduled by other systems in the database but may utilize the same processing resources in the execution platform114. In some embodiments, the job scheduler and coordinator212identifies or assigns particular nodes in the execution platform114to process particular tasks. A virtual warehouse manager214manages the operation of multiple virtual warehouses implemented in the execution platform114. As discussed below, each virtual warehouse includes multiple execution nodes that each include a cache and a processor (e.g., a virtual machine, an operating system level container execution environment). Additionally, the compute service manager112includes a configuration and metadata manager216, which manages the information related to the data stored in the remote data storage devices and in the local caches (i.e., the caches in execution platform114). 
The configuration and metadata manager216uses the metadata to determine which data micro-partitions need to be accessed to retrieve data for processing a particular task or job. A monitor and workload analyzer218oversees processes performed by the compute service manager112and manages the distribution of tasks (e.g., workload) across the virtual warehouses and execution nodes in the execution platform114. The monitor and workload analyzer218also redistributes tasks, as needed, based on changing workloads throughout the network-based data warehouse system102and may further redistribute tasks based on a user (e.g., “external”) query workload that may also be processed by the execution platform114. The configuration and metadata manager216and the monitor and workload analyzer218are coupled to a data storage device220. The data storage device220inFIG.2represents any data storage device within the network-based data warehouse system102. For example, data storage device220may represent caches in execution platform114, storage devices in cloud computing storage platform104, or any other storage device. FIG.3is a block diagram illustrating components of the execution platform114, in accordance with some embodiments of the present disclosure. As shown inFIG.3, execution platform114includes multiple virtual warehouses, which are elastic clusters of compute instances, such as virtual machines. In the example illustrated, the virtual warehouses include virtual warehouse1, virtual warehouse2, and virtual warehouse n. Each virtual warehouse (e.g., EC2 cluster) includes multiple execution nodes (e.g., virtual machines) that each include a data cache and a processor. The virtual warehouses can execute multiple tasks in parallel by using the multiple execution nodes. As discussed herein, execution platform114can add new virtual warehouses and drop existing virtual warehouses in real time based on the current processing needs of the systems and users. This flexibility allows the execution platform114to quickly deploy large amounts of computing resources when needed without being forced to continue paying for those computing resources when they are no longer needed. All virtual warehouses can access data from any data storage device (e.g., any storage device in cloud computing storage platform104). Although each virtual warehouse shown inFIG.3includes three execution nodes, a particular virtual warehouse may include any number of execution nodes. Further, the number of execution nodes in a virtual warehouse is dynamic, such that new execution nodes are created when additional demand is present, and existing execution nodes are deleted when they are no longer necessary (e.g., upon a query or job completion). Each virtual warehouse is capable of accessing any of the data storage devices124-1to124-nshown inFIG.1. Thus, the virtual warehouses are not necessarily assigned to a specific data storage device124-1to124-nand, instead, can access data from any of the data storage devices124-1to124-nwithin the cloud computing storage platform104. Similarly, each of the execution nodes shown inFIG.3can access data from any of the data storage devices124-1to124-n. 
For instance, the storage device124-1of a first user (e.g., provider account user) may be shared with a worker node in a virtual warehouse of another user (e.g., consumer account user), such that the other user can create a database (e.g., read-only database) and use the data in storage device124-1directly without needing to copy the data (e.g., copy it to a new disk managed by the consumer account user). In some embodiments, a particular virtual warehouse or a particular execution node may be temporarily assigned to a specific data storage device, but the virtual warehouse or execution node may later access data from any other data storage device. In the example ofFIG.3, virtual warehouse1includes three execution nodes302-1,302-2, and302-n. Execution node302-1includes a cache304-1and a processor306-1. Execution node302-2includes a cache304-2and a processor306-2. Execution node302-nincludes a cache304-nand a processor306-n. Each execution node302-1,302-2, and302-nis associated with processing one or more data storage and/or data retrieval tasks. For example, a virtual warehouse may handle data storage and data retrieval tasks associated with an internal service, such as a clustering service, a materialized view refresh service, a file compaction service, a storage procedure service, or a file upgrade service. In other implementations, a particular virtual warehouse may handle data storage and data retrieval tasks associated with a particular data storage system or a particular category of data. Similar to virtual warehouse1discussed above, virtual warehouse2includes three execution nodes312-1,312-2, and312-n. Execution node312-1includes a cache314-1and a processor316-1. Execution node312-2includes a cache314-2and a processor316-2. Execution node312-nincludes a cache314-nand a processor316-n. Additionally, virtual warehouse3includes three execution nodes322-1,322-2, and322-n. Execution node322-1includes a cache324-1and a processor326-1. Execution node322-2includes a cache324-2and a processor326-2. Execution node322-nincludes a cache324-nand a processor326-n. In some embodiments, the execution nodes shown inFIG.3are stateless with respect to the data the execution nodes are caching. For example, these execution nodes do not store or otherwise maintain state information about the execution node, or the data being cached by a particular execution node. Thus, in the event of an execution node failure, the failed node can be transparently replaced by another node. Since there is no state information associated with the failed execution node, the new (replacement) execution node can easily replace the failed node without concern for recreating a particular state. Although the execution nodes shown inFIG.3each include one data cache and one processor, alternate embodiments may include execution nodes containing any number of processors and any number of caches. Additionally, the caches may vary in size among the different execution nodes. The caches shown inFIG.3store, in the local execution node (e.g., local disk), data that was retrieved from one or more data storage devices in cloud computing storage platform104(e.g., S3 objects recently accessed by the given node). In some example embodiments, the cache stores file headers and individual columns of files as a query downloads only columns useful for that query. 
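As a purely illustrative aside, the column-granular caching described above can be sketched as follows; the class, the callback used to fetch a single column, and the least-recently-used eviction shown here are assumptions made for the sketch rather than details taken from the disclosure.

from collections import OrderedDict
from typing import Callable, Tuple

class ColumnCache:
    """Node-local cache keyed by (file, column), so a query downloads only the columns it needs."""
    def __init__(self, fetch_column: Callable[[str, str], bytes], capacity: int = 1024):
        self._fetch = fetch_column                      # e.g., a ranged read against cloud storage
        self._cap = capacity
        self._entries: "OrderedDict[Tuple[str, str], bytes]" = OrderedDict()

    def get(self, file_name: str, column: str) -> bytes:
        key = (file_name, column)
        if key in self._entries:
            self._entries.move_to_end(key)              # mark as recently used
            return self._entries[key]
        data = self._fetch(file_name, column)           # only the needed column is downloaded
        self._entries[key] = data
        if len(self._entries) > self._cap:
            self._entries.popitem(last=False)           # evict the least recently used column
        return data

def scan(cache: ColumnCache, file_name: str, needed_columns) -> dict:
    """A query touches only the columns it projects or filters on."""
    return {col: cache.get(file_name, col) for col in needed_columns}

A node using such a cache downloads only the columns a query actually touches and serves repeated accesses locally rather than from remote storage.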
To improve cache hits and avoid overlapping redundant data stored in the node caches, the job optimizer208assigns input file sets to the nodes using a consistent hashing scheme to hash over table file names of the data accessed (e.g., data in database116or database122). Subsequent or concurrent queries accessing the same table file will therefore be performed on the same node, according to some example embodiments. As discussed, the nodes and virtual warehouses may change dynamically in response to environmental conditions (e.g., disaster scenarios), hardware/software issues (e.g., malfunctions), or administrative changes (e.g., changing from a large cluster to smaller cluster to lower costs). In some example embodiments, when the set of nodes changes, no data is reshuffled immediately. Instead, the least recently used replacement policy is implemented to eventually replace the lost cache contents over multiple jobs. Thus, the caches reduce or eliminate the bottleneck problems occurring in platforms that consistently retrieve data from remote storage systems. Instead of repeatedly accessing data from the remote storage devices, the systems and methods described herein access data from the caches in the execution nodes, which is significantly faster and avoids the bottleneck problem discussed above. In some embodiments, the caches are implemented using high-speed memory devices that provide fast access to the cached data. Each cache can store data from any of the storage devices in the cloud computing storage platform104. Further, the cache resources and computing resources may vary between different execution nodes. For example, one execution node may contain significant computing resources and minimal cache resources, making the execution node useful for tasks that make use of significant computing resources. Another execution node may contain significant cache resources and minimal computing resources, making this execution node useful for tasks that may use caching of large amounts of data. Yet another execution node may contain cache resources providing faster input-output operations, useful for tasks that make use of fast scanning of large amounts of data. In some embodiments, the execution platform114implements skew handling to distribute work amongst the cache resources and computing resources associated with a particular execution, where the distribution may be further based on the expected tasks to be performed by the execution nodes. For example, an execution node may be assigned more processing resources if the tasks performed by the execution node become more processor-intensive. Similarly, an execution node may be assigned more cache resources if the tasks performed by the execution node may use a larger cache capacity. Further, some nodes may be executing much slower than others due to various issues (e.g., virtualization issues, network overhead). In some example embodiments, the imbalances are addressed at the scan level using a file stealing scheme. In particular, whenever a node process completes scanning its set of input files, it requests additional files from other nodes. If the one of the other nodes receives such a request, the node analyzes its own set (e.g., how many files are left in the input file set when the request is received), and then transfers ownership of one or more of the remaining files for the duration of the current job (e.g., query). 
The requesting node (e.g., the file stealing node) then receives the data (e.g., header data) and downloads the files from the cloud computing storage platform104(e.g., from data storage device124-1), and does not download the files from the transferring node. In this way, lagging nodes can transfer files via file stealing in a way that does not worsen the load on the lagging nodes. Although virtual warehouses1,2, and n are associated with the same execution platform114, the virtual warehouses may be implemented using multiple computing systems at multiple geographic locations. For example, virtual warehouse1can be implemented by a computing system at a first geographic location, while virtual warehouses2and n are implemented by another computing system at a second geographic location. In some embodiments, these different computing systems are cloud-based computing systems maintained by one or more different entities. Additionally, each virtual warehouse is shown inFIG.3as having multiple execution nodes. The multiple execution nodes associated with each virtual warehouse may be implemented using multiple computing systems at multiple geographic locations. For example, an instance of virtual warehouse1implements execution nodes302-1and302-2on one computing platform at a geographic location and implements execution node302-nat a different computing platform at another geographic location. Selecting particular computing systems to implement an execution node may depend on various factors, such as the level of resources needed for a particular execution node (e.g., processing resource requirements and cache requirements), the resources available at particular computing systems, communication capabilities of networks within a geographic location or between geographic locations, and which computing systems are already implementing other execution nodes in the virtual warehouse. Execution platform114is also fault tolerant. For example, if one virtual warehouse fails, that virtual warehouse is quickly replaced with a different virtual warehouse at a different geographic location. A particular execution platform114may include any number of virtual warehouses. Additionally, the number of virtual warehouses in a particular execution platform is dynamic, such that new virtual warehouses are created when additional processing and/or caching resources are needed. Similarly, existing virtual warehouses may be deleted when the resources associated with the virtual warehouse are no longer necessary. In some embodiments, the virtual warehouses may operate on the same data in cloud computing storage platform104, but each virtual warehouse has its own execution nodes with independent processing and caching resources. This configuration allows requests on different virtual warehouses to be processed independently and with no interference between the requests. This independent processing, combined with the ability to dynamically add and remove virtual warehouses, supports the addition of new processing capacity for new users without impacting the performance observed by the existing users. FIG.4shows an example database architecture400for transmission of database data over a channel (e.g., private channel), according to some example embodiments. As discussed above, an HSM is a hardware security module, which is a physical computing device that safeguards and manages digital keys for strong authentication. 
Example HSMs can be implemented as a plug-in card or server rack module that attaches directly to a computer or network service running within the deployment's cloud execution instances (e.g., within the VPN of the cloud platform, such as AWS). In some example embodiments, a given deployment's HSM is provided by the cloud provider as a network service, along with the provided execution units (e.g., Amazon S3, Google Cloud, Microsoft Azure each offer HSM services for their cloud compute units, e.g., virtual machines). In some example embodiments, the encryption keys are generated and managed by the HSMs in each deployment. As discussed above, if two deployments are being connected (e.g., a mesh of deployments), this can make use of exporting encryption keys (e.g., symmetric key, private keys, public key, key pairs) out of one deployment's HSM and importing the key data into another deployment's HSM (e.g., a new deployment that is being added to the mesh). For example, to safeguard data, an existing deployment can be replicated, wherein a new deployment is created, the data from the existing deployment is copied or otherwise replicated over to the new deployment, the existing deployment's HSM exports the key, and the new deployment's HSM imports the key. After creation and exporting/importing of the key, the new deployment can function as a secondary or replication deployment that stores data replicated from the existing deployment, which then functions as a “primary” or source deployment. While HSMs provide secure encryption functions, HSM processing does not scale well and can increase the processing overhead as more deployments are added to a given networked system. Thus, there is an existing demand for using non-HSM operations where possible, so long as the non-HSM processing can be performed securely. Furthermore, not all HSMs provide key importing or exporting functions, which inhibits replication of deployments using such systems. One approach to handling HSM scaling issues involves creating a public key document that stores each deployment's public key, where new deployments add their public key to the public key document and encrypt outbound messages with the target deployment's public key (which is then decryptable by the target deployment via its private key). However, one issue with this approach is that it can be difficult to manage the public key document in a secure manner, as the number of deployments scale to enterprise levels. Additionally, even if a given deployment knows the target deployment's public key, that does not ensure that the target deployment is who it says it is. That is, for example, the target deployment may be a compromised or otherwise malicious deployment that is seeking to intercept data by proffering the compromised/malicious deployment's public key to other legitimate deployments in the mesh. Additionally, it is impractical to perform key rotation using the public key document (where key rotation is when each public key is replaced with a new public key), at least in part because each deployment would rotate their keys at the same time, which is difficult to do in practice and can be prone to errors. To solve these issues, the replication manager225can implement asymmetric keys and one or more symmetric keys to transmit data between databases, such as a source deployment (e.g., a primary database application in a VPN) and a target deployment (e.g., one or more secondary or replicated databases in another VPN cloud). 
In some example embodiments, each deployment generates a replication asymmetric keypair (RAK) to send and receive encrypted data, and an authentication asymmetric keypair (AAK) that is used to authenticate the given deployment. In some example embodiments, each deployment further generates a symmetric key to encrypt/decrypt each data file sent (e.g., data encryption key (DEK)), and a symmetric wrapping replication key (WRK) which wraps the DEKs, where the WRKs can be staggered across messages and constantly changed to further secure the sent data. The replication manager can use these keys in an authentication process and messaging protocol to securely send and receive data between the deployments without reliance on importing/exporting of keys from the HSMs. Generally, an example asymmetric keypair includes PKI (Public Key Infrastructure) keys comprising a private key and a corresponding public key. The PKI keys are generated by the HSMs using cryptographic algorithms based on mathematical problems to produce one-way functions. The keypair can be used to securely send data and also to authenticate a given device. To securely send/receive data using an asymmetric keypair, the public key can be disseminated widely, and the private key is kept private to that deployment. In such a system, any sending deployment can encrypt a message using the target deployments' public key, but that encrypted message can only be decrypted with that target deployment's private key. To use a keypair as a signature or authentication mechanism, a signing device uses the private key to “sign” a given data item, and other devices that have access to the public key can authenticate that the signature on the data item is authentic because only the signing device has the private key, and in such systems forging the signature is currently mathematically impractical. Generally, a symmetric key is a shared secret that is shared between the transmitter and receiver, where the shared secret (e.g., the symmetric key) is used to encrypt the message and also to decrypt the message. An example symmetric key scheme includes Advanced Encryption Standard (AES)256, which can be generated by the HSM; additional symmetric key schemes include Twofish, Blowfish, Serpent, DES, and others. In the example illustrated inFIG.4, deployment_A405and deployment_B430are separate instances of shared data processing platform100ofFIG.1with various components discussed inFIGS.1-3omitted for clarity. That is, for example, deployment_A is a first instance of shared data processing platform100installed within a first VPC at a first geographic location (e.g., AWS virtual private cloud hosted in San Francisco), and deployment_B is a second difference instance of shared data processing platform100installed and hosted within a second VPC at a second geographic location (e.g., a different AWS virtual private cloud hosted from New York City). Although only two deployments are discussed here as an example, it is appreciated that each location may implement multiple deployments within the same VPC or other VPCs. For example, the VPC that is hosting deployment_A405may have other deployments each running their own instances of shared data processing platform100. 
Further, although there the deployments are discussed as being geographically separated, it is appreciated that the deployments may be located within the same geographic region, albeit on different cloud systems (e.g., deployment_A405is a west coast AWS VPN instance of shared data processing platform100and deployment_B430a Google Cloud instance of shared data processing platform100) or different subnets of a single cloud site at the same geographic location (e.g., both deployments are on a west coast AWS virtual private cloud but on different partitioned subnets). In the illustrated example, deployment_A405includes a replication manager415that manages authentication of the deployment with other deployments (e.g., deployment_B430and/or other deployments in a mesh with deployment_A405and deployment_B430). The deployment_A405further comprises global services420, which is a consolidated or representative sub-system including instances of202,204,206,208,210,212, and214displayed inFIG.2. The deployment_A405further includes Foundation Database425(FoundationDB, “FDB”) which is another representative sub-system including instances of216,218, and220. The deployment_A405further includes HSM410, which, as discussed, is a hardware security module that can generate and manage encryption keys for the deployment_A405. Further, deployment_A includes channel manager433that manages transmission of data to and from other deployments over a channel470, as discussed in further detail below with reference toFIGS.6-8. Deployment_B430is an example deployment of shared data processing platform100located at a second geographic location (e.g., New York City). As illustrated, deployment_B430includes a replication manager440that manages authentication of the deployment with other deployments (e.g., deployment_A405and/or other deployments in a mesh with deployment_A405and deployment_B430). The deployment_B430further comprises global services445, which is a consolidated or representative sub-system including instances of202,204,206,208,210,212, and214displayed inFIG.2. The deployment_B430further includes FDB450which is another comprised or representative sub-system including instances of216,218, and220. Further, deployment_B430includes channel manager477that manages transmission of data to and from other deployments over the channel470(e.g., via one or more hosted connection to a private network), according to some example embodiments. The database architecture400further includes global deployment security system455, according to some example embodiments. As illustrated, the global deployment security system455includes a global HSM460which generates an asymmetric keypair, including a global public key and a global private key. The global public key is widely distributed (e.g., to all deployments in the mesh) and can be used by the deployments to check whether an item of data (e.g., a public key of an unknown deployment) was actually signed by the global signing key of global deployment security system455(e.g., using PKI signing operations discussed above). In the following example, deployment_A405is the primary database and seeks to send replication traffic to deployment_B430, though it is appreciated that in reverse processes, the architecture400can be implemented to send traffic from deployment_B430to deployment_A405. 
In some example embodiments, to authenticate the deployment_A405, the global deployment security system455signs the authentication public key of the deployment_A405with the global signing key, thereby indicating to other deployments that the deployment_A405is who it says it is (that is, an authenticated deployment and not a malicious or compromised deployment). In some example embodiments, to initiate channel470, deployment_A405sends deployment_B430the authentication public key of deployment_A405, which has been signed by the global signing key of global deployment security system455. In some example embodiments, the setup communications are sent over the VPN nodes, while in other embodiments the setup communications are transmitted to destination deployments over the Internet (e.g., encrypted traffic), where the setup communications can include key or authentication data that is not replication data, according to some example embodiments. Deployment_B430then receives the key data, and if the key is not signed by the global deployment security system455, the deployment_B430rejects further communications from the deployment_A405. Assuming the received public key is signed by the global deployment security system455, the deployment_B430saves network address data (e.g., URLs) and other data describing deployment_A405(e.g., tasks/functions) for further communications. In some example embodiments, after channel470is established, the deployment_A405can send encrypted data to deployment_B430, such as replication files from one or more of deployment_A's databases (e.g., data storage devices124connected to the execution units of deployment_A405). As discussed in further detail below with reference toFIGS.6-8, the messages of channel470are transmitted by way of one or more nodes or networked servers of a virtual private network. In some example embodiments, to encrypt and decrypt the data sent over the channel470, HSM410generates a replication asymmetric key pair for deployment_A405, and HSM435generates a replication asymmetric key pair for deployment_B430, where the public keys of each deployment can be widely distributed and used to encrypt data sent to the destination deployment. For example, deployment_A405can send a data file encrypted with the public key of deployment_B430, so that only deployment_B430can decrypt the file. Further, each data message may initially be encrypted using a data encryption key (DEK) and further encrypted using a wrapping replication key (e.g., a symmetric key different than the DEK), which can be included in the files sent to the destination deployment, e.g., deployment_B430. Although in the above examples, two different asymmetric key pairs were generated for deployment_A—one for authentication and one for the sending of database data—in some example embodiments a single asymmetric keypair is used to both authenticate the deployment and send the encrypted data. For example, a keypair can be generated for deployment_A405and the public key of the keypair can be signed by the global private key from the global deployment security system455. After the public key is signed, the deployment_A405can send the signed public key to deployment_B430to both authenticate deployment_A405and to later send traffic to deployment_A405.
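A hedged sketch of the authentication handshake just described: the global deployment security system signs deployment_A's authentication public key, and deployment_B verifies that signature against the widely distributed global public key before saving the sender's address data. RSA-PSS is assumed here only for illustration; the patent does not name a specific signature scheme, and the helper names are invented.

```python
# Illustrative only: sign a deployment's authentication public key with the global
# key, and have the receiving deployment verify the signature before accepting.
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import padding, rsa
from cryptography.exceptions import InvalidSignature

global_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)        # global HSM's keypair
deployment_a_aak = rsa.generate_private_key(public_exponent=65537, key_size=2048)  # deployment_A's AAK

# Global deployment security system: sign deployment_A's authentication public key.
aak_public_bytes = deployment_a_aak.public_key().public_bytes(
    serialization.Encoding.DER, serialization.PublicFormat.SubjectPublicKeyInfo)
signature = global_key.sign(
    aak_public_bytes,
    padding.PSS(mgf=padding.MGF1(hashes.SHA256()), salt_length=padding.PSS.MAX_LENGTH),
    hashes.SHA256())

# Deployment_B: accept the channel only if the received key is signed by the global key.
def accept_channel(global_public, key_bytes, sig):
    try:
        global_public.verify(
            sig, key_bytes,
            padding.PSS(mgf=padding.MGF1(hashes.SHA256()), salt_length=padding.PSS.MAX_LENGTH),
            hashes.SHA256())
        return True   # save network address data and allow further communications
    except InvalidSignature:
        return False  # reject further communications from the sender

assert accept_channel(global_key.public_key(), aak_public_bytes, signature)
```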
That is, for example, deployment_B430receives the signed public key and knows that it can trust deployment_A405because the public key is signed by the global private key, which only global deployment security system455has access to (e.g., as managed by global HSM460). Further, the deployment_B430can use the signed public key to encrypt and send data back to deployment_A405, where it is guaranteed that only deployment_A405can decrypt the data as only deployment_A405has the corresponding private key. In this way, and in accordance with some example embodiments, a single asymmetric keypair is used to both authenticate and send data to a given deployment. FIG.5shows an example messaging structure500for data transmission between deployments over a channel, according to some example embodiments. In the example ofFIG.5, the WRKs are staggered to increase security of the files sent between the deployments. In the following description, "−1" denotes a previous item, such as a previously sent file or a previously sent WRK key, and "+1" denotes a subsequent item, such as a file that is created and is to be sent after the initial or previous file. In the example, the messaging structure500can be a stream of replication database items sent from deployment_A405to deployment_B430. As an example, file503is the first database data item that is generated and then sent to the destination, which is followed by file505, which is created and then sent to the destination, which is followed by file510, which is the last file in the example ofFIG.5to be generated and sent to the destination (e.g., deployment_B430, a server, virtual machine, etc.). At a high level, each file is staggered in that the data encrypted in the file is accessed through an encryption key that is sent in another file, such as the previously sent file. For example, file503specifies the WRK that is to be used to access the data in file505, and file505specifies the WRK (e.g., in part505C) to be used to access the data in file510, and so on. When the destination device receives file503, it stores the WRK in file503for use in decrypting the next file, i.e., file505, and so on. In particular, and in accordance with some example embodiments, as illustrated in file505, the file structure can include bytes (e.g., byte stream) that can correspond to different parts of the file505including part505A, part505B, and part505C. In some example embodiments, part505A and part505B correspond to the message or file's body and store the replication data (e.g., "data" in part505A, such as database values) as well as staggered WRK data (e.g., the WRK key for the next file), and part505C is part of a file's header structure. In other example embodiments, each of the parts505A-C is part of the message body, and the header stores ID data indicating which WRK key and public key to use for that message, to identify the correct keys after key rotations. In the messaging structure500, the data for each file is encrypted by a DEK. For example, as illustrated in part505A, the data has been encrypted by a DEK for that file505. In some example embodiments, the data of each file sent is encrypted by a different DEK. That is, for example, data in the previous file503is decrypted by a different DEK, and data in the subsequent file510is decrypted by a different DEK, with each file encrypted using a unique DEK. As illustrated in part505B, the DEK of file505is encrypted by a WRK which was received in the previous file503.
That is, the WRK used to encrypt the DEK in file505was previously received in the file503. As illustrated in part505C, the WRK for the next file ("WRK+1"), file510("file N+1"), is encrypted by the public key of the destination deployment, such as deployment_B430. In some example embodiments, the encrypted WRK is cached in one or more sending deployments so that one or more messages to be sent to the destination deployment can use the cached encrypted WRK. Accordingly, the WRKs are staggered and the WRK included in a given file is the WRK for the next file to be received. In this way, if the file505is maliciously intercepted, the DEK for that file cannot be accessed because the DEK is encrypted with a WRK that was sent in a previous message (e.g., file503). As an example, upon receiving file503, the destination deployment uses its private key to decrypt the WRK in file503, which is stored for use in accessing the next file, which is file505. When the destination deployment receives file505, it accesses the DEK in part505B using the previously stored WRK from file503, and then uses the DEK to access the data of file505(e.g., in part505A). In some example embodiments, each WRK is stored inside the message and is used to access the data (e.g., the DEK to access the data) for that given message. That is, for example, whereas in the illustrated example ofFIG.5, each WRK is for another message's DEK, in some example embodiments, a given message's DEK is encrypted by a WRK and then that WRK is encrypted by the public key and included in that message so that each message includes the symmetric keys for accessing the data in that given message. For example, upon receiving the message, the destination deployment uses its private key to decrypt the WRK in the message, then uses that newly decrypted WRK to decrypt the DEK in that same message, and then finally accesses the data using DEK decryption. Additionally, in some example embodiments, the WRK is changed or regenerated by the HSM of the sending deployment periodically or in response to event triggers. For example, the WRK may be regenerated by the HSM of the sending deployment every fifteen minutes or every hour, where the newly generated WRK is received by the destination deployment in the messages themselves (e.g., a new message includes the new WRK, which will be used for the next received messages for the next time period until a new WRK is generated). FIG.6shows an example channel architecture600for transmitting data between databases, according to some example embodiments. In the illustrated example, different components are displayed within deployment_A405and deployment_B430, in addition to example storage components, including storage platform615and storage platform645in accordance with some example embodiments. In addition to replication manager415and channel manager433, deployment_A405includes proxy servers605which receive traffic distributed from network traffic load balancer manager610(e.g., an AWS elastic load balancer). In some example embodiments, the balancer manager610is interfaced with a cloud bridge620for sending and receiving traffic out of the deployment's cloud, e.g., to a private or otherwise external network. For example, if deployment_A405is hosted from an AWS virtual private cloud (e.g., AWS VPC subnet), the cloud bridge620can be a plurality of hosted connections from AWS that connect to a private network (e.g., AWS Direct Connect, with hosted connections provisioned by AWS or a service provider of AWS).
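Before continuing with the channel architecture ofFIG.6, the staggered messaging structure ofFIG.5can be summarized in a short sketch. This is an illustrative reading, assuming AES-GCM for the DEK and WRK layers and RSA-OAEP for the public-key layer; the part names mirror parts505A-505C, and none of the function names come from the patent.

```python
# Minimal sketch of the staggered-WRK file structure of FIG. 5 (illustrative only).
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

OAEP = padding.OAEP(mgf=padding.MGF1(hashes.SHA256()), algorithm=hashes.SHA256(), label=None)

def build_file(data, dek, wrk_current, wrk_next, dest_public_key):
    nonce_a, nonce_b = os.urandom(12), os.urandom(12)
    part_a = nonce_a + AESGCM(dek).encrypt(nonce_a, data, None)         # data encrypted by this file's DEK
    part_b = nonce_b + AESGCM(wrk_current).encrypt(nonce_b, dek, None)  # DEK wrapped by WRK from the *previous* file
    part_c = dest_public_key.encrypt(wrk_next, OAEP)                    # WRK for the *next* file ("WRK+1")
    return part_a, part_b, part_c

def read_file(parts, wrk_stored, dest_private_key):
    part_a, part_b, part_c = parts
    dek = AESGCM(wrk_stored).decrypt(part_b[:12], part_b[12:], None)    # unwrap DEK with previously stored WRK
    data = AESGCM(dek).decrypt(part_a[:12], part_a[12:], None)          # recover the replication data
    wrk_next = dest_private_key.decrypt(part_c, OAEP)                   # store for decrypting the next file
    return data, wrk_next
```

The receiving side stores the WRK recovered from one file and uses it to unwrap the DEK in the next, which is why intercepting a single file does not expose its data.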
In the illustrated example, the channel470comprises a virtual private network of nodes in node network625. For example, the virtual private network can include a plurality of enterprise nodes of an enterprise provided virtual private network, such as AT&T NetBond. As an additional example, the node network625can include a plurality of servers configured as part of a single virtual private network (e.g., a server at a house in San Francisco and another server at another house in New York City, where the servers are connected as nodes of a single virtual private network). The traffic is then transmitted over the virtual private network using node network625to cloud bridge630, which is a private network connection (e.g., Direct Connect, Azure Express Route) provided by the VPC provider of deployment_B430(e.g., AWS, Azure, GCP). The traffic received by the cloud bridge630is then distributed to the proxy servers640of deployment_B430using balancer manager635, which is a load balancer, such as an AWS elastic load balancer; the proxy servers640then process and store the data in storage platform645. FIG.7shows an example network architecture700for transmission of data between database deployments, according to some example embodiments. As illustrated, the architecture700includes three virtual private clouds, including ACME cloud east705and ACME cloud west730, which are different clouds of a same VPC provider (e.g., AWS), and further including Smith cloud, which is a virtual private cloud of a different provider (e.g., Azure). ACME cloud east705is a virtual private cloud that hosts deployment715(e.g., an example instance of shared data processing platform100), which stores data in storage buckets720(e.g., example instance of data storage device124) and connects to a virtual private network725of nodes725A,725B, and725C by way of ACME cloud bridge710. ACME cloud west730is another virtual private cloud that hosts deployment740(e.g., another example instance of shared data processing platform100), which stores data in storage buckets745(e.g., example instance of data storage device124) and connects to the virtual private network725of nodes725A,725B, and725C by way of ACME cloud bridge735. Smith cloud is a different virtual private cloud (e.g., from a different provider using different cloud architecture) that hosts deployment760(e.g., another additional instance of shared data processing platform100), which stores data in storage buckets765(e.g., example instance of data storage device124) and connects to the virtual private network725of nodes725A,725B, and725C by way of ACME cloud bridge755. As discussed above, sending and receiving data (e.g., replication data) between the deployments715,740,760can be difficult for different reasons including lack of certainty in the transmission path(s), egress fees, and security module restrictions. To address these issues, nodes725A-725C of a virtual private network725can be configured at different geographic locations to transmit data over the virtual private network725. In some example embodiments, which data is sent through the virtual private network725is configured using shared tuple metadata managed by the channel manager in each deployment, where each deployment keeps a complete shared record of the tuple data. For example, the channel manager in deployment715(not depicted inFIG.7) may store one or more tuples specifying when traffic is to be sent through the nodes725A-725C.
The tuple metadata can include a first value of the sending deployment and a second value specifying the destination deployment, e.g., [deployment715, deployment740], where if traffic is sent to the destination deployment then it is proxied over the virtual private network using nodes725A and725B (via hosted connections of the respective cloud bridges710and735, each of which interfaces using hosted connections to the nodes (e.g., "10×10GE HOSTED") and private virtual interfaces ("private VIFS") that connect to respective deployments). In some example embodiments, the tuple metadata specifying which traffic is sent through the nodes725A-725C can be configured at the account level, deployment level, VPC level, or specific external addresses of networks outside the VPCs. For example, the tuple metadata can specify that replication traffic from a data share of a specific user account is replicated to any database managed by deployment740, e.g., [account_1, deployment740]. In this example embodiment, if the traffic is from account_1and is being sent to deployment740, it is sent over the virtual private network as encrypted messages (e.g., staggered WRK messages). As an additional example, if the tuple metadata is: [deployment715, account123] (where account123 is running within deployment760), then any traffic from deployment715to a specific user account ("account123") of deployment760should be sent via the nodes725A-725B, but other traffic that is not sent to the specific account is not proxied over the nodes. For instance, if a primary database in deployment715shares data with account123, then the traffic is proxied over the private network725; whereas if the same primary database sends traffic to another account (e.g., account456 in deployment760) or to server777in cloud750, then the traffic may be sent over an open Internet path751(e.g., encrypted Internet traffic). FIG.8shows a flow diagram of a method800for transmission of data as channel messages sent between deployments, according to some example embodiments. At operation805, the channel manager433configures cloud connections of a virtual private cloud that is hosting a deployment database system (e.g., deployment_A405). For example, at operation805, one or more hosted connections of a cloud bridge620(e.g., AWS Direct Connect) are exposed and interfaced with the channel manager433. Additionally, and in accordance with some example embodiments, at operation805, additional channel managers in other deployments are configured to connect to the virtual private network through their respective cloud bridges (e.g., AWS Direct Connect, Azure Express Route, Ethernet). At operation810, the replication manager415generates or otherwise identifies data for transmission. For example, the replication manager415in deployment_A405identifies data from a primary database hosted from deployment_A405to be replicated to another database, such as a database running within deployment_B430. At operation815, the replication manager415encrypts the data for transmission. For example, the replication manager415encrypts the data as a sequence of messages to be transmitted to the replication database, as discussed above. For instance, the data in each message can be encrypted by a DEK for that message, which the message's DEK is then encrypted by a WRK, which is then stored in another message in encrypted form (e.g., encrypted by the public key of the destination deployment, and then included in a subsequent message). At operation820, the channel manager433transmits the data to private network nodes.
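As a rough sketch of how the tuple examples above might be evaluated (the walk-through of method800 continues below), the check reduces to matching the traffic's sending and destination scopes against a configured list of [sender, destination] pairs; the scope labels and helper function below are illustrative assumptions, not the patent's data model.

```python
# Hedged sketch of tuple-metadata routing: traffic matching any [sender, destination]
# tuple is proxied over the virtual private network nodes; other traffic may travel
# over the open Internet as encrypted messages.
ROUTING_TUPLES = [
    ("deployment_715", "deployment_740"),   # all traffic between two deployments
    ("account_1",      "deployment_740"),   # traffic from one account to a deployment
    ("deployment_715", "account_123"),      # traffic from a deployment to one account
]

def use_private_network(sender_scopes, destination_scopes):
    """Each argument lists every scope the traffic belongs to, e.g.
    {"deployment_715"} for the sender and {"deployment_760", "account_123"} for the destination."""
    return any(src in sender_scopes and dst in destination_scopes
               for src, dst in ROUTING_TUPLES)

# A share sent to account_123 (running in deployment 760) is proxied over the nodes...
assert use_private_network({"deployment_715"}, {"deployment_760", "account_123"})
# ...while traffic to a different account in the same deployment is not.
assert not use_private_network({"deployment_715"}, {"deployment_760", "account_456"})
```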
For example, at operation820, the channel manager433sends the encrypted data using one or more private virtual interfaces (private VIFs) to the cloud bridge620to send the data to a private network node (e.g., NetBond nodes) of a node network, such as node network625. Once the private node network625receives the data, the nodes transmit the data across the network to the destination node, such as the node that is nearest to the destination deployment (e.g., in the same geographic area). At operation825, channel manager477of the destination deployment (deployment_B430) receives the data from the private network nodes. For example, the channel manager477receives the data through cloud bridge630using a hosted connection that imports data from the virtual private network. At operation830, the replication manager440of the destination deployment (deployment_B430) decrypts the data. For example, at operation830, the replication manager440receives a given message and decrypts the WRK in the message using the private key of the destination deployment, and then stores the decrypted WRK for use in decrypting data in the next received message. Upon receiving the next message, the replication manager440retrieves the stored WRK to decrypt the DEK in that next message, and then uses the newly decrypted DEK to decrypt the data in that given message, according to some example embodiments. At operation835, replication manager440processes the decrypted data. For example, at operation835, the replication manager440transmits the data to global services445running within the deployment (other modules within a compute service manager112,FIG.2) for further processing and storage. FIG.9shows an example flow diagram of a method900for transmitting data between deployments using metadata, according to some example embodiments. At operation905, the replication manager415identifies data for transmission. For example, the data may be replication data for transmission to one or more replication databases, or may be non-replication data for storage in another deployment, according to some example embodiments. At operation910, the channel manager433accesses tuple metadata (e.g., stored within configuration and metadata manager216and data storage device220,FIG.2) to determine whether the data matches a tuple for transmission through the virtual private network. For example, a given tuple may specify that any data from deployment_A405that is sent to deployment_B430should be encrypted as a sequence of messages and sent through the private node network for storage and processing by deployment_B430. Assuming, at operation910, that the channel manager433determines that the data does not satisfy the tuple (e.g., the sending parameter and the destination parameter do not match the metadata of the data for transmission), then the data is sent over non-node mechanisms at operation915, such as the Internet, and is then further processed at operation940(e.g., processed by global services445,FIG.4). In contrast, if the data for transmission does match the tuple metadata at operation910, then the method900proceeds to operations920-935. In particular, for example, at operation920, the replication manager415encrypts the data for transmission to the destination as a sequence of messages (e.g., staggered WRK messages). At operation925, the channel manager433uses a cloud bridge620that transmits data to the private node network625using a plurality of hosted connections (e.g., 10×10GE Hosted Connections).
At operation930, on the destination deployment, the channel manager477receives the traffic from the node network625via the cloud bridge630. At operation935, the replication manager440decrypts the data, which is then processed at940by one or more modules of the destination deployment (e.g., global services445). FIG.10illustrates a diagrammatic representation of a machine1000in the form of a computer system within which a set of instructions may be executed for causing the machine1000to perform any one or more of the methodologies discussed herein, according to an example embodiment. Specifically,FIG.10shows a diagrammatic representation of the machine1000in the example form of a computer system, within which instructions1016(e.g., software, a program, an application, an applet, an app, or other executable code) for causing the machine1000to perform any one or more of the methodologies discussed herein may be executed. For example, the instructions1016may cause the machine1000to execute any one or more operations of any one or more of the methods800and900. As another example, the instructions1016may cause the machine1000to implement portions of the data flows illustrated in any one or more ofFIGS.1-9. In this way, the instructions1016transform a general, non-programmed machine into a particular machine1000(e.g., the remote computing device106, the access management system110, the compute service manager112, the execution platform114, the access management system118, the Web proxy120, remote computing device106) that is specially configured to carry out any one of the described and illustrated functions in the manner described herein. In alternative embodiments, the machine1000operates as a standalone device or may be coupled (e.g., networked) to other machines. In a networked deployment, the machine1000may operate in the capacity of a server machine or a client machine in a server-client network environment, or as a peer machine in a peer-to-peer (or distributed) network environment. The machine1000may comprise, but not be limited to, a server computer, a client computer, a personal computer (PC), a tablet computer, a laptop computer, a netbook, a smart phone, a mobile device, a network router, a network switch, a network bridge, or any machine capable of executing the instructions1016, sequentially or otherwise, that specify actions to be taken by the machine1000. Further, while only a single machine1000is illustrated, the term “machine” shall also be taken to include a collection of machines1000that individually or jointly execute the instructions1016to perform any one or more of the methodologies discussed herein. The machine1000includes processors1010, memory1030, and input/output (I/O) components1050configured to communicate with each other such as via a bus1002. In an example embodiment, the processors1010(e.g., a central processing unit (CPU), a reduced instruction set computing (RISC) processor, a complex instruction set computing (CISC) processor, a graphics processing unit (GPU), a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a radio-frequency integrated circuit (RFIC), another processor, or any suitable combination thereof) may include, for example, a processor1012and a processor1014that may execute the instructions1016. The term “processor” is intended to include multi-core processors1010that may comprise two or more independent processors (sometimes referred to as “cores”) that may execute instructions1016contemporaneously. 
AlthoughFIG.10shows multiple processors1010, the machine1000may include a single processor with a single core, a single processor with multiple cores (e.g., a multi-core processor), multiple processors with a single core, multiple processors with multiple cores, or any combination thereof. The memory1030may include a main memory1032, a static memory1034, and a storage unit1036, all accessible to the processors1010such as via the bus1002. The main memory1032, the static memory1034, and the storage unit1036store the instructions1016embodying any one or more of the methodologies or functions described herein. The instructions1016may also reside, completely or partially, within the main memory1032, within the static memory1034, within the storage unit1036, within at least one of the processors1010(e.g., within the processor's cache memory), or any suitable combination thereof, during execution thereof by the machine1000. The I/O components1050include components to receive input, provide output, produce output, transmit information, exchange information, capture measurements, and so on. The specific I/O components1050that are included in a particular machine1000will depend on the type of machine. For example, portable machines such as mobile phones will likely include a touch input device or other such input mechanisms, while a headless server machine will likely not include such a touch input device. It will be appreciated that the I/O components1050may include many other components that are not shown inFIG.10. The I/O components1050are grouped according to functionality merely for simplifying the following discussion and the grouping is in no way limiting. In various example embodiments, the I/O components1050may include output components1052and input components1054. The output components1052may include visual components (e.g., a display such as a plasma display panel (PDP), a light emitting diode (LED) display, a liquid crystal display (LCD), a projector, or a cathode ray tube (CRT)), acoustic components (e.g., speakers), other signal generators, and so forth. The input components1054may include alphanumeric input components (e.g., a keyboard, a touch screen configured to receive alphanumeric input, a photo-optical keyboard, or other alphanumeric input components), point-based input components (e.g., a mouse, a touchpad, a trackball, a joystick, a motion sensor, or another pointing instrument), tactile input components (e.g., a physical button, a touch screen that provides location and/or force of touches or touch gestures, or other tactile input components), audio input components (e.g., a microphone), and the like. Communication may be implemented using a wide variety of technologies. The I/O components1050may include communication components1064operable to couple the machine1000to a network1080or devices1070via a coupling1082and a coupling1072, respectively. For example, the communication components1064may include a network interface component or another suitable device to interface with the network1080. In further examples, the communication components1064may include wired communication components, wireless communication components, cellular communication components, and other communication components to provide communication via other modalities. The devices1070may be another machine or any of a wide variety of peripheral devices (e.g., a peripheral device coupled via a universal serial bus (USB)). 
For example, as noted above, the machine1000may correspond to any one of the remote computing device106, the access management system110, the compute service manager112, the execution platform114, the access management system118, the Web proxy120, and the devices1070may include any other of these systems and devices. The various memories (e.g.,1030,1032,1034, and/or memory of the processor(s)1010and/or the storage unit1036) may store one or more sets of instructions1016and data structures (e.g., software) embodying or utilized by any one or more of the methodologies or functions described herein. These instructions1016, when executed by the processor(s)1010, cause various operations to implement the disclosed embodiments. As used herein, the terms “machine-storage medium,” “device-storage medium,” and “computer-storage medium” mean the same thing and may be used interchangeably in this disclosure. The terms refer to a single or multiple storage devices and/or media (e.g., a centralized or distributed database, and/or associated caches and servers) that store executable instructions and/or data. The terms shall accordingly be taken to include, but not be limited to, solid-state memories, and optical and magnetic media, including memory internal or external to processors. Specific examples of machine-storage media, computer-storage media, and/or device-storage media include non-volatile memory, including by way of example semiconductor memory devices, e.g., erasable programmable read-only memory (EPROM), electrically erasable programmable read-only memory (EEPROM), field-programmable gate arrays (FPGAs), and flash memory devices; magnetic disks such as internal hard disks and removable disks; magneto-optical disks; and CD-ROM and DVD-ROM disks. The terms “machine-storage media,” “computer-storage media,” and “device-storage media” specifically exclude carrier waves, modulated data signals, and other such media, at least some of which are covered under the term “signal medium” discussed below. In various example embodiments, one or more portions of the network1080may be an ad hoc network, an intranet, an extranet, a virtual private network (VPN), a local-area network (LAN), a wireless LAN (WLAN), a wide-area network (WAN), a wireless WAN (WWAN), a metropolitan-area network (MAN), the Internet, a portion of the Internet, a portion of the public switched telephone network (PSTN), a plain old telephone service (POTS) network, a cellular telephone network, a wireless network, a Wi-Fi® network, another type of network, or a combination of two or more such networks. For example, the network1080or a portion of the network1080may include a wireless or cellular network, and the coupling1082may be a Code Division Multiple Access (CDMA) connection, a Global System for Mobile communications (GSM) connection, or another type of cellular or wireless coupling. 
In this example, the coupling1082may implement any of a variety of types of data transfer technology, such as Single Carrier Radio Transmission Technology (1×RTT), Evolution-Data Optimized (EVDO) technology, General Packet Radio Service (GPRS) technology, Enhanced Data rates for GSM Evolution (EDGE) technology, third Generation Partnership Project (3GPP) including 3G, fourth generation wireless (4G) networks, Universal Mobile Telecommunications System (UMTS), High-Speed Packet Access (HSPA), Worldwide Interoperability for Microwave Access (WiMAX), Long Term Evolution (LTE) standard, others defined by various standard-setting organizations, other long-range protocols, or other data transfer technology. The instructions1016may be transmitted or received over the network1080using a transmission medium via a network interface device (e.g., a network interface component included in the communication components1064) and utilizing any one of a number of well-known transfer protocols (e.g., hypertext transfer protocol (HTTP)). Similarly, the instructions1016may be transmitted or received using a transmission medium via the coupling1072(e.g., a peer-to-peer coupling) to the devices1070. The terms “transmission medium” and “signal medium” mean the same thing and may be used interchangeably in this disclosure. The terms “transmission medium” and “signal medium” shall be taken to include any intangible medium that is capable of storing, encoding, or carrying the instructions1016for execution by the machine1000, and include digital or analog communications signals or other intangible media to facilitate communication of such software. Hence, the terms “transmission medium” and “signal medium” shall be taken to include any form of modulated data signal, carrier wave, and so forth. The term “modulated data signal” means a signal that has one or more of its characteristics set or changed in such a manner as to encode information in the signal. The terms “machine-readable medium,” “computer-readable medium,” and “device-readable medium” mean the same thing and may be used interchangeably in this disclosure. The terms are defined to include both machine-storage media and transmission media. Thus, the terms include both storage devices/media and carrier waves/modulated data signals. The various operations of example methods described herein may be performed, at least partially, by one or more processors that are temporarily configured (e.g., by software) or permanently configured to perform the relevant operations. Similarly, the methods described herein may be at least partially processor-implemented. For example, at least some of the operations of the methods800and900may be performed by one or more processors. The performance of certain of the operations may be distributed among the one or more processors, not only residing within a single machine, but also deployed across a number of machines. In some example embodiments, the processor or processors may be located in a single location (e.g., within a home environment, an office environment, or a server farm), while in other embodiments the processors may be distributed across a number of locations. The following numbered examples are embodiments: Example 1. 
A method comprising: identifying, by a first database deployment, a virtual private network comprising a plurality of virtual private network nodes, the virtual private network connected to a plurality of virtual private clouds including the first database deployment on a first virtual private cloud and a second database deployment on a second virtual private cloud; generating, by the first database deployment, database items to be replicated to the second database deployment; determining that the database items are for transmission to one or more proxy servers on the second virtual private cloud; in response to determining that the database items are for transmission to the second virtual private cloud, exporting, using a hosted connection of the first virtual private cloud, the database items to the virtual private network for transmission to the one or more proxy servers on the second virtual private cloud, the database items being transmitted to the second virtual private cloud using the plurality of virtual private network nodes of the virtual private network, the second database deployment receiving the database items from the virtual private network using another hosted connection that imports the database items to the second database deployment. Example 2. The method of example 1, wherein the plurality of virtual private network nodes are located in different geographic locations comprising a first geographic region and a second geographic region. Example 3. The method of any one or more of examples 1 or 2, wherein the first virtual private cloud is hosted by a first datacenter in the first geographic region and the second virtual private cloud is hosted by a second datacenter in the second geographic region. Example 4. The method of any of one or more examples 1-3, wherein the database items are transmitted from the first virtual private cloud to the second virtual private cloud in a sequence of encrypted messages. Example 5. The method of any of one or more examples 1-4, wherein each encrypted message is encrypted by a changing symmetric key and a public key of the second database deployment. Example 6. The method of any of one or more examples 1-5, wherein the changing symmetric key for each encrypted message is included in the encrypted message. Example 7. The method of any of one or more examples 1-6, wherein the database items are directed to a network load balancer in the second virtual private cloud that distributes the database items to the one or more proxy servers in the second virtual private cloud. Example 8. The method of any of one or more examples 1-7, wherein determining that the database items are addressed to one or more proxy servers on the second virtual private cloud comprises: identifying a pre-configured deployment tuple for proxying using the one or more proxy servers, the pre-configured deployment tuple comprising a sending database deployment and a destination database deployment. Example 9. The method of any of one or more examples 1-8, wherein database items for replication are proxied to the one or more proxy servers by way of the virtual private network in response to determining that the first database deployment matches the sending database deployment and the second database deployment matches the destination database deployment in the pre-configured deployment tuple. Example 10.
The method of any of one or more examples 1-9, further comprising: generating, by the first database deployment, additional database items for transmission to a third database deployment that is external to the second virtual private cloud. Example 11. The method of any of one or more examples 1-10, further comprising: determining that the third database deployment does not match the destination database deployment in the pre-configured deployment tuple. Example 12. The method of any of one or more examples 1-11, further comprising: in response to the third database deployment not matching the destination database deployment in the pre-configured deployment tuple, transmitting the additional database items to the third database deployment without using the virtual private network nodes. Example 13. The method of any of one or more examples 1-12, wherein the additional database items are transmitted to the third database deployment as encrypted messages on the Internet. Example 14. The method of any of one or more examples 1-13, wherein the first virtual private cloud and the second virtual private cloud are different subnets of a virtual cloud network site. Example 15. The method of any of one or more examples 1-14, wherein the first virtual private cloud is a private subnet of a virtual cloud network site and the second virtual private cloud is a private subnet of a different virtual cloud network site. Example 16. A system comprising: one or more processors of a machine; and a memory storing instructions that, when executed by the one or more processors, cause the machine to perform operations implementing any one of example methods1-15. Example 17. A non-transitory machine-readable storage device embodying instructions that, when executed by a machine, cause the machine to perform operations implementing one of methods1-15. Although the embodiments of the present disclosure have been described with reference to specific example embodiments, it will be evident that various modifications and changes may be made to these embodiments without departing from the broader scope of the inventive subject matter. Accordingly, the specification and drawings are to be regarded in an illustrative rather than a restrictive sense. The accompanying drawings that form a part hereof show, by way of illustration, and not of limitation, specific embodiments in which the subject matter may be practiced. The embodiments illustrated are described in sufficient detail to enable those skilled in the art to practice the teachings disclosed herein. Other embodiments may be used and derived therefrom, such that structural and logical substitutions and changes may be made without departing from the scope of this disclosure. This Detailed Description, therefore, is not to be taken in a limiting sense, and the scope of various embodiments is defined only by the appended claims, along with the full range of equivalents to which such claims are entitled. Such embodiments of the inventive subject matter may be referred to herein, individually and/or collectively, by the term "invention" merely for convenience and without intending to voluntarily limit the scope of this application to any single invention or inventive concept if more than one is in fact disclosed. Thus, although specific embodiments have been illustrated and described herein, it should be appreciated that any arrangement calculated to achieve the same purpose may be substituted for the specific embodiments shown.
This disclosure is intended to cover any and all adaptations or variations of various embodiments. Combinations of the above embodiments, and other embodiments not specifically described herein, will be apparent, to those of skill in the art, upon reviewing the above description. In this document, the terms "a" or "an" are used, as is common in patent documents, to include one or more than one, independent of any other instances or usages of "at least one" or "one or more." In this document, the term "or" is used to refer to a nonexclusive or, such that "A or B" includes "A but not B," "B but not A," and "A and B," unless otherwise indicated. In the appended claims, the terms "including" and "in which" are used as the plain-English equivalents of the respective terms "comprising" and "wherein." Also, in the following claims, the terms "including" and "comprising" are open-ended; that is, a system, device, article, or process that includes elements in addition to those listed after such a term in a claim is still deemed to fall within the scope of that claim.
DETAILED DESCRIPTION The various embodiments will be described in detail with reference to the accompanying drawings. Wherever possible, the same reference numbers will be used throughout the drawings to refer to the same or like parts. References made to particular examples and implementations are for illustrative purposes, and are not intended to limit the scope of the invention or the claims. As used herein, the terms "wireless device," "mobile device," and "user equipment (UE)" may be used interchangeably and refer to any one of various cellular telephones, personal data assistants (PDA's), palm-top computers, laptop computers with wireless modems, wireless electronic mail receivers (e.g., the Blackberry® and Treo® devices), multimedia Internet enabled cellular telephones (e.g., the iPhone®), and similar personal electronic devices. A wireless device may include a programmable processor and memory. In a preferred embodiment, the wireless device is a cellular handheld device (e.g., a wireless device), which can communicate via a cellular telephone communications network. As used in this application, the terms "component," "module," "engine," "manager" are intended to include a computer-related entity, such as, but not limited to, hardware, firmware, a combination of hardware and software, software, or software in execution, which are configured to perform particular operations or functions. For example, a component may be, but is not limited to, a process running on a processor, a processor, an object, an executable, a thread of execution, a program, a computer, a server, network hardware, etc. By way of illustration, both an application running on a computing device and the computing device may be referred to as a component. One or more components may reside within a process and/or thread of execution and a component may be localized on one processor or core and/or distributed between two or more processors or cores. In addition, these components may execute from various non-transitory computer readable media having various instructions and/or data structures stored thereon. The term "neural network" may be used herein to refer to an interconnected group of processing nodes (or neuron models) that collectively operate as a software application or process that controls a function of a computing device and/or generates an overall inference result as output. Individual nodes in a neural network may attempt to emulate biological neurons by receiving input data, performing simple operations on the input data to generate output data, and passing the output data (also called "activation") to the next node in the network. Each node may be associated with a weight value that defines or governs the relationship between input data and output data. A neural network may learn to perform new tasks over time by adjusting these weight values. In some cases, the overall structure of the neural network and/or the operations of the processing nodes do not change as the neural network learns a task. Rather, learning is accomplished during a "training" process in which the values of the weights in each layer are determined. As an example, the training process may include causing the neural network to process a task for which an expected/desired output is known, comparing the activations generated by the neural network to the expected/desired output, and determining the values of the weights in each layer based on the comparison results.
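As a minimal illustration of the node/weight/training description above (not any specific embodiment), a single layer of nodes can be written as a matrix of weights, a forward pass as a simple operation on the inputs, and "training" as repeated adjustment of the weights based on a comparison with the desired output:

```python
# Illustrative numpy sketch: one layer of nodes, a forward pass, and a simple
# gradient-based weight adjustment driven by comparing activations to targets.
import numpy as np

rng = np.random.default_rng(0)
weights = rng.normal(size=(4, 3))            # 4 inputs feeding 3 nodes

def forward(inputs, weights):
    return np.tanh(inputs @ weights)         # each node's simple operation on its inputs

def train_step(inputs, expected, weights, lr=0.1):
    activations = forward(inputs, weights)
    error = activations - expected           # compare activations to the desired output
    grad = inputs.T @ (error * (1 - activations ** 2))   # tanh derivative
    return weights - lr * grad               # adjust the weight values

x = rng.normal(size=(8, 4))                  # a batch of training inputs
y = np.zeros((8, 3))                         # known/desired outputs for those inputs
for _ in range(100):
    weights = train_step(x, y, weights)      # inference later reuses the learned weights
```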
After the training process is complete, the neural network may begin "inference" to process a new task with the determined weights. The term "inference" may be used herein to refer to a process that is performed at runtime or during execution of the software application program corresponding to the neural network. Inference may include traversing the processing nodes in the neural network along a forward path to produce one or more values as an overall activation or overall "inference result." The term "deep neural network" may be used herein to refer to a neural network that implements a layered architecture in which the output (activation) of a first layer of nodes becomes an input to a second layer of nodes, the output of a second layer of nodes becomes an input to a third layer of nodes, and so on. As such, computations in a deep neural network may be distributed over a population of processing nodes that make up a computational chain. The term "generative adversarial network (GAN)" may be used herein to refer to a specific type of machine learning system, technique or technology that is implemented or used by the various embodiments. A generative adversarial network may include two or more neural networks that compete with each other in a game (e.g., a zero-sum game in which one's gain is another's loss). For example, a generative adversarial network may include a deep neural network (DNN) and a generator. The generator may be viewed as the inverse of a layer of artificial neurons. Because the layer includes non-linear elements, the inverse transform may be lossy. When the layer is trained, patterns of inputs that are significant may be encoded onto the output and recovered with reasonable accuracy. The input to the generator may be a much smaller vector of numbers relative to the inputs to its related generative adversarial network. Outputs of the generator may be classified as a match by the related generative adversarial network. The generator may be lossy because it may not be able to regenerate patterns that are not a good match for the desired category of outputs. As an example, if the generator is configured to generate images that look like dogs, it may be unable to generate images of cats. The term "fake" may be used herein to refer to a decoy system or pattern using a credible signature that is not related to a real system. Some embodiments may include components (e.g., a spectrum management firewall or "SMF", etc.) that are configured to use generative adversarial networks to generate fake frequency blanking patterns to obscure operations that might otherwise be revealed by reverse engineering the frequency suppression messages. The term "credible signature" may be used herein to refer to a pattern such as a spectrum signature or an activity pattern that is plausible. Activity patterns are descriptions of the movements and spectrum activities of these systems. A generative adversarial network may be used to produce credible signatures of activity patterns. The term "movement signature" may be used herein to refer to a pattern of movement that is indicative of a particular type of platform such as a vehicle, airplane or ship. The term "mode signature" may be used herein to refer to a pattern of spectrum use that is typical of a type of system such as a communication system or a radar or a pattern of operating modes associated with a specific system such as a specific version of an Aegis radar.
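A toy sketch of the adversarial arrangement described above, written with PyTorch purely for illustration: a generator learns to emit plausible ("credible") patterns, such as decoy blanking patterns, while a discriminator network learns to tell them apart from real observed patterns. The dimensions and the stand-in data source are assumptions, not anything specified by the patent.

```python
# Toy GAN training loop (illustrative only): generator vs. discriminator.
import torch
import torch.nn as nn

PATTERN_DIM, NOISE_DIM = 64, 16
generator = nn.Sequential(nn.Linear(NOISE_DIM, 128), nn.ReLU(), nn.Linear(128, PATTERN_DIM), nn.Sigmoid())
discriminator = nn.Sequential(nn.Linear(PATTERN_DIM, 128), nn.ReLU(), nn.Linear(128, 1))

g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
loss = nn.BCEWithLogitsLoss()

def real_patterns(batch):            # stand-in for observed (credible) spectrum patterns
    return torch.rand(batch, PATTERN_DIM)

for step in range(1000):
    real = real_patterns(32)
    fake = generator(torch.randn(32, NOISE_DIM))

    # Discriminator: score real patterns as 1, generated (decoy) patterns as 0.
    d_loss = loss(discriminator(real), torch.ones(32, 1)) + \
             loss(discriminator(fake.detach()), torch.zeros(32, 1))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # Generator: try to make the discriminator score its fakes as real.
    g_loss = loss(discriminator(fake), torch.ones(32, 1))
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()
```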
The term "spectrum signature" may be used herein to refer to a pattern of spectrum transmit signals combined with a related pattern of receive frequencies that must be protected. This could be modeled as an emission spectrum defined as a list of power spectrum densities as a function of frequency. The protection data would be a list of maximum allowed interference signal levels or spectrum density per frequency. One spectrum signature (transmit and receive) is defined per operating mode of a type of system. Multiple types of systems may be active onboard a platform. The term "credible mask" may be used herein to refer to a credible signature that encloses the signature of a protected (or secret, primary, etc.) system. The credible mask must be at least as wide and at least as sensitive to interference so that when the system blanks frequencies, the interference protection would be more than adequate to protect the protected system. The term "coordination interval" may be used herein to refer to time intervals used by a 5G core, regulator, and/or generator to coordinate network traffic. The traffic demand levels and patterns may be synchronized on the basis of these intervals. The terms "spectrum sharing" and "sharing spectrum" may be used interchangeably herein to refer to systems, techniques, and/or technologies that help optimize the use of the airwaves, or wireless communications channels, by enabling multiple categories of users to safely share the same frequency bands. Though the wireless industry has been tooting the spectrum sharing horn for over a decade, the vast majority of incumbent mobile network operators (MNOs) have not made sufficient technical progress toward realizing suitable solutions, often because it is simply not in their current best interest to do so. The various embodiments include components that are configured to incentivize the wireless industry and commercial network providers/operators to develop, improve upon, implement and/or use spectrum sharing techniques and solutions that improve the efficiency, performance and functionality of the network. A number of different cellular and mobile communication services and standards are available or contemplated in the future, all of which may implement and benefit from the various embodiments. Such services and standards include, e.g., third generation partnership project (3GPP), long term evolution (LTE) systems, third generation wireless mobile communication technology (3G), fourth generation wireless mobile communication technology (4G), fifth generation wireless mobile communication technology (5G), global system for mobile communications (GSM), universal mobile telecommunications system (UMTS), 3GSM, general packet radio service (GPRS), code division multiple access (CDMA) systems (e.g., cdmaOne, CDMA2000™), enhanced data rates for GSM evolution (EDGE), advanced mobile phone system (AMPS), digital AMPS (IS-136/TDMA), evolution-data optimized (EV-DO), digital enhanced cordless telecommunications (DECT), Worldwide Interoperability for Microwave Access (WiMAX), wireless local area network (WLAN), public switched telephone network (PSTN), Wi-Fi Protected Access I & II (WPA, WPA2), Bluetooth®, integrated digital enhanced network (iden), land mobile radio (LMR), and evolved universal terrestrial radio access network (E-UTRAN). Each of these technologies involves, for example, the transmission and reception of voice, data, signaling and/or content messages.
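Referring back to the "credible mask" definition above, the enclosure requirement can be expressed as a simple check: at every frequency of the protected system's spectrum signature, the mask must be present (at least as wide) and must allow no more interference than the protected system tolerates (at least as sensitive). The dictionary layout and threshold values below are illustrative assumptions, not data from any embodiment.

```python
# Hedged sketch of a "credible mask" check over a protected spectrum signature,
# modeled here as a map of frequency (MHz) -> maximum allowed interference (dBm).
protected_signature = {
    3550.0: -110.0,
    3560.0: -112.0,
    3570.0: -108.0,
}

def is_credible_mask(mask, protected):
    for freq, protected_limit in protected.items():
        if freq not in mask:                 # mask must be at least as wide
            return False
        if mask[freq] > protected_limit:     # mask must be at least as sensitive
            return False
    return True

decoy_mask = {3540.0: -115.0, 3550.0: -115.0, 3560.0: -115.0, 3570.0: -112.0, 3580.0: -115.0}
assert is_credible_mask(decoy_mask, protected_signature)
```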
It should be understood that any references to terminology and/or technical details related to an individual telecommunication standard or technology are for illustrative purposes only, and are not intended to limit the scope of the claims to a particular communication system or technology unless specifically recited in the claim language. 5G new radio (NR) and other recently developed communication technologies allow wireless devices to communicate information at data rates (e.g., in terms of Gigabits per second, etc.) that are orders of magnitude greater than even 4G Long Term Evolution (LTE) network. 5G networks are also more secure, resilient to multipath fading, allow for lower network traffic latencies, and provide better communication efficiencies. As such, developed and developing nations are feverishly moving forward with 5G rollouts. In doing so, many are using radio spectrum bands that, in the United States, are currently allocated to a government entity (GE), such as the department of defense (DOD). Concurrent with these developments, in the United States, the GEs are moving towards implementing their own 5G networks and solutions. Each GE could own, build and/or operate its own 5G network in spectrum that it currently occupies. However, a GE-owned and/or GE-operated network (e.g., a wholly owned exclusive-use GE 5G network, etc.) may be an inefficient use of limited resources (e.g., spectrum, network or resulting broadband capacity, infrastructure, network equipment, etc.). In addition, owning a nationwide network has numerous disadvantages for GE. Chief among the disadvantages of GE ownership is cost. If a GE were to build, operate and own a 5G network for its exclusive use, it would need to pay for the entire cost of that network—likely many billions of dollars at least, plus billions more every year for operations and maintenance. There are further disadvantages related to the limited coverage and capacity of a wholly owned exclusive-use GE 5G network. As an example, due to cost constraints imposed by the need to fully fund its deployment and operations, a wholly owned exclusive-use GE 5G network would likely be smaller than ideal in terms of both coverage and capacity. The limited scale of such a network could prevent, reduce or otherwise limit market-driven commitments from important vendors of infrastructure, network equipment, mobile terminals (including smartphones), software, operating systems and other capabilities that are critical to the success of the non-foreign 5G ecosystem. As another example, implementation and use of wholly owned exclusive-use GE 5G network could prevent the GE from benefiting from the many important technological developments and innovations resulting from the operations of a commercial network (e.g., commercial wholesale network, commercially scaled network, etc.). In contrast to wholly owned exclusive-use networks, commercial networks may be built on an accelerated basis with greater coverage at a lower cost through partnerships with owners or providers of existing infrastructure (e.g., backhaul, power, rights of access, towers, community owned infrastructure such as land lots, rooftops and water towers, etc.). 
In addition, commercial networks may allow for an efficient exchange of access to infrastructure for wholesale capacity (that can be used or resold) and/or further reduce costs by "hosting" the spectrum of others that need access to shared infrastructure (e.g., similar to the regional or community-oriented bidders in the recent citizens broadband radio service (CBRS) auction, etc.). As such, a better alternative to a wholly owned exclusive-use GE 5G network is a GE network that implements spectrum sharing techniques, obtains the above-described benefits associated with commercial networks, and reduces or eliminates the above-described disadvantages or challenges associated with implementing or operating a wholly owned exclusive-use network. The various embodiments include components that may be deployed or used in a GE network to allow the GE network to utilize spectrum sharing techniques, obtain the above-described benefits associated with commercial networks, and reduce or eliminate the above-described disadvantages or challenges associated with implementing or operating a wholly owned exclusive-use network. FIG.1illustrates a system100that includes a spectrum management firewall (SMF)120component, which may be configured to delineate between commercial operations and GE operations and/or to otherwise implement, support, or provide networks and dynamic spectrum sharing techniques in accordance with the various embodiments. In the example illustrated inFIG.1, the system100includes a commercial network102, a network core104, translator106-110components, API112-116components, the SMF120component, a secret, primary, or protected system monitor130, and a protected systems network140. The commercial network102may be a 4G LTE or 5G NR network that includes various user equipment (UE) devices103, such as the illustrated connected car103a, laptop computer103b, smartphone103c, and wearable device103d. The commercial network102may include connections or communication links to the SMF120component via the network core104, translator106-110components, and/or API112-116components. The commercial network102(and its constituent components) may be configured to use cooperative methods, techniques and/or solutions, which are generally not designed for, or not suitable for use in, contested environments (e.g., battlefields, etc.). The network core104may include various components (e.g., control systems, network interfaces, etc.) that allow the commercial network102to cooperate, interoperate and/or communicate with other networks and systems. In some embodiments, the network core104may be a 5G core network (5GC) that is included as part of the commercial network102. A 5GC may include various network functions (NF), examples of which include an authentication server function (AUSF), core access and mobility management function (AMF), data network (DN), structured data storage network function (SDSF), unstructured data storage network function (UDSF), network exposure function (NEF), network function repository function (NRF), policy control function (PCF), session management function (SMF), unified data management (UDM), user plane function (UPF) and application function (AF). The translator106-110components may be configured to allow the commercial network102to interface with the SMF120component. For example, the translator106-110components may be configured to convert data formats to enable the SMF120component to communicate and/or interoperate with the components in the commercial network102.
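Purely as an illustration of the translator role just described (the message fields and formats here are assumptions, not anything defined by the patent), a translator might adapt a JSON message from the commercial network's core into the internal representation expected by the SMF component:

```python
# Hypothetical format-conversion sketch for a translator component.
import json

def translate_core_message(raw_json: str) -> dict:
    msg = json.loads(raw_json)
    return {
        "site_id": msg["cell"]["id"],
        "requested_mhz": [float(f) for f in msg["cell"]["requested_frequencies"]],
        "interval_start": msg["coordination_interval"]["start"],
        "interval_end": msg["coordination_interval"]["end"],
    }

example = ('{"cell": {"id": "site_42", "requested_frequencies": [3550, 3560]},'
           ' "coordination_interval": {"start": "2021-01-01T00:00Z", "end": "2021-01-01T00:15Z"}}')
print(translate_core_message(example))
```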
The API112-116components may be documented interfaces that facilitate communications between the SMF120component and the commercial network102. The protected systems network140may be a network that is owned, operated or associated with a government entity (GE), and may include various secure or sensitive government resources/assets141that have priority use of the spectrum and/or whose operational patterns should be obscured. Examples of such resources/assets141include the illustrated unmanned aerial vehicle (UAV)141a, artillery equipment141b, radar141c, armored fighting vehicle (AFV)141d, aircraft141e, and guided missile/rocket141f. The protected systems network140may be configured to use autonomous and/or competitive methods that are more appropriate for contested environments (e.g., battlefields, etc.). The protected systems network140may include connections or communication links to the SMF120component via the protected system monitor130. The protected system monitor130may be configured to receive or collect data about the activities of resources/assets141in the protected systems network140, and send the received/collected data to the SMF120component (or to the mediation system122of the SMF120component). In some embodiments, the protected system monitor130may be configured to receive the data from direct communications links that the resources/assets141use to announce their activities. In some embodiments, the system100may include sensors that are deployed in and/or near the area of operations of the resources/assets141in the protected systems network140. The resources/assets141(or their associated sensors) may be configured to send data to the protected system monitor130. Examples of the types of data that may be received or collected by the protected system monitor130include radar data, imagery and radio signals transmitted by the resources/assets141or by other systems in proximity to the resources/assets141, current operating conditions, recent activities, communications, etc. The SMF120component (and/or its constituent components) may be configured to enable, allow, provide or facilitate secure communications, interactions, collaborations, and spectrum sharing between the components in the commercial network102and the components in the protected systems network140. For example, the SMF120component may be configured to prevent leakage of sensitive operational information (e.g., by transmitting blanking patterns to the commercial network102, etc.) so that the components in the commercial network102and the protected systems network140may readily share telecommunication resources (e.g., RF spectrum resources, etc.). This added level of security may allow the components within these networks102,140to integrate, collaborate and/or cooperate more readily, thereby further improving the performance and functioning of the overall system100and its constituent components. As an example, the inclusion and use of the SMF120component may allow the system100(and/or its components) to use adaptive radars and systems with anti-jamming capabilities that are included in the protected systems network140in conjunction with adaptive frequency allocation techniques, massive multiple-input and multiple-output (MIMO), self-optimizing networks, and/or other techniques for avoiding interference and increasing utilization of the spectrum that are included in, provided by, or made available by a commercial network102.
For these and other reasons, the inclusion and use of the SMF120component in accordance with the various embodiments may significantly improve the performance and functioning of the system100and its components. In the example illustrated inFIG.1, the SMF120component includes a mediation system122and an obfuscator124component. The obfuscator124component may be configured to protect information about the activities, locations, and properties of the resources/assets141in the protected systems network140that use the radio spectrum. For instance, the obfuscator124component may be configured to mask or cloak the activities, operations, communications, locations, features, properties, or characteristics of the identified or evaluated resources/assets141by adding additional frequencies to mask the properties of the resources/assets141. The obfuscator may also create decoy system activity to obscure the operational patterns of resources/assets141, generate noise that obscures one or more features of the blanking patterns, and/or otherwise change the appearance or observable characteristics of the system100, the protected systems network140, and/or the resources/assets141. The mediation system122may be configured to coordinate spectrum use between resources/assets141in the protected systems network140and the UEs103in the commercial network102. For instance, the mediation system122may be configured to identify the resources/assets141in the protected systems network140, determine whether the resources/assets141are active, determine the activities of the active resources/assets141, determine the types and numbers of each type of active resources/assets141, determine, predict or estimate the locations of the active resources/assets141, determine the frequencies, cell sites, and locations that are available for use by the UEs103in the commercial network102, determine which systems are likely to interfere, and/or determine the frequencies that should be allowed or suppressed at specific sites in the commercial network102(e.g., to avoid interference, etc.). The mediation system122may cause the networks102,140to allow or suppress the determined frequencies at the determined cell sites and/or locations based on the activities and/or locations of the resources/assets141and/or based on information received or collected from the protected system monitor130. In some embodiments, the mediation system122may be configured to determine whether there is a high probability that two or more systems (e.g., commercial network102and protected systems network140, etc.) will interfere with one another. In some embodiments, the mediation system122may be configured to determine the probability that two or more systems will interfere based on propagation models and/or information stored in databases of transmitter or receiver characteristics. As mentioned above, the mediation system122may be configured to identify the resources/assets141in the protected systems network140and perform other related tasks. In some embodiments, the mediation system122may be configured to identify the class of the resources/assets141(without identifying the specific resources/assets141) in response to determining that a resource/asset141cannot or should not be readily identified.
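As a concrete sketch of the propagation-based interference check described above, the mediation system could combine a simple path-loss model with stored transmitter and receiver characteristics to decide which cell-site/frequency pairs risk interfering with a protected receiver. The free-space model, field names and interference threshold below are illustrative assumptions, not the disclosed implementation.

```python
import math

def fspl_db(distance_km, freq_mhz):
    # Free-space path loss in dB; a deliberately simple propagation model.
    return 20 * math.log10(distance_km) + 20 * math.log10(freq_mhz) + 32.45

def cells_to_suppress(cell_sites, protected_rx, threshold_dbm=-110.0):
    # Return (cell_id, frequency) pairs whose signal at the protected receiver
    # would exceed its interference threshold.
    conflicts = []
    for cell in cell_sites:
        for freq in cell["frequencies_mhz"]:
            if freq not in protected_rx["frequencies_mhz"]:
                continue  # no spectral overlap, no conflict
            d_km = max(0.01, math.dist(cell["location_km"], protected_rx["location_km"]))
            rx_dbm = cell["tx_power_dbm"] - fspl_db(d_km, freq)
            if rx_dbm > threshold_dbm:
                conflicts.append((cell["id"], freq))
    return conflicts
```

A production mediation system would presumably substitute terrain-aware propagation models and antenna patterns for the free-space formula, but the shape of the decision is the same.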
Alternatively, or in addition, the mediation system122may be configured to categorize a resource/asset141as an unidentified system type (also without identifying the specific resources/assets141) in response to determining that a resource/asset141cannot or should not be readily identified. In some embodiments, the mediation system122may be configured to detect activity on or by the resources/assets141included in the protected systems network140. The mediation system122may detect the activity directly (e.g., through sensing) or indirectly (e.g., through encrypted messages received from the resources/assets141, etc.). In either case, the collection and handling of the data regarding the activity may be in a completely secured environment on the protected systems network140side of the SMF120component. In some embodiments, the mediation system122may be configured to securely collect data from the resources/assets141included in the protected systems network140, aggregate and/or analyze the collected data to generate analysis results, and use the generated analysis results to determine whether there are resources/assets141active within an area. In response to determining that there are resources/assets141active within the area, the mediation system122(or another component in the SMF120) may identify the nearby areas and commercial networks102that need to stop using those frequencies. The mediation system122(or another component) may generate and send communication messages to instruct the components (e.g., mobile devices, etc.) in the identified areas and/or the commercial networks102to stop using select frequencies and/or to perform other responsive actions. In some embodiments, the SMF120component may be configured to use other methods to detect or respond to activity of the resources/assets141in the protected systems network140. For example, the SMF120component may determine that an active resource/asset141is primarily listening (e.g., acting as sensors, rather than transmitting, etc.) at some location. In response, the SMF120component may determine the location of a listening point for that resource/asset141, use information about the location of the listening point to identify the areas and commercial networks102that are to stop using the associated frequencies, and generate and send communication messages to instruct the components (e.g., mobile devices, etc.) in the identified areas and/or the commercial networks102to stop using select frequencies and/or to perform other responsive actions. As mentioned above, to increase security, the collection and handling of the data regarding the activity may be in a completely secured environment on the protected systems network140side of the SMF120component. To further increase security, in some embodiments, the SMF120component may be configured such that it does not transmit any specific information regarding the components in the protected systems network140. For example, the SMF120component may be configured to only transmit information regarding the frequencies that may be suppressed or permitted within predefined areas. Alternatively, or in addition, the SMF120component may be configured to insert decoy patterns of suppressed frequencies into the messages transmitted to the commercial network102. The decoy patterns may help further obfuscate any sensitive operational information, further securing the secure resources and government assets141included in the protected systems network140. 
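One minimal way to realize the decoy insertion just described is to mix fabricated (cell, frequency) suppression entries in with the real ones before a message leaves the SMF, so that an observer of the outbound messages cannot tell which suppressions protect a real asset. The message layout, decoy count and use of Python's random module are assumptions made for illustration only.

```python
import random

def add_decoys(real_entries, all_cells, all_freqs_mhz, n_decoys=5, seed=None):
    # real_entries: set of (cell_id, freq_mhz) pairs that must be suppressed.
    rng = random.Random(seed)
    decoys = set()
    while len(decoys) < n_decoys:
        candidate = (rng.choice(all_cells), rng.choice(all_freqs_mhz))
        if candidate not in real_entries:
            decoys.add(candidate)
    combined = list(real_entries) + list(decoys)
    rng.shuffle(combined)  # remove ordering clues
    return combined
```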
In some embodiments, the SMF120component may be configured to send the commercial network102lists of frequencies that are allowed or blocked at specific cells in the network. Each network may have a slightly different way of organizing and presenting information about cells and frequencies, but all modern cellular networks include or provide the data in some format. The SMF120component may use the data in any or all available formats to determine the frequencies that are or should be allowed or blocked. In some embodiments, the system100, SMF120component, or obfuscator124component may include, utilize, work in conjunction with, or implement generative adversarial networks (GAN), machine learning (ML) and/or artificial intelligence (AI). For example, in some embodiments, the system100may include a GAN component (not illustrated separately inFIG.1) that has access to the outputs of the SMF120component and protected system monitor130. The GAN component may use these outputs to detect and differentiate between real and fake activities of the resources/assets141in the protected systems network140. For example, the GAN component may include a deep neural network and a generator. In some embodiments, the generator and the deep neural network (DNN) of the generative adversarial network may be co-optimized. The optimization may begin with a collection of valid inputs and a collection of random inputs. In the first cycle, random data (noise) may be injected into the generator. The deep neural network may be optimized using the random outputs of the generator as data that is “fake” combined with a reference data set that is “true”. After the deep neural network is optimized, the scores for the fake data may be passed back to the generator. The generator may then be optimized to produce better fakes (e.g., higher scores) from the deep neural network, and another cycle may begin. After many cycles, the fake data from the generator may be very similar (or indistinguishable or nearly indistinguishable) to the true data. The obfuscator124component may use the fake data to mask the properties and/or activities of the resources/assets141. In some embodiments, the generative adversarial network (GAN) may include a deep neural network (DNN) and generator configured in a competitive feedback loop producing high quality fakes and a sophisticated “fake detector”, the optimized DNN, that operate competitively at roughly equal levels of effectiveness. In some embodiments, the system100may include components that are configured to use or apply GANs to different related types of data. For example, the system100may include a component that is configured to use or apply GANs to the activity patterns of resources/assets141, to sensor readings related to the activities of the resources/assets141, and/or to frequency blanking messages from an obfuscator124component. The sensor readings may be readings that are generated at a collection of sensors due to the activity of protected systems, and the frequency blanking messages may be messages that are generated by the mediation system to prevent a commercial network from interfering with the protected systems. Each of these types of data may be subdivided into segments of time ranging from a few seconds to a few minutes of time. In some embodiments, the system100may include multiple GAN components.
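The alternating generator/discriminator cycle described above can be written compactly. The following is a generic GAN training sketch (here in PyTorch) under the assumption that each training sample is a fixed-length feature vector; the layer sizes, learning rates and epoch count are placeholders rather than values taken from the disclosure.

```python
import torch
import torch.nn as nn

def train_gan(real_data, latent_dim=16, epochs=200, batch_size=64):
    # real_data: tensor of shape (N, feature_dim) holding "true" samples.
    feature_dim = real_data.shape[1]
    gen = nn.Sequential(nn.Linear(latent_dim, 64), nn.ReLU(), nn.Linear(64, feature_dim))
    disc = nn.Sequential(nn.Linear(feature_dim, 64), nn.ReLU(), nn.Linear(64, 1))
    g_opt = torch.optim.Adam(gen.parameters(), lr=1e-3)
    d_opt = torch.optim.Adam(disc.parameters(), lr=1e-3)
    loss_fn = nn.BCEWithLogitsLoss()
    for _ in range(epochs):
        real = real_data[torch.randint(0, real_data.shape[0], (batch_size,))]
        fake = gen(torch.randn(batch_size, latent_dim))
        # Cycle step 1: optimize the discriminator (real -> 1, fake -> 0).
        d_loss = loss_fn(disc(real), torch.ones(batch_size, 1)) + \
                 loss_fn(disc(fake.detach()), torch.zeros(batch_size, 1))
        d_opt.zero_grad(); d_loss.backward(); d_opt.step()
        # Cycle step 2: optimize the generator to raise the discriminator's score for fakes.
        g_loss = loss_fn(disc(gen(torch.randn(batch_size, latent_dim))),
                         torch.ones(batch_size, 1))
        g_opt.zero_grad(); g_loss.backward(); g_opt.step()
    return gen, disc
```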
For example, the system100may include one component that develops the obfuscation patterns, and another component that identifies the real activity of the resources/assets141(e.g., by detecting and ignoring irrelevant blanking patterns). In some embodiments, the system100may include a collection of sensors that can report the spectral power density for one or more bands. The system100may collect or determine the amount of energy that is collected in a narrow frequency interval during a short period of time at a specific sensor, and apply the collected/determined information to a neural network (e.g., GAN, DNN, etc.) to generate an activation. In some embodiments, the system100may include a sensor site that includes multiple receivers connected to directional antennas adding dimensionality to the data. The system100may apply the information collected/determined from the sensor site to a neural network to generate an activation. In some embodiments, the system100may transform inputs (e.g., small units of information) to a GAN component (or to a deep neural network, etc.) into a linear array, which is a vector of floating-point values. The inputs (or small units of information) may include signal strength, per small frequency interval, at each sensor that is collecting data, during small intervals of time (e.g., few seconds or minutes). In some embodiments, the data may include power per 1 kHz increment over a band. FIGS.2A-2Cillustrate various SMF120components that could be configured to enable, allow, provide or facilitate secure communications, interactions, collaborations, and spectrum sharing between a commercial network102and the protected systems network140in accordance with the various embodiments. In the examples illustrated inFIGS.2A-2C, the SMF120component includes a mediation system122and an obfuscator124component. The mediation system122may include a blanking patterns202component, an interference estimation204component, and a system identification206component. The system identification206component may receive and use information from the protected system monitor130to identify active resources/assets141. The interference estimation204component may receive and use information from the API112component and the system identification206component to determine the frequencies that are to be suppressed at specific sites in the commercial network102to avoid interference. The blanking patterns202component may be configured to receive and use the determined frequencies for the identified devices to generate fake frequency blanking patterns that obscure operations that might otherwise be revealed by reverse engineering. In the example illustrated inFIG.2A, the obfuscator124component includes a regulator208, a protected system (PS) activity history210component, a PS activity generator212component, a random number generator (RNG)214component, a mediation system220, and a max230component. The mediation system220may include blanking patterns222and an interference estimation224component. The interference estimation224component may include a propagation model226and a spectrum model228. In the example illustrated inFIG.2B, the obfuscator124component includes a regulator208, a sensor activity history252component, a sensor activity generator254component, a random number generator (RNG)214component, and a system identification256component.
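Returning to the input representation described just before the discussion of FIGS.2A-2C, flattening per-sensor, per-1-kHz power readings for one short time slice into a single vector of floating-point values might look like the following sketch; the dictionary layout, fixed traversal order and noise-floor default are illustrative assumptions.

```python
def to_input_vector(readings, sensor_ids, freq_bins_khz, noise_floor_dbm=-120.0):
    # readings: {sensor_id: {freq_khz: power_dbm}} for one time slice.
    # Sensors and 1 kHz bins are traversed in a fixed order so the same vector
    # position always means the same (sensor, frequency) pair to the network.
    vector = []
    for sensor_id in sensor_ids:
        per_sensor = readings.get(sensor_id, {})
        for freq_khz in freq_bins_khz:
            vector.append(float(per_sensor.get(freq_khz, noise_floor_dbm)))
    return vector
```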
In the example illustrated inFIG.2C, the obfuscator124component includes a regulator208, a random number generator (RNG)214, a max230component, a blanking activity history272component, and a blanking generator274component. FIG.3illustrates various components that may be included in an SMF120component and configured to train the system in accordance with some embodiments. In the example illustrated inFIG.3, the system includes a history302component, a random number generator (RNG)304, a generator306component, and a discriminator308component. During the training process, GAN methods may be used to “adversarially” tune the discriminator and the activity generator. In each cycle of the training process, the generator306component may generate increasingly realistic activity patterns and the discriminator308component may become better and better at identifying fakes. The system illustrated inFIG.3may be used to separately simulate coarse and fine features, which may then be combined in various ways. Using this feature of GANs, it is possible to train the generator306component on fine features taken from many systems and then use a different data set for the coarse features or to use some degree of manual adjustment to achieve particular properties. FIG.4illustrates a method400of using a spectrum management firewall (SMF)120to avoid interference in a system configured to dynamically share spectrum. Method400may be performed by one or more processors of components included in a commercial network102, network core104, SMF120, mediation system122, obfuscator124, PSM130, or protected systems network140. In operation402, a component in the commercial network102may transmit information to the mediation system122via the SMF120. As discussed above, the SMF120may be configured to operate as a demark between the commercial network102and the protected systems network140in order to ensure that a GE or protected system network140is not operating the network and does not have access to user information. In operation404, a component in the protected system network140may send or transmit information regarding the characteristics of protected systems to the mediation system122. The mediation system122may receive and use any or all such information to analyze potential interference. In operation block406, the mediation system122may receive and/or determine a class of system and planned area of operation. For example, in a sense and avoid configuration, when a component in the protected systems network140becomes active in an area, sensors within its vicinity detect the activity and transmit information about the signal levels and frequencies to the mediation system122. In operation block406, the mediation system122may use the received information (e.g., detected activity, information about the signal levels and frequencies, etc.) to determine which class of system is operating and its approximate area. As another example, in a spectrum reservation, when a component in the protected systems network140is anticipated to begin operating within an area, the protected systems network140generates and sends (e.g., through the PSM130, etc.) a spectrum reservation message to the mediation system122. In operation block406, the mediation system122may use the received information (e.g., information included in the spectrum reservation message) to determine which class of system is operating and its planned area of operation.
In operation block408, the mediation system122may determine the cell sites and frequencies that would result in interference. For example, the mediation system122may calculate the cell sites and frequencies which would result in interference between components within the protected systems network140and specific cells and attached mobiles in the commercial network102. In operation block410, the mediation system122may determine which frequencies may be suppressed on which cells in the commercial network102. In operation412, the mediation system122may generate a message that indicates which frequencies may be suppressed on which cells in the commercial network (determined in operation block410), and send the generated message to the SMF120component. In some embodiments, in operation414, the SMF120component may send the message to the obfuscator, which may perform various operations to mask or cloak the activities, operations, communications, locations, features, properties, or characteristics of the information (e.g., by adding additional frequencies to mask the properties, etc.), generate an obfuscation message, and send the obfuscation message back to the SMF120. In operation416, the SMF120component may transmit or send the generated message or obfuscation message that identifies the suppressed frequencies per cell to the commercial network102. In some embodiments, the SMF may also generate and send additional suppression messages that are intentionally misleading in operation416. In operation block420, a component in the commercial network102may receive and use the information in the message to perform various operations to suppress the frequencies at each indicated site. For example, the component may stop all transmissions on the indicated frequencies, reduce power on the indicated frequencies, reorient antennas to direct power away from the susceptible protected systems (e.g., using additional information transmitted by the SMF120, etc.), and/or downtilt antennas or direct them into focused areas that only allow the power to be transmitted in the immediate vicinity of the cell site. In operation422, a component in the protected system network140may generate and send a message that indicates the detected activity (e.g., activity detected in operation404, etc.) has ceased within an area to the mediation system122. In operation424, the mediation system122may generate and send a message indicating that frequency suppression has ended on selected frequencies per site to the SMF120component. In operation426, the SMF120component may send the message indicating that frequency suppression has ended on selected frequencies per site to the commercial network102. In operation block428, a component in the commercial network may restore power levels. In operation block430, a component in the commercial network may reorient and uptilt antennas back to configurations that are optimized for full utilization of those frequencies on the commercial network. FIG.5illustrates a method500for dynamically sharing spectrum between a commercial network and a protected system network in accordance with some embodiments. Method500may be implemented by a processor in a spectrum management firewall (SMF). In block502, the SMF may receive information from the commercial network. In block504, the SMF may receive characteristic information identifying one or more characteristics of a resource or entity in the protected system network.
For example, in some embodiments the SMF may receive detected activity information, signal level information and frequency information collected by sensors within the vicinity of the resource or entity in the protected systems network. In some embodiments the SMF may receive a spectrum reservation message from the protected system network indicating that the resource or entity is anticipated to become active in an area. In some embodiments, the SMF may receive the characteristic information in response to the sensors or SMF detecting that the resource or entity has recently become active. In some embodiments, the SMF may receive the characteristic information in response to the sensors or SMF detecting that the resource or entity is anticipated to become active in an area within a certain time or in the near future. In block506, the SMF may determine a class of system (COS) and a planned area of operation (PAOO) for the resource or entity based on the characteristic information received from the protected system network. In some embodiments, the SMF may determine the COS and an approximate area associated with the recently active resource or entity based on the activity information, signal level information, and/or frequency information received in block504. In some embodiments, the SMF may determine the COS and an approximate area of a resource or entity that is anticipated to become active in an area (e.g., based on the received spectrum reservation message, etc.). In block508, the SMF may determine potential interference based on the information received from the commercial network and the characteristic information received from the protected system network. For example, the SMF may determine the cell sites and frequencies that would result in interference between the resource or entity within the protected systems network and specific cells and attached mobiles in the commercial network. In block510, the SMF may determine which frequencies may be suppressed on which cells in the commercial network based on the determined potential interference. In block512, the SMF may generate a suppression message that identifies the determined frequencies per cell. In some embodiments, the SMF may generate an obfuscation message that masks or cloaks the activities, operations, communications, locations, features, properties, or characteristics of the resource or entity in the protected system network. In some embodiments, the SMF may generate the obfuscation message by adding additional frequencies to the suppression message. The added frequencies may mask the activities, operations, communications, locations, features, properties, or characteristics of the resource or entity in the protected system network. In block514, the SMF may send the generated suppression message to a component in the commercial network to cause that component to suppress the identified frequencies in the identified cells. For example, the SMF may send the generated message to the component in the commercial network to cause that component to stop all transmissions on the identified frequencies, reduce power on the identified frequencies, reorient antennas to direct power away from the resource or entity in the protected systems network, or down-tilt or direct the antennas into focused areas that only allow the power to be transmitted in the immediate vicinity of the identified cells.
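A sketch of how a commercial-network component might act on such a suppression message follows. The message shape and the controller methods (stop_tx, set_power, downtilt) are hypothetical names standing in for whatever the operator's cell-site management interface actually exposes; they are not part of the disclosure.

```python
def apply_suppression(message, controller):
    # message example (assumed layout):
    # {"cells": {"cell-17": {"suppress_mhz": [3450.0], "action": "reduce_power",
    #                        "power_dbm": 20.0}}}
    for cell_id, directive in message["cells"].items():
        action = directive.get("action", "stop_tx")
        for freq_mhz in directive["suppress_mhz"]:
            if action == "stop_tx":
                controller.stop_tx(cell_id, freq_mhz)          # cease all transmissions
            elif action == "reduce_power":
                controller.set_power(cell_id, freq_mhz, directive.get("power_dbm", 20.0))
            elif action == "downtilt":
                controller.downtilt(cell_id, directive.get("tilt_deg", 8.0))
```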
In block516, the SMF may receive a notification message from the protected system network indicating that a detected activity identified in the received characteristic information has ceased. In block518, the SMF may cause the component in the commercial network to cease suppressing the identified frequencies in the identified cells and restore power levels in response to the SMF receiving the notification message from the protected system network in block516indicating that the detected activity identified in the received characteristic information has ceased. For example, the SMF may send a communication message to the component that causes the component to reorient and uptilt antennas back to configurations that are optimized for full utilization of the identified frequencies on the commercial network. In some embodiments, method500may include using a generative adversarial network (GAN) that includes a deep neural network and a generator to detect and differentiate between real and fake activities of the resource or entities in the protected systems network. In some embodiments, method500may include using a generative adversarial network (GAN) that includes a deep neural network and a generator to produce fake data. In some embodiments, method500may include inserting the generated fake data into the suppression message in block512and/or prior to sending the generated suppression message to the component in the commercial network in block514. In some embodiments, method500may include using the generated fake data to generate additional suppression messages that are intentionally misleading, and sending the additional suppression messages to the component in the commercial network. As discussed above, a GAN is one of the tools that may be used in the spectrum management firewall (SMF) to generate fake frequency blanking patterns to obscure operations that might otherwise be revealed by reverse engineering the frequency suppression messages emanating from the SMF. A GAN may include a DNN and generator configured in a competitive feedback loop producing high quality fakes and a sophisticated “fake detector”, the optimized DNN, that operate competitively at roughly equal levels of effectiveness. GANs may be applied to three related types of data: the activity patterns of resources (or assets, entities, devices, etc.) in the protected systems; the sensor readings collected based on the activity of the resources; and frequency blanking messages from the obfuscator. In some embodiments, the SMF may be configured to use a GAN or other neural network technologies to implement methods for detection and masking of the activity of the resources (or assets, entities, devices, etc.) in a protected system. Such methods may include a training phase and an operational phase. During the training phase, the SMF may use neural network technologies with automated or manual labeling of data. During the normal activity in an area of operations, the SMF may collect many examples of resource movements and spectrum signatures. The SMF may identify the signatures and the type of system generating each signature, either manually or via an automated system that has information about the activities of the systems. In some embodiments, the SMF may be configured to collect signature data, label the collected data by type of system, and perform various neural network training phase operations. The neural network training phase operations may be performed for the detected activities of any type of resource or system.
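The training-phase bookkeeping described above (collect signature data, label it by system type) reduces to building a labeled data set that later feeds the neural network training step. The record fields and the labeling callback below are hypothetical, chosen only to make the flow concrete.

```python
def build_training_set(observations, label_fn):
    # observations: iterable of dicts such as
    #   {"signature": [floats...], "sensor": "site-7", "timestamp": 1700000000}
    # label_fn maps an observation to a system-type label, supplied either by an
    # automated activity feed or by manual annotation; None means "unknown".
    features, labels = [], []
    for obs in observations:
        label = label_fn(obs)
        if label is None:
            continue  # unlabeled samples are left out of supervised training
        features.append(obs["signature"])
        labels.append(label)
    return features, labels
```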
In some embodiments, the SMF may be configured to use a separate training process for specific systems or categories of systems. In some embodiments, the SMF may be configured to use the GAN to generate fake systems related to each type of identified resource or protected system, and select the generated fake systems that are suitable for use as masks. The SMF may determine that a generated fake system is suitable for use as a mask based on its credibility (as verified by humans or via process) and/or its ability to “cover” the protected system. For example, the SMF may determine that a generated fake system is suitable based on whether its frequency protection requirements are equal to or greater than the frequency protection requirements of the real protected system. In some embodiments, the SMF may be configured to determine that a resource (or asset, entity, device, etc.) in the protected system has become active or has started operating within the detection area of the system, collect data from sensors in the vicinity of the resource, and/or receive or feed a pattern of data into a neural network (e.g., GAN) that detects the activity of the protected system. The SMF may identify the type of resource that is active, generate a library of spectrum signatures, query the library of spectrum signatures to obtain a list of applicable masks, intelligently or randomly select one of the masks included in the list, use a combination of the spectrum signature and the movements of the resource to generate a blanking pattern, and use/apply the generated blanking pattern to obscure operations that might otherwise be revealed by reverse engineering the frequency suppression messages emanating from the SMF. In some embodiments, the SMF may query a library of blanking patterns to obtain a list of applicable masks, intelligently or randomly select one of the masks included in the list, use the movements of the resource to generate a blanking pattern (without an intervening step from spectrum mask to blanking pattern), and use/apply the generated blanking pattern to obscure operations that might otherwise be revealed by reverse engineering the frequency suppression messages emanating from the SMF. FIGS.6A and6Billustrate methods600,650for detection and masking of the activity of the resources (or assets, entities, devices, etc.) in a protected system in accordance with some embodiments. Methods600,650may be implemented by a processor in a spectrum management firewall (SMF). With reference toFIG.6A, in block602, the SMF may collect signature data. In block604, the SMF may label the collected data by type of system. In block606, the SMF may perform various neural network training phase operations and/or otherwise use a GAN to generate fake systems related to each type of identified resource or protected system. In block608, the SMF may select the generated fake systems that are suitable for use as masks. With reference toFIG.6B, in block610, the SMF may determine that a resource (or asset, entity, device, etc.) in the protected system has become active or has started operating within the detection area of the system. In block612, the SMF may collect data from sensors in the vicinity of the resource. In block614, the SMF may receive or feed a pattern of data into a neural network (e.g., GAN) that detects the activity of the protected system. In block616, the SMF may identify the type of resource that is active. In block618, the SMF may generate a library of spectrum signatures.
In block620, the SMF may query the library of spectrum signatures to obtain a list of applicable masks. In block622, the SMF may select one of the masks included in the list. In block624, the SMF may use a combination of the spectrum signature and the movements of the resource to generate a blanking pattern. In block626, the SMF may use/apply the generated blanking pattern to obscure operations that might otherwise be revealed by reverse engineering the frequency suppression messages emanating from the SMF. In the various embodiments, the SMF may be configured to implement or apply different types of obfuscation, including obfuscation of operational patterns, vehicle movements and spectrum separately, spectrum signatures, and/or blanking patterns. Obfuscation of operational patterns may include performing training operations that include collecting training data, using a neural network (or GAN) to detect patterns, checking data periodicity, autocorrelation, etc., and explicitly modeling how patterns change with increasing activity levels so that additional fake activity can be added in a “realistic” way. The collected training data may be the same data collected and used in generating masks (discussed above), but with an emphasis on the activity periods rather than the emission characteristics. In some embodiments, the collected training data may include the start times and durations of activity per system type, such as time of day, day of week or even seasonal patterns depending on how long data is collected. Obfuscation of operational patterns may also include an operational phase, which may include loading the operational patterns produced during training, selecting an activity level, and using a random number generator (RNG) combined with the operational patterns and the selected activity level to generate fake periods of activity. In some embodiments, obfuscation may be based on vehicle movements and spectrum separately. In these embodiments, the training data may include vehicle movements defined as traces that are lists of coordinates and time stamps per system type. The GAN may be used to generate fake traces that are triggered by the fake activity pattern generator. In some embodiments, obfuscation may be based on spectrum signatures. Spectrum signatures may be patterns of spectrum readings per sensor or patterns of spectrum use applied to real or fake “platforms” (planes, trains or automobiles) so that the combination of fake movements and fake spectrum properties collectively present a credible fake system. In these embodiments, the training data may include sensor readings at specific sensor locations in combination with each other or isolated sensor readings with normalized signal levels. The former method applies to a set of installed sensors at a specific venue. The latter method focuses on the protected system characteristics and may be applied to the same types of systems in other locations. In other words, the data collected at one base may be used to generate fake patterns at an entirely different location. In some embodiments, obfuscation based on spectrum signatures may include using propagation models in combination with the fake movement generator to obtain fake signal levels and blanking patterns at the same or different area of operations. In some embodiments, obfuscation may be based on blanking patterns. Obfuscation based on blanking patterns may be simpler than the other obfuscation solutions because it does not require analysis of movements or propagation models.
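The operational phase described above (learned operational patterns, an RNG and a chosen activity level) can be sketched as drawing fake start times and durations from a per-hour rate profile. The 24-entry hourly profile, the stochastic rounding and the duration range are assumptions for illustration, not learned values from the disclosure.

```python
import random

def fake_activity_periods(hourly_rates, activity_level, duration_minutes=(5, 45), seed=None):
    # hourly_rates: 24 floats, expected activity starts per hour learned from history.
    # activity_level: 0..1 scale factor relative to the historical maximum.
    rng = random.Random(seed)
    periods = []
    for hour, rate in enumerate(hourly_rates):
        expected = rate * activity_level
        # stochastic rounding so fractional expectations still produce activity sometimes
        n = int(expected) + (1 if rng.random() < expected - int(expected) else 0)
        for _ in range(n):
            start_min = hour * 60 + rng.uniform(0, 60)
            periods.append((start_min, rng.uniform(*duration_minutes)))
    return sorted(periods)
```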
However, the results are not transferable to other areas of operations or to new sensor configurations. In these embodiments (e.g., obfuscation based on blanking patterns), simple or complicated propagation models may be used to determine which frequencies to blank at which locations in the commercial network based on the sensor readings. The obfuscator may not be supplied any information about the underlying propagation model, only the frequencies that are blanked as a result. As such, the training data includes the blanking patterns that result from the sensors and models that are used. The fake blanking patterns may be generated using a GAN and implicitly include all the effects of the locations of the sensors and the propagation model. A network equipped with components configured in accordance with the embodiments does not need to compete with existing carrier businesses in any geographical area. Instead, it would provide existing carriers with quick and flexible additional 5G capacity if and when they need it, on either a short-term or long-term basis. In addition, such a network would not preclude GE from standing up special-purpose or ad hoc private networks in particular locations utilizing the same spectrum, should it require them. A network equipped with components configured in accordance with the embodiments could allow a commercial network operator (CWNO) to lower prices, drive increased utilization, improve access to 5G and provide enhanced coverage to underserved areas. As such, the embodiments could reduce the costs of the network build and operations, eliminate retail expenses, and operate through an open access wireless sales model. The embodiments could allow the GE to more readily sell broadband capacity dynamically, at the lowest possible price over cost. A network equipped with components configured in accordance with the embodiments may allow a GE to sell network capacity to other cellular mobile network operators (MNOs) that need extra capacity, to existing and new mobile virtual network operators (MVNOs) (potentially including the GE as an MVNO to its employees or more broadly to federal employees), and/or to other providers of innovative new products, services and solutions. Some embodiments may include components configured to implement dynamic spectrum sharing techniques. Such techniques may include: (1) a sharing/coexistence plan that includes rules of engagement to foster collision avoidance; (2) technology that supports dynamic allocation of spectrum and/or network capacity; and (3) a value system for making decisions about sharing to effectively predict when and where spectrum may be made available (e.g., a way to forecast, provision or sell network capacity, etc.). In some embodiments, the components may be configured to use citizens broadband radio service (CBRS) techniques and technologies to provide dynamic spectrum sharing and/or standards-based techniques and technologies to provide network sharing. The various embodiments may be implemented on a variety of mobile wireless computing devices, an example of which is illustrated inFIG.7. Specifically,FIG.7is a system block diagram of a mobile transceiver device in the form of a smartphone/cell phone700suitable for use with any of the embodiments. The cell phone700may include a processor701coupled to internal memory702, a display703, and to a speaker704.
Additionally, the cell phone700may include an antenna705for sending and receiving electromagnetic radiation that may be connected to a wireless data link and/or cellular telephone transceiver706coupled to the processor701. Cell phones700typically also include menu selection buttons or rocker switches707for receiving user inputs. A typical cell phone700also includes a sound encoding/decoding (CODEC) circuit708which digitizes sound received from a microphone into data packets suitable for wireless transmission and decodes received sound data packets to generate analog signals that are provided to the speaker704to generate sound. Also, one or more of the processor701, wireless transceiver706and CODEC708may include a digital signal processor (DSP) circuit (not shown separately). The cell phone700may further include a ZigBee transceiver (i.e., an IEEE 802.15.4 transceiver) for low-power short-range communications between wireless devices, or other similar communication circuitry (e.g., circuitry implementing the Bluetooth® or WiFi protocols, etc.). The embodiments described above, including the spectrum arbitrage functions, may be implemented within a system on any of a variety of commercially available server devices, such as the server800illustrated inFIG.8. Such a server800typically includes a processor801coupled to volatile memory802and a large capacity nonvolatile memory, such as a disk drive803. The server800may also include a floppy disc drive, compact disc (CD) or DVD disc drive804coupled to the processor801. The server800may also include network access ports806coupled to the processor801for establishing data connections with a network807, such as a local area network coupled to other communication system computers and servers. The processors701,801, may be any programmable microprocessor, microcomputer or multiple processor chip or chips that can be configured by software instructions (applications) to perform a variety of functions, including the functions of the various embodiments described below. In some wireless devices, multiple processors801may be provided, such as one processor dedicated to wireless communication functions and one processor dedicated to running other applications. Typically, software applications may be stored in the internal memory702,802, before they are accessed and loaded into the processor701,801. The processor701,801may include internal memory sufficient to store the application software instructions. In some servers, the processor801may include internal memory sufficient to store the application software instructions. In some receiver devices, the secure memory may be in a separate memory chip coupled to the processor701. The internal memory702,802may be a volatile or nonvolatile memory, such as flash memory, or a mixture of both. For the purposes of this description, a general reference to memory refers to all memory accessible by the processor701,801, including internal memory702,802, removable memory plugged into the device, and memory within the processor701,801itself. The foregoing method descriptions and the process flow diagrams are provided merely as illustrative examples and are not intended to require or imply that the steps of the various embodiments may be performed in the order presented. As will be appreciated by one of skill in the art the order of steps in the foregoing embodiments may be performed in any order. Words such as “thereafter,” “then,” “next,” etc. 
are not intended to limit the order of the steps; these words are simply used to guide the reader through the description of the methods. Further, any reference to claim elements in the singular, for example, using the articles “a,” “an” or “the” is not to be construed as limiting the element to the singular. The various illustrative logical blocks, modules, circuits, and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or combinations of both. To clearly illustrate this interchangeability of hardware and software, various illustrative components, blocks, modules, circuits, and steps have been described above generally in terms of their functionality. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the overall system. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention. The hardware used to implement the various illustrative logics, logical blocks, modules, and circuits described in connection with the embodiments disclosed herein may be implemented or performed with a general purpose processor, a digital signal processor (DSP), an application specific integrated circuit (ASIC), a field programmable gate array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described herein. A general-purpose processor may be a microprocessor, but, in the alternative, the processor may be any conventional processor, controller, microcontroller, or state machine. A processor may also be implemented as a combination of computing devices, e.g., a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration. Alternatively, some steps or methods may be performed by circuitry that is specific to a given function. In one or more exemplary aspects, the functions described may be implemented in hardware, software, firmware, or any combination thereof. If implemented in software, the functions may be stored as one or more instructions or code on a non-transitory computer-readable medium or non-transitory processor-readable medium. The steps of a method or algorithm disclosed herein may be embodied in a processor-executable software module which may reside on a non-transitory computer-readable or processor-readable storage medium. Non-transitory computer-readable or processor-readable storage media may be any storage media that may be accessed by a computer or a processor. By way of example but not limitation, such non-transitory computer-readable or processor-readable media may include RAM, ROM, EEPROM, FLASH memory, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other medium that may be used to store desired program code in the form of instructions or data structures and that may be accessed by a computer. Disk and disc, as used herein, includes compact disc (CD), laser disc, optical disc, digital versatile disc (DVD), floppy disk, and Blu-ray disc where disks usually reproduce data magnetically, while discs reproduce data optically with lasers.
Combinations of the above are also included within the scope of non-transitory computer-readable and processor-readable media. Additionally, the operations of a method or algorithm may reside as one or any combination or set of codes and/or instructions on a non-transitory processor-readable medium and/or computer-readable medium, which may be incorporated into a computer program product. The preceding description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the invention. Thus, the present invention is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the following claims and the principles and novel features disclosed herein. | 66,644 |
11943205 | DETAILED DESCRIPTION OF THE INVENTION The following description is presented to enable any person skilled in the art to make and use the invention, and is provided in the context of particular applications of the invention. Various modifications to the disclosed embodiments will be readily apparent to those skilled in the art, and the general principles defined herein may be applied to other embodiments and applications without departing from the scope of the present invention. Reference to various embodiments and examples does not limit the scope of the invention, which is limited only by the scope of the claims attached hereto. Additionally, any examples set forth in this specification are not intended to be limiting and merely set forth some of the many possible embodiments for the claimed invention. The program environment in which a present embodiment of the invention is executed illustratively incorporates a general-purpose computer or a special purpose device such as a hand-held computer, telephone or PLC. Details of such devices (e.g., processor, memory, data storage, display) may be omitted for the sake of clarity. It is also understood that the techniques of the present invention may be implemented using a variety of technologies. For example, the methods described herein may be implemented in software executing on a computer system, or implemented in hardware utilizing either a combination of microprocessors or other specially designed application specific integrated circuits, programmable logic devices, or various combinations thereof. In particular, the methods described herein may be implemented by a series of computer-executable instructions residing on a suitable computer-readable medium. Suitable computer-readable media may include volatile (e.g., RAM) and/or non-volatile (e.g., ROM, disk) memory, carrier waves, non-transitory computer-readable mediums, and transmission media (e.g., copper wire, coaxial cable, fiber optic media). Exemplary carrier waves may take the form of electrical, electromagnetic or optical signals conveying digital data streams along a local network, a publicly accessible network such as the Internet or some other communication link. In reference to the example embodiments shown in the figures, it is understood that simplified examples were chosen for clarity. Single instances of an element (e.g. a historian, a tunneller, a server, a client, a data source, a data sink, etc.) appearing in the figures may be substituted for a plurality of the same element, and still fall within the scope of the present invention. 
The server may further include one or more components selected from: a data modification component; a data creation component; a user interface component; a computer file system interaction component; a program interaction component for interacting with other programs running on a computer running the server; a scripting language component to perform programmable actions; an HTTP component for accepting HTTP requests from client programs and responding with documents as specified by those requests, in a manner analogous to a “web server”, including the ability to dynamically construct the document in response to the request, and to include within the document the current values of the data resident in the server and the results of executing statements in the server's built-in scripting language; a synchronization component to exchange and synchronize data with another running instance of the server on any local or network-accessible computer, such that both servers maintain essentially identical copies of that data, thereby enabling client applications connected to either instance of the server to interact with the same data set; a first throttling component to limit the rate at which data is collected; a second throttling component to limit the rate at which data is emitted; a connectivity component to detect a loss of connectivity to other servers, and to reconnect to the other servers when connectivity is regained; a redundancy component to redundantly connect to multiple other servers of identical or similar information such that data from any of the other servers may be collected in the event that one or more of the other servers is inaccessible; and a bridging component to “bridge” data among sources of data such that some or all of the data within those sources will maintain similar values with one another, or bridge data among data sources including a mathematical transformation such that the data in one source is maintained as the mathematical transformation of the data in the other source, including the ability to apply the mathematical transformation in both the forward and inverse directions through a bi-directional bridging operation. It is understood that this set of server components could be extended by adding additional functionality to the server to support other data collection and transmission mechanisms, other processing mechanisms and other storage mechanisms. The data collection component can collect data in one or more of the following manners: on demand, wherein the server sends a request for some or all of the data resident in another server, and that other server responds with the current value or values of the requested data only once in response to the request; by subscription, wherein the server sends a request for a subscription to some or all of the data resident in another server, and the other server responds by sending the current value or values of its data, and then continues to send any subsequent changes to the value or values of the data until the server either terminates its connection to the other server, or requests that the other server cease sending updates; on a trigger, wherein a client, script or human (a “user”) configures the server to collect the data only if a certain trigger condition is met, be that a timer, a time of day, a data change, a change in the system status, a user action or some other detectable event; and passively by waiting for a “client” application to send data to the server.
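The bi-directional bridging component described in the list above can be illustrated with a small sketch in which a value in one data source is kept equal to a mathematical transformation of a value in another, and a write in either direction is propagated through the forward or inverse transform. The DataPoint stand-in and the Celsius/Fahrenheit pair are illustrative choices, not part of the disclosed server.

```python
class DataPoint:
    # Minimal stand-in for a named value held in a server's data set.
    def __init__(self, name, value=0.0):
        self.name, self.value = name, value
    def write(self, value):
        self.value = value

class Bridge:
    def __init__(self, source_a, source_b, forward, inverse):
        self.a, self.b, self.forward, self.inverse = source_a, source_b, forward, inverse
    def on_a_changed(self, value):   # change in A -> f(value) written to B
        self.b.write(self.forward(value))
    def on_b_changed(self, value):   # change in B -> inverse(value) written back to A
        self.a.write(self.inverse(value))

celsius, fahrenheit = DataPoint("temp_c"), DataPoint("temp_f")
bridge = Bridge(celsius, fahrenheit,
                forward=lambda c: c * 9 / 5 + 32,
                inverse=lambda f: (f - 32) * 5 / 9)
bridge.on_a_changed(100.0)   # fahrenheit.value becomes 212.0
bridge.on_b_changed(32.0)    # celsius.value becomes 0.0
```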
The data emission component can emit data in one or more of the following manners: on demand, wherein a “client” application sends a request for some or all of the data, and the server responds with the current value or values of the requested data only once in response to the request; by subscription, wherein a client application sends a request for a subscription to some or all of the data, and the server responds by sending the current value or values of the data, and then continues to send any subsequent changes to the value or values of the data until the client either terminates its connection to the server, or requests that the server cease sending updates; and on a trigger, wherein a client, script or human (a “user”) configures the server to emit the data only if a certain trigger condition is met, be that a timer, a time of day, a data change, a change in the system status, a user action or some other detectable event. The data collected at the data collection component may be received using one or more transmission protocols selected from: Dynamic Data Exchange (DDE), OLE for Process Control (OPC), OPC Alarm and Event specification (OPC A&E), OPC Unified Architecture (OPC-UA), OPC Express Interface (OPC-Xi), TCP/IP, SSL (Secure Socket Layer) over TCP/IP through a custom interface, Hypertext Transfer Protocol (HTTP), Secure HTTP (HTTPS), Open Database Connectivity (ODBC), Microsoft Real-Time Data specification (RTD), Message queues, Windows Communication Foundation (WCF), industrial bus protocols such as Profibus and Modbus, Windows System Performance Counters, TCP/IP communication from embedded systems, TCP/IP communication from non-MS-Windows systems, TCP/IP communication from Linux, TCP/IP communication from QNX, TCP/IP communication from TRON, TCP/IP communication from any system offering a C compiler and TCP implementation, scripts written using a built-in scripting language, data entered by humans through a user interface, data read from a local disk file, data read from a remotely accessible disk file, proprietary formats, user-defined formats, and formats added through extensions to the server. An example of a proprietary format is Wonderware SuiteLink™. The data emitted from the data emission component may be transmitted using one or more transmission protocols selected from: Dynamic Data Exchange (DDE), OLE for Process Control (OPC), OPC Alarm and Event specification (OPC A&E), OPC Unified Architecture (OPC-UA), OPC Express Interface (OPC-Xi), TCP/IP, SSL (Secure Socket Layer) over TCP/IP through a custom interface, Hypertext Transfer Protocol (HTTP), Secure HTTP (HTTPS), Open Database Connectivity (ODBC), Microsoft Real-Time Data specification (RTD), Message queues, Windows Communication Foundation (WCF), industrial bus protocols such as Profibus and Modbus, TCP/IP communication to embedded systems, TCP/IP communication to non-MS-Windows systems, data presented to humans through a user interface, data written to a local disk file, data written to a remotely accessible disk file, proprietary formats, user-defined formats, formats added through extensions to the server, electronic mail (E-Mail), and Short Message Service (SMS) message format. Further, the data collected at the data collection component may be in a format appropriate to the transmission protocol. The data emitted from the data emission component may be in a format appropriate to the transmission protocol.
The data collected at the data collection component and the data emitted from the data emission component may also be in a format selected from: parenthetical expression (LISP-like) format, Hypertext Markup Language (HTML), eXtensible Markup Language (XML), JavaScript Object Notation (JSON), proprietary binary format, user-definable text format, and a format added through extension of the server. The system may further include an Application Programming Interface (API) that implements a TCP/IP connection and one or more of the data formats supported by the server, which may assist a programmer in establishing a connection as described above. The API may be implemented for one or more of the following platforms: “C” programming language, “C++” programming language, Microsoft .Net programming environment, Microsoft Silverlight framework, Adobe Flash framework, Adobe Air framework, a programming language supporting TCP/IP communication (including any scripting language), and a store-and-forward historian framework supporting TCP/IP communication. The store-and-forward historian framework may include support for: making a first long-lived TCP/IP data connection to the server to receive data; receiving data from the server; and transmitting data to the server over a second TCP/IP data connection. The data may be received from the server on demand or by subscription. The first TCP/IP data connection and the second TCP/IP data connection may be the same connection. The second TCP/IP data connection may be a long-lived connection. The second TCP/IP data connection may be a short-lived connection. The TCP/IP data connection to the server may be in a protocol selected from: an API, as described above, a direct TCP/IP connection, HTTP and HTTPS. The client may be implemented using a RIA framework, a web browser, a compiled computer language, an interpreted computer language, a hardware device, or another implementation mechanism that supports the HTTP and/or HTTPS protocols. The client may comprise support for: making a first long-lived TCP/IP data connection to the server to receive data; receiving data from the server; and transmitting data to the server over a second long-lived TCP/IP data connection. The data may be received from the server on demand or by subscription. The TCP/IP data connections to the server may be in a protocol selected from: HTTP and HTTPS. Data from the server may be received, or data to the server may be transmitted, in one or more forms selected from: a parenthetical expression (LISP-like) format, Hypertext Markup Language (HTML), eXtensible Markup Language (XML), JavaScript Object Notation (JSON), a proprietary binary format, a user-definable format, and a format added by extension to the server. The store-and-forward historian framework may further include support for presenting a graphical display representing the data to a user. The graphical display may comprise one or more graphical elements selected from: a textual display, a slider, a chart, a trend graph, a circular gauge, a linear gauge, a button, a check box, a radio button, a progress bar, a primitive graphical object, controls, and a customized graphical element. Configuration information of the graphical display may be saved on the server, as well as loaded from the server. A graphical element may be created and modified within the graphical display. The graphical element may be a customized graphical element, customizable by a user, wherein the customization may be saved on the server.
Customization may be performed by a programmer, without requiring modification to an application implemented in the RIA framework. The customized graphical element may be available for use by a user in other graphical displays. These customizations may be used for creating new displays or modifying existing displays, in addition to the graphical elements originally supported by the user interface application. The graphical element may comprise one or more properties that are user-modifiable, and which may be selectable by a programmer. User interaction with the graphical element may cause a user interface application to emit modifications to the data to the server. A user-only mode may be provided to disallow creation or modification of the graphical display by a user, and a read-only mode may also be provided to disallow interaction with the graphical element by the user. A system administrator may select, for each user and each graphical display, whether a user interface application will operate in the user-only mode or the read-only mode. The user may be required to identify himself, and where such identification is required, the user interface application may operate in at least one of the user-only mode and the read-only mode. Advantageously, the features of the invention allow modification of the graphical displays through any user RIA terminal and the resulting changes, upon saving, are immediately available to all other RIA terminals connected to the server. In another aspect, the present invention provides a method of providing bi-directional streaming communication over the HTTP or HTTPS protocol between a client and a server, the method comprising: generating a session ID; opening a first socket via a first HTTP transaction from the client to the server; associating the session ID with the first socket at the server and at the client; opening a second socket via a second HTTP transaction from the client to the server; associating the session ID with the second socket at the server and at the client; maintaining a long-lived connection on the first socket; and maintaining a long-lived connection on the second socket, wherein a correspondence is created among the session ID, the first socket and the second socket, and wherein bi-directional communication is established between the client and the server. The method may further comprise the client transmitting at least one data message selected from the group comprising: configuration information, commands, real-time information, pending data from a previous transaction, and other data. The method may further comprise waiting for an event from the first socket; verifying whether the event from the first socket is an error; reading available data from the first socket when the event is not an error; processing the data to produce a result; and optionally sending the result to the server via the second socket. The method may further comprise the client: closing the first socket; and closing the second socket, wherein the event from the first socket is an error. The method may further comprise the client: waiting for a client-generated event; processing the client-generated event to produce a result; and optionally sending the result to the server via the second socket. The client-generated event may be selected from the group comprising: an internally-generated stimulus, a result of user activity, a timer, and an external stimulus.
The method may further comprise the client: marking data for transmission to the server as pending; closing the second socket; opening a new second socket; and associating the new second socket with the session ID. The method may further comprise the server: waiting for an event from the second socket; verifying whether the event from the second socket is an error; reading available data from the second socket when the event is not an error; processing the data to produce a result; and optionally sending the result to the client via the first socket. The method may further comprise the server closing the second socket, wherein the event from the second socket is an error. The method may further comprise the server: waiting for a server-generated event; processing the server-generated event to produce a result; and optionally sending the result to the client via the first socket. The server-generated event may be selected from the group comprising: an internally-generated stimulus, a result of user activity, a timer, a result from another connected client, data from a data source, and an external stimulus. The method may further comprise the server: closing the first socket; and closing the second socket. In the above method, the first HTTP transaction may be selected from the group comprising: an HTTP GET transaction and an HTTP HEAD transaction; and the second HTTP transaction may be selected from the group comprising: an HTTP POST transaction, an HTTP PUT transaction, an HTTP PATCH transaction, and an HTTP TRACE transaction. Preferably, the first HTTP transaction is an HTTP GET transaction, and the second HTTP transaction is an HTTP POST transaction. In yet another aspect, the present invention provides a system for providing bi-directional streaming communication over the HTTP or HTTPS protocol, the system comprising: at least one client; and at least one server, wherein the at least one client is adapted to implement the above-described method, and wherein the at least one server is adapted to implement the above-described method. The at least one client may comprise a RIA. The at least one server may comprise: a data collection component for collecting data from the at least one data source; and a data emission component for emitting data to at least one data client. In yet a further aspect, the present invention provides a computer readable memory storing instructions that, when executed on one or more computers, cause the computers to perform a method of providing bi-directional streaming communication over the HTTP or HTTPS protocol between a client and a server, the method comprising the steps of the above-described method. In yet a further aspect, the present invention provides a computer readable memory storing instructions that, when executed on one or more computers, cause the computers to engage in a bidirectional networked real-time data exchange over the HTTP or HTTPS protocol between a data historian and a server, the method comprising the steps of the above-described method. As described above, the HTTP protocol implements a transaction model where each transaction is generally short-lived. Each transaction is initiated by the client, and is specified to either transmit data to the server, or to request data from the server, but not both. A web client may need to transmit or receive a large volume of data. In this case, it may implement an API that allows the client to send and receive the data in incomplete chunks.
That is, it may require multiple send and receive actions before the entire data set has been transmitted. For example, a client that receives an image from a server may receive the image in chunks of 1 KB so that it can begin to render the image before the entire image has arrived, producing a progressive rendering effect. This behaviour can be leveraged within the client to produce a continuous stream of data. The client may make an HTTP GET request to a URL on a specially designed server (or a standard server with a specially designed handler for that URL). The server may respond with an HTTP header, and then hold the socket open. At any time in the future, the server may transmit data on the socket, which will arrive at the client as an incomplete transmission. The client can process this data and then wait for more. So long as the server holds the socket open, the client will simply act on the expectation that there is more data to be received, and will process it as it arrives. The server can transmit more information asynchronously to the client at any time without the need for the client to repeatedly open and close HTTP connections. This mechanism is the underlying methodology of Streaming AJAX. As disclosed above, it is uni-directional. This mechanism does not provide high-speed communication from the client to the server. One of the important innovations of the present invention is to solve the problem of creating a high-speed connection from the client to the server. The solution provides that the client opens an HTTP POST transaction with the server, and transmits the necessary HTTP header information. The server will then wait for the data payload of the POST to arrive. At any time in the future, the client may transmit data on the open socket, effectively acting like the Streaming AJAX mechanism in the reverse direction. The client may hold the socket open indefinitely, transmitting data as necessary without having to repeatedly open and close HTTP connections for each new transmission. The server must be aware that the data will arrive as a stream, and must process the information as it arrives. This may require custom behaviour in the server. The HTTP protocol specifies that a client must inform the server of the size of an HTTP POST message in the HTTP headers (the content-length). It is a violation of the HTTP protocol for the client to transmit more or less data than specified in the content-length header. The present invention respects this requirement by tracking the number of bytes transmitted from the client to the server. The HTTP POST content length is specified by the client to be an arbitrary number of bytes. When the client has transmitted content-length bytes, it closes its existing connection and opens a new connection and continues transmitting. The number of bytes in a POST message can be large (e.g., up to 2^31 bytes), so this open and close will happen very infrequently. The result will be a slight latency in the transmission of some data, but no loss of information. In a preferred embodiment, the present invention requires two sockets, one handling the server-to-client communication via HTTP GET, and the other handling client-to-server communication via HTTP POST. In order for these two sockets to act in concert to provide bi-directional streaming communication, the web server must be aware that they are related halves of a single conversation. This relationship may be established by the client.
The client opens the HTTP GET connection first, and includes in its URL a unique session handle (e.g., a randomly generated GUID). When the client subsequently opens the HTTP POST request, it includes the same session handle in the URL. The server is then able to associate the two connections. When the HTTP POST connection must be closed and re-opened due to reaching the content-length limit, the client transmits the same GUID again. The server is then able to associate this new POST socket with the existing GET socket. The web server needs to understand that this methodology is being employed. It must keep track of calls to a specially designated URL for the original GET connection, associate the session handle with that connection, and then subsequently associate POST connections bearing the same session handle with that GET connection. It may be desirable, but not necessary, for the web server to spawn a separate thread to handle each connection pair. Having established the GET and POST connections, the client can receive asynchronous data transmissions from the server via the GET connection and transmit asynchronous data to the server via the POST connection. The server does the reverse, transmitting data via the GET connection and receiving data via the POST connection. The behaviour of both client and server is otherwise the same as if they were communicating via a single bi-directional socket. As will be understood by a person skilled in the art, other HTTP verbs such as HEAD, PUT, PATCH and TRACE may also be used. It will also be appreciated, for example, that it is possible to further modify a server to recognize other verbs or relax protocol restrictions on the HEAD transaction to behave like a GET. So, other verbs may be used if the server is modified to recognize the added/different behaviour. Such modifications depart from a strict implementation of the HTTP specification, yet still fall within the present invention. The unexpected advantages of the present invention in regard to the system and method for secure real-time cloud services are several. To address security concerns, one prior art method for sharing process data on the cloud has been to use a Virtual Private Network (“VPN”). However, from a security perspective, use of a VPN is problematic because every device on the VPN is open to every other machine. Each device (and each user of said device) must be fully trusted on the VPN. Security is complex and not very good, making it virtually impossible to use this approach for open communication between companies. Accordingly, the present invention allows sharing of data between third party companies without requiring that the third parties access an existing VPN, and therefore never exposing computers and devices on the VPN to those third parties. Furthermore, VPNs also incur a performance penalty, either compromising real-time performance or requiring significant additional cost to compensate (e.g., by adding hardware, computational resources and complexity to a system). Further advantageously, the present invention allows users to connect plant floor equipment to management as well as partner and third-party companies, using software at the plant site that is configured by the client company to allow specific data streams to be uploaded or downloaded. The present invention may be completely software-based, and can be implemented on existing hardware, therefore not introducing significant complexity to an established network.
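The following minimal, client-side sketch illustrates the paired-socket mechanism described above: a session handle (a random GUID) is placed in the URL of a long-lived GET, the same handle is sent with a long-lived POST whose content-length is declared as an arbitrarily large value, and the POST socket is closed and re-opened with the same handle once that length is exhausted. The host name, URL path, and framing are placeholders invented for the example; a real deployment would also need the server-side handler that maps the session handle to its GET/POST socket pair.

import socket
import uuid

HOST, PORT = "cloud.example", 80       # placeholder endpoint, not a real server
POST_LENGTH = 2**31 - 1                # large, arbitrary declared content-length

def open_get(session_id):
    """Long-lived server-to-client stream: the server answers the GET headers and
    then holds the socket open, transmitting data whenever it wishes."""
    s = socket.create_connection((HOST, PORT))
    s.sendall((f"GET /stream?session={session_id} HTTP/1.1\r\n"
               f"Host: {HOST}\r\nConnection: keep-alive\r\n\r\n").encode("ascii"))
    return s

def open_post(session_id):
    """Long-lived client-to-server stream: declare a large content-length and send
    the body incrementally as data becomes available."""
    s = socket.create_connection((HOST, PORT))
    s.sendall((f"POST /stream?session={session_id} HTTP/1.1\r\n"
               f"Host: {HOST}\r\nContent-Type: application/octet-stream\r\n"
               f"Content-Length: {POST_LENGTH}\r\n\r\n").encode("ascii"))
    return s

class StreamingClient:
    def __init__(self):
        self.session_id = str(uuid.uuid4())    # the shared session handle (GUID)
        self.get_sock = open_get(self.session_id)
        self.post_sock = open_post(self.session_id)
        self.sent = 0

    def send(self, payload: bytes):
        # Respect the declared content-length: once it would be exceeded, close
        # the POST socket and open a fresh one tagged with the same session handle.
        if self.sent + len(payload) > POST_LENGTH:
            self.post_sock.close()
            self.post_sock = open_post(self.session_id)
            self.sent = 0
        self.post_sock.sendall(payload)
        self.sent += len(payload)

    def receive(self) -> bytes:
        # Each recv may return a partial chunk of the never-ending GET body.
        return self.get_sock.recv(4096)

On the server side, the handler for the designated URL would keep a dictionary keyed by the session handle, binding each newly arrived POST socket to the GET socket already registered under that handle.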
Advantageously, using methods disclosed herein, once the client/server connection is established, the data can flow in either direction. Client users can monitor a system in real time, effect changes, and see the effect of their actions immediately, as if they were working on a local system. Or, if required, the system can be configured from the plant to be one-way, read-only. The present invention provides the ability to connect to any industrial system, using open, standard protocols like OPC, TCP, and ODBC. Such flexibility allows further cost reduction by fully utilizing investments in existing equipment, or by enhancing new installations with cloud connectivity. Example uses of the present invention are addition to an existing ICS, enhanced function as an HMI for an individual machine, or access to RTUs or even individual embedded devices. In combination with methods disclosed herein, the present invention supports publish/subscribe data delivery, an event-driven model in which a client registers for data changes one time and then receives subsequent updates immediately after they occur. This low-latency, cloud-based system adds extremely low overhead to the overall data transmission time, effectively keeping throughput speeds to just a few milliseconds (or less) more than the network propagation time. In one embodiment, the present invention may achieve very high-speed performance by handling data in the simplest possible format. Providing a data-centric design, the present system can function with various kinds of data sources and users, such as control systems, OPC servers, databases, spreadsheets, web pages, and embedded devices. Preferably, when a connection is made to the cloud server, incoming data is stripped of unnecessary formatting (XML, HTML, OPC, SQL, etc.) and passed as quickly as possible to any registered clients. At the receiving end the data is delivered in whatever format the client requires. With the methods disclosed herein, a RIA or web-based user interface for secure cloud services provides anywhere-access to register for the service, configure data connection options, and monitor usage and costs. Additionally, all data display screens may be provided via the web-based interface. This web-based HMI allows users to create pages from anywhere, and deploy them immediately. Further advantageously, one of the benefits of cloud computing is its ability to scale up or down to meet the needs of its users. The present invention can not only handle bursts of high-speed activity in the data flow, but can also be quickly configured to meet the needs of a growing system. Users can add data points to a particular device, or bring on new devices, new ICS, even new locations and installations through an easy-to-use, web-based configuration interface. The present invention is operable as a real-time industrial system, and can maintain a suitable level of performance and security in a cloud environment. Its sophisticated connectivity options allow the primary control system in a plant to continue functioning without disruption. The result is a robust and secure feed of live process data into an enterprise to provide opportunities for real-time monitoring, collaboration, and predictive maintenance. Referring to FIG. 3, a system configuration for providing the secure networking of real-time data and historical data is shown.
Generally, a novel and secure communication of historical data from an OT network 110, through a DMZ 120, to an IT network 130 is made possible by interleaving real-time data and historical data over a single, secure tunnel connection through firewalls 111, 131 and by a combination of pull replication and daisy chaining of historical data. As a result, firewalls 111, 131 can be closed to all inbound connection requests (i.e., no open incoming ports) from DMZ 120, thereby completely isolating both OT network 110 and IT network 130 from DMZ 120. A data source 100 provides real-time data to a connector 310, which includes several components: a tunneller 314, a history-writer 316 and a history-tunneller 317. The data is written to history-writer 316 and optionally to tunneller 314. History-writer 316 is configured to write data to a historian 315. As shown in FIG. 3, historian 315 is external from connector 310. In other words, the historian 315 is an application external from connector 310. In an alternative embodiment (not shown), data source 100 can write data directly to historian 315, or indirectly to historian 315 via an external application. In such case, history-writer 316 can be omitted. Data from source 100 is shown as sent indirectly to historian 315 via history-writer 316, and optionally simultaneously sent to tunneller 314, represented symbolically by broken arrow 312. Once data is stored in historian 315, this historical data can be retrieved and sent to tunneller 314 by history-tunneller 317.
First Network “Hop”
Tunnel connection 318 is established from tunneller 314 to tunneller 324a by an outbound connection request from OT network 110 to DMZ 120, as symbolically represented by the direction of arrow 318. In this embodiment, the direction of historical data flow is shown symbolically by broken arrow 319 from tunneller 314 to tunneller 324a. If the network connection is lost and tunnel 318 is disconnected, the following steps take place upon network reconnection. Tunneller 314 re-establishes a lost connection to tunneller 324a, after which history-tunneller 317 will re-initiate transmission of any missed historical data, which history-tunneller 317 retrieves from historian 315. In turn, history-tunneller 317 transmits the missed historical data via tunneller 314 over tunnel 318 to tunneller 324a. The historical data flow is represented symbolically by broken arrow 319. Historical data flow 319 is, in turn, sent from tunneller 324a to historian 325 indirectly via history-writer 326. An initial connection of tunnel 318 and a first initiation of transmission of historical data flow 319 is equivalent to a network reconnection as described above. Once missed historical data is successfully stored in historian 325, a response is sent from historian 325 to history-writer 326 to confirm receipt of missed historical data. This confirmation is delivered via tunnellers 324a and 314 to history-tunneller 317. Optionally, this confirmation can be stored in historian 315 or elsewhere. Distribution of real-time data among the components within connectors 310, 320, 330 could be implemented with a real-time memory-resident data store, data bus or other communication method (not shown). For example, data source 100 could write data to a memory-resident data store, which then redistributes real-time data to tunneller 314 and/or history-writer 316. Similarly, communication between tunnellers 324a and 324b could be via a real-time memory-resident data store, data bus or other communication method (not shown).
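The reconnection behaviour described for tunnel 318 may be easier to follow with the short sketch below, in which a history-tunneller retransmits everything stored since the last confirmed record whenever the tunnel is re-established. The historian.query call, the tunnel send/confirmations interface, and the timestamp bookkeeping are hypothetical placeholders standing in for the actual interfaces of history-tunneller 317 and historian 315.

import time

def resend_missed_history(historian, tunnel, last_confirmed):
    """After a tunnel is (re-)established, retransmit every historical record
    stored since the last record the far side confirmed receiving."""
    missed = historian.query(start=last_confirmed)            # assumed historian API
    for record in missed:
        tunnel.send({"type": "history", "record": record})    # assumed tunnel API
    return missed[-1]["timestamp"] if missed else last_confirmed

def run_history_tunneller(historian, connect_tunnel):
    """Outbound-only loop: connect, catch up on missed history, track confirmations,
    and reconnect after any loss of connectivity."""
    last_confirmed = 0.0
    while True:
        try:
            tunnel = connect_tunnel()          # outbound request, e.g. 314 to 324a
            last_confirmed = resend_missed_history(historian, tunnel, last_confirmed)
            for ack in tunnel.confirmations(): # assumed iterator of receipt confirmations
                last_confirmed = max(last_confirmed, ack["timestamp"])
        except ConnectionError:
            time.sleep(5.0)                    # wait, then re-establish the tunnel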
It is appreciated that history-writer 316 and history-tunneller 317 can be distinct processes from each other as shown in FIG. 3, but both have access to historical data in historian 315. Similarly, history-writer 326 and history-tunneller 327 can be distinct processes from each other as shown in FIG. 3, but both have access to historical data in historian 325. Further, each of connectors 310, 320, 330 may comprise a single executable or a collection of executables.
Second Network “Hop”
Tunnel connection 338 is established from a tunneller 334 to tunneller 324b by an outbound connection request from IT network 130 to DMZ 120, as symbolically represented by the direction of arrow 338. In this embodiment, the direction of historical data flow is shown symbolically by broken arrow 339 from tunneller 324b to tunneller 334. If network connection is lost and tunnel 338 is disconnected, the following steps take place upon network reconnection. Tunneller 334 re-establishes a lost connection to tunneller 324b, after which tunneller 334 will send a request to history-tunneller 327 to re-initiate transmission of any missed historical data, which history-tunneller 327 retrieves from historian 325, and transmits via tunneller 324b over tunnel 338 to tunneller 334, represented symbolically by broken arrow 339. Historical data flow 339 is sent from tunneller 324b to historian 335 indirectly via history-writer 336. In an alternative embodiment (not shown), a separate component or process may be provided in connector 330 to re-initiate transmission of historical data from history-tunneller 327. There is an important security advantage of this unique architecture that is not immediately apparent: it provides that no configuration information (other than configuration of the communication with historian 325 and authentication information allowing tunnellers 314 and 334 to connect) is stored inside DMZ 120 or in connector 320. It is only in either end of the intermediate network(s) (i.e., connector 310 in OT network 110 and connector 330 in IT network 130) that decisions over which data will be transferred are made. Therefore, only the “protected” networks (behind firewalls closed to incoming connection requests) hold this configuration information, and control thereof. A successful exploit of connector 320 or exploit within the DMZ 120 would not be able to expose more data than OT network 110 chooses to transmit. Moreover, and perhaps more importantly, each protected network (and its corresponding security and configuration settings) is isolated from the other protected network(s). That is, settings within connector 310 and OT network 110 are distinct from connector 330 and IT network 130 settings; they have no control over each other, and are also unknown to each other. Furthermore, the connectors 310, 330 themselves act to isolate the “end” historians from direct access from outside networks (e.g., from the intermediate network(s)), or even from internal networks if security requirements demand it. This degree of security isolation is a unique advantage of the present invention over the prior art, and represents a critical part of the security statement. For example, the OPC UA protocol, which is a common standard industrial protocol that is promoted for its security, allows a ‘reverse connection’ from the server to client (the server could be a historian, and the client an ‘edge’ historian). But, to function, configuration information is necessary at the server and at the client, effectively splitting the configuration between the secure and insecure networks.
The present invention avoids this insecure requirement. The example embodiments described herein enable real-time data and historical data over the same tunnel connection and single port. By contrast, common commercial solutions, such as AVEVA™ Historian or OSIsoft PI™ database, require a separate port for database replication. Real-time data would necessarily be transmitted over a separate port (as well as by a distinct protocol), and out-of-band with the historical data transmission. The example embodiments described herein do not require the use of a second port or channel. Optionally, the real-time data and historical data may be carried over one or more sockets. In some embodiments, the historical data is compressed by using a data compression process. In yet a further alternative embodiment (not shown), historians 315, 325, 335 can be built into one or more of connectors 310, 320, 330, respectively. It is appreciated that for clarity, the example in FIG. 3 describes historical and/or real-time data “flow” from OT network 110 to DMZ 120 to IT network 130, or from left to right (for further clarity, see description of FIG. 7 below). However, it is within the scope of the invention that historical data flow may be bi-directional on all established tunnel connections, and that data flow can be easily reversed, or additional data sources can be added to meet system requirements and network topology. For example, an additional data source (not shown) could be added by way of an incoming connection to the DMZ 120 from another network. Another unique advantage of the present invention is the ability to initiate historical tunnelling from either end of a tunnel connection (i.e., from either connector). Accordingly, historical data can be “pushed” or “pulled” as demanded by the system requirements and network topology. In more general terms, DMZ network 120 (and connector 320) represents an “intermediate” network. Advantageously, the present invention allows for as many intermediate networks (and data “hops” across them) as needed to meet system requirements and network topology. This enables a new capability for systems to store and forward historical data. An alternative example embodiment in accordance with the present invention is illustrated symbolically in FIG. 4. As shown in FIG. 4, a data source 400 sends real-time and/or historical data to a connector 401 in a network 411 indirectly to connector 409 in network 419. In this example, historical data flow is from left to right, represented symbolically by broken arrows 419, over tunnel connections shown by solid arrows, while the arrow directions indicate the direction of initial connections. Preferably, each network is separated by a firewall that is closed to inbound connections (not shown). Connector 402 in network 412 sends data on to subsequent connectors/networks 420 until arriving at connector 408 in network 418. Intermediate networks 412, 418 with their corresponding connectors (402, 408) are functionally similar to network 120 and corresponding connector 320 in FIG. 3, but can receive an inbound connection or make an outbound connection as necessary to satisfy network security requirements, as illustrated by the double-arrowed solid lines between intermediate networks 412, 418 (and connectors 402, 408, respectively) connecting any number of further intermediate networks 420 (with their corresponding connectors, not shown). In some embodiments, any connector initiating a network connection stores connection security information.
As well, no intermediate connector 402, 408, etc., contains connection security settings or configuration information from the “end” connectors 401, 409 inside networks 411, 419, respectively. In some embodiments, one or more intermediate networks 412, 418, 420 include cloud-based brokers and network services. In some embodiments, real-time data and historical data are transmitted over the same tunnel, thereby interleaving historical and real-time data over the same underlying connection. In some embodiments, the historical data is compressed by using a data compression process. Referring to FIG. 5, yet another embodiment is shown. One aspect of this embodiment is that the network architecture need not be 1:1. That is, a connector is capable of sending data to several connectors to distribute historical data simultaneously to several target systems. Similar to FIG. 4, a data source 500 sends real-time and/or historical data to a connector 501 in network 511 indirectly to connectors 505, 508, 509 in networks 515, 518, 519, respectively. In this example, historical data flow is from left to right, represented symbolically by broken arrows 519, over tunnel connections shown by solid arrows, while the arrow directions indicate the direction of initial connections. Preferably, each network is separated by a firewall that is closed to incoming connections (not shown). A connector 502 in network 512 sends data on to subsequent connectors 506, 507. The data, in turn, is sent on to connectors 508, 509 in networks 518, 519, respectively. In some embodiments, connector 502 also propagates data to any number of additional intermediate connectors/networks 520 to arrive at end connectors/networks 521. Intermediate networks 512, 513, 514, 516, 517, 520 with their corresponding connectors (502, 503, 504, 507, 506, 520) are functionally similar to network 120 in FIG. 3, but can make a connection in either direction, as illustrated by the double-arrow solid lines. This embodiment is advantageous to provide redundant historical data for a variety of useful purposes, such as to remote control systems, 3rd party suppliers, equipment manufacturers, analytics, AI analysis, etc. Effectively, historical data can be distributed to a web of systems. In some embodiments, real-time data and historical data are transmitted over the same tunnel and the historical data is compressed by using a data compression process. Referring to FIG. 6, a further alternative embodiment is shown. Similar to FIG. 4, data source 600 sends real-time and/or historical data to connector 601 and indirectly to connector 609. Historical data flow is from left to right, represented symbolically by broken arrows 619, over tunnel connections shown by solid arrows, while the arrow directions indicate the direction of initial connections. In this example, the current invention is employed inside a single network, and the transmission characteristics between connectors 601, 602, 608, 609 and 620 are preserved, but without the added protection of firewalls between each connector. This example is useful in a distributed system such as a single water treatment plant network or a single wind farm network, where there is a need to ensure that multiple storage servers are all up-to-date and contain complete copies of historical data. To meet system requirements, multiple different configurations (that are not 1:1) within a single network, similar to FIG. 5, are within the scope of the invention.
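As a wire-level illustration of carrying real-time and historical data through the same tunnel connection and port (as described for FIG. 3), the sketch below tags each message with its kind and optionally compresses historical payloads before sending them on a single socket. The frame layout and compression choice are assumptions for the example only; the invention does not prescribe this particular framing.

import json
import struct
import zlib

KIND_REALTIME, KIND_HISTORY = 0, 1

def frame(kind, payload, compress=False):
    """Build one tunnel frame: 1-byte kind, 1-byte compression flag, 4-byte length,
    followed by the (optionally compressed) JSON body."""
    body = json.dumps(payload).encode("utf-8")
    if compress:
        body = zlib.compress(body)
    return struct.pack("!BBI", kind, int(compress), len(body)) + body

def send_interleaved(sock, realtime_updates, history_records):
    # Real-time values and historical records share one socket and one port.
    for update in realtime_updates:
        sock.sendall(frame(KIND_REALTIME, update))
    for record in history_records:
        sock.sendall(frame(KIND_HISTORY, record, compress=True))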
In some embodiments, real-time data and historical data are transmitted over the same tunnel, where the historical data is compressed by using a data compression process. Referring to FIG. 7, a further alternative embodiment is shown, similar in configuration to FIG. 3 and FIG. 4, to illustrate secure store-and-forward of historical data over a real-time tunnel. FIG. 7 shows "sender" and "receiver" networks and computers, each protected by firewalls and also separated by a DMZ network and computer. Within each network, and in communication with (or running on) each computer, is an external historian. A data source sends data to the sender computer that stores historical data to an external historian (large white arrows). The sender establishes an outbound real-time tunnel connection (large black arrow) to the DMZ computer, over which historical data is sent (small white arrows), and stored on an external historian within the DMZ network. The receiver establishes an outbound real-time tunnel connection (large black arrow) to the DMZ computer, over which historical data is retrieved (small white arrows), and stored on an external historian within the receiver network. It should be noted that any different network types and labels are applicable to the present invention, e.g. Internet, IT network, OT network, sender/receiver networks, etc. Specific network labels are used to describe preferred embodiments and to provide context and clarity, but in no way limit the scope of the invention. It should be understood that when referring to a network residing between, for instance, a server and a client, the network itself may comprise a series of network connections; that is, there is no implication of a direct connection. Similarly, any server, client or device ‘on’ the Internet is understood to mean that the server, client or device is connected to a network connection that is accessible to the Internet. It is also understood that an authority on a data set, or an authoritative holder of a data set, refers to the originator of the data set, and all other recipients of the data set hold non-authoritative copies. In the present invention, a server, client or device can inherit authority from another server, client or device; for example, the cloud server may act as an authority on a data set for another client/end-user device; the client/end-user device sees the cloud server as the authority on the data set, but unknown to the client/end-user device, the cloud server may be propagating the data from a “true” authoritative client/end-user device connected to the cloud server. It is appreciated that the present invention allows for a myriad of combinations of servers, clients, and devices interconnected and inheriting authority over multiple data sets shared amongst them. A data server may be any application designed to collect data from a data source or act as a data source itself, as long as it also supplies a TCP/IP communication method that can be accessed by a constructed data historian. A data source may be any application or system capable of producing real-time data that can be converted into a format suitable for representation within the data server. A data source may also be any application or system capable of producing non-real-time data that can be converted into a format suitable for representation within the server. The server can poll this data repeatedly or collect it by subscription to provide the data to a data historian even in the case that the original data is not real-time.
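A minimal sketch of this repeated-polling approach is given below, using a local SQLite table as a stand-in for a general, non-real-time data store; the table and column names, the polling period, and the publish callback are invented for illustration and are not part of the disclosure.

import sqlite3
import time

def poll_dbms_into_server(db_path, publish, period=1.0):
    """Repeatedly poll a (non-real-time) database and publish any rows that changed
    since the previous poll, giving clients a pseudo-real-time view of the data."""
    last_seen = 0.0
    conn = sqlite3.connect(db_path)
    while True:
        rows = conn.execute(
            "SELECT name, value, updated_at FROM readings WHERE updated_at > ?",
            (last_seen,),
        ).fetchall()
        for name, value, updated_at in rows:
            publish(name, value, updated_at)   # hand the change to the data server
            last_seen = max(last_seen, updated_at)
        time.sleep(period)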
For example, a database management system (DBMS) is generally not real-time, but the data can be polled repeatedly to create a periodically updating data set within the server, thus supplying a data historian with a pseudo-real-time view of the data within the DBMS. The server and the data source may be combined into a single application, as may be the case with an OPC-UA server, or with an embedded device that offers access to its data via a TCP/IP connection. A program developed using any compiled or interpreted computer language that can open and interact with a TCP/IP socket may be used in place of a data historian, which may or may not run within a web browser. The client and server can implement wait states in any number of ways, including creating a new process thread to perform a synchronous wait or performing a multiple-event wait in a single thread. These are implementation details that will depend on choices made during the client and server implementations, but do not depart from the scope of the present invention. Advantageously, the present invention is operable on any device that is capable of opening a TCP socket. For example, the client/server implementation may comprise multiple servers propagating data in real-time over a network or the Internet, optionally in a secure manner, without any major and therefore costly changes in existing infrastructure (e.g., security policies, firewalls, software, hardware, etc.). The present invention may be implemented, for example, by way of software code built-in to the data historian, a software add-on, a software plug-in, a scripting language, a separate software application, or a combination thereof.
Additional Clauses
The following are additional clauses relative to the present disclosure, which could be combined and/or otherwise integrated with any of the embodiments described above or listed in the claims below. Clause 1. A method for providing access to historical data over a real-time tunnel, the method comprising: establishing, by a first connector (401, 601), a first tunnel connection between the first connector (401, 601) and a second connector (402, 602); establishing, by a third connector (409, 609), a second tunnel connection between the third connector (409, 609) and a fourth connector (408, 608); obtaining, by the first connector (401, 601), data; and propagating the data from the first connector (401, 601) to the third connector (409, 609) through the first tunnel connection and the second tunnel connection. Clause 2. The method of Clause 1, wherein the first connector (401, 601), the second connector (402, 602), the third connector (409, 609) and the fourth connector (408, 608) are in a same network. Clause 3. The method of Clause 1, further comprising: configuring a first network (411) to include the first connector (401); configuring a second network (412) to include the second connector (402); configuring a third network (419) to include the third connector (409); configuring a fourth network (418) to include the fourth connector (408); and separating each of the first network (411), the second network (412), the third network (419), and the fourth network (418) by a respective firewall. Clause 4. The method according to any of Clauses 1-3, wherein the data is (i) real-time data, (ii) historical data, or (iii) a combination of (i) and (ii). Clause 5. The method according to any of Clauses 3-4, further comprising: receiving, by the second network (412), an inbound connection. Clause 6.
The method according to any of Clauses 3-5, further comprising: making, by the first network (411), an outbound connection. Clause 7. The method according to any of Clauses 2-6, further comprising: initiating, by any one of (i) the first connector (401, 601), (ii) the second connector (402, 602), (iii) the third connector (409, 609) or (iv) the fourth connector (408, 608), a connection; and storing, by the connector initiating the connection, security information. Clause 8. A method for providing access to historical data over a real-time tunnel in an architecture including an OT network (110), a DMZ (120) and an IT network (130), comprising: interleaving real-time data and historical data over a secure tunnel connection (318), a first firewall (111) and a second firewall (131) by (a) performing pull replication of the historical data, (b) daisy chaining the historical data, or (c) a combination of (a) and (b). Clause 9. The method of Clause 8, further comprising: closing the first firewall (111) and the second firewall (131) to all inbound connection requests, thereby isolating the OT network (110) and the IT network (130) from DMZ (120). Clause 10. The method according to any of Clauses 8 and 9, further comprising: obtaining, from a data source (100), by a first connector (310), real-time data, wherein the first connector (310) includes a first tunneller (314), a first history-writer (316) and a first history-tunneller (317). Clause 11. The method according to any of Clauses 8-10, further comprising: writing the real-time data to the first history-writer (316). Clause 12. The method according to any of Clauses 8-11, further comprising: writing the real-time data to the first tunneller (314). Clause 13. The method according to any of Clauses 8-12, further comprising: writing the real-time data, by the first history-writer (316), to a first historian (315). Clause 14. The method according to any of Clauses 8-13, wherein the first historian (315) is external from the first connector (310). Clause 15. The method according to any of Clauses 8-14, wherein the first historian (315) is an application external from the first connector (310). Clause 16. The method according to any of Clauses 8-15, further comprising: obtaining, from the data source (100), by the first historian (315), the real-time data. Clause 17. The method according to any of Clauses 8-16, further comprising: obtaining, from the data source (100) by the first historian (315), the real-time data via the first history-writer (316). Clause 18. The method according to any of Clauses 8-17, further comprising: retrieving, from the first historian (315), historical data; and propagating, by the first history-tunneller (317), the historical data retrieved from the first historian (315) to the first tunneller (314). Clause 19. The method according to any of Clauses 8-18, further comprising: establishing a first tunnel connection (318) from the first tunneller (314) to a first DMZ tunneller (324a) by an outbound connection request from the OT network (110) including the first tunneller (314) to the DMZ (120) including the first DMZ tunneller (324a). Clause 20.
The method according to any of Clauses 8-19, further comprising: re-establishing, by the first tunneller (314), a lost connection to the first DMZ tunneller (324a); re-initiating, by the first history-tunneller (317), transmission of missed historical data; retrieving, by the first history-tunneller (317), the historical data from the first historian (315); transmitting, by the first history-tunneller (317), missed historical data via the first tunneller (314) over the first tunnel connection (318) to the first DMZ tunneller (324a); and propagating the historical data from the first DMZ tunneller (324a) to a second historian (325) via a second history-writer (326), where the first DMZ tunneller (324a), the second historian (325) and the second history-writer (326) are in the DMZ (120). Clause 21. The method according to any of Clauses 8-20, wherein an initial connection of the tunnel connection (318) and a first initiation of transmission of a historical data flow (319) via the tunnel connection (318) is equivalent to a network reconnection. Clause 22. The method according to any of Clauses 8-21, further comprising: sending, by the second historian (325), a confirmation response to the second history-writer (326) confirming receipt of missed historical data. Clause 23. The method according to any of Clauses 8-22, further comprising: delivering, via the first DMZ tunneller (324a) and the first tunneller (314), the confirmation response to the first history-tunneller (317). Clause 24. The method according to any of Clauses 8-23, further comprising: establishing a second tunnel connection (338) from a third tunneller (334) in the IT network (130) to a second DMZ tunneller (324b) in the DMZ (120) by an outbound connection request from the IT network (130) to the DMZ (120). Clause 25. The method according to any of Clauses 8-24, further comprising: re-establishing, by the third tunneller (334), a lost connection to the second DMZ tunneller (324b); sending, by the third tunneller (334), a request to the second history-tunneller (327) to re-initiate transmission of historical data; retrieving, by the second history-tunneller (327), the historical data from the second historian (325); and transmitting, via the second DMZ tunneller (324b) over the second tunnel connection (338) to the third tunneller (334), the historical data. Clause 26. The method according to any of Clauses 8-25, wherein the historical data flow (339) is sent from the second DMZ tunneller (324b) to the third historian (335) via the third history-writer (336). Clause 27. The method according to any of Clauses 8-26, wherein the first historian (315), the second historian (325) and the third historian (335) are built into one or more of the first connector (310), second connector (320), and third connector (330), respectively. Clause 28. The method according to any of Clauses 8-27, wherein the first DMZ tunneller (324a) and the second DMZ tunneller (324b) are the same tunneller. Clause 29. A method for providing access to historical data over a real-time tunnel, the method comprising: receiving, by a first connector (501), (i) real-time data, (ii) historical data, or (iii) a combination of (i) and (ii) from a data source (500); and transmitting, by the first connector (501) to at least one intermediary connector (502, 503), (i) the real-time data, (ii) the historical data, or (iii) a combination of (i) and (ii); and transmitting, by the intermediate connector (502, 503) to at least one end connector (505, 508, 509). Clause 30.
The method according to Clause 29, wherein the first connector (501) is in a first network (511), the at least one intermediate connector (502, 503) is in a corresponding intermediate network (512, 513), and the at least one end connector (505, 508, 509) is in a corresponding end network (515, 518, 519). Clause 31. The method according to Clause 30, wherein each of the first network (511), the corresponding intermediate network (512, 513), and the corresponding end network (515, 518, 519) is separated by a firewall that is closed to incoming connections. Clause 32. A system for providing access to historical data over a real-time tunnel, the system comprising: a first connector (401, 601) configured to establish a first tunnel connection with a second connector (402, 602) and to receive data from a data source (100); a third connector (409, 609) configured to establish a second tunnel connection with a fourth connector (408, 608); and the first connector (401, 601) configured to propagate the data to the third connector (409, 609) through the first tunnel connection and the second tunnel connection. Clause 33. The system of Clause 32, wherein the first connector (401, 601), the second connector (402, 602), the third connector (409, 609) and the fourth connector (408, 608) are in a same network. Clause 34. The system of Clause 32, wherein: the first connector (401) is configured to be included in a first network (411); the second connector (402) is configured to be included in a second network (412); the third connector (409) is configured to be included in a third network (419); the fourth connector (408) is configured to be included in a fourth network (418); and each of the first network (411), the second network (412), the third network (419), and the fourth network (418) is configured to be separated by a respective firewall. Clause 35. The system according to any of Clauses 32-34, wherein the data is (i) real-time data, (ii) historical data, or (iii) a combination of (i) and (ii). Clause 36. The system according to any of Clauses 32-35, wherein: at least one of (i) the first connector (401, 601), (ii) the second connector (402, 602), (iii) the third connector (409, 609) and (iv) the fourth connector (408, 608) is further configured to: initiate a connection; and store, by the connector initiating the connection, security information. Clause 37. A system for providing access to historical data over a real-time tunnel, comprising: an OT network (110) configured to interleave real-time data and historical data over a secure tunnel connection (318), a first firewall (111) and a second firewall (131) in conjunction with a DMZ (120) and an IT network (130) by (a) performing pull replication of the historical data, (b) daisy chaining the historical data, or (c) a combination of (a) and (b). Clause 38. A connector (310) for providing access to historical data over a real-time tunnel, comprising: a first history-writer (316) configured to obtain real-time data from a data source (100); the first history-writer (316) configured to store the real-time data and supply the real-time data to a first historian (315); a first history-tunneller (317) configured to retrieve historical data from the first historian (315); and a first tunneller (314) configured to store and forward the real-time data and the historical data. Clause 39. The connector (310) of Clause 38, the first history-writer (316) further configured to: write the real-time data to a first historian (315). Clause 40.
The connector (310) of Clause 39, wherein the first historian (315) is external from the first connector (310). Clause 41. The connector (310) according to Clause 39, wherein the first historian (315) is an application external from the first connector (310). Clause 42. The connector (310) according to Clause 38, the first tunneller (314) further configured to: establish a first tunnel connection (318) with a first DMZ tunneller (324a) by transmitting an outbound connection request. Clause 43. The connector (310) according to Clause 42: the first tunneller (314) further configured to re-establish a lost connection to the first DMZ tunneller (324a); the first history-tunneller (317) further configured to re-initiate transmission of missed historical data; the first history-tunneller (317) further configured to retrieve the historical data from the first historian (315); and the first history-tunneller (317) further configured to transmit the missed historical data via the first tunneller (314) over the first tunnel connection (318) to the first DMZ tunneller (324a). Clause 44. The connector (310) according to any of Clauses 38-43: the first tunneller (314) further configured to receive a confirmation response and forward the confirmation response to the first history-tunneller (317), and the confirmation response confirming receipt of missed historical data by a second historian (325). Clause 45. The connector (310) according to any of Clauses 38-44, wherein an initial connection of the tunnel connection (318) and a first initiation of transmission of a historical data flow (319) via the tunnel connection (318) is equivalent to a network reconnection. Clause 46. The connector (310) according to Clause 38, further comprising: the first historian (315). Clause 47. A connector (320) for providing access to historical data over a real-time tunnel, comprising: a first DMZ tunneller (324a) configured to receive (i) real-time data, (ii) historical data, or (iii) a combination of (i) and (ii) via a tunnel connection (318); a second history-writer (326) configured to propagate historical data to a second historian (325); a second history-tunneller (327) configured to retrieve the historical data from the second historian (325); and a second DMZ tunneller (324b) configured to transmit over the second tunnel connection (338) (i) the real-time data, (ii) the historical data, or (iii) a combination of (i) and (ii). Clause 48. The connector (320) of Clause 47, the second history-writer (326) further configured to receive, from a second historian (325), a confirmation response confirming receipt of missed historical data. Clause 49. The connector (320) according to Clause 48, the first DMZ tunneller (324a) configured to deliver, via the first tunneller (314), the confirmation response. Clause 50. The connector (320) according to any of Clauses 47-49, wherein an initial connection of the tunnel connection (318) and a first initiation of transmission of a historical data flow (319) via the tunnel connection (318) is equivalent to a network reconnection. Clause 51. The connector (320) according to any of Clauses 47-50, the second DMZ tunneller (324b) further configured to receive a connection request and establish a second tunnel connection (338) with a third tunneller (334). Clause 52.
The connector (320) according to any of Clauses 47-51: the second DMZ tunneller (324b) further configured to re-establish a lost connection with the third tunneller (334); the second history-tunneller (327) further configured to receive a request to re-initiate transmission of historical data and retrieve the historical data from the second historian (325); and the second DMZ tunneller (324b) further configured to transmit over the second tunnel connection (338) the historical data. Clause 53. The connector (320) according to any of Clauses 47-52, further comprising: the second historian (325). Clause 54. The connector (320) according to Clause 47, wherein the first DMZ tunneller (324a) and the second DMZ tunneller (324b) are the same tunneller. Clause 55. A connector (330) for providing access to historical data over a real-time tunnel, comprising: a third tunneller (334) configured to establish a second tunnel connection (338) with a second DMZ tunneller (324b) and receive (i) real-time data, (ii) historical data, or (iii) a combination of (i) and (ii); and a third history-writer (336) configured to send historical data to a third historian (335). Clause 56. The connector (330) of Clause 55: the third tunneller (334) further configured to re-establish a lost connection with the second DMZ tunneller (324b) and receive historical data over the second tunnel connection (338). Clause 57. The connector (330) according to any of Clauses 55-56, further comprising the third historian (335). Clause 58. A method for providing access to historical data over a real-time tunnel, the method comprising: establishing, by a first connector (310), a first tunnel connection (318) from the first connector (310) to a second connector (320); establishing, by a third connector (330), a second tunnel connection (338) from the third connector (330) to the second connector (320); obtaining, by the first connector (310), data; and propagating the data from the first connector (310) to the third connector (330) through the first tunnel connection and the second tunnel connection. Clause 59. The method according to Clause 58, further comprising: retrieving, by the first connector (310), historical data from a first historian (315); and transmitting, by the first connector (310), the historical data over the first tunnel connection (318) to the second connector (320). Clause 60. The method according to any of Clauses 58-59, further comprising: receiving, by the second connector (320), (i) real-time data, (ii) historical data, or (iii) a combination of (i) and (ii) via a tunnel connection (318); propagating, by the second connector (320), historical data to a second historian (325); retrieving, by the second connector (320), the historical data from the second historian (325); and transmitting, by the second connector (320), via the second tunnel connection (338) (i) the real-time data, (ii) the historical data, or (iii) a combination of (i) and (ii). Clause 61. The method according to any of Clauses 58-60, further comprising: establishing, by the third connector (330), a second tunnel connection (338) with the second connector (320); receiving, by the third connector (330), (i) real-time data, (ii) historical data, or (iii) a combination of (i) and (ii); and sending, by the third connector (330), historical data to a third historian (335). Clause 62.
A non-transitory computer-readable medium having stored thereon one or more sequences of instructions for causing one or more processors to perform any of the methods of Clauses 1-31 and 58-61. | 68,509 |
11943206 | DETAILED DESCRIPTION OF THE DRAWINGS InFIG.1the architecture of a digital content distribution system in accordance with the present invention is shown. User A communicates with a DRM Self-Service Web-Site100using a device130afor the purpose of inputting various information regarding the distribution of content owned or controlled by User A. Device130amay be any type of general purpose personal computer (PC), personal digital assistant (PDA), mobile handset, cellular telephone or other handheld device capable of communicating in a wired or wireless manner with the Internet so as to display one or more user input screens such as those discussed below in relation toFIG.6. Device130awould need software such as an Internet browser, Wireless Access Protocol (WAP) browser or other similar software in order to send and receive data from the DRM Self-Service Web-Site100. This type of software is well-known in the art. User A communicates using device130awith DRM Self-Service Web-Site100in order to specify various parameters with respect to the transfer of content between one or more other users such as User B and User C.FIG.1shows the arrangement of components within a typical operational digital content distribution system. In this example, digital content owned or controlled by User A is transferred between User B and User C using the associated DRM Controller120. The other components are important for the construction of a physical system but are not as important to the present invention as DRM Controller120. DRM Controller120communicates with DRM Self-Service Web-Site100in order to receive information regarding how to handle a transfer of digital content from one user to another, such as the transfer of digital content from User B to User C. User B and User C communicate with DRM Controller120and with each other by using devices130band130c, which devices are enabled similarly to device130adescribed above, although devices130band130cshould contain an interface for use by an actual person. A typical transaction would begin with some type of dialog between User B and User C that leads the two to decide that one has content that it would like to share with the other. Accounting and Content Web (ACW) Server140comprises software implemented on a general purpose computer that is capable of keeping track of transfers of digital content and payments for digital content. ACW Server140is in communication with DRM Self-Service Web-Site100in order to receive information about the amount of compensation a user such as User A desires to receive for transfers of digital content between other users such as User B and User C. ACW Server140is also in communication with SCP Pre-Pay Web Service Server160that is an intelligent service control point capable of decrementing an account of the user paying for a transfer of content and incrementing one or more of the accounts of the user transferring content and/or the owner of the content being transferred. In this way, P2P transfers of digital content can be accomplished with the knowledge and approval of the owner of the content, who is properly compensated for the transfer. SCP Pre-Pay Web Service Server160is in communication with the Digital Rights Server (DRS), which is a repository of records associated with the transfer of digital content and payment for such transfers. SCP Pre-Pay Web Service Server160can be any of several known intelligent service control points such as the Telcordia Converged Application Server and/or Real-Time Charging System.
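By way of illustration only, the pre-pay settlement just described (debiting the account of the paying user and crediting the accounts of the content owner and of the distributing user) could be sketched roughly as follows; the PrePayAccounts class, its fields, and the revenue split are hypothetical and are not taken from the described system.

from dataclasses import dataclass, field

@dataclass
class PrePayAccounts:
    balances: dict = field(default_factory=dict)  # user id -> pre-pay balance

    def settle_transfer(self, buyer: str, seller: str, owner: str,
                        price: float, seller_share: float = 0.1) -> None:
        """Debit the paying user and split the payment between owner and distributor."""
        if self.balances.get(buyer, 0.0) < price:
            raise ValueError("insufficient funds")  # one possible pre-transfer test
        self.balances[buyer] -= price                                    # decrement User C
        self.balances[owner] = self.balances.get(owner, 0.0) + price * (1 - seller_share)
        self.balances[seller] = self.balances.get(seller, 0.0) + price * seller_share

accounts = PrePayAccounts({"UserA": 0.0, "UserB": 0.0, "UserC": 5.0})
accounts.settle_transfer(buyer="UserC", seller="UserB", owner="UserA", price=1.0)
# User A receives the majority of the payment; User B receives a small reward.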
FIG.2depicts a more detailed embodiment of a digital content distribution system, in accordance with the present invention. Again User A communicates using a device (not shown) through the Internet220with one or more DRM Self Service servers/servlets230in order to input various information about the distribution of digital content owned or controlled by User A. ACW Server140is broken into two components: Content Registry Web Server140aand Content Account Web Server (Digital Rights Management Platform) (“DRMP”)140b. Content Registry Web Server140amanages the information that plays a role in allowing content to be forwarded between users. That is, it contains user or content-owner “preferences” pertaining to allowing content exchange, such as exchange rights spelled out in traditional DRM systems. Content Accounting Web Service140bkeeps track of the amount a user desires for the transfer of specific digital content and communicates through the Internet220using a Simple Object Access Protocol (SOAP)260with ISCP pre-pay web-services160to enable the accounts of the users and owners of content to be properly decremented and incremented in accordance with the payment scheme. Content Accounting Web-Service140bcan also communicate using Java Data Base Connectivity (JDBC) with DRS180in order to directly access records of users of the digital content distribution system. As withFIG.1, User B and User C get permission for a transfer of digital content by communicating with DRM Controller120. DRM Controller120communicates with Content Accounting Web Service140band Content Registry Web Server140a. In the case of the former, DRM Controller120sends information about the transfer so as to enable proper incrementing and decrementing of user accounts. For example, a transfer of digital content from User B to User C could result in a decrementing of the account of User C as well as an incrementing of the accounts of User A and User B. User A, as the owner of the digital content, is likely to receive the majority of the payment made by User C, but User B might also receive a small payment as a reward for being the one distributing content on behalf of User/Owner A. FIG.3depicts a few of the graphical user interface (GUI) screens shown by the DRM Controller120to users of the system. Interface Screen310is the P2P transfer control screen. Interface screen320is the interface seen by the receiving peer or user, such as User C in the example transaction inFIGS.1and2. Interface Screen330is the interface seen by the sending peer/user, such as User B. The flow of the content transfer process between User B and User C is shown inFIG.4. User B and User C have previously registered with DRM controller120and have by some arbitrary method decided that they wish to exchange a piece of digital content, X, at step400ofFIG.4. User C requests a copy of digital content X from User B at step405/410. User B is willing to accept the request and so sends an acknowledgement back to User C at step415. Both User B and User C register their interest in the digital content X with the DRM Controller120at steps420and425respectively. Note that in the general case there may be more than one sender (i.e. equivalent to A) for a given reception. Digital content X may be any type of digital information including but not limited to digital music, movies, books, magazines, computer software, audiobooks, etc. At step430the DRM Controller120performs a set of arbitrary tests against the transfer request.
For example, the DRM Controller120may be designed to query whether User C has sufficient funds. Alternatively, the DRM Controller may query whether User B legitimately has a copy of digital content X, or whether it is a time period in which User A is allowed to distribute content. Any number of arbitrary tests can be generated. Assuming these tests are successful, DRM Controller120sends an acknowledge (ACK) message back to User C at step435and/or an acknowledge (ACK) message with an encryption key E to User B at step440. This encryption key E is taken from a table of encryption key/hash pairs which have been provided to the DRM Controller by an external authority. For example, the encryption key/hash pairs may be provided by User A, the owner or licensed distributor of digital content X. User B encrypts the content using the key provided by the DRM Controller120. User B also performs a hash function (preferably MD5) over the encrypted digital content and returns this hash to the DRM Controller120at an optional step not shown inFIG.4. If the hash matches that in the database of the DRM Controller, then the DRM Controller instructs User A and User B that the transfer may proceed at an additional optional step not depicted inFIG.4. User B then transfers the encrypted content to User C by arbitrary means that are well known in the art at step445. Once the content transfer has completed, User C ensures that the received content has been physically written to non-volatile storage (to account for crashes) in a step not shown inFIG.4. User C then calculates a hash over the encrypted form of the content E(X) and returns this hash value to the DRM Controller120at step450. Because the encryption key E is not known ahead of time, User C cannot know the value of the hash a priori and can only calculate it by performing the Encryption/Hash Calculation steps. On checking the returned hash value against the hash from the table, the DRM Controller120knows that User C does indeed have the digital content X and that the digital content is in good condition. If this value matches the value provided by the content owner User A and stored by the DRM Controller, then a transfer of valid content has been successful and the DRM Controller updates whatever central records are appropriate at step455, while also returning an acknowledge (ACK) message with a decrypt key to User C to allow User C to decrypt the digital content X. A record of the transfer is kept for a period of time such that if User C crashes in the period from obtaining the complete content to receiving the decrypt key and decrypting the content, then they could request said key again without incurring additional charges. It will be noted that the DRM Controller120never needed to ‘see’ or possess an actual copy of the digital content. DRM Controller120only requires a set of encrypt key/hash pairs. If these pairs are generated by an external responsible authority, then the organization running the DRM Controller need never see or have knowledge of what the digital content X is. In an extension to the invention, if the key/hash pairs are consumed this would serve as a form of audit and tracking for the content rights holder and would also prevent possible attacks based on the re-use of key/hash pairs. By “consumed” it is meant that the DRM server would use a key/hash pair for one and only one transaction and would never re-use the pair for subsequent transactions.
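By way of illustration only, the encrypt/hash exchange just described could be sketched as follows; a toy XOR cipher stands in for a real symmetric cipher, the helper names are hypothetical, and MD5 is used only because the description names it as the preferred hash. The sketch is not a statement of the actual implementation.

import hashlib
import secrets

def toy_encrypt(key: bytes, data: bytes) -> bytes:
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

toy_decrypt = toy_encrypt  # the XOR stand-in is its own inverse

# 1. An external authority (e.g., User A) pre-computes a single-use key/hash pair;
#    the DRM Controller holds only the pair, never the content itself.
content_x = b"digital content X"
key_e = secrets.token_bytes(16)
pair = {"key": key_e, "hash": hashlib.md5(toy_encrypt(key_e, content_x)).hexdigest()}

# 2. The Controller sends key E to the sender (User B), who encrypts the content
#    and transfers the ciphertext to the receiver (User C), roughly steps 440 and 445.
encrypted = toy_encrypt(pair["key"], content_x)

# 3. User C hashes the received ciphertext and reports the hash value (step 450).
reported_hash = hashlib.md5(encrypted).hexdigest()

# 4. The Controller checks the reported hash against the stored one; on a match it
#    releases the decrypt key (step 455) and marks the pair as consumed.
if reported_hash == pair["hash"]:
    assert toy_decrypt(pair["key"], encrypted) == content_x
    pair["consumed"] = True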
Furthermore, the external repository could supply the key/hash pairs to the DRM server on demand, when users have committed to a content transfer. FIG.5depicts an example of digital content that is being transferred from one user to another. Field510contains the filenames of the digital content to be transferred. In this example, the digital content is MP3-encoded music files. Field520contains the encrypt and/or decrypt keys and field530contains the related MD5 checksum hash. One line from the file set forth inFIG.5is all that is needed for the DRM Controller120to be able to validate a specific transfer. FIGS.6A-Edepict a set of graphical user interface (GUI) screens used by the DRM Self-Service Web Server100in order to gather information from the owner of digital content. Screen610ofFIG.6Ais a user login screen for such a server. Screen620ofFIG.6Bprovides the owner/user with the ability to select the viewing of account balances, billing activity, media, and to “top-up” a pre-pay account balance. Screen630ofFIG.6Cprovides information on the account balance. Screen640ofFIG.6Denables the user to view the digital content that he or she has transferred from another source. Screen650ofFIG.6Eprovides an interface for adding money to a pre-pay wallet for the future purchase of digital content. The above description has been presented only to illustrate and describe the invention. It is not intended to be exhaustive or to limit the invention to any precise form disclosed. Many modifications and variations are possible in light of the above teaching. The applications described were chosen and described in order to best explain the principles of the invention and its practical application to enable others skilled in the art to best utilize the invention in various applications and with various modifications as are suited to the particular use contemplated. | 12,387
11943207 | DETAILED DESCRIPTION The following embodiments generally relate to data security and secure data management functions (SDMF) in a distributed edge computing environment. Security schemes allow client-server applications to communicate over a network connection while preventing unauthorized or malicious entities from eavesdropping on the communication. Transport layer security (TLS) and its predecessor, secure socket layer (SSL), are two such security schemes. TLS/SSL include cryptographic protocols that use asymmetric cryptography to establish a shared session key between the client and the server. This key is subsequently used as a symmetric key for securely transmitting messages between the client and the server. Adding support for TLS/SSL to an application such as a web server or video streaming service adds significant overhead in terms of memory bandwidth. In some cases, memory bandwidth can become a bottleneck, limiting the throughput capacity that can be served by a platform. This is because encrypting the data requires additional “touches” of the data in memory, increasing memory bandwidth, and potentially polluting the cache. Example embodiments provide SDMF including configuring TLS/SSL in a fashion that uses only one touch to encrypt the data. Example embodiments can be implemented in systems similar to those shown in any of the systems described below in reference toFIGS.1-7B. Additional description of SDMF in connection with an edge architecture and edge computing devices is provided hereinbelow in connection with at leastFIG.8-FIG.11. As used herein, the term “one-touch inline processing” (e.g., cryptographic data processing) refers to data processing where the data is retrieved from storage and is processed (e.g., encrypted) inline, on the way to memory (e.g., before storage in a memory device such as a DRAM). This is distinguished from conventional encryption techniques where the retrieved data is initially stored in memory, retrieved for encryption, then stored back as encrypted data in memory, and then retrieved again for communication to a requesting device. FIG.1is a block diagram100showing an overview of a configuration for edge computing, which includes a layer of processing referred to in many of the following examples as an “edge cloud”. As shown, the edge cloud110is co-located at an edge location, such as an access point or base station140, a local processing hub150, or a central office120, and thus may include multiple entities, devices, and equipment instances. The edge cloud110is located much closer to the endpoint (consumer and producer) data sources160(e.g., autonomous vehicles161, user equipment162, business and industrial equipment163, video capture devices164, drones165, smart cities and building devices166, sensors and IoT devices167, etc.) than the cloud data center130. Compute, memory, and storage resources which are offered at the edges in the edge cloud110are critical to providing ultra-low latency response times for services and functions used by the endpoint data sources160as well as reduce network backhaul traffic from the edge cloud110toward cloud data center130thus improving energy consumption and overall network usages among other benefits. Compute, memory, and storage are scarce resources, and generally decrease depending on the edge location (e.g., fewer processing resources being available at consumer endpoint devices, than at a base station, than at a central office). 
However, the closer that the edge location is to the endpoint (e.g., user equipment (UE)), the more that space and power are often constrained. Thus, edge computing attempts to reduce the number of resources needed for network services, through the distribution of more resources which are located closer both geographically and in network access time. In this manner, edge computing attempts to bring the compute resources to the workload data where appropriate, or, bring the workload data to the compute resources. The following describes aspects of an edge cloud architecture that covers multiple potential deployments and addresses restrictions that some network operators or service providers may have in their infrastructures. These include a variety of configurations based on the edge location (because edges at a base station level, for instance, may have more constrained performance and capabilities in a multi-tenant scenario); configurations based on the type of compute, memory, storage, fabric, acceleration, or like resources available to edge locations, tiers of locations, or groups of locations; the service, security, and management and orchestration capabilities; and related objectives to achieve usability and performance of end services. These deployments may accomplish processing in network layers that may be considered as “near edge”, “close edge”, “local edge”, “middle edge”, or “far edge” layers, depending on latency, distance, and timing characteristics. Edge computing is a developing paradigm where computing is performed at or closer to the “edge” of a network, typically through the use of a compute platform (e.g., x86 or ARM compute hardware architecture) implemented at base stations, gateways, network routers, or other devices which are much closer to endpoint devices producing and consuming the data. For example, edge gateway servers may be equipped with pools of memory and storage resources to perform computation in real-time for low latency use cases (e.g., autonomous driving or video surveillance) for connected client devices. Or as an example, base stations may be augmented with compute and acceleration resources to directly process service workloads for the connected user equipment, without further communicating data via backhaul networks. Or as another example, central office network management hardware may be replaced with standardized compute hardware that performs virtualized network functions and offers compute resources for the execution of services and consumer functions for connected devices. Within edge computing networks, there may be scenarios in which the compute resource will be “moved” to the data, as well as scenarios in which the data will be “moved” to the compute resource. As a further example, base station compute, acceleration and network resources can provide services to scale to workload demands on an as-needed basis by activating dormant capacity (subscription, capacity-on-demand) to manage corner cases, emergencies or to provide longevity for deployed resources over a significantly longer implemented lifecycle. In some aspects, the edge cloud110and the cloud data center130can be configured with secure data management functions (SDMF)111.
For example, network management entities within the edge cloud110and the cloud data center130can be configured with a secure data manager performing the SDMF to implement security schemes in data transfer between nodes, allowing client-server applications to communicate over a network connection while preventing unauthorized or malicious entities from eavesdropping on the communication. In some embodiments, the SDMF include performing an initial handshake with a client device via a secure channel (e.g., using a secure protocol such as a TLS/SSL secure channel) to negotiate a shared encryption key (SEK). The initial handshake is performed in response to a data request from the client device (e.g., in connection with accessing a Web server, a data streaming service, etc.). The SDMF further include configuring a cryptographic engine with a read address (indicating where the requested data can be retrieved from), the SEK (allowing the cryptographic engine to encrypt the retrieved data), and a write address (indicating where the encrypted data can be stored). In some aspects, the SDMF further include establishing a record template (RT), such as a TLS RT, which is preconfigured with sender and destination information (e.g., sender and destination IP addresses, etc.) as well as a pointer to the stored encrypted data. The TLS RT is provided to network interface circuitry (such as a network interface card, or NIC) so that the NIC can retrieve the stored encrypted data, configure a data packet with a header using the TLS RT (and the payload including the encrypted data), and communicate the packet to the client device. In this regard, the network management entity implementing the SDMF performs secure transfer of data between a requesting client device and a data source with one-touch inline cryptography. Additional functionalities and techniques associated with SDMF and a secure data manager performing SDMF are discussed in connection withFIG.8-FIG.11. FIG.2illustrates operational layers among endpoints, an edge cloud, and cloud computing environments. Specifically,FIG.2depicts examples of computational use cases205, utilizing the edge cloud110among multiple illustrative layers of network computing. The layers begin at an endpoint (devices and things) layer200, which accesses the edge cloud110to conduct data creation, analysis, and data consumption activities. The edge cloud110may span multiple network layers, such as an edge devices layer210having gateways, on-premise servers, or network equipment (nodes215) located in physically proximate edge systems; a network access layer220, encompassing base stations, radio processing units, network hubs, regional data centers (DC), or local network equipment (equipment225); and any equipment, devices, or nodes located therebetween (in layer212, not illustrated in detail). The network communications within the edge cloud110and among the various layers may occur via any number of wired or wireless mediums, including via connectivity architectures and technologies not depicted. Any of the communication use cases205can be configured based on secure data management functions111, which may be performed by a secure data manager as discussed in connection withFIG.8-FIG.11. 
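By way of illustration only, the SDMF flow outlined above (a handshake to negotiate the SEK, one-touch inline encryption by a cryptographic engine, and hand-off of a preconfigured record template to the network interface circuitry) could be sketched as follows; the names CryptoEngine and RecordTemplate, the stand-in cipher, and the addresses are hypothetical and do not reflect an actual hardware interface.

from dataclasses import dataclass

def read_from_storage(addr: str) -> bytes:
    return b"object stored at " + addr.encode()        # stand-in for a storage read

def encrypt(sek: bytes, data: bytes) -> bytes:          # stand-in for a real cipher
    return bytes(b ^ sek[i % len(sek)] for i, b in enumerate(data))

@dataclass
class RecordTemplate:          # preconfigured TLS-style record header
    src_ip: str
    dst_ip: str
    payload_ptr: int           # pointer to the already-encrypted payload in memory

class CryptoEngine:
    """Encrypts data inline, on its way from storage to memory (a single touch)."""
    def __init__(self, memory: dict):
        self.memory = memory
    def configure_and_run(self, read_addr: str, sek: bytes, write_addr: int) -> int:
        data = read_from_storage(read_addr)              # retrieve the requested data once
        self.memory[write_addr] = encrypt(sek, data)     # store it already encrypted
        return write_addr

def serve_request(client_ip: str, read_addr: str) -> bytes:
    sek = b"sek-negotiated-in-handshake"                 # 1. handshake yields the SEK
    memory: dict = {}
    ptr = CryptoEngine(memory).configure_and_run(read_addr, sek, write_addr=0)  # 2. one touch
    rt = RecordTemplate(src_ip="10.0.0.1", dst_ip=client_ip, payload_ptr=ptr)
    # 3. The network interface circuitry builds the outgoing record from the template
    #    and the encrypted payload, without touching the plaintext again.
    return f"{rt.src_ip}->{rt.dst_ip}|".encode() + memory[rt.payload_ptr]

packet = serve_request("192.0.2.7", read_addr="/store/video-segment-1")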
Examples of latency, resulting from network communication distance and processing time constraints, may range from less than a millisecond (ms) when among the endpoint layer200, under 5 ms at the edge devices layer210, to even between 10 to 40 ms when communicating with nodes at the network access layer220. Beyond the edge cloud110are core network230and cloud data center240layers, each with increasing latency (e.g., between 50-60 ms at the core network layer230, to 100 or more ms at the cloud data center layer). As a result, operations at a core network data center235or a cloud data center245, with latencies of at least 50 to 100 ms or more, will not be able to accomplish many time-critical functions of the use cases205. Each of these latency values are provided for purposes of illustration and contrast; it will be understood that the use of other access network mediums and technologies may further reduce the latencies. In some examples, respective portions of the network may be categorized as “close edge”, “local edge”, “near edge”, “middle edge”, or “far edge” layers, relative to a network source and destination. For instance, from the perspective of the core network data center235or a cloud data center245, a central office or content data network may be considered as being located within a “near edge” layer (“near” to the cloud, having high latency values when communicating with the devices and endpoints of the use cases205), whereas an access point, base station, on-premise server, or network gateway may be considered as located within a “far edge” layer (“far” from the cloud, having low latency values when communicating with the devices and endpoints of the use cases205). It will be understood that other categorizations of a particular network layer as constituting a “close”, “local”, “near”, “middle”, or “far” edge may be based on latency, distance, a number of network hops, or other measurable characteristics, as measured from a source in any of the network layers200-240. The various use cases205may access resources under usage pressure from incoming streams, due to multiple services utilizing the edge cloud. To achieve results with low latency, the services executed within the edge cloud110balance varying requirements in terms of (a) Priority (throughput or latency; also referred to as service level objective or SLO) and Quality of Service (QoS) (e.g., traffic for an autonomous car may have higher priority than a temperature sensor in terms of response time requirement; or, a performance sensitivity/bottleneck may exist at a compute/accelerator, memory, storage, or network resource, depending on the application); (b) Reliability and Resiliency (e.g., some input streams need to be acted upon and the traffic routed with mission-critical reliability, whereas some other input streams may tolerate an occasional failure, depending on the application); and (c) Physical constraints (e.g., power, cooling, and form-factor). The end-to-end service view for these use cases involves the concept of a service-flow and is associated with a transaction. The transaction details the overall service requirement for the entity consuming the service, as well as the associated services for the resources, workloads, workflows, and business functional and business level requirements. The services executed with the “terms” described may be managed at each layer in a way to assure real-time, and runtime contractual compliance for the transaction during the lifecycle of the service. 
When a component in the transaction is missing its agreed to SLA, the system as a whole (components in the transaction) may provide the ability to (1) understand the impact of the SLA violation, and (2) augment other components in the system to resume overall transaction SLA, and (3) implement steps to remediate. Thus, with these variations and service features in mind, edge computing within the edge cloud110may provide the ability to serve and respond to multiple applications of the use cases205(e.g., object tracking, video surveillance, connected cars, etc.) in real-time or near real-time, and meet ultra-low latency requirements for these multiple applications. These advantages enable a whole new class of applications (Virtual Network Functions (VNFs), Function as a Service (FaaS), Edge as a Service (EaaS), standard processes, etc.), which cannot leverage conventional cloud computing due to latency or other limitations. However, with the advantages of edge computing comes the following caveats. The devices located at the edge are often resource-constrained and therefore there is pressure on the usage of edge resources. Typically, this is addressed through the pooling of memory and storage resources for use by multiple users (tenants) and devices. The edge may be power and cooling constrained and therefore the power usage needs to be accounted for by the applications that are consuming the most power. There may be inherent power-performance tradeoffs in these pooled memory resources, as many of them are likely to use emerging memory technologies, where more power requires greater memory bandwidth. Likewise, improved security of hardware and root of trust trusted functions are also required because edge locations may be unmanned and may even need permission access (e.g., when housed in a third-party location). Such issues are magnified in the edge cloud110in a multi-tenant, multi-owner, or multi-access setting, where services and applications are requested by many users, especially as network usage dynamically fluctuates and the composition of the multiple stakeholders, use cases, and services changes. At a more generic level, an edge computing system may be described to encompass any number of deployments at the previously discussed layers operating in the edge cloud110(network layers200-240), which provide coordination from the client and distributed computing devices. One or more edge gateway nodes, one or more edge aggregation nodes, and one or more core data centers may be distributed across layers of the network to provide an implementation of the edge computing system by or on behalf of a telecommunication service provider (“telco”, or “TSP”), internet-of-things service provider, the cloud service provider (CSP), enterprise entity, or any other number of entities. Various implementations and configurations of the edge computing system may be provided dynamically, such as when orchestrated to meet service objectives. Consistent with the examples provided herein, a client compute node may be embodied as any type of endpoint component, device, appliance, or another thing capable of communicating as a producer or consumer of data. 
Further, the label “node” or “device” as used in the edge computing system does not necessarily mean that such node or device operates in a client or agent/minion/follower role; rather, any of the nodes or devices in the edge computing system refer to individual entities, nodes, or subsystems which include discrete or connected hardware or software configurations to facilitate or use the edge cloud110. As such, the edge cloud110is formed from network components and functional features operated by and within edge gateway nodes, edge aggregation nodes, or other edge compute nodes among network layers210-230. The edge cloud110thus may be embodied as any type of network that provides edge computing and/or storage resources which are proximately located to radio access network (RAN) capable endpoint devices (e.g., mobile computing devices, IoT devices, smart devices, etc.), which are discussed herein. In other words, the edge cloud110may be envisioned as an “edge” which connects the endpoint devices and traditional network access points that serve as an ingress point into service provider core networks, including mobile carrier networks (e.g., Global System for Mobile Communications (GSM) networks. Long-Term Evolution (LTE) networks, 5G/6G networks, etc.), while also providing storage and/or compute capabilities. Other types and forms of network access (e.g., Wi-Fi, long-range wireless, wired networks including optical networks) may also be utilized in place of or in combination with such 3GPP carrier networks. The network components of the edge cloud110may be servers, multi-tenant servers, appliance computing devices, and/or any other type of computing device. For example, the edge cloud110may include an appliance computing device that is a self-contained electronic device including a housing, a chassis, a case, or a shell. In some circumstances, the housing may be dimensioned for portability such that it can be carried by a human and/or shipped. Example housings may include materials that form one or more exterior surfaces that partially or fully protect the contents of the appliance, in which protection may include weather protection, hazardous environment protection (e.g., EMI, vibration, extreme temperatures), and/or enable submergibility. Example housings may include power circuitry to provide power for stationary and/or portable implementations, such as AC power inputs, DC power inputs, AC/DC or DC/AC converter(s), power regulators, transformers, charging circuitry, batteries, wired inputs and/or wireless power inputs. Example housings and/or surfaces thereof may include or connect to mounting hardware to enable attachment to structures such as buildings, telecommunication structures (e.g., poles, antenna structures, etc.) and/or racks (e.g., server racks, blade mounts, etc.). Example housings and/or surfaces thereof may support one or more sensors (e.g., temperature sensors, vibration sensors, light sensors, acoustic sensors, capacitive sensors, proximity sensors, etc.). One or more such sensors may be contained in, carried by, or otherwise embedded in the surface and/or mounted to the surface of the appliance. Example housings and/or surfaces thereof may support mechanical connectivity, such as propulsion hardware (e.g., wheels, propellers, etc.) and/or articulating hardware (e.g., robot arms, pivotable appendages, etc.). In some circumstances, the sensors may include any type of input devices such as user interface hardware (e.g., buttons, switches, dials, sliders, etc.). 
In some circumstances, example housings include output devices contained in, carried by, embedded therein and/or attached thereto. Output devices may include displays, touchscreens, lights, LEDs, speakers, I/O ports (e.g., USB), etc. In some circumstances, edge devices are devices presented in the network for a specific purpose (e.g., a traffic light), but may have processing and/or other capacities that may be utilized for other purposes. Such edge devices may be independent of other networked devices and may be provided with a housing having a form factor suitable for its primary purpose; yet be available for other compute tasks that do not interfere with its primary task. Edge devices include Internet of Things devices. The appliance computing device may include hardware and software components to manage local issues such as device temperature, vibration, resource utilization, updates, power issues, physical and network security, etc. Example hardware for implementing an appliance computing device is described in conjunction withFIG.7B. The edge cloud110may also include one or more servers and/or one or more multi-tenant servers. Such a server may include an operating system and a virtual computing environment. A virtual computing environment may include a hypervisor managing (spawning, deploying, destroying, etc.) one or more virtual machines, one or more containers, etc. Such virtual computing environments provide an execution environment in which one or more applications and/or other software, code, or scripts may execute while being isolated from one or more other applications, software, code, or scripts. InFIG.3, various client endpoints310(in the form of mobile devices, computers, autonomous vehicles, business computing equipment, industrial processing equipment) exchange requests and responses that are specific to the type of endpoint network aggregation. For instance, client endpoints310may obtain network access via a wired broadband network, by exchanging requests and responses322through an on-premise network system332. Some client endpoints310, such as mobile computing devices, may obtain network access via a wireless broadband network, by exchanging requests and responses324through an access point (e.g., cellular network tower)334. Some client endpoints310, such as autonomous vehicles may obtain network access for requests and responses326via a wireless vehicular network through a street-located network system336. However, regardless of the type of network access, the TSP may deploy aggregation points342,344within the edge cloud110to aggregate traffic and requests. Thus, within the edge cloud110, the TSP may deploy various compute and storage resources, such as at edge aggregation nodes340, to provide requested content. The edge aggregation nodes340and other systems of the edge cloud110are connected to a cloud or data center360, which uses a backhaul network350to fulfill higher-latency requests from a cloud/data center for websites, applications, database servers, etc. Additional or consolidated instances of the edge aggregation nodes340and the aggregation points342,344, including those deployed on a single server framework, may also be present within the edge cloud110or other areas of the TSP infrastructure. In an example embodiment, the edge cloud110and the cloud or data center360utilize secure data management functions111in connection with disclosed techniques. 
The secure data management functions may be performed by at least one secure data manager as discussed in connection withFIG.8-FIG.11. FIG.4illustrates deployment and orchestration for virtual edge configurations across an edge computing system operated among multiple edge nodes and multiple tenants. Specifically,FIG.4depicts the coordination of a first edge node422and a second edge node424in an edge computing system400, to fulfill requests and responses for various client endpoints410(e.g., smart cities/building systems, mobile devices, computing devices, business/logistics systems, industrial systems, etc.), which access various virtual edge instances. Here, the virtual edge instances432,434(or virtual edges) provide edge compute capabilities and processing in an edge cloud, with access to a cloud/data center440for higher-latency requests for websites, applications, database servers, etc. However, the edge cloud enables coordination of processing among multiple edge nodes for multiple tenants or entities. In the example ofFIG.4, these virtual edge instances include: a first virtual edge432, offered to a first tenant (Tenant 1), which offers a first combination of edge storage, computing, and services; and a second virtual edge434, offering a second combination of edge storage, computing, and services. The virtual edge instances432,434are distributed among the edge nodes422,424, and may include scenarios in which a request and response are fulfilled from the same or different edge nodes. The configuration of the edge nodes422,424to operate in a distributed yet coordinated fashion occurs based on edge provisioning functions450. The functionality of the edge nodes422,424to provide coordinated operation for applications and services, among multiple tenants, occurs based on orchestration functions460. In an example embodiment, the edge provisioning functions450and the orchestration functions460can utilize secure data management functions111in connection with disclosed techniques. The secure data management functions111may be performed by a secure data manager as discussed in connection withFIG.8-FIG.11. It should be understood that some of the devices in the various client endpoints410are multi-tenant devices where Tenant 1 may function within a tenant1 ‘slice’ while a Tenant 2 may function within a tenant2 slice (and, in further examples, additional or sub-tenants may exist; and each tenant may even be specifically entitled and transactionally tied to a specific set of features all the way down to specific hardware features). A trusted multi-tenant device may further contain a tenant-specific cryptographic key such that the combination of key and slice may be considered a “root of trust” (RoT) or tenant-specific RoT. An RoT may further be dynamically composed using a DICE (Device Identity Composition Engine) architecture such that a single DICE hardware building block may be used to construct layered trusted computing base contexts for layering of device capabilities (such as a Field Programmable Gate Array (FPGA)). The RoT may further be used for a trusted computing context to enable a “fan-out” that is useful for supporting multi-tenancy. Within a multi-tenant environment, the respective edge nodes422,424may operate as security feature enforcement points for local resources allocated to multiple tenants per node.
Additionally, tenant runtime and application execution (e.g., in virtual edge instances432,434) may serve as an enforcement point for a security feature that creates a virtual edge abstraction of resources spanning potentially multiple physical hosting platforms. Finally, the orchestration functions460at an orchestration entity may operate as a security feature enforcement point for marshaling resources along tenant boundaries. Edge computing nodes may partition resources (memory, central processing unit (CPU), graphics processing unit (GPU), interrupt controller, input/output (I/O) controller, memory controller, bus controller, etc.) where respective partitionings may contain an RoT capability and where fan-out and layering according to a DICE model may further be applied to Edge Nodes. Cloud computing nodes consisting of containers, FaaS engines, Servlets, servers, or other computation abstraction may be partitioned according to a DICE layering and fan-out structure to support an RoT context for each. Accordingly, the respective RoTs spanning devices in410,422, and440may coordinate the establishment of a distributed trusted computing base (DTCB) such that a tenant-specific virtual trusted secure channel linking all elements end to end can be established. Further, it will be understood that a container may have data or workload-specific keys protecting its content from a previous edge node. As part of the migration of a container, a pod controller at a source edge node may obtain a migration key from a target edge node pod controller where the migration key is used to wrap the container-specific keys. When the container/pod is migrated to the target edge node, the unwrapping key is exposed to the pod controller that then decrypts the wrapped keys. The keys may now be used to perform operations on container specific data. The migration functions may be gated by properly attested edge nodes and pod managers (as described above). In further examples, an edge computing system is extended to provide for orchestration of multiple applications through the use of containers (a contained, deployable unit of software that provides code and needed dependencies) in a multi-owner, multi-tenant environment. A multi-tenant orchestrator may be used to perform key management, trust anchor management, and other security functions related to the provisioning and lifecycle of the trusted ‘slice’ concept inFIG.4. For instance, an edge computing system may be configured to fulfill requests and responses for various client endpoints from multiple virtual edge instances (and, from a cloud or remote data center). The use of these virtual edge instances may support multiple tenants and multiple applications (e.g., augmented reality (AR)/virtual reality (VR), enterprise applications, content delivery, gaming, compute offload) simultaneously. Further, there may be multiple types of applications within the virtual edge instances (e.g., normal applications; latency-sensitive applications; latency-critical applications, user plane applications; networking applications; etc.). The virtual edge instances may also be spanned across systems of multiple owners at different geographic locations (or, respective computing systems and resources which are co-owned or co-managed by multiple owners). For instance, each edge node422,424may implement the use of containers, such as with the use of a container “pod”426,428providing a group of one or more containers. 
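By way of illustration only, the container migration key-wrapping flow mentioned above (the source pod controller obtaining a migration key from the target controller and wrapping the container-specific keys for transit) could be sketched as follows; the wrap/unwrap helpers, a simple XOR, stand in for a real key-wrapping scheme, and the controller interface is hypothetical.

import secrets

def wrap(wrapping_key: bytes, key: bytes) -> bytes:     # XOR stands in for real key wrapping
    return bytes(a ^ b for a, b in zip(key, wrapping_key))

unwrap = wrap  # the XOR stand-in is its own inverse

class PodController:
    def __init__(self):
        self.migration_key = secrets.token_bytes(32)     # per-migration wrapping key

    def issue_migration_key(self) -> bytes:              # requested by the source controller
        return self.migration_key

    def receive_container(self, wrapped_keys: dict) -> dict:
        # On arrival, the target controller unwraps the container-specific keys so
        # operations on container-specific data can resume.
        return {name: unwrap(self.migration_key, wk) for name, wk in wrapped_keys.items()}

# Source side: obtain the migration key from the target and wrap the container keys.
container_keys = {"data": secrets.token_bytes(32), "workload": secrets.token_bytes(32)}
target = PodController()
migration_key = target.issue_migration_key()
wrapped = {name: wrap(migration_key, key) for name, key in container_keys.items()}

# Target side: after migration, unwrapping recovers the original container keys.
assert target.receive_container(wrapped) == container_keys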
In a setting that uses one or more container pods, a pod controller or orchestrator is responsible for local control and orchestration of the containers in the pod. Various edge node resources (e.g., storage, compute, services, depicted with hexagons) provided for the respective edge slices of virtual edges432,434are partitioned according to the needs of each container. With the use of container pods, a pod controller oversees the partitioning and allocation of containers and resources. The pod controller receives instructions from an orchestrator (e.g., orchestrator460) that instructs the controller on how best to partition physical resources and for what duration, such as by receiving key performance indicator (KPI) targets based on SLA contracts. The pod controller determines which container requires which resources and for how long to complete the workload and satisfy the SLA. The pod controller also manages container lifecycle operations such as: creating the container, provisioning it with resources and applications, coordinating intermediate results between multiple containers working on a distributed application together, dismantling containers when workload completes, and the like. Additionally, a pod controller may serve a security role that prevents the assignment of resources until the right tenant authenticates or prevents provisioning of data or a workload to a container until an attestation result is satisfied. Also, with the use of container pods, tenant boundaries can still exist but in the context of each pod of containers. If each tenant-specific pod has a tenant-specific pod controller, there will be a shared pod controller that consolidates resource allocation requests to avoid typical resource starvation situations. Further controls may be provided to ensure the attestation and trustworthiness of the pod and pod controller. For instance, the orchestrator460may provision an attestation verification policy to local pod controllers that perform attestation verification. If an attestation satisfies a policy for a first tenant pod controller but not a second tenant pod controller, then the second pod could be migrated to a different edge node that does satisfy it. Alternatively, the first pod may be allowed to execute and a different shared pod controller is installed and invoked before the second pod executing. FIG.5illustrates additional compute arrangements deploying containers in an edge computing system. As a simplified example, system arrangements510,520depict settings in which a pod controller (e.g., container managers511,521, and container orchestrator531) is adapted to launch containerized pods, functions, and functions-as-a-service instances through execution via compute nodes (515in arrangement510) or to separately execute containerized virtualized network functions through execution via compute nodes (523in arrangement520). This arrangement is adapted for use of multiple tenants in system arrangement530(using compute nodes537), where containerized pods (e.g., pods512), functions (e.g., functions513, VNFs522,536), and functions-as-a-service instances (e.g., FaaS instance514) are launched within virtual machines (e.g., VMs534,535for tenants532,533) specific to respective tenants (aside from the execution of virtualized network functions). 
This arrangement is further adapted for use in system arrangement540, which provides containers542,543, or execution of the various functions and applications on compute nodes544, as coordinated by a container-based orchestration system541. The system arrangements depicted inFIG.5provide an architecture that treats VMs, Containers, and Functions equally in terms of application composition (and resulting applications are combinations of these three ingredients). Each ingredient may involve the use of one or more accelerator (FPGA, ASIC) components as a local backend. In this manner, applications can be split across multiple edge owners, coordinated by an orchestrator. In the context ofFIG.5, the pod controller/container manager, container orchestrator, and individual nodes may provide a security enforcement point. However, tenant isolation may be orchestrated where the resources allocated to a tenant are distinct from resources allocated to a second tenant, but edge owners cooperate to ensure resource allocations are not shared across tenant boundaries. Or, resource allocations could be isolated across tenant boundaries, as tenants could allow “use” via a subscription or transaction/contract basis. In these contexts, virtualization, containerization, enclaves, and hardware partitioning schemes may be used by edge owners to enforce tenancy. Other isolation environments may include bare metal (dedicated) equipment, virtual machines, containers, virtual machines on containers, or combinations thereof. In further examples, aspects of software-defined or controlled silicon hardware, and other configurable hardware, may integrate with the applications, functions, and services of an edge computing system. Software-defined silicon may be used to ensure the ability for some resource or hardware ingredient to fulfill a contract or service level agreement, based on the ingredient's ability to remediate a portion of itself or the workload (e.g., by an upgrade, reconfiguration, or provision of new features within the hardware configuration itself). It should be appreciated that the edge computing systems and arrangements discussed herein may be applicable in various solutions, services, and/or use cases involving mobility. As an example,FIG.6shows a simplified vehicle compute and communication use case involving mobile access to applications in an edge computing system600that implements an edge cloud110. In this use case, respective client compute nodes610may be embodied as in-vehicle compute systems (e.g., in-vehicle navigation and/or infotainment systems) located in corresponding vehicles that communicate with the edge gateway nodes620during traversal of a roadway. For instance, the edge gateway nodes620may be located in a roadside cabinet or other enclosure built into a structure having other, separate, mechanical utility, which may be placed along the roadway, at intersections of the roadway, or other locations near the roadway. As respective vehicles traverse along the roadway, the connection between a vehicle's client compute node610and a particular edge gateway device620may propagate to maintain a consistent connection and context for the client compute node610. Likewise, mobile edge nodes may aggregate at high priority services or according to the throughput or latency resolution requirements for the underlying service(s) (e.g., in the case of drones).
The respective edge gateway devices620include an amount of processing and storage capabilities and, as such, some processing and/or storage of data for the client compute nodes610may be performed on one or more of the edge gateway devices620. The edge gateway devices620may communicate with one or more edge resource nodes640, which are illustratively embodied as compute servers, appliances, or components located at or in a communication base station642(e.g., a base station of a cellular network). As discussed above, the respective edge resource nodes640include an amount of processing and storage capabilities, and, as such, some processing and/or storage of data for the client compute nodes610may be performed on the edge resource node640. For example, the processing of data that is less urgent or important may be performed by the edge resource node640, while the processing of data that is of a higher urgency or importance may be performed by the edge gateway devices620(depending on, for example, the capabilities of each component, or information in the request indicating urgency or importance). Based on data access, data location, or latency, work may continue on edge resource nodes when the processing priorities change during the processing activity. Likewise, configurable systems or hardware resources themselves can be activated (e.g., through a local orchestrator) to provide additional resources to meet the new demand (e.g., adapt the compute resources to the workload data). The edge resource node(s)640also communicates with the core data center650, which may include compute servers, appliances, and/or other components located in a central location (e.g., a central office of a cellular communication network). The core data center650may provide a gateway to the global network cloud660(e.g., the Internet) for the edge cloud110operations formed by the edge resource node(s)640and the edge gateway devices620. Additionally, in some examples, the core data center650may include an amount of processing and storage capabilities and, as such, some processing and/or storage of data for the client compute devices may be performed on the core data center650(e.g., processing of low urgency or importance, or high complexity). The edge gateway nodes620or the edge resource nodes640may offer the use of stateful applications632and a geographic distributed database634. Although the applications632and database634are illustrated as being horizontally distributed at a layer of the edge cloud110, it will be understood that resources, services, or other components of the application may be vertically distributed throughout the edge cloud (including, part of the application executed at the client compute node610, other parts at the edge gateway nodes620or the edge resource nodes640, etc.). Additionally, as stated previously, there can be peer relationships at any level to meet service objectives and obligations. Further, the data for a specific client or application can move from edge to edge based on changing conditions (e.g., based on acceleration resource availability, following the car movement, etc.). For instance, based on the “rate of decay” of access, prediction can be made to identify the next owner to continue, or when the data or computational access will no longer be viable. These and other services may be utilized to complete the work that is needed to keep the transaction compliant and lossless. 
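By way of illustration only, the urgency- and latency-driven placement described above (more urgent processing handled at the edge gateway devices620, less urgent processing at the edge resource node640or deeper in the network) could be sketched as follows; the tier names and latency thresholds are hypothetical values chosen only to mirror the latency ranges discussed earlier.

from dataclasses import dataclass

@dataclass
class Workload:
    name: str
    latency_budget_ms: float    # how quickly a response is needed

TIERS = [                        # (tier, rough round-trip latency in milliseconds)
    ("edge_gateway_node_620", 5.0),
    ("edge_resource_node_640", 40.0),
    ("core_data_center_650", 60.0),
    ("global_network_cloud_660", 100.0),
]

def place(workload: Workload) -> str:
    """Pick the deepest (least resource-constrained) tier that still meets the budget."""
    candidates = [tier for tier, latency in TIERS if latency <= workload.latency_budget_ms]
    return candidates[-1] if candidates else TIERS[0][0]   # urgent work stays at the gateway

print(place(Workload("collision-avoidance", 5.0)))    # edge_gateway_node_620
print(place(Workload("fleet-analytics", 200.0)))      # global_network_cloud_660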
In further scenarios, a container636(or a pod of containers) may be flexibly migrated from an edge node620to other edge nodes (e.g.,620,640, etc.) such that the container with an application and workload does not need to be reconstituted, re-compiled, or re-interpreted for migration to work. However, in such settings, there may be some remedial or “swizzling” translation operations applied. For example, the physical hardware at node640may differ from that at edge gateway node620and, therefore, the hardware abstraction layer (HAL) that makes up the bottom edge of the container will be re-mapped to the physical layer of the target edge node. This may involve some form of late-binding technique, such as binary translation of the HAL from the container-native format to the physical hardware format, or may involve mapping interfaces and operations. A pod controller may be used to drive the interface mapping as part of the container lifecycle, which includes migration to/from different hardware environments. The scenarios encompassed byFIG.6may utilize various types of mobile edge nodes, such as an edge node hosted in a vehicle (car/truck/tram/train) or other mobile units, as the edge node will move to other geographic locations along the platform hosting it. With vehicle-to-vehicle communications, individual vehicles may even act as network edge nodes for other cars (e.g., to perform caching, reporting, data aggregation, etc.). Thus, it will be understood that the application components provided in various edge nodes may be distributed in static or mobile settings, including coordination between some functions or operations at individual endpoint devices or the edge gateway nodes620, some others at the edge resource node640, and others in the core data center650or global network cloud660. In an example embodiment, the edge cloud110utilizes secure data management functions111in connection with disclosed techniques. The secure data management functions may be performed by at least one secure data manager (e.g., as present within the edge resource node640, the edge gateway node620, and the core data center650), as discussed in connection withFIG.8-FIG.11. In further configurations, the edge computing system may implement FaaS computing capabilities through the use of respective executable applications and functions. In an example, a developer writes function code (e.g., “computer code” herein) representing one or more computer functions, and the function code is uploaded to a FaaS platform provided by, for example, an edge node or data center. A trigger such as, for example, a service use case or an edge processing event, initiates the execution of the function code with the FaaS platform. In an example of FaaS, a container is used to provide an environment in which function code (e.g., an application that may be provided by a third party) is executed. The container may be any isolated-execution entity such as a process, a Docker or Kubernetes container, a virtual machine, etc. Within the edge computing system, various datacenter, edge, and endpoint (including mobile) devices are used to “spin up” functions (e.g., activate and/or allocate function actions) that are scaled on demand. The function code gets executed on the physical infrastructure (e.g., edge computing node) device and underlying virtualized containers. Finally, the container is “spun down” (e.g., deactivated and/or deallocated) on the infrastructure in response to the execution being completed.
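By way of illustration only, the FaaS lifecycle described above (upload of function code, trigger-driven spin-up of a container, execution, and spin-down once execution completes) could be sketched as follows; the FaasPlatform class and its methods are hypothetical and do not represent an actual FaaS interface.

class FaasPlatform:
    def __init__(self):
        self.functions = {}                       # function name -> uploaded function code

    def upload(self, name, fn):
        self.functions[name] = fn                 # developer uploads the function code

    def trigger(self, name, event):
        container = {"function": name, "state": "running"}   # "spin up" an isolated container
        try:
            return self.functions[name](event)    # execute the function code on the node
        finally:
            container["state"] = "deallocated"    # "spin down" once execution completes

platform = FaasPlatform()
platform.upload("resize_image", lambda event: "resized " + event["object"])
print(platform.trigger("resize_image", {"object": "frame-42.jpg"}))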
Further aspects of FaaS may enable deployment of edge functions in a service fashion, including support of respective functions that support edge computing as a service (Edge-as-a-Service or “EaaS”). Additional features of FaaS may include: a granular billing component that enables customers (e.g., computer code developers) to pay only when their code gets executed; common data storage to store data for reuse by one or more functions; orchestration and management among individual functions; function execution management, parallelism, and consolidation; management of container and function memory spaces; coordination of acceleration resources available for functions; and distribution of functions between containers (including “warm” containers, already deployed or operating, versus “cold” which require initialization, deployment, or configuration). The edge computing system600can include or be in communication with an edge provisioning node644. The edge provisioning node644can distribute software such as the example computer-readable instructions782ofFIG.7B, to various receiving parties for implementing any of the methods described herein. The example edge provisioning node644may be implemented by any computer server, home server, content delivery network, virtual server, software distribution system, central facility, storage device, storage node, data facility, cloud service, etc., capable of storing and/or transmitting software instructions (e.g., code, scripts, executable binaries, containers, packages, compressed files, and/or derivatives thereof) to other computing devices. Component(s) of the example edge provisioning node644may be located in a cloud, in a local area network, in an edge network, in a wide area network, on the Internet, and/or any other location communicatively coupled with the receiving party(ies). The receiving parties may be customers, clients, associates, users, etc. of the entity owning and/or operating the edge provisioning node644. For example, the entity that owns and/or operates the edge provisioning node644may be a developer, a seller, and/or a licensor (or a customer and/or consumer thereof) of software instructions such as the example computer-readable instructions782ofFIG.7B. The receiving parties may be consumers, service providers, users, retailers, OEMs, etc., who purchase and/or license the software instructions for use and/or re-sale and/or sub-licensing. In an example, the edge provisioning node644includes one or more servers and one or more storage devices. The storage devices host computer-readable instructions such as the example computer-readable instructions782ofFIG.7B, as described below. Similarly to edge gateway devices620described above, the one or more servers of the edge provisioning node644are in communication with a base station642or other network communication entity. In some examples, the one or more servers are responsive to requests to transmit the software instructions to a requesting party as part of a commercial transaction. Payment for the delivery, sale, and/or license of the software instructions may be handled by the one or more servers of the software distribution platform and/or via a third party payment entity. The servers enable purchasers and/or licensors to download the computer-readable instructions782from the edge provisioning node644. 
For example, the software instructions, which may correspond to the example computer-readable instructions782ofFIG.7Bmay be downloaded to the example processor platform/s, which is to execute the computer-readable instructions782to implement the methods described herein. In some examples, the processor platform(s) that execute the computer-readable instructions782can be physically located in different geographic locations, legal jurisdictions, etc. In some examples, one or more servers of the edge provisioning node644periodically offer, transmit, and/or force updates to the software instructions (e.g., the example computer-readable instructions782ofFIG.7B) to ensure improvements, patches, updates, etc. are distributed and applied to the software instructions implemented at the end-user devices. In some examples, different components of the computer-readable instructions782can be distributed from different sources and/or to different processor platforms; for example, different libraries, plug-ins, components, and other types of compute modules, whether compiled or interpreted, can be distributed from different sources and/or to different processor platforms. For example, a portion of the software instructions (e.g., a script that is not, in itself, executable) may be distributed from a first source while an interpreter (capable of executing the script) may be distributed from a second source. In further examples, any of the compute nodes or devices discussed with reference to the present edge computing systems and environment may be fulfilled based on the components depicted inFIGS.7A and7B. Respective edge compute nodes may be embodied as a type of device, appliance, computer, or other “thing” capable of communicating with other edges, networking, or endpoint components. For example, an edge compute device may be embodied as a personal computer, a server, a smartphone, a mobile compute device, a smart appliance, an in-vehicle compute system (e.g., a navigation system), a self-contained device having an outer case, shell, etc., or other device or system capable of performing the described functions. In the simplified example depicted inFIG.7A, an edge compute node700includes a compute engine (also referred to herein as “compute circuitry”)702, an input/output (I/O) subsystem708, data storage710, a communication circuitry subsystem712, and, optionally, one or more peripheral devices714. In other examples, respective compute devices may include other or additional components, such as those typically found in a computer (e.g., a display, peripheral devices, etc.). Additionally, in some examples, one or more of the illustrative components may be incorporated in, or otherwise form a portion of, another component. The compute node700may be embodied as any type of engine, device, or collection of devices capable of performing various compute functions. In some examples, the compute node700may be embodied as a single device such as an integrated circuit, an embedded system, a field-programmable gate array (FPGA), a system-on-a-chip (SOC), or other integrated system or device. In the illustrative example, the compute node700includes or is embodied as a processor704and a memory706. The processor704may be embodied as any type of processor capable of performing the functions described herein (e.g., executing an application). 
For example, the processor704may be embodied as a multi-core processor(s), a microcontroller, a processing unit, a specialized or special purpose processing unit, or other processor or processing/controlling circuit. In some examples, the processor704may be embodied as, include, or be coupled to an FPGA, an application-specific integrated circuit (ASIC), reconfigurable hardware or hardware circuitry, or other specialized hardware to facilitate the performance of the functions described herein. Also in some examples, the processor704may be embodied as a specialized x-processing unit (xPU) also known as a data processing unit (DPU), infrastructure processing unit (IPU), or network processing unit (NPU). Such an xPU may be embodied as a standalone circuit or circuit package, integrated within a SOC or integrated with networking circuitry (e.g., in a SmartNIC, or enhanced SmartNIC), acceleration circuitry, storage devices, or AI hardware (e.g., GPUs, programmed FPGAs, Network Processing Units (NPUs), Infrastructure Processing Units (IPUs), Storage Processing Units (SPUs), AI Processors (APUs), Data Processing Unit (DPUs), or other specialized accelerators such as a cryptographic processing unit/accelerator). Such an xPU may be designed to receive programming to process one or more data streams and perform specific tasks and actions for the data streams (such as hosting microservices, performing service management or orchestration, organizing or managing server or data center hardware, managing service meshes, or collecting and distributing telemetry), outside of the CPU or general-purpose processing hardware. However, it will be understood that an xPU, a SOC, a CPU, and other variations of the processor704may work in coordination with each other to execute many types of operations and instructions within and on behalf of the compute node700. The memory706may be embodied as any type of volatile (e.g., dynamic random access memory (DRAM), etc.) or non-volatile memory or data storage capable of performing the functions described herein. Volatile memory may be a storage medium that requires power to maintain the state of data stored by the medium. Non-limiting examples of volatile memory may include various types of random access memory (RAM), such as DRAM or static random access memory (SRAM). One particular type of DRAM that may be used in a memory module is synchronous dynamic random access memory (SDRAM). In an example, the memory device is a block addressable memory device, such as those based on NAND or NOR technologies. A memory device may also include a three-dimensional crosspoint memory device (e.g., Intel® 3D XPoint™ memory), or other byte-addressable write-in-place nonvolatile memory devices. The memory device may refer to the die itself and/or to a packaged memory product. In some examples, 3D crosspoint memory (e.g., Intel® 3D XPoint™ memory) may comprise a transistor-less stackable cross-point architecture in which memory cells sit at the intersection of word lines and bit lines and are individually addressable and in which bit storage is based on a change in bulk resistance. In some examples, all or a portion of the memory706may be integrated into the processor704. The memory706may store various software and data used during operation such as one or more applications, data operated on by the application(s), libraries, and drivers. 
The compute circuitry702is communicatively coupled to other components of the compute node700via the I/O subsystem708, which may be embodied as circuitry and/or components to facilitate input/output operations with the compute circuitry702(e.g., with the processor704and/or the main memory706) and other components of the compute circuitry702. For example, the I/O subsystem708may be embodied as, or otherwise include, memory controller hubs, input/output control hubs, integrated sensor hubs, firmware devices, communication links (e.g., point-to-point links, bus links, wires, cables, light guides, printed circuit board traces, etc.), and/or other components and subsystems to facilitate the input/output operations. In some examples, the I/O subsystem708may form a portion of a system-on-a-chip (SoC) and be incorporated, along with one or more of the processor704, the memory706, and other components of the compute circuitry702, into the compute circuitry702. The one or more illustrative data storage devices710may be embodied as any type of device configured for short-term or long-term storage of data such as, for example, memory devices and circuits, memory cards, hard disk drives, solid-state drives, or other data storage devices. Individual data storage devices710may include a system partition that stores data and firmware code for the data storage device710. Individual data storage devices710may also include one or more operating system partitions that store data files and executables for operating systems depending on, for example, the type of compute node700. The communication circuitry712may be embodied as any communication circuit, device, or collection thereof, capable of enabling communications over a network between the compute circuitry702and another compute device (e.g., an edge gateway of an implementing edge computing system). The communication circuitry712may be configured to use any one or more communication technologies (e.g., wired or wireless communications) and associated protocols (e.g., a cellular networking protocol such as a 3GPP 4G or 5G standard, a wireless local area network protocol such as IEEE 802.11/Wi-Fi®, a wireless wide area network protocol, Ethernet, Bluetooth®, Bluetooth Low Energy, an IoT protocol such as IEEE 802.15.4 or ZigBee®, low-power wide-area network (LPWAN) or low-power wide-area (LPWA) protocols, etc.) to effect such communication. The illustrative communication circuitry712includes a network interface controller (NIC)720, which may also be referred to as a host fabric interface (HFI). The NIC720may be embodied as one or more add-in boards, daughter cards, network interface cards, controller chips, chipsets, or other devices that may be used by the compute node700to connect with another compute device (e.g., an edge gateway node). In some examples, the NIC720may be embodied as part of a system-on-a-chip (SoC) that includes one or more processors or included on a multichip package that also contains one or more processors. In some examples, the NIC720may include a local processor (not shown) and/or a local memory (not shown) that are both local to the NIC720. In such examples, the local processor of the NIC720may be capable of performing one or more of the functions of the compute circuitry702described herein. Additionally, or in such examples, the local memory of the NIC720may be integrated into one or more components of the client compute node at the board level, socket level, chip level, and/or other levels.
Additionally, in some examples, a respective compute node700may include one or more peripheral devices714. Such peripheral devices714may include any type of peripheral device found in a compute device or server such as audio input devices, a display, other input/output devices, interface devices, and/or other peripheral devices, depending on the particular type of the compute node700. In further examples, the compute node700may be embodied by a respective edge compute node (whether a client, gateway, or aggregation node) in an edge computing system or like forms of appliances, computers, subsystems, circuitry, or other components. In a more detailed example,FIG.7Billustrates a block diagram of an example of components that may be present in an edge computing node750for implementing the techniques (e.g., operations, processes, methods, and methodologies) described herein. This edge computing node750provides a closer view of the respective components of node700when implemented as or as part of a computing device (e.g., as a mobile device, a base station, server, gateway, etc.). The edge computing node750may include any combinations of the hardware or logical components referenced herein, and it may include or couple with any device usable with an edge communication network or a combination of such networks. The components may be implemented as integrated circuits (ICs), portions thereof, discrete electronic devices, or other modules, instruction sets, programmable logic or algorithms, hardware, hardware accelerators, software, firmware, or a combination thereof adapted in the edge computing node750, or as components otherwise incorporated within a chassis of a larger system. The edge computing device750may include processing circuitry in the form of a processor752, which may be a microprocessor, a multi-core processor, a multithreaded processor, an ultra-low voltage processor, an embedded processor, an xPU/DPU/IPU/NPU, special purpose processing unit, specialized processing unit, or other known processing elements. The processor752may be a part of a system on a chip (SoC) in which the processor752and other components are formed into a single integrated circuit, or a single package, such as the Edison™ or Galileo™ SoC boards from Intel Corporation, Santa Clara, California. As an example, the processor752may include an Intel® Architecture Core™-based CPU processor, such as a Quark™, an Atom™, an i3, an i5, an i7, an i9, or an MCU-class processor, or another such processor available from Intel®. However, any number of other processors may be used, such as a processor available from Advanced Micro Devices, Inc. (AMD®) of Sunnyvale, California, a MIPS®-based design from MIPS Technologies, Inc. of Sunnyvale, California, an ARM®-based design licensed from ARM Holdings, Ltd. or a customer thereof, or their licensees or adopters. The processors may include units such as an A5-A13 processor from Apple® Inc., a Snapdragon™ processor from Qualcomm® Technologies, Inc., or an OMAP™ processor from Texas Instruments, Inc. The processor752and accompanying circuitry may be provided in a single socket form factor, multiple socket form factor, or a variety of other formats, including in limited hardware configurations or configurations that include fewer than all elements shown inFIG.7B. The processor752may communicate with a system memory754over an interconnect756(e.g., a bus). Any number of memory devices may be used to provide for a given amount of system memory.
As examples, the memory754may be random access memory (RAM) per a Joint Electron Devices Engineering Council (JEDEC) design such as the DDR or mobile DDR standards (e.g., LPDDR, LPDDR2, LPDDR3, or LPDDR4). In particular examples, a memory component may comply with a DRAM standard promulgated by JEDEC, such as JESD79F for DDR SDRAM, JESD79-2F for DDR2 SDRAM, JESD79-3F for DDR3 SDRAM, JESD79-4A for DDR4 SDRAM, JESD209 for Low Power DDR (LPDDR), JESD209-2 for LPDDR2, JESD209-3 for LPDDR3, and JESD209-4 for LPDDR4. Such standards (and similar standards) may be referred to as DDR-based standards and communication interfaces of the storage devices that implement such standards may be referred to as DDR-based interfaces. In various implementations, the individual memory devices may be of any number of different package types such as single die package (SDP), dual die package (DDP), or quad die package (Q17P). These devices, in some examples, may be directly soldered onto a motherboard to provide a lower profile solution, while in other examples the devices are configured as one or more memory modules that in turn couple to the motherboard by a given connector. Any number of other memory implementations may be used, such as other types of memory modules, e.g., dual inline memory modules (DIMMs) of different varieties including but not limited to microDIMMs or MiniDIMMs. To provide for persistent storage of information such as data, applications, operating systems, and so forth, a storage758may also couple to the processor752via the interconnect756. In an example, the storage758may be implemented via a solid-state disk drive (SSDD). Other devices that may be used for the storage758include flash memory cards, such as Secure Digital (SD) cards, microSD cards, eXtreme Digital (XD) picture cards, and the like, and Universal Serial Bus (USB) flash drives. In an example, the memory device may be or may include memory devices that use chalcogenide glass, multi-threshold level NAND flash memory, NOR flash memory, single or multi-level Phase Change Memory (PCM), a resistive memory, nanowire memory, ferroelectric transistor random access memory (FeTRAM), anti-ferroelectric memory, magnetoresistive random access memory (MRAM) memory that incorporates memristor technology, resistive memory including the metal oxide base, the oxygen vacancy base and the conductive bridge Random Access Memory (CB-RAM), or spin-transfer torque (STT)-MRAM, a spintronic magnetic junction memory-based device, a magnetic tunneling junction (MTJ) based device, a DW (Domain Wall) and SOT (Spin-Orbit Transfer) based device, a thyristor-based memory device, or a combination of any of the above, or other memory. In low power implementations, the storage758may be on-die memory or registers associated with the processor752. However, in some examples, the storage758may be implemented using a micro hard disk drive (HDD). Further, any number of new technologies may be used for the storage758in addition to, or instead of, the technologies described, such as resistance change memories, phase change memories, holographic memories, or chemical memories, among others. The components may communicate over the interconnect756. The interconnect756may include any number of technologies, including industry-standard architecture (ISA), extended ISA (EISA), peripheral component interconnect (PCI), peripheral component interconnect extended (PCIx), PCI express (PCIe), or any number of other technologies.
The interconnect756may be a proprietary bus, for example, used in an SoC-based system. Other bus systems may be included, such as an Inter-Integrated Circuit (I2C) interface, a Serial Peripheral Interface (SPI) interface, point-to-point interfaces, and a power bus, among others. The interconnect756may couple the processor752to a transceiver766, for communications with the connected edge devices762. The transceiver766may use any number of frequencies and protocols, such as 2.4 Gigahertz (GHz) transmissions under the IEEE 802.15.4 standard, using the Bluetooth® low energy (BLE) standard, as defined by the Bluetooth® Special Interest Group, or the ZigBee® standard, among others. Any number of radios, configured for a particular wireless communication protocol, may be used for the connections to the connected edge devices762. For example, a wireless local area network (WLAN) unit may be used to implement Wi-Fi® communications under the Institute of Electrical and Electronics Engineers (IEEE) 802.11 standard. Also, wireless wide area communications, e.g., according to a cellular or other wireless wide area protocol, may occur via a wireless wide area network (WWAN) unit. The wireless network transceiver766(or multiple transceivers) may communicate using multiple standards or radios for communications at different ranges. For example, the edge computing node750may communicate with close devices, e.g., within about 10 meters, using a local transceiver based on Bluetooth Low Energy (BLE), or another low power radio, to save power. More distant connected edge devices762, e.g., within about 50 meters, may be reached over ZigBee® or other intermediate power radios. Both communications techniques may take place over a single radio at different power levels or may take place over separate transceivers, for example, a local transceiver using BLE and a separate mesh transceiver using ZigBee®. A wireless network transceiver766(e.g., a radio transceiver) may be included to communicate with devices or services in the edge cloud795via local or wide area network protocols. The wireless network transceiver766may be a low-power wide-area (LPWA) transceiver that follows the IEEE 802.15.4 or IEEE 802.15.4g standards, among others. The edge computing node750may communicate over a wide area using LoRaWAN™ (Long Range Wide Area Network) developed by Semtech and the LoRa Alliance. The techniques described herein are not limited to these technologies but may be used with any number of other cloud transceivers that implement long-range, low bandwidth communications, such as Sigfox, and other technologies. Further, other communications techniques, such as time-slotted channel hopping, described in the IEEE 802.15.4e specification may be used. Any number of other radio communications and protocols may be used in addition to the systems mentioned for the wireless network transceiver766, as described herein. For example, the transceiver766may include a cellular transceiver that uses spread spectrum (SPA/SAS) communications for implementing high-speed communications. Further, any number of other protocols may be used, such as Wi-Fi® networks for medium speed communications and provision of network communications. The transceiver766may include radios that are compatible with any number of 3GPP (Third Generation Partnership Project) specifications, such as Long Term Evolution (LTE) and 5th Generation (5G) communication systems, discussed in further detail at the end of the present disclosure.
A network interface controller (NIC)768may be included to provide a wired communication to nodes of the edge cloud795or other devices, such as the connected edge devices762(e.g., operating in a mesh). The wired communication may provide an Ethernet connection or may be based on other types of networks, such as Controller Area Network (CAN), Local Interconnect Network (LIN), DeviceNet, ControlNet, Data Highway+, PROFIBUS, or PROFINET, among many others. An additional NIC768may be included to enable connecting to a second network, for example, a first NIC768providing communications to the cloud over Ethernet, and a second NIC768providing communications to other devices over another type of network. Given the variety of types of applicable communications from the device to another component or network, applicable communications circuitry used by the device may include or be embodied by any one or more of components764,766,768, or770. Accordingly, in various examples, applicable means for communicating (e.g., receiving, transmitting, etc.) may be embodied by such communications circuitry. The edge computing node750may include or be coupled to acceleration circuitry764, which may be embodied by one or more artificial intelligence (AI) accelerators, a neural compute stick, neuromorphic hardware, an FPGA, an arrangement of GPUs, an arrangement of xPUs/DPUs/IPU/NPUs, one or more SoCs, one or more CPUs, one or more digital signal processors, dedicated ASICs, or other forms of specialized processors or circuitry designed to accomplish one or more specialized tasks. These tasks may include AI processing (including machine learning, training, inferencing, and classification operations), visual data processing, network data processing, object detection, rule analysis, or the like. These tasks also may include the specific edge computing tasks for service management and service operations discussed elsewhere in this document. The interconnect756may couple the processor752to a sensor hub or external interface770that is used to connect additional devices or subsystems. The devices may include sensors772, such as accelerometers, level sensors, flow sensors, optical light sensors, camera sensors, temperature sensors, global navigation system (e.g., GPS) sensors, pressure sensors, barometric pressure sensors, and the like. The hub or interface770further may be used to connect the edge computing node750to actuators774, such as power switches, valve actuators, an audible sound generator, a visual warning device, and the like. In some optional examples, various input/output (I/O) devices may be present within or connected to, the edge computing node750. For example, a display or other output device784may be included to show information, such as sensor readings or actuator position. An input device786, such as a touch screen or keypad may be included to accept input. An output device784may include any number of forms of audio or visual display, including simple visual outputs such as binary status indicators (e.g., light-emitting diodes (LEDs)) and multi-character visual outputs, or more complex outputs such as display screens (e.g., liquid crystal display (LCD) screens), with the output of characters, graphics, multimedia objects, and the like being generated or produced from the operation of the edge computing node750. 
Display or console hardware, in the context of the present system, may be used to provide output and receive input of an edge computing system; to manage components or services of an edge computing system; to identify a state of an edge computing component or service; or to conduct any other number of management or administration functions or service use cases. A battery776may power the edge computing node750, although, in examples in which the edge computing node750is mounted in a fixed location, it may have a power supply coupled to an electrical grid, or the battery may be used as a backup or for temporary capabilities. The battery776may be a lithium-ion battery, or a metal-air battery, such as a zinc-air battery, an aluminum-air battery, a lithium-air battery, and the like. A battery monitor/charger778may be included in the edge computing node750to track the state of charge (SoCh) of the battery776, if included. The battery monitor/charger778may be used to monitor other parameters of the battery776to provide failure predictions, such as the state of health (SoH) and the state of function (SoF) of the battery776. The battery monitor/charger778may include a battery monitoring integrated circuit, such as an LTC4020 or an LTC2990 from Linear Technologies, an ADT7488A from ON Semiconductor of Phoenix, Arizona, or an IC from the UCD90xxx family from Texas Instruments of Dallas, Texas. The battery monitor/charger778may communicate the information on the battery776to the processor752over the interconnect756. The battery monitor/charger778may also include an analog-to-digital converter (ADC) that enables the processor752to directly monitor the voltage of the battery776or the current flow from the battery776. The battery parameters may be used to determine actions that the edge computing node750may perform, such as transmission frequency, mesh network operation, sensing frequency, and the like. A power block780, or other power supply coupled to a grid, may be coupled with the battery monitor/charger778to charge the battery776. In some examples, the power block780may be replaced with a wireless power receiver to obtain the power wirelessly, for example, through a loop antenna in the edge computing node750. A wireless battery charging circuit, such as an LTC4020 chip from Linear Technologies of Milpitas, California, among others, may be included in the battery monitor/charger778. The specific charging circuits may be selected based on the size of the battery776, and thus, the current required. The charging may be performed using the Airfuel standard promulgated by the Airfuel Alliance, the Qi wireless charging standard promulgated by the Wireless Power Consortium, or the Rezence charging standard promulgated by the Alliance for Wireless Power, among others. The storage758may include instructions782in the form of software, firmware, or hardware commands to implement the techniques described herein. Although such instructions782are shown as code blocks included in the memory754and the storage758, it may be understood that any of the code blocks may be replaced with hardwired circuits, for example, built into an application-specific integrated circuit (ASIC). Also in a specific example, the instructions782on the processor752(separately, or in combination with the instructions782of the machine-readable medium760) may configure execution or operation of a trusted execution environment (TEE)790.
In an example, the TEE790operates as a protected area accessible to the processor752for secure execution of instructions and secure access to data. Various implementations of the TEE790, and an accompanying secure area in the processor752or the memory754may be provided, for instance, through the use of Intel® Software Guard Extensions (SGX) or ARM® TrustZone® hardware security extensions, Intel® Management Engine (ME), or Intel® Converged Security Manageability Engine (CSME). Other aspects of security hardening, hardware roots-of-trust, and trusted or protected operations may be implemented in device750through the TEE790and the processor752. In an example, the instructions782provided via memory754, the storage758, or the processor752may be embodied as a non-transitory, machine-readable medium760including code to direct the processor752to perform electronic operations in the edge computing node750. The processor752may access the non-transitory, machine-readable medium760over the interconnect756. For instance, the non-transitory, machine-readable medium760may be embodied by devices described for the storage758or may include specific storage units such as optical disks, flash drives, or any number of other hardware devices. The non-transitory, machine-readable medium760may include instructions to direct the processor752to perform a specific sequence or flow of actions, for example, as described with respect to the flowchart(s) and block diagram(s) of operations and functionality depicted above. As used herein, the terms “machine-readable medium” and “computer-readable medium” are interchangeable. In further examples, a machine-readable medium also includes any tangible medium that is capable of storing, encoding, or carrying instructions for execution by a machine and that cause the machine to perform any one or more of the methodologies of the present disclosure or that is capable of storing, encoding or carrying data structures utilized by or associated with such instructions. A “machine-readable medium” thus may include but is not limited to, solid-state memories, and optical and magnetic media. Specific examples of machine-readable media include non-volatile memory, including but not limited to, by way of example, semiconductor memory devices (e.g., electrically programmable read-only memory (EPROM), electrically erasable programmable read-only memory (EEPROM)) and flash memory devices; magnetic disks such as internal hard disks and removable disks; magneto-optical disks; and CD-ROM and DVD-ROM disks. The instructions embodied by a machine-readable medium may further be transmitted or received over a communications network using a transmission medium via a network interface device utilizing any one of several transfer protocols (e.g., Hypertext Transfer Protocol (HTTP)). A machine-readable medium may be provided by a storage device or other apparatus which is capable of hosting data in a non-transitory format. In an example, information stored or otherwise provided on a machine-readable medium may be representative of instructions, such as instructions themselves or a format from which the instructions may be derived. This format from which the instructions may be derived may include source code, encoded instructions (e.g., in compressed or encrypted form), packaged instructions (e.g., split into multiple packages), or the like. 
The information representative of the instructions in the machine-readable medium may be processed by processing circuitry into the instructions to implement any of the operations discussed herein. For example, deriving the instructions from the information (e.g., processing by the processing circuitry) may include: compiling (e.g., from source code, object code, etc.), interpreting, loading, organizing (e.g., dynamically or statically linking), encoding, decoding, encrypting, unencrypting, packaging, unpackaging, or otherwise manipulating the information into the instructions. In an example, the derivation of the instructions may include assembly, compilation, or interpretation of the information (e.g., by the processing circuitry) to create the instructions from some intermediate or preprocessed format provided by the machine-readable medium. The information, when provided in multiple parts, may be combined, unpacked, and modified to create the instructions. For example, the information may be in multiple compressed source code packages (or object code, or binary executable code, etc.) on one or several remote servers. The source code packages may be encrypted when in transit over a network and decrypted, uncompressed, assembled (e.g., linked) if necessary, and compiled or interpreted (e.g., into a library, stand-alone executable, etc.) at a local machine, and executed by the local machine. One-Touch Inline Cryptography Adding support for TLS/SSL (or other secure protocols) to an application (e.g., a Web server or a video streaming service) significantly increases the memory bandwidth use and limits the system throughput capacity. The increased memory bandwidth use is due to increased touches (e.g., processing instances) on the data, including reading the data, writing to memory, retrieving the data for encryption, storing encrypted data back to memory, retrieving the encrypted data from memory, and sending out the retrieved encrypted data (e.g., as illustrated inFIG.9B). Techniques disclosed herein for one-touch inline secure data processing use TLS/SSL (or other cryptographic protocols) that use asymmetric cryptography to establish a shared session key between the client and the server. This key is subsequently used as a symmetric key for securely transmitting messages between the client and the server. Adding support for TLS/SSL to an application such as a web server or video streaming service adds significant overhead in terms of memory bandwidth. In some cases, memory bandwidth can become a bottleneck, limiting the throughput capacity that can be served by a platform. This is because encrypting the data requires additional “touches” of the data in memory, increasing memory bandwidth, and potentially polluting the cache. Example embodiments provide secure data management functions including configuring TLS/SSL in a fashion that uses only one touch to encrypt the data. Example embodiments can be implemented in systems similar to those shown in any of the systems described below with reference toFIGS.1-7B. FIG.8illustrates a block diagram of an Edge-as-a-Service (EaaS) architecture using at least one secure data manager (SDM)816to perform secure data management functions, according to an example. A more detailed diagram of an SDM is illustrated in connection withFIG.10. The EaaS architecture800includes client compute nodes802,804, . . . ,806communicating with a plurality of edge devices (or nodes) operating as part of node clusters in different edge layers. 
For example, node cluster808includes edge devices associated with an edge devices layer. Node cluster810includes edge devices associated with a network access layer, and node cluster812includes edge devices associated with a core network layer. A core server (e.g., a server associated with a core data center) may be part of the node cluster812. The global network cloud814may be located at a cloud data center layer. Although an illustrative number of client compute nodes802,804, . . . ,806, edge devices in node clusters808,810,812, and a global network cloud814are shown inFIG.8, it should be appreciated that the EaaS architecture800may include more or fewer components, devices, or systems at each layer. Additionally, the number of components of each layer (e.g., the layers of node clusters808,810, and812) may increase at each lower level (i.e., when moving closer to endpoints). Consistent with the examples provided herein, each of the client compute nodes802,804, . . . ,806may be embodied as any type of endpoint component, device, appliance, or “thing” capable of communicating as a producer or consumer of data. Further, the label “node” or “device” as used in the EaaS architecture800does not necessarily mean that such node or device operates in a client (primary) role or another (secondary) role; rather, any of the nodes or devices in the EaaS architecture800refer to individual entities, nodes, or subsystems which include discrete or connected hardware or software configurations to facilitate or use the edge cloud110. The client compute nodes802,804, . . . ,806can include computing devices at an endpoint (devices and things) layer, which accesses the node clusters808,810,812to conduct data creation, analysis, and data consumption activities. In an example embodiment, the EaaS architecture800can include at least one SDM816configured to perform secure data management functions111in connection with disclosed techniques. The secure data management functions may be performed by the at least one SDM as configured within one or more management nodes (e.g., an edge orchestrator node or a meta-orchestrator node within any of the node clusters808-812) and/or within one or more connectivity nodes (e.g., an edge connectivity node within any of the node clusters808-812). In some embodiments, the SDM816is configured as an intermediary node connecting a data requesting/processing node and a data source node. The data requesting node and the data source node may be nodes within the EaaS architecture800. In some embodiments, the data requesting node may be a node executing a Web service application, a data streaming application, or the like. The data source node may be a storage node, a distributed storage architecture, or another type of data storage. In some embodiments, the data source node may be part of the SDM816, and the SDM816may be configured to process the data using one-touch inline secure data processing techniques disclosed herein (e.g., as described in connection withFIG.10). Some of the systems described above can act as Web servers. Other systems can perform video streaming or other operations that provide data and services to users (e.g., as a content delivery network (CDN) node). A web server is server software or hardware dedicated to running this software, that can satisfy client requests on the World Wide Web. These systems can process network requests using HTTP or related protocols. Requests are received and fulfilled by the receiving or transmitting of data packets. 
Each packet includes information for routing that packet, sequence information, other information, and a data payload.FIG.9Aillustrates communication and data flow within a system900for unsecured HTTP streaming. Some components of such a system can be similar to the components discussed above, for example, components of compute node700(FIG.7A). The system900can store data, such as Web pages, video data, etc., on a hard disk (or another storage device)902. The hard disk902can be the same as, or similar to, data storage devices710(FIG.7A). This data can be processed by processing circuitry (e.g., CPU)904and written (in operation903) to memory906. Memory906can include cache memory or other types of memory similar to or the same as memory706(FIG.7A). The processing circuitry904can also read the data from the memory906in operation907and provide the data to the network using a network interface card (NIC)908. The data can be communicated using HTTP or other protocols. Some applications, users or operators consume or provide services similar to that shown inFIG.9Abut using secure communication.FIG.9Billustrates communication and data flow within a system910for secure data streaming. In the system910ofFIG.9B, the HTTP protocol is encrypted using TLS/SSL to provide for authentication of data access, privacy, and integrity of exchanged data, etc. In implementations according to the system ofFIG.9B, the processing circuitry904can perform the encryption, or the encryption can be performed by another device or circuitry such as an accelerator. Unencrypted data can be read from the hard disk902and written to memory906in operation912. In operation914, the processing circuitry904(or an accelerator) reads data from the memory906and performs encryption using the cryptographic engine (CE)916, before writing the encrypted data back to the memory906in operation918. The processing circuitry904can also read the data from the memory906in operation920and provide the data to the network using a NIC908. In some embodiments, Intel SGX and TDX may be used for providing access to cryptographic engine functions in the CPU using Intel AES-Native Instruction (NI) and CRYPTO-NI instructions. In this regard, a cryptographic engine involving storage to TEE to memory may be using the CPU as the crypto engine (CE)916. Hence the interactions between916and904may be within the CPU904. In some embodiments, Intel Quick Assist Technology (QAT) may be an example of the CE916. In some embodiments, the processing circuitry therefore may require sharing of keys (symmetric SEKs) with the other xPUs that may originate ciphertext data or may be the terminus of ciphertext and the TEE. TLS/SSL encryption to data provided using web services, streaming services, etc., can add significant overhead to system910. In particular, memory operations can be slowed significantly as memory906is accessed at least two more times than would have been done without encryption. This can cause deterioration in throughput and other measures of quality of service (QoS). Cache pollution and other ill effects can occur as well. Some solutions perform encryption and decryption within the NIC908, or to a device between the processing circuitry904and NIC908. This can result in fewer “touches” to memory, thereby speeding memory operations and preventing cache pollution. 
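Before turning to the limitations of NIC-side encryption, the memory-bandwidth cost of the baseline flow ofFIG.9Bcan be made concrete with the following minimal Python sketch, which simply counts how many times the payload crosses the memory interface. The Memory class, the touch counter, and the XOR placeholder cipher are illustrative constructs introduced for this sketch only; they are not components of the described systems.

class Memory:
    """Toy memory that counts how often the payload is touched (read or written)."""
    def __init__(self):
        self.touches = 0
        self._buf = {}

    def write(self, addr, data):
        self.touches += 1
        self._buf[addr] = data

    def read(self, addr):
        self.touches += 1
        return self._buf[addr]

def xor_cipher(data: bytes, key: int) -> bytes:
    # Placeholder for the cryptographic engine; a real system would use a TLS cipher suite.
    return bytes(b ^ key for b in data)

def baseline_secure_stream(payload: bytes, mem: Memory) -> bytes:
    """Flow of FIG. 9B: encrypting in the CPU requires extra passes over memory."""
    mem.write(0x1000, payload)                       # operation 912: storage -> memory
    plaintext = mem.read(0x1000)                     # operation 914: CPU reads for encryption
    mem.write(0x2000, xor_cipher(plaintext, 0x5A))   # operation 918: ciphertext written back
    return mem.read(0x2000)                          # operation 920: read again for the NIC

mem = Memory()
baseline_secure_stream(b"hello edge", mem)
print(mem.touches)  # 4 touches of the payload, versus 2 for the unsecured flow of FIG. 9A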
However, when encryption is performed in the NIC908and there is subsequently a need to retransmit, a NIC908in the receiving device that receives the dropped packets out of sequence cannot decrypt them; this forces software at the receiving device to intervene, which is expensive. FIG.10illustrates a secure data management system configured for secure data management, according to an example embodiment. Referring toFIG.10, the secure data manager (SDM)1000can be the same edge computing device as the SDM816ofFIG.8, which can be in communication with other edge computing devices (e.g., edge computing device1001). The SDM1000includes a storage device1002, a memory device1004, enhanced direct memory access (DMA) engine1006, a network interface card (NIC)1008, and a CPU1010. The enhanced DMA engine1006includes a DMA engine1012configured to perform DMA functionalities (e.g., processing DMA requests for storing and retrieving data) and a cryptographic engine (CE)1014. The CE1014comprises suitable circuitry, logic, interfaces, and/or code and is configured to perform secure data processing (e.g., encryption or decryption of data using one or more types of secure keys) in connection with one-touch inline secure data processing techniques discussed herein. In operation, the SDM1000receives a request for data (e.g., data1028) via the NIC1008from the edge computing device1001. Based on the received request for data, the SDM1000performs a secure exchange (e.g., a TLS or SSL based secure protocol exchange) with the edge computing device1001and negotiates a symmetric encryption key (SEK)1020which is shared between the two devices. For example, the secure exchange can be performed between the CPU1010of the SDM1000and the CPU1016of the edge computing device1001. Based on the negotiated SEK1020, the CPU1010generates an inline encryption command1019which is communicated to the enhanced DMA engine1006. In an example embodiment, the inline encryption command1019includes a read address (RA)1022, a write address (WA)1024, and the SEK1020. The RA1022indicates an address associated with the storage device1002where the requested data1028is stored. The WA1024indicates an address associated with a memory location within the memory device1004where encrypted data may be stored. After the enhanced DMA engine1006receives the inline encryption command1019, the DMA engine1012uses the RA1022to retrieve the data1028from the storage device1002. After the data1028is retrieved, the CE1014encrypts the data using the shared SEK1020to generate encrypted data1030. The DMA engine1012then stores the encrypted data1030at the memory location of the memory device1004indicated by the WA1024. The inline encryption command1019further indicates other cryptographic parameters such as an encryption technique (or algorithm), encryption mode, etc. for use by the CE1014to generate the encrypted data1030. In an example embodiment, the CPU1010generates a transport layer security (TLS) record template (RT)1026(which can be stored in the memory device1004) and communicates the TLS RT1026to the NIC1008after the data to be encrypted and sent has been read from storage, encrypted inline, and written to the memory indicated by WA1024. The TLS RT1026includes the TLS record header and other information.
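The TLS RT1026is described in more detail in the next paragraph; the inline encryption path itself — the inline encryption command1019and the way the enhanced DMA engine1006acts on it — can be illustrated with the following minimal Python sketch. The class names, the use of AES-GCM as the symmetric cipher, and the third-party cryptography package are assumptions made for illustration only; the disclosure does not mandate any particular algorithm or software stack.

# pip install cryptography  (third-party package assumed for this sketch)
import os
from dataclasses import dataclass
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

@dataclass
class InlineEncryptionCommand:
    """Illustrative stand-in for inline encryption command 1019."""
    read_address: int               # RA 1022: where the plaintext lives in storage
    write_address: int              # WA 1024: where the ciphertext should land in memory
    sek: bytes                      # negotiated shared symmetric encryption key 1020
    algorithm: str = "AES-256-GCM"  # other cryptographic parameters (technique, mode)

class EnhancedDMAEngine:
    """Toy model of enhanced DMA engine 1006 (DMA engine 1012 plus CE 1014)."""
    def __init__(self, storage: dict, memory: dict):
        self.storage = storage   # models storage device 1002
        self.memory = memory     # models memory device 1004

    def execute(self, cmd: InlineEncryptionCommand) -> None:
        plaintext = self.storage[cmd.read_address]                     # DMA read using RA
        nonce = os.urandom(12)
        ciphertext = AESGCM(cmd.sek).encrypt(nonce, plaintext, None)   # CE encrypts inline
        self.memory[cmd.write_address] = nonce + ciphertext            # single write using WA

# One-touch path: storage -> (encrypt inline) -> memory, ready for the NIC to transmit.
sek = AESGCM.generate_key(bit_length=256)
engine = EnhancedDMAEngine(storage={0x10: b"requested data 1028"}, memory={})
engine.execute(InlineEncryptionCommand(read_address=0x10, write_address=0x2000, sek=sek))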
In some aspects, the TLS RT1026is configured as a data structure and includes one or more of the following items of information: a sender IP address of the SDM1000, a destination IP address of the edge computing device1001, a port address of the SDM1000, and a port address of the edge computing device1001. Additionally, the TLS RT1026further includes a data pointer (e.g., such as the WA1024) to a location in the memory device1004storing the encrypted data1030. In this regard, the NIC1008uses the TLS RT1026to retrieve the encrypted data1030and configure a header for the retrieved data. In an alternative embodiment, the TLS RT1026may include space to store the entire encrypted payload of the encoded data, up to a configured size such as 16 KB. The retrieved encrypted data1030is then packetized with the generated header and communicated as data output1036to the edge computing device1001. Since the TLS RT1026includes information that does not change during data exchange between the SDM1000and the edge computing device1001, the use of the TLS RT1026reduces memory bandwidth usage and increases the processing efficiency of the SDM1000. In an example embodiment, the TLS RT1026is generated during the initial secure exchange between the SDM1000and the edge computing device1001. In an example embodiment, data1028retrieved from the storage device1002may already be encrypted with a different SEK than the SEK1020that has been negotiated. In this case, the CE1014within the enhanced DMA engine1006or the CPU1010can detect (e.g., via software or a configuration parameter) that the retrieved data1028is differently encrypted and can communicate with the CPU1010to obtain the different decryption key. The CE1014then performs data transcription, which includes decrypting the data based on the decryption key obtained from the CPU1010and encrypting the decoded data using the SEK1020. In some embodiments in which the storage device1002is self-encrypting/decrypting to protect data at rest, the storage device1002shall first decrypt the data1028and then the CE1014shall encrypt the resulting decrypted data using the SEK1020. Even though the CE1014is illustrated inFIG.10as part of the enhanced DMA engine1006, the disclosure is not limited in this regard and the CE1014may be implemented differently. For example, the CE1014may be implemented as part of the storage device1002, as part of processing circuitry along the data communication path1032between the storage device and the memory device1004, or as part of processing circuitry along the data communication path1034from the memory device1004to the NIC1008. In other aspects, the CE1014may be part of the CPU1010or may be implemented separately as a standalone circuit on a motherboard of the SDM1000. In some embodiments, the CE1014and/or the DMA engine1012may be implemented as a component of a Data Processing Unit (DPU), as a component of an Infrastructure Processing Unit (IPU), or as a component of a Network Processing Unit (NPU). In some aspects, the DPU, NPU, and IPU elements may be special-purpose accelerators for performing various common data transformations on data in transit from storage, network, or a GPU device to memory, or from memory to storage, network, or a GPU device. The various common data transformations on data in transit include encryption or decryption, compression or decompression, and various format conversions such as big-endian to little-endian or little-endian to big-endian representations.
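The record template and the NIC-side packetization described above might be modeled as follows. The field names, the simplified header layout, and the build_packet helper are purely illustrative assumptions for this sketch; the actual contents of a TLS record header are defined by the TLS specification, and the exact layout of the TLS RT1026is implementation-dependent.

from dataclasses import dataclass

@dataclass(frozen=True)
class TLSRecordTemplate:
    """Illustrative model of TLS RT 1026: fields that do not change per record."""
    src_ip: str          # sender IP address of the SDM 1000
    dst_ip: str          # destination IP address of edge computing device 1001
    src_port: int        # port address of the SDM 1000
    dst_port: int        # port address of the edge computing device 1001
    data_pointer: int    # e.g., WA 1024, pointing at the encrypted payload in memory

class ToyNIC:
    """Toy model of NIC 1008 consuming the template to emit data output 1036."""
    def __init__(self, memory: dict):
        self.memory = memory

    def build_packet(self, template: TLSRecordTemplate) -> bytes:
        payload = self.memory[template.data_pointer]     # fetch encrypted data 1030
        header = (f"{template.src_ip}:{template.src_port}->"
                  f"{template.dst_ip}:{template.dst_port}|len={len(payload)}|").encode()
        return header + payload                          # packetized data output 1036

rt = TLSRecordTemplate("10.0.0.5", "10.0.0.9", 443, 51000, data_pointer=0x2000)
nic = ToyNIC(memory={0x2000: b"...nonce-and-ciphertext..."})
packet = nic.build_packet(rt)

Because the template fields stay constant for the lifetime of the connection, the CPU only supplies the pointer to freshly encrypted data; the per-record work of fetching the payload and prepending the header is left to the NIC.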
In the SDM1000, encryption is performed using CE1014within the enhanced DMA engine1006after reading data1028from the storage device1002, and the encrypted data1030is then written to the memory device1004. This reduces the number of “touches” to memory. The CPU1010then provides the data pointer information in the TLS RT1026for the encrypted data1030to the NIC1008, and the NIC1008uses the data pointer information in the TLS RT1026to retrieve the encrypted data from the memory device1004, generate the header, and communicate the data output1036to the edge computing device1001. In some embodiments, computer instructions (e.g., firmware, software, or a combination thereof) execute on the CE1014and/or CPU1010to implement methods according to embodiments described herein. In some embodiments, the CE1014supports the functionalities of the TLS protocol, which was briefly described above. In more detail, TLS is a protocol that guarantees privacy and data integrity between client/server applications communicating over the Internet. The TLS protocol is made up of two layers: 1) The TLS Record Protocol—layered on top of a reliable transport protocol, such as TCP, it ensures that the connection is private by using symmetric data encryption and it ensures that the connection is reliable. The TLS Record Protocol also is used for encapsulation of higher-level protocols, such as the TLS Handshake Protocol; and 2) The TLS Handshake Protocol—allows authentication between the server and client and the negotiation of an encryption algorithm and cryptographic keys before the application protocol transmits or receives any data. TLS is application protocol-independent. Higher-level protocols can layer on top of the TLS protocol transparently. In some embodiments, authentication keys (such as the SEK1020) can be protected in the storage device1002or the CE1014using Intel Key Protection Technology (KPT) or similar technology. Embodiments are not limited to TLS implementations and other secure protocols may be used as well. For example, other transport layer cryptographic protocols can be used, and cryptographic protocols related to other layers besides the transport layer can be used (e.g., IP Security (IP SEC) protocol, Datagram TLS (DTLS) protocol, etc.). In some embodiments, the payload may already be encrypted on the storage802. In at least these embodiments (e.g., in examples of self-encrypting drives (SEDs)) storage802is configured to perform encryption (on write) and decryption (on read) of this data. In some embodiments, using the one-touch inline secure data processing techniques discussed herein can result in a reduction in memory bandwidth usage as fewer reads and writes in cache memory are performed, resulting in less cache pollution. Additionally, using the disclosed techniques allows for a reduction in computation needs as the CPU is not used for performing cryptographic functions. FIG.11is a flowchart of a method1100for one-touch inline secure data processing performed by an edge computing device, according to an example embodiment. The method1100can be performed by an edge computing device (e.g., the SDM1000inFIG.10) in an edge computing system. At operation1102, a secure exchange is performed with an edge computing device to negotiate a shared symmetric encryption key, based on a request for data received from the edge computing device. 
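The key negotiation of operation1102can be realized with any authenticated key-agreement handshake. The following minimal Python sketch uses an ephemeral X25519 exchange followed by HKDF purely as one possible illustration of how both ends arrive at the same 256-bit SEK1020; the choice of X25519/HKDF and of the third-party cryptography package is an assumption for this sketch, not a requirement of the disclosure, and a TLS 1.3 handshake would serve the same purpose.

# pip install cryptography  (third-party package assumed for this sketch)
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
from cryptography.hazmat.primitives.kdf.hkdf import HKDF
from cryptography.hazmat.primitives import hashes

def derive_sek(own_private: X25519PrivateKey, peer_public) -> bytes:
    """Derive a 32-byte shared symmetric encryption key from an ECDH shared secret."""
    shared_secret = own_private.exchange(peer_public)
    return HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
                info=b"one-touch inline SEK").derive(shared_secret)

# Each side contributes an ephemeral key pair during the secure exchange.
sdm_priv = X25519PrivateKey.generate()    # SDM 1000 side
edge_priv = X25519PrivateKey.generate()   # edge computing device 1001 side

sek_at_sdm = derive_sek(sdm_priv, edge_priv.public_key())
sek_at_edge = derive_sek(edge_priv, sdm_priv.public_key())
assert sek_at_sdm == sek_at_edge          # both ends now hold the same SEK 1020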
For example, the SDM1000performs a secure exchange (e.g., using TLS handshake protocol) with the edge computing device1001to negotiate the shared symmetric encryption key1020. At operation1104, an inline encryption command is generated based on the completion of the secure exchange. For example, based on the completion of the secure exchange, the CPU1010generates the inline encryption command1019. The inline encryption command1019includes a first address (e.g., RA1022) associated with a storage location (e.g., storage device1002) storing the data (e.g., data1028), a second address (e.g., WA1024) associated with a memory location in at least one memory device (e.g., memory device1004), and the shared symmetric encryption key (e.g., SEK1020). At operation1106, the data is retrieved from the storage location using the first address of the inline encryption command. For example, the CE1014retrieves data1028from the storage device1002based on the RA1022. At operation1108, the data is encrypted using a cryptographic engine within a plurality of hardware components and based on the shared symmetric encryption key. For example, the retrieved data1028is encrypted by the CE1014based on the SEK1020. At operation1110, the encrypted data is stored in the memory location using the second address. For example, the encrypted data1030is stored in the memory device1004by the DMA engine1012using the WA1024. It should be understood that the functional units or capabilities described in this specification may have been referred to or labeled as components, circuits, or modules, to more particularly emphasize their implementation independence. Such components may be embodied by any number of software or hardware forms. For example, a component or module may be implemented as a hardware circuit comprising custom very-large-scale integration (VLSI) circuits or gate arrays, off-the-shelf semiconductors such as logic chips, transistors, or other discrete components. A component or module may also be implemented in programmable hardware devices such as field-programmable gate arrays, programmable array logic, programmable logic devices, or the like. Components or modules may also be implemented in software for execution by various types of processors. An identified component or module of executable code may, for instance, comprise one or more physical or logical blocks of computer instructions, which may, for instance, be organized as an object, procedure, or function. Nevertheless, the executables of an identified component or module need not be physically located together but may comprise disparate instructions stored in different locations which, when joined logically together, comprise the component or module and achieve the stated purpose for the component or module. Indeed, a component or module of executable code may be a single instruction, or many instructions, and may even be distributed over several different code segments, among different programs, and across several memory devices or processing systems. In particular, some aspects of the described process (such as code rewriting and code analysis) may take place on a different processing system (e.g., in a computer in a data center) than that in which the code is deployed (e.g., in a computer embedded in a sensor or robot). Similarly, operational data may be identified and illustrated herein within components or modules and may be embodied in any suitable form and organized within any suitable type of data structure. 
The operational data may be collected as a single data set or may be distributed over different locations including over different storage devices, and may exist, at least partially, merely as electronic signals on a system or network. The components or modules may be passive or active, including agents operable to perform desired functions. Additional examples of the presently described method, system, and device embodiments include the following, non-limiting implementations. Each of the following non-limiting examples may stand on its own or may be combined in any permutation or combination with any one or more of the other examples provided below or throughout the present disclosure. ADDITIONAL EXAMPLES AND ASPECTS Example 1 is an edge computing device operable in an edge computing system, the edge computing device including network communications circuitry (NCC); an enhanced direct memory access (DMA) engine coupled to a memory device, the enhanced DMA engine comprising a cryptographic engine; and processing circuitry coupled to the NCC and the enhanced DMA engine, the processing circuitry configured to: perform a secure exchange with a second edge computing device to negotiate a shared symmetric encryption key, based on a request for data received via the NCC from the second edge computing device; and generate an inline encryption command for communication to the enhanced DMA engine, the inline encryption command including: a first address associated with a storage location storing the data, a second address associated with a memory location in the memory device, and the shared symmetric encryption key; wherein the enhanced DMA engine is configured to retrieve the data from the storage location using the first address, encrypt the data using the cryptographic engine and based on the shared symmetric encryption key, and store the encrypted data in the memory location using the second address. In Example 2, the subject matter of Example 1 optionally includes subject matter where the processing circuitry is configured to generate the inline encryption command to further specify an encryption algorithm for encrypting the data by the cryptographic engine. In Example 3, the subject matter of Examples 1-2 optionally includes subject matter where the secure exchange with the second edge computing device is based on a Transport Layer Security (TLS) protocol exchange. In Example 4, the subject matter of Examples 1-3 optionally includes subject matter where the processing circuitry is configured to generate a Transport Layer Security (TLS) record template (RT) based on the request for the data received from the second edge computing device; and communicate the TLS RT and the second address associated with the memory location to the NCC. In Example 5, the subject matter of Examples 3-4 optionally includes subject matter where the NCC is configured to retrieve the encrypted data from the memory location using the second address and generate a header based on the TLS RT. In Example 6, the subject matter of Examples 4-5 optionally includes subject matter where the NCC is further configured to communicate the header with a payload comprising the encrypted data to the second edge computing device using a destination IP address specified in the TLS RT. In Example 7, the subject matter of Examples 1-6 optionally includes subject matter where the enhanced DMA engine is further configured to detect a configuration that the data retrieved from the storage location is encrypted. 
In Example 8, the subject matter of Examples 1-7 optionally includes subject matter where the enhanced DMA engine is further configured to perform data transcription to encrypt the data using the cryptographic engine and based on the shared symmetric encryption key. In Example 9, the subject matter of Example 8 optionally includes subject matter where to perform the data transcription the enhanced DMA engine is further configured to retrieve a decryption key from the processing circuitry. In Example 10, the subject matter of Example 9 optionally includes subject matter where to perform the data transcription the enhanced DMA engine is further configured to decode, using the cryptographic engine, the encrypted data based on the retrieved decryption key to obtain decoded data. In Example 11, the subject matter of Example 10 optionally includes subject matter where to perform the data transcription the enhanced DMA engine is further configured to: encode, using the cryptographic engine, the decoded data based on the shared symmetric encryption key. Example 12 is a secure data management system comprising: a plurality of hardware components, including a processing circuitry, a direct memory access (DMA) engine, and a cryptographic engine; and at least one memory device including instructions embodied thereon, wherein the instructions, which when executed by the processing circuitry, configure the hardware components to perform operations to: perform a secure exchange with an edge computing device to negotiate a shared symmetric encryption key, based on a request for data received from the edge computing device; generate an inline encryption command based on completion of the secure exchange, the inline encryption command including a first address associated with a storage location storing the data, a second address associated with a memory location in the at least one memory device, and the shared symmetric encryption key, retrieve the data from the storage location using the first address of the inline encryption command; encrypt the data using the cryptographic engine within the plurality of hardware components and based on the shared symmetric encryption key; and store the encrypted data in the memory location using the second address. In Example 13, the subject matter of Example 12 optionally includes subject matter where the instructions further configure the hardware components to perform operations to generate a Transport Layer Security (TLS) record template (RT) based on the request for the data received from the edge computing device; retrieve the encrypted data from the memory location using the second address, and generate a header based on the TLS RT. In Example 14, the subject matter of Example 13 optionally includes subject matter where the instructions further configure the hardware components to perform operations to communicate the header with a payload comprising the encrypted data to the edge computing device using a destination IP address specified in the TLS RT. In Example 15, the subject matter of Examples 12-14 optionally includes subject matter where the instructions further configure the hardware components to perform operations to generate the inline encryption command to specify an encryption algorithm for encrypting the data by the cryptographic engine. 
In Example 16, the subject matter of Examples 12-15 optionally includes subject matter where the instructions further configure the hardware components to perform operations to detect the data retrieved from the storage location is encrypted; and perform data transcription to encrypt the data using the cryptographic engine and based on the shared symmetric encryption key. In Example 17, the subject matter of Example 16 optionally includes subject matter where to perform the data transcription, the instructions further configure the hardware components to perform operations to retrieve a decryption key; and decode, using the cryptographic engine, the encrypted data based on the retrieved decryption key to obtain decoded data. In Example 18, the subject matter of Example 17 optionally includes subject matter where to perform the data transcription, the instructions further configure the hardware components to perform operations to encode, using the cryptographic engine, the decoded data based on the shared symmetric encryption key to obtain the encrypted data. Example 19 is at least one non-transitory machine-readable storage device comprising instructions stored thereupon, which when executed by processing circuitry of an edge computing system, cause the processing circuitry to perform operations comprising: performing a secure exchange with an edge computing device to negotiate a shared symmetric encryption key, based on a request for data received from the edge computing device; generating an inline encryption command based on completion of the secure exchange, the inline encryption command including a first address associated with a storage location storing the data, a second address associated with a memory location in at least one memory device, and the shared symmetric encryption key; retrieving the data from the storage location using the first address of the inline encryption command; encrypting the data based on the shared symmetric encryption key; and storing the encrypted data in the at least one memory device using the second address. In Example 20, the subject matter of Example 19 optionally includes subject matter where the instructions further cause the processing circuitry to perform operations comprising: generating a Transport Layer Security (TLS) record template (RT) based on the request for the data received from the edge computing device; retrieving the encrypted data from the memory location using the second address and generating a header based on the TLS RT. In Example 21, the subject matter of Example 20 optionally includes subject matter where the instructions further cause the processing circuitry to perform operations comprising: communicating the header with a payload comprising the encrypted data to the edge computing device using a destination IP address specified in the TLS RT. In Example 22, the subject matter of Examples 19-21 optionally includes subject matter where the instructions further cause the processing circuitry to perform operations comprising: generating the inline encryption command to further specify an encryption algorithm for encrypting the data. In Example 23, the subject matter of Examples 19-22 optionally includes subject matter where the instructions further cause the processing circuitry to perform operations comprising: detecting the data retrieved from the storage location is encrypted, and performing data transcription to encrypt the data based on the shared symmetric encryption key. 
In Example 24, the subject matter of Example 23 optionally includes subject matter where to perform the data transcription, the instructions further cause the processing circuitry to perform operations comprising: retrieving a decryption key, and decoding the encrypted data based on the retrieved decryption key to obtain decoded data. In Example 25, the subject matter of Example 24 optionally includes subject matter where to perform the data transcription, the instructions further cause the processing circuitry to perform operations comprising: encoding the decoded data based on the shared symmetric encryption key to obtain the encrypted data. Example 26 is at least one machine-readable medium including instructions that, when executed by processing circuitry, cause the processing circuitry to perform operations to implement of any of Examples 1-25. Example 27 is an apparatus comprising means to implement of any of Examples 1-25. Example 28 is a system to implement of any of Examples 1-25. Example 29 is a method to implement of any of Examples 1-25. Example 30 is a multi-tier edge computing system, comprising a plurality of edge computing nodes provided among on-premise edge, network access edge, or near edge computing settings, the plurality of edge computing nodes configured to perform any of the methods of Examples 1-25. Example 31 is an edge computing system, comprising a plurality of edge computing nodes, each of the plurality of edge computing nodes configured to perform any of the methods of Examples 1-25. Example 32 is an edge computing node, operable in an edge computing system, comprising processing circuitry coupled to enhanced DMA circuitry configured to implement any of the methods of Examples 1-25. Example 33 is an edge computing node, operable as a server hosting the service and a plurality of additional services in an edge computing system, configured to perform any of the methods of Examples 1-25. Example 34 is an edge computing node, operable in a layer of an edge computing network as an aggregation node, network hub node, gateway node, or core data processing node, configured to perform any of the methods of Examples 1-25. Example 35 is an edge provisioning, orchestration, or management node, operable in an edge computing system, configured to implement any of the methods of Examples 1-25. Example 36 is an edge computing network, comprising networking and processing components configured to provide or operate a communications network, to enable an edge computing system to implement any of the methods of Examples 1-25. Example 37 is an access point, comprising networking and processing components configured to provide or operate a communications network, to enable an edge computing system to implement any of the methods of Examples 1-25. Example 38 is a base station, comprising networking and processing components configured to provide or operate a communications network, configured as an edge computing system to implement any of the methods of Examples 1-25. Example 39 is a road-side unit, comprising networking components configured to provide or operate a communications network, configured as an edge computing system to implement any of the methods of Examples 1-25. Example 40 is an on-premise server, operable in a private communications network distinct from a public edge computing network, configured as an edge computing system to implement any of the methods of Examples 1-25. 
Example 41 is a 3GPP 4G/LTE mobile wireless communications system, comprising networking and processing components configured as an edge computing system to implement any of the methods of Examples 1-25. Example 42 is a 5G network mobile wireless communications system, comprising networking and processing components configured as an edge computing system to implement any of the methods of Examples 1-25. Example 43 is an edge computing system configured as an edge mesh, provided with a microservice cluster, a microservice cluster with sidecars, or linked microservice clusters with sidecars, configured to implement any of the methods of Examples 1-25. Example 44 is an edge computing system, comprising circuitry configured to implement services with one or more isolation environments provided among dedicated hardware, virtual machines, containers, or virtual machines on containers, the edge computing system configured to implement any of the methods of Examples 1-25. Example 45 is an edge computing system, comprising networking and processing components to communicate with a user equipment device, client computing device, provisioning device, or management device to implement any of the methods of Examples 1-25. Example 46 is networking hardware with network functions implemented thereupon, operable within an edge computing system, the network functions configured to implement any of the methods of Examples 1-25. Example 47 is acceleration hardware with acceleration functions implemented thereupon, operable in an edge computing system, the acceleration functions configured to implement any of the methods of Examples 1-25. Example 48 is storage hardware with storage capabilities implemented thereupon, operable in an edge computing system, the storage hardware configured to implement any of the methods of Examples 1-25. Example 49 is computation hardware with compute capabilities implemented thereupon, operable in an edge computing system, the computation hardware configured to implement any of the methods of Examples 1-25. Example 50 is an edge computing system configured to implement services with any of the methods of Examples 1-25, with the services relating to one or more of: compute offload, data caching, video processing, network function virtualization, radio access network management, augmented reality, virtual reality, autonomous driving, vehicle assistance, vehicle communications, industrial automation, retail services, manufacturing operations, smart buildings, energy management, internet of things operations, object detection, speech recognition, healthcare applications, gaming applications, or accelerated content processing. Example 51 is an apparatus of an edge computing system comprising: one or more processors and one or more computer-readable media comprising instructions that, when executed by the one or more processors, cause the one or more processors to perform any of the methods of Examples 1-25. Example 52 is one or more computer-readable storage media comprising instructions to cause an electronic device of an edge computing system, upon execution of the instructions by one or more processors of the electronic device, to perform any of the methods of Examples 1-25. Example 53 is a computer program used in an edge computing system, the computer program comprising instructions, wherein execution of the program by a processing element in the edge computing system is to cause the processing element to perform any of the methods of Examples 1-25. 
Example 54 is an edge computing appliance device operating as a self-contained processing system, comprising a housing, case or shell, network communication circuitry, storage memory circuitry, and processor circuitry adapted to perform any of the methods of Examples 1-25. Example 55 is an apparatus of an edge computing system comprising means to perform any of the methods of Examples 1-25. Example 56 is an apparatus of an edge computing system comprising logic, modules, or circuitry to perform any of the methods of Examples 1-25. Another example implementation is an edge computing system, including respective edge processing devices and nodes to invoke or perform the operations of Examples 1-25, or other subject matter described herein. Another example implementation is a client endpoint node, operable to invoke or perform the operations of Examples 1-25, or other subject matter described herein. Another example implementation is an aggregation node, network hub node, gateway node, or core data processing node, within or coupled to an edge computing system, operable to invoke or perform the operations of Examples 1-25, or other subject matter described herein. Another example implementation is an access point, base station, road-side unit, street-side unit, or on-premise unit, within or coupled to an edge computing system, operable to invoke or perform the operations of Examples 1-25, or other subject matter described herein. Another example implementation is an edge provisioning node, service orchestration node, application orchestration node, or multi-tenant management node, within or coupled to an edge computing system, operable to invoke or perform the operations of Examples 1-25, or other subject matter described herein. Another example implementation is an edge node operating an edge provisioning service, application or service orchestration service, virtual machine deployment, container deployment, function deployment, and compute management, within or coupled to an edge computing system, operable to invoke or perform the operations of Examples 1-25, or other subject matter described herein. Another example implementation is an edge computing system including aspects of network functions, acceleration functions, acceleration hardware, storage hardware, or computation hardware resources, operable to invoke or perform the use cases discussed herein, with use of Examples 1-25, or other subject matter described herein. Another example implementation is an edge computing system adapted for supporting client mobility, vehicle-to-vehicle (V2V), vehicle-to-everything (V2X), or vehicle-to-infrastructure (V2I) scenarios, and optionally operating according to European Telecommunications Standards Institute (ETSI) Multi-Access Edge Computing (MEC) specifications, operable to invoke or perform the use cases discussed herein, with use of Examples 1-25, or other subject matter described herein. Another example implementation is an edge computing system adapted for mobile wireless communications, including configurations according to a 3GPP 4G/LTE or 5G network capabilities, operable to invoke or perform the use cases discussed herein, with use of Examples 1-25, or other subject matter described herein. 
Another example implementation is an edge computing node, operable in a layer of an edge computing network or edge computing system as an aggregation node, network hub node, gateway node, or core data processing node, operable in a close edge, local edge, enterprise edge, on-premise edge, near edge, middle, edge, or far edge network layer, or operable in a set of nodes having common latency, timing, or distance characteristics, operable to invoke or perform the use cases discussed herein, with use of Examples 1-25, or other subject matter described herein. Another example implementation is networking hardware, acceleration hardware, storage hardware, or computation hardware, with capabilities implemented thereupon, operable in an edge computing system to invoke or perform the use cases discussed herein, with use of Examples 1-25, or other subject matter described herein. Another example implementation is an apparatus of an edge computing system comprising: one or more processors and one or more computer-readable media comprising instructions that, when deployed and executed by the one or more processors, cause the one or more processors to invoke or perform the use cases discussed herein, with use of Examples 1-25, or other subject matter described herein. Another example implementation is one or more computer-readable storage media comprising instructions to cause an electronic device of an edge computing system, upon execution of the instructions by one or more processors of the electronic device, to invoke or perform the use cases discussed herein, with use of Examples 1-25, or other subject matter described herein. Another example implementation is an apparatus of an edge computing system comprising means, logic, modules, or circuitry to invoke or perform the use cases discussed herein, with the use of Examples 1-25, or other subject matter described herein. Although these implementations have been described with reference to specific exemplary aspects, it will be evident that various modifications and changes may be made to these aspects without departing from the broader scope of the present disclosure. Many of the arrangements and processes described herein can be used in combination or parallel implementations to provide greater bandwidth/throughput and to support edge services selections that can be made available to the edge systems being serviced. Accordingly, the specification and drawings are to be regarded in an illustrative rather than a restrictive sense. The accompanying drawings that form a part hereof show, by way of illustration, and not of limitation, specific aspects in which the subject matter may be practiced. The aspects illustrated are described in sufficient detail to enable those skilled in the art to practice the teachings disclosed herein. Other aspects may be utilized and derived therefrom, such that structural and logical substitutions and changes may be made without departing from the scope of this disclosure. This Detailed Description, therefore, is not to be taken in a limiting sense, and the scope of various aspects is defined only by the appended claims, along with the full range of equivalents to which such claims are entitled. Such aspects of the inventive subject matter may be referred to herein, individually and/or collectively, merely for convenience and without intending to voluntarily limit the scope of this application to any single aspect or inventive concept if more than one is disclosed. 
Thus, although specific aspects have been illustrated and described herein, it should be appreciated that any arrangement calculated to achieve the same purpose may be substituted for the specific aspects shown. This disclosure is intended to cover any adaptations or variations of various aspects. Combinations of the above aspects and other aspects not specifically described herein will be apparent to those of skill in the art upon reviewing the above description. | 127,460 |
11943208 | DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS In FIGS. 1-3, reference numeral 1 refers to an Internet of Things (IoT) device. As illustrated schematically in FIG. 1, the IoT device 1 comprises a processor 10 and an electronic communication circuit 12 connected to the processor 10. The IoT device 1 further comprises a data store 11, e.g. memory, having stored therein securely a unique identifier 111 of the IoT device 1 and a cryptographic key 112. In an embodiment, the processor 10 and/or the data store 11 are implemented as a hardware secure element. The IoT device 1 is a mobile, portable device, implemented as a self-contained unit arranged in a housing, e.g. a dongle, a key fob, a tag, or the like, or a device arranged in another mobile or stationary physical device, e.g. a machine, a vehicle, a home appliance, and other items embedded with electronics, software, sensors, and/or actuators. The IoT device 1 is powered by a battery included in the IoT device 1, by a power supply of the physical device having integrated the IoT device 1 therein, or by the mobile communication device 2 through induction. The electronic communication circuit 12 is configured for close range communication R with a stationary or mobile communication device 2, within the close range of the Internet of Things device 1. The electronic communication circuit 12 comprises an RFID (Radio Frequency Identification), Bluetooth, or BLE (Bluetooth Low Energy) circuit, or another circuit for wireless data communication over a close range, such as up to a few meters, e.g. up to one to five meters, up to ten meters, or even up to a hundred meters. The mobile communication device 2 is implemented as a mobile radio telephone (cellular phone), a laptop computer, a tablet computer, a smart watch, or another mobile electronic device configured for wireless communication via close range R and via a communication network 5, specifically via a mobile radio network. For that purpose, the mobile communication device 2 comprises a communication circuit 22 for close range communication, compatible with the communication circuit 12 of the IoT device 1, and a communication module 21 for communicating via a mobile radio network, as illustrated in FIG. 1. The communication network 5 comprises a mobile radio network such as a GSM (Global System for Mobile Communication) network, a UMTS (Universal Mobile Telephone System) network, and/or another cellular radio communication network. As illustrated in FIG. 1, the mobile communication device 2 further comprises a processor 20 and a data store 23 having stored therein program code, configured to control the processor 20, and a secured data package, as described later in more detail. The communication network 5 further comprises the Internet and LAN (Local Area Network) and WLAN (Wireless LAN) for accessing the Internet. In FIGS. 1-3, reference numeral 3 refers to a computer system, which is arranged remotely from the IoT device 1 and the mobile communication device 2. The remote computer system 3 comprises one or more computers with one or more processors 30 and a communication module 31 configured to communicate via the communication network 5 with the mobile communication device 2 and a partner back-end system 4 associated with the remote computer system 3. The remote computer system 3 is configured as a trusted service provider for the partner back-end system 4 and associated IoT devices 1. The remote computer system 3 further comprises a data store 32 for storing IoT device data and "communication relay addresses" 321 assigned to IoT devices 1.
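The relationships just described can be summarized, purely for illustration, in the following small Python data model. All class and field names are assumptions chosen for readability; they are not defined interfaces of the disclosure.

```python
# Purely illustrative data model; names and fields are assumptions.
from dataclasses import dataclass
from typing import Dict, Optional

@dataclass
class IoTDeviceSecrets:            # contents of data store 11 in the IoT device 1
    unique_identifier: str         # unique identifier 111, stored securely
    cryptographic_key: bytes       # cryptographic key 112, stored securely

@dataclass
class DeviceRecord:                # entry in data store 32 of the remote computer system 3
    cryptographic_key: bytes
    partner_backend_id: Optional[str] = None
    relay_address: Optional[str] = None   # current "communication relay address" 321, e.g. an MSISDN
    status: str = "initialized"

device_registry: Dict[str, DeviceRecord] = {}   # keyed by the unique identifier 111
```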
The partner back-end system 4 comprises one or more computers with one or more processors 40 and a communication module 41 configured to communicate via the communication network 5 with the remote computer system 3 associated with the back-end system 4. In an embodiment, the computer system 3 and the partner back-end system 4 are configured in one common computer centre, e.g. as a cloud-based computing centre. In the following paragraphs, described with reference to FIGS. 2 and 3 are possible sequences of steps performed by the IoT device 1, the mobile communication device 2, the computer system 3, and the partner back-end system 4, or their processors 10, 20, 30, 40, respectively, for exchanging data securely via the communication network 5 between the IoT device 1, the mobile communication device 2, the remote computer system 3, and/or the partner back-end system 4, respectively, for communicating between the IoT device 1 and the remote computer system 3 and/or the associated partner back-end system 4. FIG. 2 illustrates an exemplary sequence of steps for an initial setup of the IoT device 1 and for registering the IoT device 1 via the mobile communication device 2 with the remote computer system 3 and the partner back-end system 4 associated with the remote computer system 3. In step S1, the IoT device 1 is initialized. Specifically, in step S11, an initial setup of the IoT device 1 is performed. Performing the initial setup includes storing securely in the data store 11 of the IoT device 1 a unique identifier 111 of the IoT device 1 and a cryptographic key 112 for the IoT device 1. In step S12, the unique identifier 111 of the IoT device 1 and the cryptographic key 112 of the IoT device 1 are recorded (stored) in the remote computer system 3. For example, the unique identifier 111 of the IoT device 1 and the cryptographic key 112 of the IoT device 1 are generated and stored in the data store 11 of the IoT device 1 in a secured environment, e.g. in facilities with secured access and strict access control, and the unique identifier 111 and the cryptographic key 112 of the IoT device 1 are stored in the data store 32 of the remote computer system 3 either through a secured communication line or in situ inside the secured environment. In step S2, the IoT device 1 is customized for the partner back-end system 4. Specifically, via the close range communication interface, established by the close range communication circuits 12, 22 of the IoT device 1 and the mobile communication device 2, the IoT device 1 is customized by transferring partner customization data from the mobile communication device 2 to the IoT device 1, e.g. by a partner customization app installed and executing on the processor 20 of the mobile communication device 2. The partner customization data is transferred in a secured data container. The secured data container comprises the partner customization data in encrypted form and is part of the partner customization app, as provided by the partner back-end system 4 or a dedicated app server, for example. The processor 10 of the IoT device 1 receives and decrypts the secured data package from the mobile communication device 2, using the cryptographic key 112 stored in the IoT device 1. The processor 10 of the IoT device 1 extracts from the decrypted data package the partner customization data. In an embodiment, the partner customization data includes a replacement cryptographic key and/or an identifier of the partner back-end system 4.
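A minimal, non-limiting sketch of the customization step S2 (receiving the secured data container and extracting the partner customization data) might look as follows. The use of AES-GCM, the 12-byte nonce prefix, the JSON encoding, and the field names are illustrative assumptions; the disclosure does not fix a container format.

```python
# Illustrative sketch of step S2; container format and AES-GCM are assumptions.
import json
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def open_secured_container(container: bytes, device_key: bytes) -> dict:
    """Decrypt the secured data container using the key stored in the IoT device."""
    nonce, ciphertext = container[:12], container[12:]
    plaintext = AESGCM(device_key).decrypt(nonce, ciphertext, None)
    return json.loads(plaintext)                      # partner customization data

# The partner customization app would embed a container such as this one:
device_key = AESGCM.generate_key(bit_length=256)      # cryptographic key 112 from step S1
payload = json.dumps({"replacement_key": os.urandom(32).hex(),
                      "partner_backend_id": "partner-4"}).encode()
nonce = os.urandom(12)
container = nonce + AESGCM(device_key).encrypt(nonce, payload, None)

customization = open_secured_container(container, device_key)
```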
The processor 10 of the IoT device 1 replaces the cryptographic key 112 stored securely in the IoT device 1 with the replacement cryptographic key extracted from the secured data package. The processor 10 of the IoT device 1 further stores in the IoT device 1 the identifier of the partner back-end system 4 extracted from the secured data package. In FIG. 2, the steps of block S3 relate to a registration process for registering the IoT device 1 with the remote computer system 3 and the associated partner back-end system 4. In step S31, the processor 10 of the IoT device 1 generates a registration request. Depending on the configuration and/or application scenario, generation of the registration request is initiated in response to a command from the mobile communication device 2, as generated by the partner customization app, or to actuation by a user of an operating element of the IoT device 1, e.g. a switch or button which is connected to the processor 10 of the IoT device 1. The processor 10 of the IoT device 1 includes in the registration request the identifier of the partner back-end system 4 and a verification message. The verification message is generated by the processor 10 of the IoT device 1 encrypting the unique device identifier 111 using the cryptographic key 112 or its replacement key, respectively. The processor 10 of the IoT device 1 transmits the registration request in an upload data message via the electronic communication circuit 12 to the mobile communication device 2. In step S32, the mobile communication device 2, or its processor 20 controlled by the partner customization app, respectively, receives from the user the user customization information, such as a user name and access control information, e.g. a user password and/or a partner access code. In step S33, the IoT device 1 and its user are verified by the remote computer system 3. The mobile communication device 2, or its processor 20 controlled by the partner customization app, respectively, forwards the upload data message, received from the IoT device 1, and the user customization information via the communication network 5, specifically via the mobile radio network, to the remote computer system 3. The remote computer system 3, or its processor 30, respectively, extracts the verification message from the registration request and verifies the device identifier of the IoT device 1 by decrypting the verification message, using the cryptographic key 112 initially stored in the IoT device 1, or its replacement key, provided securely by the partner back-end system 4. The device identifier received in the uploaded verification message is verified by comparing it to the unique identifiers initially recorded for the IoT device 1 in the remote computer system 3. Upon positive verification, the registration process is continued. In step S34, the remote computer system 3, or its processor 30, respectively, stores, assigned to the verified device identifier of the IoT device 1, the received identifier of the partner back-end system, the user customization information, including the user name, and the address of the mobile communication device 2 which forwarded the upload data message to the remote computer system 3, e.g. a Mobile Subscriber Integrated Services Digital Network Number (MSISDN). The address of the mobile communication device 2 is stored as the current "communication relay address" 321 for forwarding download data messages to the IoT device 1. The status of the IoT device 1 is set to "registration pending, awaiting approval from partner back-end system".
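The verification message of steps S31 and S33 can be sketched as follows. Using AES-GCM and a 12-byte nonce prefix is an assumption made for illustration; the disclosure only requires that the unique device identifier 111 be encrypted with the cryptographic key 112 (or its replacement) and checked against the identifiers and keys recorded in step S12.

```python
# Illustrative sketch of the verification message used in steps S31 and S33.
import os
from typing import Dict, Optional
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def build_verification_message(device_id: str, device_key: bytes) -> bytes:
    """IoT device side (step S31): encrypt the unique device identifier 111."""
    nonce = os.urandom(12)
    return nonce + AESGCM(device_key).encrypt(nonce, device_id.encode(), None)

def verify_device(message: bytes, recorded: Dict[str, bytes]) -> Optional[str]:
    """Remote computer system side (step S33): decrypt and compare against the
    identifiers and keys recorded in step S12; return the identifier on success."""
    nonce, ciphertext = message[:12], message[12:]
    for device_id, key in recorded.items():
        try:
            if AESGCM(key).decrypt(nonce, ciphertext, None).decode() == device_id:
                return device_id            # positive verification
        except Exception:
            continue                        # key does not match this record
    return None
```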
Furthermore, the remote computer system 3, or its processor 30, respectively, transmits to the partner back-end system 4 (as defined by the received identifier of the partner back-end system) a registration message which includes the verified unique identifier of the IoT device 1 and the user customization information, including the user name and access control information, e.g. a user password and/or a partner access code. The partner back-end system 4 verifies the access control information and, upon positive verification, approves and registers the IoT device 1 by storing the unique device identifier assigned to the user name. In step S35, registration of the IoT device 1 is completed by the partner back-end system 4 transmitting a registration confirmation message to the remote computer system 3. At the remote computer system 3, the status of the IoT device 1 is set to "registration pending, awaiting acknowledgement from IoT device", and the remote computer system 3 transmits a download data message with a confirmation to the address of the mobile communication device 2 stored as the current "communication relay address" 321 for the IoT device 1, for forwarding to the IoT device 1. If the "communication relay address" 321 changes before the status of the IoT device is set to "registered", because the IoT device 1 contacts the remote computer system 3 via another mobile communication device 2, the remote computer system 3 retransmits the download data message with the confirmation to the "new" address of the mobile communication device 2. Once the mobile communication device 2 and the IoT device 1 are within communication range, the mobile communication device 2 transmits the download message with the confirmation via the communication circuit 22 to the IoT device 1. In an embodiment, the download data message with the confirmation includes user and/or partner customization information, e.g. the user name, included by the remote computer system 3 and/or the partner back-end system 4, which is stored in the IoT device 1 by the processor 10 of the IoT device 1. The processor 10 of the IoT device 1 transmits an upload data message with an acknowledgement via the communication circuit 12 to the mobile communication device 2 for forwarding to the remote computer system 3. The mobile communication device 2 transmits the upload data message with the acknowledgement to the remote computer system 3. The remote computer system 3 sets the status of the IoT device 1 to "registered". FIG. 3 illustrates exemplary sequences of steps for transmitting a download data message from the partner back-end system 4 associated with the remote computer system 3 via the mobile communication device 2 to the IoT device 1, as shown in block S4, and for transmitting an upload data message from the IoT device 1 via the mobile communication device 2 to the partner back-end system 4 associated with the remote computer system 3, as shown in block S5. Transmitting a download data message from the partner back-end system 4 and/or the remote computer system 3 via the mobile communication device 2 to the IoT device 1 makes it possible to transfer to the IoT device 1 executable code, e.g. for a firmware update of the IoT device 1, and instructions to be executed by the IoT device 1, e.g. a reset instruction, a firmware update instruction, or an access rights update instruction. The download data messages are end-to-end encrypted between either the partner back-end system 4 or the remote computer system 3 and the IoT device 1.
Correspondingly, the upload data messages are end-to-end encrypted between the IoT device 1 and either the remote computer system 3 or the partner back-end system 4. The mobile communication device 2 is merely used to relay the secured data messages between the IoT device 1 and the remote computer system 3. A user may use different mobile communication devices 2 as an intermediary communication relay device, which will be recorded in the remote computer system 3 with its address as the current "communication relay address" 321, whenever upload data messages from the IoT device 1 are received at the remote computer system 3. Download data messages which have not yet been confirmed by the IoT device 1 will be retransmitted by the remote computer system 3 whenever there is a change in the mobile communication devices 2 or the "communication relay address" 321, respectively. To prevent the IoT device 1 from processing outdated download data messages received from a mobile communication device 2, a version indicator is included in the download data message by the remote computer system 3 (or the partner back-end system 4), enabling the IoT device 1 to detect an outdated download data message by comparing the version indicator of a newly received download data message to the stored version indicator of a previously received download data message. The version indicator includes a sequential number and/or date and time information (time stamp). In step S41, the partner back-end system 4, or its processor 40, respectively, generates and transmits to the remote computer system 3 a download data message for transmission to the IoT device 1, identified by its unique identifier 111. The remote computer system 3 includes a version indicator in the download data message, encrypts the download data message with the cryptographic key 112 or replacement key stored in the IoT device 1, and stores the download data message assigned to the IoT device 1 for possible retransmissions at a later point in time. In step S42, the remote computer system 3 transmits the encrypted data message via the communication network 5 to the current "communication relay address" 321 assigned to the IoT device 1 for forwarding to the IoT device 1 by the respective mobile communication device 2. In step S43, the mobile communication device 2 receives and stores the download data message for forwarding to the IoT device 1 (once it is within communication range). In step S44, when the mobile communication device 2 is within the communication range of the IoT device 1 (or vice versa), the mobile communication device 2 transmits the download data message via the communication circuit 22 to the IoT device 1. In step S45, the processor 10 of the IoT device 1 processes the received download data message. The processor 10 decrypts the download data message, using the cryptographic key 112 stored in the IoT device 1, and checks whether the version indicator of the received download data message indicates a newer version of the download data message than previously received and stored in the IoT device 1. If the download data message is outdated, it is ignored and optionally an error message is transmitted to the mobile communication device 2. Otherwise, if the download data message is newer than previously received messages, the processor 10 continues processing the download data message and stores the version indicator of the received download data message.
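The outdated-message check described for step S45 reduces to a small comparison, as sketched below. Treating the indicator as a (sequence number, time stamp) pair compared in that order is an assumption; the disclosure allows a sequential number and/or date and time information.

```python
# Illustrative sketch of the version-indicator check of step S45.
from dataclasses import dataclass
from typing import Optional

@dataclass(order=True)
class VersionIndicator:
    sequence: int            # sequential number
    timestamp: float         # time stamp, e.g. seconds since the epoch

last_seen: Optional[VersionIndicator] = None

def accept_download(version: VersionIndicator) -> bool:
    """Return True only if the download data message is newer than any message
    previously processed by the IoT device; otherwise it is ignored."""
    global last_seen
    if last_seen is not None and version <= last_seen:
        return False         # outdated message: ignore, optionally report an error
    last_seen = version      # store the version indicator of the accepted message
    return True
```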
Depending on its contents, the processor 10 executes instructions, such as executing a firmware update by installing and executing received executable code, executing a reset of the IoT device 1, replacing an encryption key, and/or performing an update of access rights with received access rights and/or access rights time information. For confirming receipt and processing of the download data message, the IoT device 1 transmits an upload data message with a confirmation (acknowledgement) message to the partner back-end system 4. In step S51, the processor 10 of the IoT device 1 generates an upload data message for the partner back-end system 4 and transmits it via the communication circuit 12 to the mobile communication device 2 within communication range of the IoT device 1. Depending on the scenario and/or application, the upload data message is encrypted by the processor 10, using the cryptographic key 112 stored in the IoT device 1, and may include a confirmation (acknowledgement) message, a status report message related to the status of the IoT device 1 (e.g. low battery), and/or a data payload with data values associated with the IoT device 1, such as sensor data, operational data of an appliance or machine connected to the IoT device 1, etc. In step S52, the mobile communication device 2, or its processor 20, respectively, transmits the upload data message from the IoT device 1 via the communication network 5 to the remote computer system 3 for forwarding to the partner back-end system 4. In step S53, the remote computer system 3 stores the address of the mobile communication device 2 which forwarded the upload data message as the current "communication relay address" 321. In step S54, the remote computer system 3 transmits the upload data message to the partner back-end system 4. In step S55, the partner back-end system 4 processes the upload data message from the IoT device 1. If encrypted, the upload data message is decrypted by the partner back-end system 4. It should be noted that, while in the description the computer program code has been associated with specific functional modules and the sequence of the steps has been presented in a specific order, one skilled in the art will understand that the computer program code may be structured differently and that the order of at least some of the steps could be altered, without deviating from the scope of the disclosure. | 19,633 |
11943209 | DETAILED DESCRIPTION OF ILLUSTRATIVE EMBODIMENTS The embodiments of the invention can be implemented in numerous ways, such as a process, an apparatus, a system, a computer readable medium such as a computer readable storage medium, or as a computer network wherein program instructions are sent over various types of, e.g., optical or electronic communication links. In this description, these implementations, or any other form that the invention may take, may be referred to as techniques. In general, the order of the steps of disclosed processes may be altered within the scope of the invention. A detailed description of one or more embodiments of the invention is provided below along with accompanying figures that illustrate the principles of the invention. The invention is described in connection with such embodiments, but the invention is not limited to any embodiment. The scope of the invention is limited only by the claims, and the invention encompasses numerous alternatives, modifications and equivalents. Numerous specific details are set forth in the following description in order to provide a thorough understanding of the invention. These details are provided for the purpose of example, and the invention may be practiced according to the claims without some or all of these specific details. For the purpose of clarity, technical material that is known in the technical fields related to the invention has not been described in detail so that the invention is not unnecessarily obscured. In the following detailed description, numerous specific details are set forth in order to provide a thorough understanding of the invention. However, it will be understood by those skilled in the art that the present disclosure may be practiced without these specific details. In other instances, well-known methods, procedures, and components, modules, units and/or circuits have not been described in detail so as not to obscure the invention. Although embodiments of the invention are not limited in this regard, discussions utilizing terms such as, for example, "processing," "calculating," "determining," "establishing", "creating", "checking", or the like, may refer to operation(s) and/or process(es) of a computer, a computing platform, a computing system, or other electronic computing device, that manipulates and/or transforms data represented as physical (e.g., electronic) quantities within the computer's registers and/or memories into other data similarly represented as physical quantities within the computer's registers and/or memories or other information non-transitory storage medium that may store instructions to perform operations and/or processes. As shown in FIG. 1A, pseudocode may be used to establish an encryption process including the establishment of certain attributes such as a key length. As shown in FIG. 1B, a rekey flow takes place after the IKE and IPSec SA have been established between the first network device (e.g., the requester, sometimes called the initiator) and the second network device (e.g., the responder). In the embodiments of the present disclosure, the first network device or the second network device may comprise any one from the group of a computer, a mobile device (e.g., a mobile phone), a remote health monitoring device, a gateway, a router, a server, an access point (AP) device, embedded sensors, and home or personal devices with an IP stack. In particular, one of the network devices may be a device with a limited power source, processing capability or bandwidth capability.
In such cases, the invention is particularly advantageous since the size and/or number of payloads can be reduced overall, which saves processing power, time and hence power consumption. Also in the embodiments of the present disclosure, the other of the network devices may be a security gateway/ePDG or CRAN/Cloud based device, which can support many multiples of IKE/IPSec tunnels. In such cases, by reducing the data transmitted, bandwidth and packet fragmentation and consequently processing requirements can be reduced. The IKE SA and IPSec SA are established (operation 110) after the initial exchanges, which include the IKE_SA_INIT and IKE_AUTH exchanges (operations 102-108), are performed. These initial exchanges normally comprise four messages, though in some scenarios that number can grow. The first pair of messages (IKE_SA_INIT) negotiates cryptographic suites, exchanges nonces, and performs a Diffie-Hellman (DH) exchange. The second pair of messages (IKE_AUTH) authenticates the previous messages, exchanges identities and certificates, negotiates cryptographic suites and Traffic Selectors (TS), and establishes the first Child SA. Parts of these messages are encrypted and integrity protected with keys established through the IKE_SA_INIT exchange. Messages following the initial exchange are cryptographically protected using the cryptographic algorithms and keys negotiated in the IKE_SA_INIT exchange. For each of the IKE SA and IPSec SA, the secret keys are usually used for a limited time, which may be called the lifetime of the SA. When the lifetime is about to expire, the SA will be rekeyed through creating a new SA and deleting the old SA. For detailed procedures of the initial exchanges, the skilled person is referred to RFC 7296, which is incorporated by reference for all purposes as if fully set forth herein except where contrary to the explicit disclosure herein. After the IKE SA and IPSec SA are established (operation 110), if any one of the lifetimes of the IKE SA and IPSec SA is about to expire, the first network device and the second network device perform the SA rekey procedure. It should be understood that either the first network device or the second network device can initiate the SA rekeying request, as each of the devices may maintain a lifetime policy governing the lifetime for the SA on its own side. In another embodiment, both sides may have the same lifetime for the SA they share. The first network device or the second network device may periodically trigger the SA rekeying. In other scenarios, the device may detect the future expiration of each SA associated with the device, and then initiate the SA rekeying process if the device detects that the secret keys of the IKE SA or the IPSec SA(s) are about to expire. As the name suggests, SA rekey refers to creating an SA with a new key and with the same SA attributes as the current SA, unless the policy is changed. Changing a policy may occur, for example, when an end user changes the cryptographic policy (which may also be called the cryptographic suite(s)) and/or the lifetime of the cryptographic suite(s), or changes the flow policy (which may also be called flow information) in the case of the child SA rekey. The flow information may comprise, e.g., source and destination IP address, port range or port number. Rekeying an SA comprises recreating the key for the SA, i.e., the key is changed; the other elements of the established SA may or may not change. Take the first network device initiating an IKE SA rekeying as an example.
The first network device sends a rekey request to the second network device for rekeying the IKE SA. In one embodiment, a CREATE_CHILD_SA exchange is used to rekey the IKE SA. This exchange comprises a request/response pair. It may be initiated by either end (e.g., the first network device or the second network device) of the IKE SA after the initial exchanges are completed. The end initiating the SA rekey can be regarded as the initiator, and the peer side of the initiator is regarded as the responder. According to an embodiment, rekeying an IKE SA may include at least the following operations: Operation 112, the initiator sends a CREATE_CHILD_SA request for rekeying an IKE SA to the responder. The CREATE_CHILD_SA request includes a HDR, which is an IKE header (not a payload), and payloads. The payloads comprise an SA payload, a Ni payload, and a KEi payload. The SA payload comprises one or more SA offers, for example, one or more cryptographic suites which the initiator supports. The cryptographic suite may comprise authentication algorithms, encryption algorithms, and/or a DH group for key calculation. Furthermore, the SA payload may also comprise a new initiator Security Parameter Index (SPI), which is supplied in the SPI field of the SA payload. The new initiator SPI in the SA payload will be taken by the responder, and a new key is calculated at the responder side. The Ni payload includes a nonce, and the KEi payload includes a Diffie-Hellman value. In this disclosure, the term "cryptographic suite" refers to a set of algorithms used to protect an SA. In the situation of the IPSec SA or IKE SA, the cryptographic suite may also be called an IPSec proposal or IKE proposal in certain circumstances. The new initiator SPI may be used to identify the new IKE SA after rekey at the initiator side. Operation 114, once the responder receives a request to rekey an IKE SA, the responder sends a CREATE_CHILD_SA response for rekeying an IKE SA to the initiator. The CREATE_CHILD_SA response includes a HDR and payloads, and the payloads comprise an SA payload, an Nr payload, and a KEr payload. The SA payload includes a new responder SPI in the SPI field of the SA payload. The SA payload also includes the responder-selected cryptographic suite from the offer of the initiator. The Nr payload includes a nonce, and the KEr payload includes a Diffie-Hellman value if the selected cryptographic suite includes that group. The new responder SPI may be used to identify the new IKE SA after rekey at the responder side. As such, the combination of the new responder SPI and the new initiator SPI is used to identify the new IKE SA. In addition, the new responder SPI in the SA payload is taken by the initiator, and a new key is calculated at the initiator side. Operation 116, a new IKE SA is established. The new IKE SA is used to protect IKE control packets. The new IKE SA, i.e., the rekeyed IKE SA, inherits all the child SAs of the IKE SA, which means the existing child SAs that are linked with the old IKE SA will be moved to the new IKE SA after the rekey is successful. After the IKE CREATE_CHILD_SA exchange as shown in operations 112 and 114, a new IKE SA is created with new keys and the selected cryptographic suite and is identified with the new initiator SPI and the new responder SPI, which are exchanged in the SA payload as discussed above. Operation 118, the initiator sends an old IKE SA delete request to the responder to delete the old SA. The old IKE SA delete request may include a HDR and a D payload.
The D payload may include information, such as a protocol identifier (ID) indicating the SA to be deleted. The deletion of the SA may be implemented through the INFORMATIONAL exchange between the initiator and the responder according to RFC 7296. As an example, FIG. 1C illustrates an example of the structure of the delete request according to RFC 7296. Operation 120, upon receiving the old IKE SA delete request, the responder sends an old IKE SA delete response to the initiator. The old IKE SA delete response may include a HDR and a D payload. The D payload may include information, such as a protocol ID indicating the SA to be deleted. The deletion of the SA may be implemented through the INFORMATIONAL exchange between the initiator and the responder according to RFC 7296. FIG. 1C provides an example of the structure of SA payloads. The first SA payload contains no attribute, the second SA payload contains an attribute, and each of the two SA payloads contains one proposal. The third SA payload contains two proposals, each of which contains an attribute. Payload size will increase proportionally for multiple cryptographic suites in rekeying the IKE and/or IPSec SA. This rekey is triggered periodically. Each rekey consumes bandwidth and power to process these payloads. In reference to FIG. 2, an embodiment is illustrated in which the first network device initiates a child SA or IPSec SA rekeying. As with the IKE SA rekey, the CREATE_CHILD_SA exchange may also be used to rekey the child SA. According to the embodiment, rekeying a child SA may include at least the following operations: Operations 202-210 may refer to operations 102-110. Operation 212, the initiator sends a CREATE_CHILD_SA request for rekeying a child SA to the responder. The CREATE_CHILD_SA request includes a HDR, which is an IKE header, and payloads. The payloads comprise an N(REKEY_SA) payload, an SA payload, a Ni payload, TSi and TSr payloads, and an optional KEi payload. The REKEY_SA payload, which is defined in RFC 7296, is used to notify the other peer that a rekey is happening for the existing child SA. The SPI of the existing child SA which is being rekeyed is added in the SPI field of the REKEY_SA payload, and the responder can identify the SA using the included SPI. Furthermore, a Protocol ID field of the REKEY_SA payload is set to match the protocol of the SA to be rekeyed, for example, 3 for ESP and 2 for AH. The SA payload comprises one or more SA offers, for example, one or more cryptographic suites which the initiator supports. The SA payload may also comprise a new initiator SPI, which is supplied in the SPI field of the SA payload. The new initiator SPI may be used as an inbound SPI in the initiator for the new IPSec SA after rekey and used as an outbound SPI in the responder for the new IPSec SA after rekey. The Ni payload includes a nonce, and the optional KEi payload includes a Diffie-Hellman value. The proposed Traffic Selectors for the proposed Child SA are included in the TSi and TSr payloads. The Traffic Selectors comprise flow information associated with the initiator to be rekeyed which is used by the initiator for traffic communication, such as an address range (IPv4 or IPv6), a port range, and an IP protocol ID. Assuming that the proposal is acceptable to the responder, the responder sends identical TS payloads back. In another case, the responder is allowed to choose a subset of the traffic proposed by the initiator.
This could happen, for example, when the flow configurations of the two ends are being updated but only one end has received the new information. Since the two ends may be configured by different end users, such as network administrators, the incompatibility may persist for an extended period even in the absence of errors. When the responder chooses a subset of the traffic proposed by the initiator, it narrows the Traffic Selectors to some subset of the initiator's proposal (provided the set does not become the null set). Operation 214, once the responder receives a request to rekey a child SA, the responder sends a CREATE_CHILD_SA response for rekeying a child SA to the initiator. The CREATE_CHILD_SA response includes a HDR and payloads, and the payloads comprise an SA payload, an Nr payload, TSi and TSr payloads, and an optional KEr payload. The SA payload includes a new responder SPI in the SPI field of the SA payload. The new responder SPI may be used as an inbound SPI in the responder for the new IPSec SA after rekey and used as an outbound SPI in the initiator for the new IPSec SA after rekey. The SA payload also includes the responder-selected cryptographic suite from the offer of the initiator. The Nr payload includes a nonce, and the KEr payload includes a Diffie-Hellman value if the selected cryptographic suite includes that group. As discussed above, the responder may send identical TS payloads back to the initiator, or may choose a subset of the traffic proposed by the initiator to send back to the initiator. In one embodiment, the responder performs the narrowing as follows: If the responder's policy does not allow it to accept any part of the proposed Traffic Selectors, it responds with a TS_UNACCEPTABLE Notify message. If the responder's policy allows the entire set of traffic covered by TSi and TSr, no narrowing is necessary, and the responder can return the same TSi and TSr values. If the responder's policy allows it to accept the first selector of TSi and TSr, then the responder will narrow the Traffic Selectors to a subset that includes the initiator's first choices. If the responder's policy does not allow it to accept the first selector of TSi and TSr, the responder narrows to an acceptable subset of TSi and TSr. Operation 216, a new IPSec SA is created. After the new IPSec SA is established, the new IPSec SA, i.e., the rekeyed IPSec SA, is added to the IKE SA with which the IPSec SA to be rekeyed is associated, which means there will be a link from the IKE SA to its corresponding child SA. So the new child SA created after the rekey is added to the IKE SA. After the IKE CREATE_CHILD_SA exchange as shown in operations 212 and 214, a new IPSec SA is created with new keys and the selected cryptographic suite and is identified with the new initiator SPI and the new responder SPI, which are exchanged in the SA payload as discussed above. Operation 218, the initiator sends an old child SA delete request to the responder to delete the old SA. The old child SA delete request may include a HDR and a D payload.
As can be seen from the above prior art methods of FIGS. 1 and 2, at the time of rekeying the IKE SA, the exchange between the initiator and the responder includes the SA payload containing a single or multiple cryptographic suites, even though there is no change in the cryptographic policy (e.g., the cryptographic suite) associated with the SA being rekeyed. In other words, even though neither the initiator nor the responder changes the cryptographic suite(s) associated therewith, the rekeying exchange between the initiator and the responder still includes the SA payload containing a single or multiple cryptographic suites. For the IPSec SA rekey, at the time of rekeying, the exchange between the initiator and the responder includes the SA payload containing a single or multiple cryptographic suites along with the TSi and TSr payloads, even though there is no change in the cryptographic policy associated with the SA being rekeyed and no change in the flow policy. As the SA rekey is triggered periodically, it consumes bandwidth and power to process these payloads. The problem becomes more serious in the case of multiple cryptographic suites, as the payload size will increase proportionally for multiple cryptographic suites in both the IKE SA and the IPSec SA. Decreasing the size of IKEv2 messages is highly desirable, e.g., for Internet of Things (IoT) devices utilizing lower power consumption technology. For some such devices the power consumption for transmitting extra bits over the network is prohibitively high. Many such devices are battery powered without an ability to recharge or to replace the battery, which must serve for the life cycle of the device (often several years). For this reason, the task of reducing the power consumption for such devices is very important. Furthermore, large UDP messages may also cause fragmentation at the IP level, which may interact badly with Network Address Translators (NATs). In particular, some NATs drop IP fragments that do not contain TCP/UDP port numbers (non-initial IP fragments). Most IoT devices will have a single set of suites, or they do not prefer to change the selected suites at the time of rekey. In one example, in the case of rekeying IKE SAs with the CREATE_CHILD_SA exchange, the minimum size of the SA payload (with a single cryptographic suite) is 52 bytes, whereas in the embodiments of the invention these payloads are replaced with the Notify payload N(NEW_SPI) to convey the SPI, which is 16 bytes in size, so 36 bytes are saved. In another example, in the case of rekeying Child SAs with the CREATE_CHILD_SA exchange, the minimum size of the SA payload is 40 bytes and each TS payload is 24 bytes (2*24=48 bytes), so the total size is 88 bytes, whereas in the embodiments of the invention these payloads are replaced with the Notify payload N(NEW_SPI) to convey the SPI, which is 12 bytes in size, so in total 76 bytes are saved. The disclosure provides a lightweight rekey solution to address the problem mentioned above. In this solution, when there is no change in the cryptographic policy, the exchange between the initiator and the responder carries no SA payload in either the IKE rekey or the IPSec SA rekey procedure. Instead, the SPI is transferred, such as in a new notification type payload called herein the "NEW_SPI notification" (which may be used in place of the existing SA payload), or in a new payload called herein the "Lightweight SA payload", or in any other payload able to carry the SPI. The NEW_SPI notification uses fewer bits than the existing SA payload. As an additional saving of transmitted bits, when there is no change in the flow information, the exchange between the initiator and the responder also carries no TS payload in the IPSec SA rekey.
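The byte counts quoted in the two examples above can be reproduced with a short calculation; the per-payload minimum sizes below are simply the figures stated in this disclosure, not values derived from the wire format:

    # Reproduces the savings quoted above for a single cryptographic suite.
    IKE_SA_PAYLOAD_MIN = 52      # minimum SA payload in an IKE SA rekey message
    IKE_NEW_SPI_NOTIFY = 16      # NEW_SPI notify carrying the 8-byte IKE SPI

    CHILD_SA_PAYLOAD_MIN = 40    # minimum SA payload in a Child SA rekey message
    TS_PAYLOAD_MIN = 24          # one Traffic Selector payload (TSi or TSr)
    CHILD_NEW_SPI_NOTIFY = 12    # NEW_SPI notify carrying the 4-byte IPSec SPI

    ike_saved = IKE_SA_PAYLOAD_MIN - IKE_NEW_SPI_NOTIFY
    child_saved = CHILD_SA_PAYLOAD_MIN + 2 * TS_PAYLOAD_MIN - CHILD_NEW_SPI_NOTIFY

    print(f"IKE SA rekey:   {ike_saved} bytes saved per message")    # 36
    print(f"Child SA rekey: {child_saved} bytes saved per message")  # 76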
As an example, the "Lightweight SA payload" format may only contain the SA payload header and the proposal payload. Thus, compared to conventional payloads, such as those sent in SAi1 and SAr1, the payload is trimmed. To implement the lightweight rekey approach, the two ends may exchange their respective capabilities of performing the lightweight rekey method. This exchange may be performed before the rekey process, for example during the initial exchanges for setting up the IKE or IPSec SA as discussed above. According to an embodiment of the disclosure, the exchange of the capability of the lightweight rekey may comprise the following operations between two peers: The initiator sends a notification to the second network device to indicate that the initiator supports the lightweight rekey, for example supports a rekey optional payload. The responder sends a response to indicate that the responder also supports the lightweight rekey, for example supports a rekey optional payload. Through this initial negotiation, each end knows whether its peer supports the lightweight rekey approach. FIGS. 3A-5 illustrate three different ways to negotiate the support of the lightweight rekey between the initiator and the responder. All three ways use the initial exchange process to implement the lightweight rekey negotiation. As mentioned above, the present disclosure is not limited to using the initial exchange process to implement the lightweight rekey negotiation; other ways may be envisioned by the person skilled in the art after reading the disclosure. In the embodiment as illustrated in FIG. 3A, the notification of supporting lightweight rekey is carried in a notification payload in an INIT request, e.g., the IKE_SA_INIT request message, sent by the initiator to the responder, and accordingly a confirmation of supporting lightweight rekey is carried in a notification payload in the INIT response, e.g., the IKE_SA_INIT response message, sent from the responder to the initiator. As an example, FIG. 3B illustrates the structure of the notification payload in which the notify message type is IKEV2_REKEY_OPTIONAL_PAYLOAD_SUPPORTED, which is a new "Notify Message Type" as compared to the conventional notify payload. The new Notify Message Type, such as IKEV2_REKEY_OPTIONAL_PAYLOAD_SUPPORTED, is contained in the notification payload and indicates that the initiator or the responder supports the lightweight rekey. In the embodiment as illustrated in FIG. 4, the notification of supporting lightweight rekey is carried in a notification payload in an AUTH request, e.g., the IKE_SA_AUTH request message, sent by the initiator to the responder, and accordingly a notification of supporting lightweight rekey is carried in a notification payload in an AUTH response, e.g., the IKE_SA_AUTH response message, sent from the responder to the initiator. The type of the notification payload may be an IKEV2_REKEY_OPTIONAL_PAYLOAD_SUPPORTED payload. In the embodiment as illustrated in FIG. 5, the notification of supporting lightweight rekey is carried in a notification payload in an INIT request, e.g., the IKE_SA_INIT request message, sent by the initiator to the responder, and accordingly a notification of supporting lightweight rekey is carried in a notification payload in an AUTH response, e.g., the IKE_SA_AUTH response message, sent from the responder to the initiator. The type of the notification payload may be an IKEV2_REKEY_OPTIONAL_PAYLOAD_SUPPORTED payload. It should be noted that the lightweight rekey capability negotiation may not have been performed before the rekey process, e.g., during the IKE INIT or AUTH exchanges, which means that the two parties have not agreed on it. In that case, neither the initiator nor the responder should be allowed to send the NEW_SPI or Lightweight SA payload, and if either does, the other party can drop it and treat it as an error.
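As an illustrative sketch only, the notification payload used for this capability negotiation may be encoded as a standard RFC 7296 Notify payload with no SPI and no notification data; the numeric value assumed below for IKEV2_REKEY_OPTIONAL_PAYLOAD_SUPPORTED is a hypothetical private-use placeholder, since no code point is assigned in the text:

    import struct

    # Placeholder value for the new status notification; a real deployment would
    # need an assigned code point, so a private-use status type is assumed here.
    IKEV2_REKEY_OPTIONAL_PAYLOAD_SUPPORTED = 40962

    def build_status_notify(next_payload=0,
                            msg_type=IKEV2_REKEY_OPTIONAL_PAYLOAD_SUPPORTED):
        """Encode an RFC 7296 Notify payload with no SPI and no notification data."""
        protocol_id = 0     # not related to a specific SA
        spi_size = 0        # no SPI field
        payload_length = 8  # fixed Notify header only
        return struct.pack("!BBHBBH",
                           next_payload,    # Next Payload
                           0,               # Critical bit clear, reserved bits zero
                           payload_length,  # Payload Length
                           protocol_id,     # Protocol ID
                           spi_size,        # SPI Size
                           msg_type)        # Notify Message Type

    print(build_status_notify().hex())  # 8 bytes on the wire in this sketch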
FIG. 6 illustrates a flowchart of rekeying an SA using the lightweight rekey approach. After the IKE and IPSec SA(s) between the initiator and a responder are established, if the lifetime of any one of the IKE and IPSec SA(s) is close to expiration according to the SA lifetime policy on each end, the initiator initiates the rekey process, which may include the following operations: Operation 602, the initiator determines whether there is a change in a cryptographic suite associated with the initiator. In this disclosure, a cryptographic suite associated with the initiator means a cryptographic suite that is supported by the initiator and used for a certain SA established by the initiator, for example the rekeyed SA (i.e., the new SA) in the SA rekey situation. After the SA associated with a cryptographic suite is established, if the lifetime of the SA is close to expiration, the initiator wants to rekey the SA by creating a new SA and deleting the SA to be rekeyed (which may also be called the old SA), with or without changing the cryptographic suite used for the old SA (which may also be called the cryptographic suite associated with the old SA). In other words, no change in the cryptographic suite associated with the initiator means that the cryptographic suite which the old SA used may still be used for the rekeyed SA (i.e., the new SA). A change in the cryptographic suite associated with the initiator means that the cryptographic suite with which the old SA is associated is changed to another cryptographic suite which is supported by the initiator and used for the new SA (i.e., associated with the new SA). The following provides some situations for a change of the cryptographic suite associated with the initiator. One situation is that the cryptographic suite supported by the initiator is changed. For example, the initiator currently only supports a first cryptographic suite (e.g., a weak cryptographic suite). After the old SA is established, the cryptographic suite supported by the initiator is changed to a second cryptographic suite (a strong cryptographic suite) for some reason (e.g., the initiator takes a more important role with a higher security requirement) by the network administrator, who configures the initiator via the user interface. After the cryptographic suite is changed, if the initiator wants to rekey the old SA, the initiator needs to change the cryptographic suite for the new SA, since the initiator now only supports the strong cryptographic suite. Another situation is that the cryptographic suite supported by the initiator is not changed. For example, the initiator currently supports two or more cryptographic suites.
When performing the rekeying, the initiator wants to use a new cryptographic suite for the new SA rather than the cryptographic suite associated with the old SA for some reason, for example because the security requirement of the SA is raised, which calls for a stronger cryptographic suite for the new SA. In this situation, the rekey request the initiator sends to the responder needs to carry the second cryptographic suite for the new SA, since the first cryptographic suite associated with the old SA is no longer associated with the new SA. The detailed implementation is disclosed in the description of FIGS. 7A-12. Further, another situation is that the initiator only supports a first cryptographic suite, for example the weak cryptographic suite, and the initiator wants to change the first cryptographic suite to a second cryptographic suite when rekeying the old SA (for example, change to a strong cryptographic suite) for some reason. In this situation, the initiator needs to be reconfigured, since the initiator currently only supports the first cryptographic suite. The network administrator may choose to reconfigure the initiator to support the second cryptographic suite, or to support both the first cryptographic suite and the second cryptographic suite. The configuration is stored in the initiator or in some other database or device. For example, a correspondence between the initiator and the cryptographic suite(s) supported by the initiator may be stored in the initiator or in some other place. The configuration process at the responder side is similar to the above. After the reconfiguration, the initiator can select the supported second cryptographic suite and put the second cryptographic suite in the rekey request for the SA rekeying exchanges. The following provides some situations in which there is no change in the cryptographic suite associated with the initiator. One situation is that the initiator only supports a first cryptographic suite, for example the weak cryptographic suite, and the initiator keeps the first cryptographic suite unchanged when rekeying the old SA (i.e., the first cryptographic suite is still used for the rekeyed SA). In this situation, the rekey request the initiator sends to the responder does not carry a cryptographic suite for the new SA, since the first cryptographic suite associated with the old SA is still associated with the new SA. Another situation is that the initiator supports two or more cryptographic suites, for example, a first cryptographic suite (e.g., the weak cryptographic suite) and a second cryptographic suite (e.g., a strong cryptographic suite), and the initiator does not want to change the first cryptographic suite to the second cryptographic suite when rekeying the old SA. In this situation, the rekey request the initiator sends to the responder need not carry the cryptographic suite for the new SA, since the first cryptographic suite associated with the old SA is still used for the new SA. For the detailed implementation, please refer to the description of FIGS. 7A-12. Operation 604, the initiator sends a first rekey request to the responder for rekeying the SA when there is no change in the cryptographic suite associated with the initiator. As discussed above, no change in a cryptographic suite associated with the initiator means that the cryptographic suite which the old SA used may still be used for the rekeyed SA (i.e., the new SA). There is no need for the initiator to carry a cryptographic suite for the new SA.
The first rekey request carries a first SPI and does not carry a cryptographic suite associated with the initiator, since the initiator does not change the cryptographic suite after the SA is established, or the initiator once changed the cryptographic suite but has changed it back to the original one, so that the current cryptographic suite is the same as the cryptographic suite used when the SA was established. In the IKE rekey scenario, the first SPI is a new initiator SPI. The IKE SA is identified by the pair of SPIs on both ends, so when one end initiates the rekey process, it includes the new SPI in the rekey request, and this new SPI is used as the initiator SPI for the new IKE SA after rekey. When the responder replies to the IKE rekey, the responder adds its new SPI to the rekey response, and this SPI should be used as the responder SPI for the new IKE SA after rekey. In the IPSec SA rekey scenario, the first SPI is used as an inbound SPI in the initiator for the new IPSec SA after rekey and is used as an outbound SPI in the responder for the new IPSec SA after rekey. Furthermore, the first rekey request also carries a SPI in the N(REKEY_SA) payload to identify the SA to be rekeyed, as discussed above. The detailed implementation of this operation may refer to the following IKE rekey process as illustrated in FIGS. 7-9. Operation 606, the responder sends a first rekey response to the initiator. The first rekey response carries a second SPI and does not carry a cryptographic suite associated with the second network device when there is no change in a cryptographic suite associated with the responder. As discussed above, in the IKE rekey scenario, the second SPI is a new responder SPI. When the responder replies to the IKE rekey, the responder adds the new responder SPI to the rekey response, and this SPI should be used as the responder SPI for the new IKE SA after rekey. In the IPSec SA rekey scenario, the second SPI is used as an inbound SPI in the responder for the new IPSec SA after rekey and is used as an outbound SPI in the initiator for the new IPSec SA after rekey. The detailed implementation of this operation may refer to the following IKE rekey process as illustrated in FIGS. 8-10. Operation 608, the initiator rekeys the SA according to the first SPI and the second SPI when there is no change in the cryptographic suite associated with the first network device and in the cryptographic suite associated with the second network device. The rekeying comprises creating a new SA and deleting the old SA which is to be rekeyed. Specifically, the initiator rekeys the SA by using the initiator SPI, the responder SPI, and the unchanged cryptographic suite which is used for the old SA to obtain the new key for the new SA. The detailed implementation may refer to the following IKE rekey process as illustrated in FIGS. 7-9. FIG. 7A is a flowchart that illustrates rekeying the IKE SA according to an embodiment of the present disclosure. In this embodiment, the initiator does not change the cryptographic suite which is used when the SA to be rekeyed is established, for example, a strong cryptographic suite with a higher cryptographic algorithm set. In this embodiment, there are two scenarios: the first scenario is that the responder also does not change the cryptographic suite, and the second scenario is that the responder wants to change the cryptographic suite.
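Before walking through these scenarios in detail, the initiator-side decision of operations 602-608 can be summarized in a minimal sketch; the dictionary message model and the helper names are hypothetical and do not represent the wire format:

    import os

    def build_rekey_request(old_sa, current_suite, peer_supports_lightweight):
        # Operations 602/604: decide whether the rekey request can omit the SA payload.
        new_spi = os.urandom(8 if old_sa["type"] == "IKE" else 4)
        request = {"N(REKEY_SA)": old_sa["spi"],   # identifies the SA being rekeyed
                   "Ni": os.urandom(32)}           # fresh nonce
        suite_changed = current_suite != old_sa["suite"]
        if peer_supports_lightweight and not suite_changed:
            # Lightweight rekey: only the new SPI is carried, no cryptographic suite.
            request["N(NEW_SPI)"] = new_spi
        else:
            # Conventional rekey: the SA payload carries the proposed suite(s) and SPI.
            request["SA"] = {"spi": new_spi, "proposals": [current_suite]}
        return new_spi, request

    # Example: no change in the suite, so no SA payload is carried.
    old_sa = {"type": "IPsec", "spi": b"\x00\x01\x02\x03", "suite": "ENCR_AES_GCM_16"}
    print(build_rekey_request(old_sa, "ENCR_AES_GCM_16",
                              peer_supports_lightweight=True)[1].keys())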
According to the first scenario of this embodiment, the IKE rekey process comprises the following operations. Operation 702, the initiator sends an INIT request to the responder. The INIT request carries a notification payload in addition to the normal HDR and payloads as mentioned above. In this embodiment, the notification payload is an IKEV2_REKEY_OPTIONAL_PAYLOAD_SUPPORTED payload, which indicates that the initiator supports the lightweight rekey. Operation 704, the responder sends an INIT response to the initiator. The INIT response carries a notification payload in addition to the normal HDR and payloads as mentioned above. In this embodiment, the notification payload is an IKEV2_REKEY_OPTIONAL_PAYLOAD_SUPPORTED payload, which indicates that the responder supports the lightweight rekey approach. According to the initial exchange as discussed, after Operations 706 and 708 are performed, the IKE SA and IPSec SA are established between the initiator and the responder. It should be understood that other ways of negotiating the capability of lightweight rekey as described in FIGS. 3-5 may also be applied to this embodiment. Operation 710, the IKE SA and IPSec SA are established and the initiator periodically triggers the IKE rekey. The initiator may periodically detect whether the lifetime of the IKE SA is about to expire. As discussed above, the initiator may maintain the lifetime policy on its side. The lifetime policy may set different lifetimes for different SAs. When the initiator detects that the lifetime of the IKE SA is about to expire, operation 712 is performed. Operation 712, the initiator sends a CREATE_CHILD_SA request for rekeying an IKE SA to the responder. The CREATE_CHILD_SA request comprises a HDR, a Ni payload and a KEi payload. Instead of carrying a SA payload which carries one or more cryptographic suites, the CREATE_CHILD_SA request carries a notification payload, for example a NEW_SPI notification. The NEW_SPI payload may have a SPI field which carries the new initiator SPI and does not carry a cryptographic suite. FIG. 7B shows an example of the rekey request packet format. Alternatively, a Lightweight SA payload or another payload may be used to carry the new initiator SPI. The NEW_SPI is a newly defined notification payload which carries the initiator SPI identifying the new IKE SA after rekey. As an example, FIG. 7C illustrates a NEW_SPI for IKE. As an example, FIG. 7D illustrates a Lightweight SA payload for IKE. In the example, the Lightweight SA payload contains a single proposal payload and no transforms or attributes. According to this structure, the value carried in the "SPI" field in the case of IKE rekey is used as the initiator/responder SPI for the IKE SA. Operation 714, the responder sends a CREATE_CHILD_SA response for rekeying an IKE SA to the initiator. The CREATE_CHILD_SA response comprises a HDR, an Nr payload and a KEr payload. Instead of carrying a SA payload which carries one or more cryptographic suites, the CREATE_CHILD_SA response carries a notification payload, for example a NEW_SPI notification. The NEW_SPI payload may have a SPI field which carries the new responder SPI and does not carry a cryptographic suite. Operation 716, a new IKE SA is created. Specifically, as discussed above, in this scenario, neither the initiator nor the responder changes its cryptographic suite. The new IKE SA is created according to the initiator SPI, the responder SPI, and the original cryptographic suite used for the old SA being rekeyed. The new IKE SA is used to protect the IKE control packets. The new IKE SA, i.e., the rekeyed IKE SA, inherits all the child SAs of the old IKE SA, which means the existing child SAs that are linked with the old IKE SA are moved to the new IKE SA after the rekey is successful.
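The NEW_SPI notification carried in operations 712 and 714 may be sketched, purely as an assumption-laden illustration, as an RFC 7296 Notify payload whose SPI field holds the new 8-byte IKE SPI; the notify message type value used below is a hypothetical private-use placeholder:

    import os
    import struct

    NEW_SPI = 40963  # hypothetical private-use notify type for NEW_SPI

    def build_new_spi_notify(spi, protocol_id=1, next_payload=0):
        """Notify payload whose SPI field carries the new SPI (8 bytes for IKE)."""
        header = struct.pack("!BBHBBH",
                             next_payload,
                             0,                    # critical bit clear
                             8 + len(spi),         # payload length
                             protocol_id,          # 1 = IKE
                             len(spi),             # SPI size
                             NEW_SPI)
        return header + spi

    new_initiator_spi = os.urandom(8)
    payload = build_new_spi_notify(new_initiator_spi)
    print(len(payload))  # 16 bytes here, consistent with the sizes discussed above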
Operation 718, the initiator sends an old IKE SA delete request to the responder to delete the old IKE SA. The old IKE SA delete request may include a HDR and a D payload. The D payload may include information, such as a protocol identifier (ID) indicating the old SA to be deleted. The detailed implementation may refer to operation 216. Operation 720, upon receiving the old IKE SA delete request, the responder sends an old IKE SA delete response to the initiator. The old IKE SA delete response may include a HDR and a D payload. The D payload may include information, such as a protocol ID indicating the SA to be deleted. The detailed implementation may refer to operation 218. By using the lightweight rekey approach as described in this embodiment, the NEW_SPI notification payload may save, for example, a minimum of 36 bytes, and the number of bytes saved increases proportionally in the multiple cryptographic suite case; this also reduces the complex validation and the processing of the SA payload in the IKE rekey exchanges. For example, the size of the NEW_SPI notification payload may be in the range of 12-16 bytes. FIG. 8 illustrates the second scenario, in which the responder changes the cryptographic suite that the responder currently uses (e.g., the cryptographic suite used when the SA to be rekeyed is established). In this situation, the responder may send a no proposal chosen notification payload in the CREATE_CHILD_SA response at operation 814, instead of a NEW_SPI notification or a SA payload carrying the changed cryptographic suite. The no proposal chosen notification payload may be a NO_PROPOSAL_CHOSEN payload to indicate that there is no matching cryptographic suite present in the cryptographic suite carried in the CREATE_CHILD_SA request. Then, after the initiator receives the indication, the initiator will resend the CREATE_CHILD_SA request, but this time carrying the SA payload with the updated cryptographic suite to renegotiate the cryptographic suite with the responder until the initiator and the responder achieve an agreement on the cryptographic suite. The process of the renegotiation of the cryptographic suite may refer to any one of the scenarios described in the disclosure. One example of the renegotiation is described below. In this example, operations 802-812 are the same as the above operations 702-712. But at operation 814, the CREATE_CHILD_SA response carries a HDR and a notification payload. The notification payload may be a NO_PROPOSAL_CHOSEN type payload to indicate that there is no matching cryptographic suite present in the cryptographic suite carried in the CREATE_CHILD_SA request. As an example, FIG. 7E illustrates the structure of the notification payload in which the notify message type is NO_PROPOSAL_CHOSEN. Thus, as for other new notifications disclosed herein, a conventional notification structure is used but with a new notification type. Then, after the initiator receives the indication, the initiator will resend the CREATE_CHILD_SA request which carries the SA payload with the updated cryptographic suite to renegotiate the cryptographic suite with the responder until the initiator and the responder achieve an agreement on the cryptographic suite.
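The responder-side choice between the first scenario (FIG. 7A, replying with a NEW_SPI notification) and the second scenario (FIG. 8, replying with NO_PROPOSAL_CHOSEN) may be sketched as follows; the dictionary message model is a simplification and not the actual packet handling:

    import os

    def handle_lightweight_ike_rekey(request, responder_suite, suite_of_old_sa):
        """Responder side of FIG. 7A / FIG. 8 for a rekey request without an SA payload."""
        if "N(NEW_SPI)" not in request:
            return None  # conventional rekey request; handled elsewhere
        if responder_suite == suite_of_old_sa:
            # First scenario: nothing changed, reply with only the new responder SPI.
            return {"N(NEW_SPI)": os.urandom(8), "Nr": os.urandom(32)}
        # Second scenario: the responder's suite changed, so the lightweight request
        # cannot be accepted; the initiator must retry with a full SA payload.
        return {"N(NO_PROPOSAL_CHOSEN)": b""}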
The process of the renegotiation of the cryptographic suite may refer to any one of the scenarios described in the disclosure. One example of the renegotiation is described below. At operation 816, the initiator resends a CREATE_CHILD_SA request to the responder. The second CREATE_CHILD_SA request comprises a HDR, an N(REKEY_SA) payload, a SA payload, a Ni payload, and an optional KEi payload. The content of the Ni payload and the KEi payload may refer to operation 212 and operation 712. The SA payload carries a SPI field which carries the new initiator SPI and one or more cryptographic suites the initiator proposes. At operation 818, the responder sends a CREATE_CHILD_SA response to the initiator. The CREATE_CHILD_SA response carries a HDR, an SA payload, an Nr payload, and a KEr payload. Operation 820, a new IKE SA is created. The implementation of this operation may refer to operation 716 as discussed above. Operation 820, the initiator sends an old IKE SA delete request to the responder to delete the old IKE SA. The old IKE SA delete request may include a HDR and a D payload. The D payload may include information, such as a protocol ID indicating the SA to be deleted. The detailed implementation may refer to operations 216 and 718 as discussed above. Operation 822, upon receiving the old IKE SA delete request, the responder sends an old IKE SA delete response to the initiator. The old IKE SA delete response may include a HDR and a D payload. The D payload may include information, such as a protocol ID indicating the SA to be deleted. The detailed implementation may refer to operations 218 and 720 as discussed above. FIG. 9 is a flowchart of rekeying the IKE SA according to an embodiment of the present disclosure. In this embodiment, the initiator changes the cryptographic suite. There are three scenarios in this embodiment. The first scenario is that the responder does not change the cryptographic suite. In this scenario, for example, the initiator may have two supported suites (e.g., a weak cryptographic suite and a strong cryptographic suite) and would like to change from the weak cryptographic suite to the strong cryptographic suite, but the responder supports only the weak cryptographic suite and does not want to change the cryptographic suite. In this case, the responder may not negotiate the cryptographic suite with the initiator and may use the lightweight rekey approach. For example, the responder may use the NEW_SPI notification, the Lightweight SA payload, or any other payload to carry the responder SPI. The second scenario is that the responder changes the cryptographic suite. In this scenario, for example, the initiator has two supported suites (e.g., a weak cryptographic suite and a strong cryptographic suite) and would like to change from the weak cryptographic suite to the strong cryptographic suite, and the responder also wants to change the weak cryptographic suite (which is used when the SA to be rekeyed is first established) to the strong cryptographic suite. In this case, the responder may carry the SA payload with the strong cryptographic suite in the rekey response. The third scenario is that the cryptographic suite carried in the rekey request does not match the cryptographic suite supported by the responder. In this scenario, for example, the initiator has only one supported suite (e.g., the strong cryptographic suite) and wants to change from the weak cryptographic suite to the strong cryptographic suite, while the responder only supports the weak cryptographic suite.
In this case, the responder sends a notification payload to indicate that there is no matching cryptographic suite present in the cryptographic suite carried in the rekey request sent by the initiator. Then, after the initiator receives the indication, the initiator will resend the rekey request, but this time carrying the SA payload with the updated cryptographic suite to renegotiate the cryptographic suite with the responder until the initiator and the responder achieve an agreement on the cryptographic suite. The process of the renegotiation of the cryptographic suite may refer to any one of the scenarios described in the disclosure. According to this embodiment, the IKE rekey process comprises the following operations. The detailed implementation of operations 902-910 may refer to the operations 702-710 as described in FIG. 7. Operation 912, the initiator sends a CREATE_CHILD_SA request for rekeying an IKE SA to the responder. The CREATE_CHILD_SA request comprises a HDR, a SA payload, a Ni payload and a KEi payload. The information carried in each payload may refer to operation 112. In this case, for example, the SA payload carries two cryptographic suites, for example, a weak cryptographic suite and a strong cryptographic suite. Operation 914, the responder sends a CREATE_CHILD_SA response for rekeying an IKE SA to the initiator. The CREATE_CHILD_SA response comprises a HDR, an Nr payload and a KEr payload. Instead of carrying a SA payload which carries cryptographic suites, the CREATE_CHILD_SA response carries a notification payload, for example a NEW_SPI notification. The NEW_SPI payload has a SPI field which carries the new responder SPI and does not carry a cryptographic suite. It should be understood that in operation 914, the responder may, as an optional way, send the CREATE_CHILD_SA response carrying the SA payload which carries the cryptographic suite it currently uses (i.e., the one used when the SA was established) and the new responder SPI, even though the responder does not want to change the cryptographic suite it currently uses. According to the second scenario of this embodiment, the responder wants to change the cryptographic suite it currently uses, for example, changing from the weak cryptographic suite to the strong cryptographic suite. In this case, the CREATE_CHILD_SA response carries the SA payload with the changed cryptographic suite, i.e., the strong cryptographic suite, which is supported by the initiator. The detailed implementation of operations 916-918 may refer to the operations 716-718 as described in FIG. 8, and operations 216-218 described in FIG. 2. According to the third scenario of this embodiment, for example, the initiator only supports one cryptographic suite (e.g., the strong cryptographic suite) and changes the cryptographic suite, e.g., from the weak cryptographic suite to the strong suite, while the responder supports only the weak cryptographic suite. In this scenario, the cryptographic suite in the responder does not match the cryptographic suite proposed by the initiator. After determining that there is no matching cryptographic suite between the responder and the initiator, the responder may send a no proposal chosen notification payload in the CREATE_CHILD_SA response, instead of the SA payload, to the initiator at operation 914. The no proposal chosen notification payload may be a NO_PROPOSAL_CHOSEN payload to indicate that there is no matching cryptographic suite present in the cryptographic suite carried in the CREATE_CHILD_SA request.
Then, after the initiator receives the indication, the initiator will resend the CREATE_CHILD_SA request which carries the SA payload with the updated cryptographic suite to renegotiate the cryptographic suite with the responder until the initiator and the responder achieve an agreement on the cryptographic suite. The process of the renegotiation of the cryptographic suite may refer to any one of the scenarios described in the disclosure. By introducing the NEW_SPI notification payload in the IKE rekey, it may save, for example, a minimum of 36 bytes for each and every IKE rekey and thus reduce the complex validation and the processing of the SA payload. FIG. 10 is a flow chart of rekeying the IPSec SA according to an embodiment of the present disclosure. In this embodiment, the initiator does not change the cryptographic suite which is used when the SA to be rekeyed is established, for example, a strong cryptographic suite with a higher cryptographic algorithm set. Regarding the flow information, the initiator may change the flow information or not change the flow information. When the initiator does not change the flow information, the rekey request does not need to carry the TS payload; in contrast, when the initiator changes the flow information, the rekey request carries the flow information to reflect the change, such as an address range, a port range, and an IP protocol ID. In this embodiment, there are two scenarios: the first scenario is that the responder also does not change the cryptographic suite, and the second scenario is that the responder changes the cryptographic suite. It should be understood that in the case of simultaneous rekeys of the IKE SA and the IPSec SA, the operations (i.e., negotiation of IKEV2_REKEY_OPTIONAL_PAYLOAD_SUPPORTED in the IKE_SA_INIT or AUTH request message, and/or the "NEW_SPI notification", the "Lightweight SA payload", or any other payload containing the SPI to rekey the SA) remain similar to the embodiments described in this disclosure. In the case of simultaneous rekeying, the rekey processes are preferably carried out independently without combining the messages. According to the first scenario of this embodiment, the IPSec rekey process comprises the following operations. Operations 1002-1010, the detailed implementation of operations 1002-1010 may refer to the operations 802-810 as described in FIG. 8. Operation 1012, the initiator sends a CREATE_CHILD_SA request for rekeying a child SA to the responder. The CREATE_CHILD_SA request comprises a HDR, an N(REKEY_SA) payload, a NEW_SPI payload, a Ni payload and an optional KEi payload, and no TS payload when the initiator does not change the flow information. The N(REKEY_SA) payload carries a SPI to indicate which child SA is to be rekeyed. The content of the Ni payload and the KEi payload may refer to operation 212. Instead of carrying a SA payload which carries one or more cryptographic suites, the CREATE_CHILD_SA request carries a notification payload, for example the NEW_SPI notification payload. The NEW_SPI payload has a SPI field which carries the new initiator SPI and does not carry a cryptographic suite. Alternatively, a Lightweight SA payload or another payload may be used to carry the new initiator SPI. The NEW_SPI is a newly defined NOTIFY payload which carries the initiator SPI identifying the new IPSec SA after rekey. FIG. 10B shows an example of the rekey request packet format showing the new field, e.g., the NEW_SPI notify payload, in the IPSec rekey request.
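A minimal sketch of assembling the rekey request of operation 1012, assuming a dictionary message model and hypothetical field names (the actual packet layout is the one shown in FIG. 10B):

    import os

    def build_ipsec_rekey_request(old_child_spi, flow_changed, new_flow=None,
                                  send_ke=False):
        """Operation 1012: IPSec rekey request with no SA payload; TS payloads optional."""
        request = {
            "N(REKEY_SA)": old_child_spi,      # SPI of the child SA being rekeyed
            "N(NEW_SPI)": os.urandom(4),       # new inbound SPI for the rekeyed IPSec SA
            "Ni": os.urandom(32),              # fresh nonce
        }
        if send_ke:
            request["KEi"] = "diffie-hellman value"   # optional, as in RFC 7296
        if flow_changed:
            # Only when the flow information changed are TSi/TSr carried.
            request["TSi"], request["TSr"] = new_flow
        return request

    # Example: unchanged flow information, so neither SA nor TS payloads are carried.
    print(build_ipsec_rekey_request(b"\x00\x00\x00\x2a", flow_changed=False).keys())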
As an example, FIG. 10C illustrates a NEW_SPI for AH. As another example, FIG. 10D illustrates two kinds of Lightweight SA payloads, which are newly defined payloads for the IPSec SA. The uppermost Lightweight SA payload contains a single proposal payload and no transforms or attributes, and the lower Lightweight SA payload contains SA bundling, as shown in FIG. 10D. When creating an IPSec tunnel, there are two ways: one is using either AH or ESP, and the other is using both AH and ESP, which is also called SA bundling. SA bundling is used when the user wants to use both AH and ESP at the same time. The value carried in the "SPI" field in the case of IPSec rekey is used as the inbound/outbound SPI for the AH/ESP SA. Alternatively, when the initiator changes the flow configuration, for example changes the flow information such as the IP address range for the new SA after rekey, the CREATE_CHILD_SA request may further carry the TS payload in addition to the payloads mentioned above. The content of the TS payload may refer to operation 212. Operation 1014, the responder sends a CREATE_CHILD_SA response for rekeying a child SA to the initiator. The CREATE_CHILD_SA response comprises a HDR, a Nr payload, and an optional KEr payload, which is contingent on whether the CREATE_CHILD_SA request carries the KEi payload. Instead of carrying a SA payload which carries one or more cryptographic suites, the CREATE_CHILD_SA response carries a notification payload, for example a NEW_SPI notification. The NEW_SPI payload may have a SPI field which carries the new responder SPI and does not carry a cryptographic suite. As the CREATE_CHILD_SA request does not carry the TS payload carrying the flow information associated with the initiator, the responder may have the following two selections with respect to whether to carry the TS payload in the CREATE_CHILD_SA response. The first selection is that, when there is no change in the flow information associated with the responder and the currently used flow information is still used for the new IPSec SA after rekey, the CREATE_CHILD_SA response does not carry a TS payload associated with the responder. The second selection is that, when there is a change in the flow information associated with the responder, the CREATE_CHILD_SA response carries a TS unacceptable notification. The TS unacceptable notification may be a TS_UNACCEPTABLE notification payload which is used to indicate that there is no matching flow information between the initiator and the responder. Then, after the initiator receives the TS unacceptable notification, the initiator will resend the CREATE_CHILD_SA request, which this time carries the TS payload with the updated flow information, to renegotiate the flow information with the responder until the initiator and the responder achieve an agreement on the flow information. The process of the renegotiation of the flow information may refer to any one of the cryptographic suite or flow information negotiation scenarios described in the disclosure. It should be noted that when either the cryptographic suite or the flow information negotiation fails in the first negotiation turn, the two ends may renegotiate both the cryptographic suite and the flow information in a second negotiation turn. In that case, the resent CREATE_CHILD_SA request may be sent with or without the SA payload to renegotiate the cryptographic suite together with the flow information negotiation with the responder.
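Purely as an illustration of the trimmed format described for FIG. 10D, a Lightweight SA payload may be modeled as the SA payload header followed by one or more proposal substructures that carry an SPI but zero transforms, following the RFC 7296 layout; the octet counts printed below are those of this sketch and are not asserted to match the figures exactly:

    import os
    import struct

    AH, ESP = 2, 3

    def proposal(num, protocol_id, spi, last=True):
        """One RFC 7296 proposal substructure with zero transforms (lightweight form)."""
        return struct.pack("!BBHBBBB",
                           0 if last else 2,       # 0 = last proposal, 2 = more follow
                           0,                      # reserved
                           8 + len(spi),           # proposal length
                           num,                    # proposal number
                           protocol_id,            # 2 = AH, 3 = ESP
                           len(spi),               # SPI size (4 bytes for AH/ESP)
                           0) + spi                # zero transforms, then the SPI

    def lightweight_sa_payload(proposals, next_payload=0):
        body = b"".join(proposals)
        return struct.pack("!BBH", next_payload, 0, 4 + len(body)) + body

    # Single-protocol case (ESP only) and the SA-bundling case (AH + ESP).
    esp_only = lightweight_sa_payload([proposal(1, ESP, os.urandom(4))])
    bundled = lightweight_sa_payload([proposal(1, AH, os.urandom(4), last=False),
                                      proposal(2, ESP, os.urandom(4))])
    print(len(esp_only), len(bundled))  # 16 and 28 bytes in this sketch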
For the details of the cryptographic suite negotiation, reference may be made to any one of the IPSec SA cryptographic suite scenarios described in the disclosure. It should be understood that the cryptographic suite and flow information negotiations may be performed independently. In that case, the two ends may record the agreement on the cryptographic suite already achieved in the first negotiation turn, and the resent CREATE_CHILD_SA request in the second negotiation turn may not renegotiate the cryptographic suite, regardless of whether it carries the SA payload. When the initiator changes the flow information and accordingly the CREATE_CHILD_SA request carries the TS payload with the changed flow information associated with the initiator, the responder may have the following three selections with respect to whether to carry the TS payload in the CREATE_CHILD_SA response. The first selection is that, when there is no change in the flow information associated with the responder and the currently used flow information associated with the responder is present in the flow information associated with the initiator carried in the CREATE_CHILD_SA request, the CREATE_CHILD_SA response does not carry the TS payload. The second selection is that, when there is a change in the flow information associated with the responder and the responder selects flow information, from the flow information associated with the initiator carried in the CREATE_CHILD_SA request, as the flow information associated with the responder, the CREATE_CHILD_SA response carries the TS payload. As discussed in operation 212, the responder may choose a subset of the traffic proposed by the initiator, i.e., narrow the Traffic Selectors to some subset of the initiator's proposal. The responder may also send identical TS payloads to the initiator. The third selection is that, when there is no matching flow information between the flow information proposed by the initiator in the CREATE_CHILD_SA request and the flow information the responder supports, the CREATE_CHILD_SA response carries a TS unacceptable notification to indicate that there is no matching flow information. Then, after the initiator receives the notification, the initiator will resend the CREATE_CHILD_SA request, which carries the TS payload with the updated flow information, to renegotiate the flow information with the responder until the initiator and the responder achieve an agreement on the flow information. The process of the renegotiation of the flow information may refer to any one of the cryptographic suite or flow information negotiation scenarios described in the disclosure, if needed. As discussed above, the renegotiation process may take the cryptographic suite and flow information negotiations together or may only perform the failed flow information negotiation of the first negotiation turn without renegotiating the already achieved agreement on the cryptographic suite. The detailed implementation of operations 1016-1020 may refer to the operations 716-720 as described in FIG. 8, and operations 216-220 described in FIG. 2. In the IPSec rekey, this NEW_SPI notification payload may save, for example, a minimum of 76 bytes, and the number of bytes saved increases proportionally in the case of multiple cryptographic suites and/or TS payloads. This reduces the complex validation and the processing of the SA, TSi and TSr payloads.
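The responder's selections regarding the TS payload, as described above, can be condensed into a small decision sketch in which flows are modeled as plain sets rather than RFC 7296 Traffic Selector structures; the function and field names are hypothetical:

    def choose_ts_response(request_ts, responder_flows, responder_flow_changed):
        """Responder's TS handling in the CREATE_CHILD_SA response (flows as simple sets)."""
        if request_ts is None:
            # Initiator omitted TSi/TSr: accept if the responder's flow is unchanged,
            # otherwise force a renegotiation that carries TS payloads.
            return {} if not responder_flow_changed else {"N(TS_UNACCEPTABLE)": b""}
        acceptable = request_ts & responder_flows
        if not acceptable:
            return {"N(TS_UNACCEPTABLE)": b""}        # no matching flow information
        if acceptable == request_ts and not responder_flow_changed:
            return {}                                  # same flows, TS payloads omitted
        return {"TSi/TSr": acceptable}                 # narrowed subset sent back

    # Example: the responder only permits one of the two proposed flows.
    print(choose_ts_response({("10.0.0.0/24", "any", 6), ("10.0.1.0/24", "any", 6)},
                             {("10.0.0.0/24", "any", 6)},
                             responder_flow_changed=False))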
In reference to FIG. 11, it illustrates the second scenario of this embodiment, in which the responder changes the cryptographic suite, for example the cryptographic suite that the responder currently uses (the one used when the SA to be rekeyed was established). In this scenario, operations 1102-1112 are the same as above. But at operation 1114, the CREATE_CHILD_SA response carries a HDR and a no proposal chosen notification payload or a TS unacceptable notification payload. The no proposal chosen notification payload may be a NO_PROPOSAL_CHOSEN payload to indicate that there is no matching cryptographic suite present in the cryptographic suite carried in the CREATE_CHILD_SA request. The TS unacceptable notification payload may be a TS_UNACCEPTABLE notification to indicate that there is no matching flow information present in the flow information carried in the CREATE_CHILD_SA request. Then, after the initiator receives the indication, the initiator will resend the CREATE_CHILD_SA request, which carries the SA payload with the updated cryptographic suite, to renegotiate the cryptographic suite with the responder until the initiator and the responder achieve an agreement on the cryptographic suite. The process of the renegotiation of the cryptographic suite may refer to any one of the scenarios described in the disclosure. One example of the renegotiation is described below. At operation 1116, the initiator resends a CREATE_CHILD_SA request (which may be called a second CREATE_CHILD_SA request) to the responder. The second CREATE_CHILD_SA request comprises a HDR, an N(REKEY_SA) payload, a SA payload, a Ni payload, an optional KEi payload, and an optional TS payload, which depends on whether the initiator changes the flow information. The N(REKEY_SA) payload carries a SPI to indicate which child SA is to be rekeyed. The content of the Ni payload and the KEi payload may refer to operation 212 and operation 1012. The SA payload carries a SPI field which carries the new initiator SPI and one or more cryptographic suites the initiator proposes. At operation 1118, the responder sends a CREATE_CHILD_SA response to the initiator. The CREATE_CHILD_SA response carries a HDR, an N(REKEY_SA) payload, a NEW_SPI notification payload or an SA payload, an Nr payload, an optional KEr payload (which depends on whether the CREATE_CHILD_SA request carries the KEi payload), and an optional TS payload carrying flow information associated with the responder (which depends on whether the responder changes the flow information associated with the responder). It should be understood that in the renegotiation process, the cryptographic suite and flow information negotiations may be performed according to any one of the cryptographic suite and flow information negotiation approaches described in this disclosure. Operation 1120, the implementation of this operation may refer to operation 216 as discussed above. Operation 1122, the initiator sends an old IPSec SA delete request to the responder to delete the old IPSec SA. The old child SA delete request may include a HDR and a D payload. The D payload may include a SPI identifying the SA to be deleted. The detailed implementation may refer to operation 216. Operation 1124, upon receiving the old IPSec SA delete request, the responder sends an old IPSec SA delete response to the initiator. The old child SA delete response may include a HDR and a D payload. The D payload may include the SPI identifying the SA to be deleted.
The detailed implementation may refer to operation 218. The above renegotiation process takes the cryptographic suite and flow information negotiations together: when either the cryptographic suite or the flow information negotiation fails, the renegotiation process is triggered and renegotiates both the cryptographic suite and the flow information. As discussed above, the cryptographic suite negotiation and the flow information negotiation may be performed separately. The following describes the flow information negotiation in the second scenario. When the CREATE_CHILD_SA request at operation 1112 does not carry the TS payload with the flow information associated with the initiator, the responder may have the following two selections with respect to whether to carry the TS payload in the CREATE_CHILD_SA response at operation 1114. The first selection is that, when there is no change in the flow information associated with the responder and the currently used flow information is still used for the new IPSec SA after rekey, the CREATE_CHILD_SA response does not carry a TS payload associated with the responder. The second selection is that, when there is a change in the flow information associated with the responder, the CREATE_CHILD_SA response carries a TS unacceptable notification. The TS unacceptable notification may be a TS_UNACCEPTABLE notification payload which is used to indicate that there is no matching flow information between the initiator and the responder. Then, after the initiator receives the TS unacceptable notification, the initiator will resend the CREATE_CHILD_SA request, which carries the TS payload with the updated flow information, to renegotiate the flow information with the responder until the initiator and the responder achieve an agreement on the flow information. The process of the renegotiation of the flow information may refer to any one of the cryptographic suite or flow information negotiation scenarios described in the disclosure, if needed. It should be noted that when either the cryptographic suite or the flow information negotiation fails in the first negotiation turn, the two ends may renegotiate both the cryptographic suite and the flow information in the second negotiation turn. In that case, the resent CREATE_CHILD_SA request may be sent with or without the SA payload to renegotiate the cryptographic suite together with the flow information negotiation with the responder. For the details of the cryptographic suite negotiation, reference may be made to any one of the IPSec SA cryptographic suite scenarios described in the disclosure, if needed. It should be understood that the cryptographic suite and flow information negotiations may be performed independently. In that case, the two ends may record the agreement on the cryptographic suite already achieved in the first negotiation turn, and the resent CREATE_CHILD_SA request in the second negotiation turn may not renegotiate the cryptographic suite, regardless of whether it carries the SA payload. It should be understood that, for the second rekey request with an updated cryptographic suite or TS, the device can either reuse the SPI sent in the NEW_SPI notify or Lightweight SA payload, or generate a completely new SPI for the second rekey request with the new cryptographic suites. This approach may apply in the renegotiation process as discussed in the disclosure.
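A minimal sketch, under the same assumptions as the earlier message-model sketches, of building the second (renegotiation) rekey request and of the choice between reusing the previously sent SPI and generating a new one:

    import os

    def build_second_rekey_request(first_attempt_spi, updated_suites,
                                   old_child_spi, reuse_spi=True, new_flow=None):
        """Renegotiation after NO_PROPOSAL_CHOSEN / TS_UNACCEPTABLE: a full SA payload
        with the updated suites; the SPI may be reused or regenerated."""
        spi = first_attempt_spi if reuse_spi else os.urandom(len(first_attempt_spi))
        request = {"N(REKEY_SA)": old_child_spi,
                   "SA": {"spi": spi, "proposals": updated_suites},
                   "Ni": os.urandom(32)}
        if new_flow is not None:               # only if the flow information changed
            request["TSi"], request["TSr"] = new_flow
        return request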
When the initiator changes the flow information and accordingly the CREATE_CHILD_SA request carries the TS payload with the changed flow information associated with the initiator, the responder may have the following three selections with respect to whether to carry the TS payload in the CREATE_CHILD_SA response. The first selection is that, when there is no change in the flow information associated with the responder and the currently used flow information associated with the responder is present in the flow information associated with the initiator carried in the CREATE_CHILD_SA request, the CREATE_CHILD_SA response does not carry the TS payload. The second selection is that, when there is a change in the flow information associated with the responder and the responder selects flow information, from the flow information associated with the initiator carried in the CREATE_CHILD_SA request, as the flow information associated with the responder, the CREATE_CHILD_SA response carries the TS payload. As discussed in operation 212, the responder may choose a subset of the traffic proposed by the initiator, i.e., narrow the Traffic Selectors to some subset of the initiator's proposal. The responder may also send identical TS payloads to the initiator. The third selection is that, when there is no matching flow information between the flow information proposed by the initiator in the CREATE_CHILD_SA request and the flow information the responder supports, the CREATE_CHILD_SA response carries a TS unacceptable notification to indicate that there is no matching flow information. Then, after the initiator receives the notification, the initiator will resend the CREATE_CHILD_SA request, which carries the TS payload with the updated flow information, to renegotiate the flow information with the responder until the initiator and the responder achieve an agreement on the flow information. The process of the renegotiation of the flow information may refer to any one of the cryptographic suite or flow information negotiation scenarios described in the disclosure, if needed. As discussed above, the renegotiation process may take the cryptographic suite and flow information negotiations together or may only perform the failed negotiation of the first negotiation turn. FIG. 12 is a flow chart of rekeying the IPSec SA according to an embodiment of the present disclosure. In this embodiment, the initiator changes the cryptographic suite which is used when the SA to be rekeyed is established, for example, changing from a weak cryptographic suite to a stronger one, e.g., a higher cryptographic suite with a higher cryptographic algorithm set. Regarding the flow information, the initiator may change the flow information or may not change the flow information. When the initiator does not change the flow information, the rekey request does not need to carry the TS payload. In contrast, when the initiator changes the flow information, the rekey request carries the flow information to reflect the change, such as any of an address range, a port range, and an IP protocol ID. There are three scenarios in this embodiment. The first scenario is that the responder does not change the cryptographic suite. In this scenario, the responder may not negotiate the cryptographic suite with the initiator and may use the lightweight rekey approach. For example, the responder may use the NEW_SPI notification, the Lightweight SA payload, or any other payload to carry the responder SPI.
The second scenario is that the responder changes the cryptographic suite. In this case, the responder may carry the SA payload with the changed cryptographic suite in the rekey response. The third scenario is that the cryptographic suite(s) carried in the rekey request do not match the cryptographic suite supported by the responder. In this case, the responder carries a notification payload to indicate that there is no matching cryptographic suite present in the cryptographic suite carried in the rekey request sent by the initiator. Then the two ends will renegotiate the cryptographic suite until achieving an agreement on the cryptographic suite. The detailed negotiation of the cryptographic suite may refer to the detailed descriptions corresponding to FIG. 9. Referring back to FIG. 12, it illustrates the flow chart according to this embodiment, which comprises the following operations. The detailed implementation of operations 1202-1210 may refer to the operations 802-810 as described in FIG. 8. Operation 1212, the initiator sends a CREATE_CHILD_SA request for rekeying an IPSec SA to the responder. The CREATE_CHILD_SA request comprises a HDR, an N(REKEY_SA) payload carrying a SPI to indicate which child SA is to be rekeyed, a SA payload carrying one or more cryptographic suites and a new initiator SPI, a Ni payload, an optional KEi payload, and an optional TS payload (which depends on whether the initiator changes the flow information associated with the initiator). The detailed information carried in each payload may refer to operation 112. Operation 1214, the responder sends a CREATE_CHILD_SA response for rekeying an IPSec SA to the initiator. The CREATE_CHILD_SA response comprises a HDR, a NEW_SPI notification payload or an SA payload (which depends on whether the responder changes the cryptographic suite associated with the responder), a Nr payload, an optional KEr payload (which depends on whether the CREATE_CHILD_SA request carries the KEi payload), and an optional TS payload (which depends on whether the responder changes the flow information associated with the responder). The NEW_SPI payload may have a SPI field which carries the new responder SPI and does not carry a cryptographic suite. It should be understood that in operation 1214, the responder may, as an optional way, send the CREATE_CHILD_SA response carrying the SA payload which carries the cryptographic suite it currently uses (i.e., the one used when the SA was established) and the new responder SPI, even though the responder does not change the cryptographic suite it currently uses. According to the second scenario of this embodiment, the responder changes the cryptographic suite it currently uses. In this case, the CREATE_CHILD_SA response carries the SA payload with the changed cryptographic suite, which is also supported by the initiator. The detailed implementation of operations 1216-1218 may refer to the operations 816-818 as described in FIG. 8, and operations 216-218 described in FIG. 2. According to the third scenario of this embodiment, the cryptographic suite in the CREATE_CHILD_SA request does not match the cryptographic suite to which the responder wants to change. In this scenario, the responder may send a no proposal chosen notification payload in the CREATE_CHILD_SA response, instead of the SA payload, to the initiator at operation 1214.
The no proposal chosen notification payload may be a NO_PROPOSAL_CHOSEN payload to indicate that there is no matching cryptographic suite present in the cryptographic suite carried in the CREATE_CHILD_SA request. Then, after the initiator receives the indication, the initiator will resend the CREATE_CHILD_SA request, which carries the SA payload with the updated cryptographic suite, to renegotiate the cryptographic suite with the responder until the initiator and the responder achieve an agreement on the cryptographic suite. The process of the renegotiation of the cryptographic suite may refer to any one of the scenarios described in the disclosure, if needed. The following describes the flow information negotiation in this embodiment. When the CREATE_CHILD_SA request at operation 1212 does not carry the TS payload with the flow information associated with the initiator, the responder may have the following two selections with respect to whether to carry the TS payload in the CREATE_CHILD_SA response at operation 1214. The first selection is that, when there is no change in the flow information associated with the responder and the currently used flow information is still used for the new IPSec SA after rekey, the CREATE_CHILD_SA response does not carry a TS payload associated with the responder. The second selection is that, when there is a change in the flow information associated with the responder, the CREATE_CHILD_SA response carries a TS unacceptable notification. The TS unacceptable notification may be a TS_UNACCEPTABLE notification payload which is used to indicate that there is no matching flow information between the initiator and the responder. Then, after the initiator receives the TS unacceptable notification, the initiator will resend the CREATE_CHILD_SA request, which carries the TS payload with the updated flow information, to renegotiate the flow information with the responder until the initiator and the responder achieve an agreement on the flow information. The process of the renegotiation of the flow information may refer to any one of the cryptographic suite or flow information negotiation approaches described in the disclosure, if needed. It should be noted that when either the cryptographic suite or the flow information negotiation fails in the first negotiation turn, the two ends may renegotiate both the cryptographic suite and the flow information in the second negotiation turn. In that case, the resent CREATE_CHILD_SA request may be sent with or without the SA payload to renegotiate the cryptographic suite together with the flow information negotiation with the responder. It should be understood that the cryptographic suite and flow information negotiations may be performed independently. In that case, the two ends may record the agreement on the cryptographic suite already achieved in the first negotiation turn, and the resent CREATE_CHILD_SA request in the second negotiation turn may not renegotiate the cryptographic suite, regardless of whether it carries the SA payload. When the initiator changes the flow information and accordingly the CREATE_CHILD_SA request carries the TS payload with the changed flow information associated with the initiator, the responder may have the following three selections with respect to whether to carry the TS payload in the CREATE_CHILD_SA response.
The first selection is that, when there is no change in the flow information associated with the responder and the currently used flow information associated with the responder is present in the flow information associated with the initiator carried in the CREATE_CHILD_SA request, the CREATE_CHILD_SA response does not carry the TS payload. The second selection is that, when there is a change in the flow information associated with the responder and the responder selects, as the flow information associated with the responder, flow information from within the flow information associated with the initiator carried in the CREATE_CHILD_SA request, the CREATE_CHILD_SA response carries the TS payload. As discussed in operation212, the responder may choose a subset of the traffic proposed by the initiator, i.e., narrow the Traffic Selectors to some subset of the initiator's proposal. The responder may also send identical TS payloads to the initiator. The third selection is that, when there is no matching flow information between the flow information the initiator proposed in the CREATE_CHILD_SA request and the flow information supported by the responder, the CREATE_CHILD_SA response carries a TS unacceptable notification to indicate that there is no matching flow information. Then, after the initiator receives the notification, the initiator resends the CREATE_CHILD_SA request, which carries the TS payload carrying the updated flow information, to renegotiate the flow information with the responder until the initiator and the responder reach agreement on the flow information. The renegotiation of the flow information may refer to any of the cryptographic suite or flow information negotiation scenarios described in the disclosure, as needed. As discussed above, the renegotiation process may carry out the cryptographic suite and flow information negotiations together, or may perform only the negotiation that failed in the first negotiation turn. By introducing the NEW_SPI notification payload in IPSec rekey, or by not carrying the TS payload, a minimum of, for example, 76 bytes may be saved for each and every IPSec rekey, thus reducing the processing of complex validation and also the processing of the SA, TSi and TSr payloads. Referring toFIG.13, it illustrates a schematic diagram of a network device1300according to an embodiment of the disclosure. The network device is configured to rekey a security association (SA) in a network system comprising a first network device (e.g., the initiator as described in the above embodiments) and a second network device (e.g., the responder described in the above embodiments), and an IKE tunnel and an IPSec tunnel are established between the first network device and the second network device. In this embodiment, the network device acts as the first network device, and the network device comprises a determining module1302, a sending module1304, a receiving module1306, and a rekeying module1308. The determining module1302is configured to determine whether there is a change in a cryptographic suite associated with the first network device. The sending module1304is configured to send a first rekey request to the second network device for rekeying the SA when there is no change in the cryptographic suite associated with the first network device, wherein the first rekey request carries a first SPI and does not carry a cryptographic suite associated with the first network device.
The receiving module1306is configured to receive a first rekey response from the second network device, wherein the first rekey response carries a second SPI and does not carry a cryptographic suite associated with the second network device when there is no change in the cryptographic suite associated with the second network device. The rekeying module1308is configured to rekey the SA according to the first SPI and the second SPI when there is no change in the cryptographic suite associated with the first network device and in the cryptographic suite associated with the second network device. The detailed implementation of each module in the network device of this embodiment may refer to the implementations of the initiator in the embodiments ofFIGS.7and10. In another embodiment, the network device further comprises a renegotiating module1310. When there is a change in the cryptographic suite associated with the second network device, the first rekey response carries a no proposal chosen notification from the second network device. The renegotiating module1310is configured to renegotiate with the second network device until a negotiated cryptographic suite is obtained, and the rekeying module1308is configured to rekey the SA further according to the renegotiated cryptographic suite. It should be understood that the renegotiating module1310is configured to determine whether to renegotiate the cryptographic suite or the flow information in the case of an IPSec SA rekey, and the renegotiation process is performed through the sending module1304and the receiving module1306. In some embodiments, the renegotiating module1310may be incorporated into the determining module1302. An example of the no proposal chosen notification may be a NO_PROPOSAL_CHOSEN notification payload. The detailed renegotiation implementation of the network device of this embodiment may refer to the implementations of the initiator in the embodiments ofFIGS.8and11. It would be understood by those skilled in the art that the network device1300may implement the operations performed by the initiator in the above embodiments ofFIGS.1-12. The detailed implementation may refer to the embodiments as described above and need not be described one by one here. Referring toFIG.14, it illustrates a schematic diagram of another network device1400according to an embodiment of the disclosure. The network device1400is configured to rekey a security association (SA) in a network system comprising a first network device (e.g., the initiator as described in the above embodiments) and a second network device (e.g., the responder described in the above embodiments), where an IKE tunnel and an IPSec tunnel are established between the first network device and the second network device. In this embodiment, the network device is configured as the second network device, and the network device comprises a receiving module1402, a determining module1404, a sending module1406, and a rekeying module1408. The receiving module1402is configured to receive a first rekey request from the first network device for rekeying the SA, and the first rekey request carries a first SPI and does not carry a cryptographic suite associated with the first network device. The determining module1404is configured to determine whether there is a change in a cryptographic suite associated with the second network device.
The sending module1406is configured to send a first rekey response to the first network device, and the first rekey response carries a second SPI and does not carry a cryptographic suite associated with the second network device when there is no change in the cryptographic suite associated with the second network device. Accordingly, the rekeying module1408is configured to rekey the SA according to the first SPI and the second SPI. The detailed implementation of each module in the network device of this embodiment may refer to the descriptions ofFIGS.7and10. In another embodiment of the network device1400, the first rekey response carries a no proposal chosen notification when there is a change in the cryptographic suite associated with the second network device. As such, the receiving module1402is further configured to receive a second rekey request from the first network device for rekeying the SA, the second rekey request carrying the first SPI and the cryptographic suite associated with the first network device. The sending module1406is further configured to send a second rekey response which carries another no proposal chosen notification to indicate that there is no matching cryptographic suite among the cryptographic suite(s) associated with the first network device carried in the second rekey request. The network device further comprises a renegotiating module1410which is configured to renegotiate with the first network device until a negotiated cryptographic suite is obtained. Accordingly, the SA is rekeyed further according to the renegotiated cryptographic suite. It should be understood that the renegotiating module1410is configured to determine whether to renegotiate the cryptographic suite or the flow information in the case of an IPSec SA rekey, and the renegotiation process is performed through the sending module1406and the receiving module1402. In some embodiments, the renegotiating module1410may be incorporated into the determining module1404. The detailed renegotiation implementation of the network device of this embodiment may refer to the implementations of the initiator in the embodiments ofFIGS.8and11. It would be understood that the network device1400may implement the operations performed by the responder in the above embodiments ofFIGS.1-12. The detailed implementation may refer to the embodiments as described above and need not be described one by one here. Referring toFIG.15, it illustrates a schematic diagram of another network device1500according to an embodiment of the disclosure. The network device1500is configured to rekey a security association (SA) in a network system comprising a first network device (e.g., the initiator as described in the above embodiments) and a second network device (e.g., the responder described in the above embodiments), where an IKE tunnel and an IPSec tunnel are established between the first network device and the second network device. In this embodiment, the network device is configured as the second network device, and the network device comprises a receiving module1502, a determining module1504, a sending module1506, and a rekeying module1508. The receiving module1502is configured to receive a first rekey request from the first network device for rekeying the SA, wherein the first rekey request carries a first SPI and a cryptographic suite associated with the first network device. The determining module1504is configured to determine whether there is a change in a cryptographic suite associated with the second network device.
The sending module1506is configured to send a first rekey response to the first network device, wherein the first rekey response carries a second SPI and does not carry a cryptographic suite associated with the second network device when there is no change in the cryptographic suite associated with the second network device. The rekeying module1508is configured to rekey the SA according to the first SPI and the second SPI. The implementation of the rekey may refer to the above detailed description, for example, inFIGS.6-12. In another embodiment of the network device1500, the first rekey response carries a no proposal chosen notification to indicate that there is no matching cryptographic suite among the cryptographic suite(s) associated with the first network device carried in the first rekey request. In this case, the network device1500further comprises a renegotiating module1510which is configured to renegotiate with the first network device until a negotiated cryptographic suite is obtained. Accordingly, the SA is rekeyed further according to the negotiated cryptographic suite. A person of ordinary skill in the art would understand that the renegotiating module1510is configured to determine whether to renegotiate the cryptographic suite or the flow information in the case of an IPSec SA rekey, and the renegotiation process is performed through the sending module1506and the receiving module1502. In some embodiments, the renegotiating module1510may be incorporated into the determining module1504. The detailed renegotiation implementation in the network device1500of the above embodiment may refer to the implementations of the responder in the embodiments ofFIGS.9and12. It would be understood by those skilled in the art that the network device1500may implement the operations performed by the responder in the above embodiments ofFIGS.1-12. The detailed implementation may refer to the embodiments as described above and need not be described one by one here. Referring toFIG.16, it illustrates a schematic diagram of another network device1600according to an embodiment of the disclosure. The network device1600is configured to rekey a security association (SA) in a network system comprising a first network device (e.g., the initiator as described in the above embodiments) and a second network device (e.g., the responder described in the above embodiments), where an IKE tunnel and an IPSec tunnel are established between the first network device and the second network device. In some embodiments, the network device1600may act as the initiator as described in the embodiments ofFIGS.1-12and perform the operations of the initiator described in the embodiments ofFIGS.1-12. In other embodiments, the network device1600may act as the responder as described in the embodiments ofFIGS.1-12and perform the operations of the responder described in the embodiments ofFIGS.1-12. The network device comprises a processor1602, a memory1604coupled to the processor1602, a transceiver (Tx/Rx)1606, and ports1608coupled to the Tx/Rx1606. The processor1602may be implemented as a general-purpose processor or may be part of one or more application specific integrated circuits (ASICs) and/or digital signal processors (DSPs). The processor1602may refer to a single processor or a plurality of processors. The memory1604may include a cache for temporarily storing content, e.g., a random-access memory (RAM). Additionally, the memory1604may include long-term storage, e.g., a read-only memory (ROM).
When acting as the initiator, the processor1602is configured to perform the operations of the initiator described in the embodiments ofFIGS.1-12. When acting as the responder, the processor1602is configured to perform the operations of the responder described in the embodiments ofFIGS.1-12. Furthermore, in one embodiment, the memory1604may include multiple software modules, such as the modules described in the embodiments ofFIG.13. In another embodiment, the memory1604may include multiple software modules, such as the modules described in the embodiments ofFIG.14. In a further embodiment, the memory1604may include multiple software modules, such as the modules described in the embodiments ofFIG.15. By executing instructions of the software modules, the processor1602may perform a plurality of operations. In some embodiments, when a module is described as configured to perform an operation, it may actually mean that the processor1602is configured to execute instructions in the module to perform the operation. By executing the instructions in the memory1604, the processor1602may perform, completely or partially, all operations performed by the initiator or the responder as described inFIGS.1-12. Referring toFIG.17, it illustrates a schematic diagram of a network system1700. The network system1700may comprise at least a first network device1702(i.e., the initiator) and a second network device1704(i.e., the responder). The first network device1702may be the network device1300as described in the embodiments ofFIG.13. The second network device1704may be the network device1400or the network device1500as described in the embodiments ofFIGS.14and15. In another embodiment, the first network device may be the network device1600which acts as the initiator as described in the embodiments ofFIGS.1-12and performs the operations of the initiator described in the embodiments ofFIGS.1-12, and the second network device may be the network device1600which acts as the responder as described in the embodiments ofFIGS.1-12and performs the operations of the responder described in the embodiments ofFIGS.1-12. A person skilled in the art may understand after reading the disclosure that any known or new algorithm may be used for the implementation of the present disclosure. However, it is to be noted that the present disclosure provides a method to achieve the above-mentioned benefits and technical advancement irrespective of using any known or new algorithm. A person of ordinary skill in the art may be aware after reading the disclosure that, in combination with the examples described in the embodiments disclosed in this specification, units and algorithm steps may be implemented by electronic hardware, or by a combination of computer software and electronic hardware. Whether the functions are performed by hardware or software depends on the particular inventions and design constraint conditions of the technical solution. After reading the disclosure, a person skilled in the art may use different methods to implement the described functions for each particular invention, but it should not be considered that the implementation goes beyond the scope of the present disclosure. In the several embodiments provided in the present disclosure, it should be understood that the disclosed system and method may be implemented in other manners. For example, the described network device embodiment is merely exemplary. For example, the unit division is merely logical function division, and other divisions may be used in actual implementation.
For example, a plurality of units or modules may be combined or integrated into another system, or some features may be ignored or not performed. In addition, the displayed or discussed mutual couplings or direct couplings or communication connections may be implemented through some interfaces. The indirect couplings or communication connections between the apparatuses or units may be implemented in electronic, mechanical, or other forms. When the functions are implemented in a form of a software functional unit and sold or used as an independent product, the functions may be stored in a computer-readable storage medium. The functions may be expressed in computer code forming a computer program product, which may instruct suitable hardware to perform the functions. Based on such an understanding, the technical solutions of the present disclosure essentially, or the part contributing to the prior art, or a part of the technical solutions may be implemented in a form of a software product. The computer software product is stored in a storage medium, and includes several instructions for instructing a computer node (which may be a personal computer, a server, or a network node i.e. a processor) to perform all or a part of the steps of the methods described in the embodiment of the present disclosure. The foregoing storage medium includes: any medium that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (Read-Only Memory, ROM), a random access memory (Random Access Memory, RAM), a magnetic disk, or an optical disc. Devices that are in communication with each other need not be in continuous communication with each other, unless expressly specified otherwise. In addition, devices that are in communication with each other may communicate directly or indirectly through one or more intermediaries. When a single device or article is described herein, it will be readily apparent that more than one device/article (whether or not they cooperate) may be used in place of a single device/article. Similarly, where more than one device or article is described herein (whether or not they cooperate), it will be readily apparent that a single device/article may be used in place of the more than one device or article or a different number of devices/articles may be used instead of the shown number of devices or programs. The functionality and/or the features of a device may be alternatively embodied by one or more other devices which are not explicitly described as having such functionality/features. Thus, other embodiments of the invention need not include the device itself. | 96,475 |
11943210 | DETAILED DESCRIPTION OF EXAMPLE EMBODIMENTS Recently there have been attempts to adapt token-based authentication systems for use within transaction authorization systems. However, a vast majority of the systems rely on symmetric or asymmetric cryptography (also referred to herein as crypto). The disclosed framework improves upon these systems by retaining the benefits of token-based authentication while eliminating the security risks associated with cryptographic keys. This is because, among other reasons, crypto-keys in these systems become a primary source of risk. Fundamentally, cryptographic keys are reusable, and there is a risk of compromise every time a key is used. In symmetric crypto, keys must be shared among multiple entities, and therefore the trustworthiness of the system is only as good as the least trustworthy entity. Asymmetric cryptography requires complex calculations that are difficult to implement on low-cost hardware devices. Moreover, public-key infrastructures can be expensive and difficult to maintain, and they use algorithms that are susceptible to quantum attacks. The disclosed systems and methods address these shortcomings, among others, and provide a novel electronic transaction framework where hardware and/or software devices can be used by participants (or users, used interchangeably) to authenticate themselves and authorize transactions in which they are participating. The disclosed framework operates as a distributed system in that it can be built and implemented without an entity that all parties must trust, which minimizes the trust needed between entities. The framework relies entirely on “hard to invert” one-way functions, which are mathematical functions that take variable length input strings and convert them into a fixed-length binary sequence, and avoids using both symmetric and asymmetric cryptography; therefore, the framework eliminates the security risks associated with using and managing cryptographic keys (e.g., the framework does not have nor does it use secret keys that can be compromised). According to some embodiments, the disclosed framework leverages distributed ledger technology and utilizes hash-chain-based signatures. As discussed herein, the disclosed systems and methods provide novel protocols for hash-chain-based digital signatures that do not require a reference clock to which all system components are synchronized. This enables simplification of the devices participants use to authorize a transaction. Thus, according to some embodiments, the disclosed framework provides mathematical proof that an entity was authenticated and subsequently authorized a transaction. As discussed herein, according to some embodiments, the framework can be implemented in a manner that allows entities to authorize electronic transactions using a low-cost device, such as, for example, a credit card. With reference toFIG.1, the disclosed framework100is modular which provides a benefit that minimal requirements are placed on the different components so that “best-of-breed” implementations can be selected for a particular type of transaction. Framework100includes four types of entities: transaction log entity102, coordinator entity104, witness entity106and participants1-M (which are operating on signing devices1-M). As discussed below, framework100can include any number of entities operating as a coordinator entity104, witness entity106and/or participant1-M. 
In some embodiments, transaction log entity102(which can be a distributed ledger, database(s), a data structure hosted on a network, and the like) records evidence that a transaction took place. In some embodiments, prior recorded entries cannot be modified. Contrary to conventional systems that are built on specific databases (e.g., a blockchain), the disclosed transaction log entity102is applicable to any type or form of record log, data structure or storage medium, whether known or to be known, as discussed below. In some embodiments, a coordinator entity104orchestrates the activities required to complete a transaction. In some embodiments, a witness entity106ensures the transaction log102has written its next entry. The witness entity106can securely receive transmission broadcasts from the transaction log entity102, and, as discussed below, keeps its hash chain's future elements (or values) confidential. In some embodiments, one or more participants (participants1-M) use a signing device(s) to record their participation in or approval of a transaction. The signing devices keep each participant's hash chain's future elements confidential, as discussed below. Non-limiting exemplary embodiments of the configurations and operations of each entity (e.g., entities102,104,106and1-M) operating within or as part of framework100are discussed in more detail below in relation toFIGS.3-5. In some embodiments, each entity of framework100can be embodied as a device, engine, module, or some combination thereof. According to some embodiments, framework100operates by utilizing hash chains built using one-way functions.FIG.2provides an example embodiment200of improved transactional security using hash-chain-based signatures according to the disclosed systems and methods. Embodiment200illustrates hash chain y^m202and system time204. In some embodiments, hash chain202is defined as the sequence y_0^m, y_1^m, . . . , y_N^m, where y_i^m=F(y_{i+1}^m) for i ∈ {0, . . . , N−1}, F(·) is a one-way function, y_N^m is an initial seed, and m identifies the specific hash chain under consideration. According to some embodiments, as discussed below, the one-way function in hash chain202prevents a hash chain entry from being reverse engineered to its inverse image (or preimage). That is, the image of a function is the set of all output values it can produce from certain inputs; the preimage is the set of input values that produce a given output, and recovering it is rendered computationally infeasible because hash chain202is built from a one-way function. Thus, according to some embodiments, it may not be possible to calculate a future element of the hash chain202given a previous element, since that would require calculating the preimage of F, which violates the properties of a one-way function. That is, given y_i^m, it can be impossible to derive y_{i+j}^m for any positive integer j. However, the disclosed framework can validate that y_{i+j}^m is a future element of the hash chain for y_i^m by iteratively applying F(·) to y_{i+j}^m a total of j times and ensuring the resulting output equals y_i^m. Therefore, F^j(y_{i+j}^m)=y_i^m can be validated; however, F^{−j}(y_i^m) cannot be calculated. In some embodiments, system time204is assigned a notion of time that steps through timestamps, t_i=i where i ∈ {0, 1, 2, . . . }. Each element of the hash chain202is associated with a timestamp. For example, a hash chain element y_i^m corresponds to timestamp t_i, as illustrated inFIG.2. According to some embodiments, a unique hash chain is assigned to certain entities.
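The chain construction and the F^j(y_{i+j}^m)=y_i^m validation described above can be sketched in a few lines. The following is a minimal illustration, not the specification's implementation: SHA-256 stands in for the one-way function F(·), and the function and variable names are invented for the example.

```python
# Illustrative sketch (not from the specification): build a hash chain
# y_0^m, ..., y_N^m with y_i^m = F(y_{i+1}^m), then validate that a claimed
# element j steps in the future maps back to a known element. SHA-256 is a
# stand-in for the one-way function F.
import hashlib
import os

def F(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def build_chain(seed: bytes, N: int) -> list[bytes]:
    """Return [y_0^m, ..., y_N^m], derived backwards from the secret seed y_N^m."""
    chain = [seed]
    for _ in range(N):
        chain.append(F(chain[-1]))
    return list(reversed(chain))

def is_future_element(y_i: bytes, y_i_plus_j: bytes, j: int) -> bool:
    """Check F^j(y_{i+j}^m) == y_i^m; the reverse direction cannot be computed."""
    value = y_i_plus_j
    for _ in range(j):
        value = F(value)
    return value == y_i

chain = build_chain(os.urandom(32), N=10)          # y_N^m is a random secret seed
print(is_future_element(chain[3], chain[7], j=4))  # True
```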
For example, entity m is associated with the hash chain that has entries y_i^m. According to some embodiments, each entity builds its hash chain internally using a random number generator (e.g., to create the hash chain's secret initial seed y_N^m) and a component for calculating F(·) (to derive the remaining y_i^m). According to some embodiments, entities are required to keep the elements of their hash chain202confidential until system time204has stepped past the elements' corresponding timestamps. After the system has stepped past a timestamp, the corresponding element in a hash chain202is no longer secret and can be made public. In some embodiments, element y_i^m in a hash chain202must be kept secret before the system reaches timestamp t_i, and can be made public thereafter. According to embodiments of the disclosure, the security constraint of keeping each element private until its corresponding timestamp has been observed (e.g., has passed) ensures entity m is the only entity in the system (or on the network) that knows the value of y_i^m before that hash chain element is made public. By way of a non-limiting example, a transaction event occurs at timestamp t_i. An entity authorizes that event by producing, at the time the event occurs, evidence that it possesses an element from its hash chain corresponding to a future timestamp. In this example, the entity produces evidence it possesses y_j^m where t_j>t_i. Because the entity is the only one in possession of that hash chain, because it keeps future elements of the hash chain secret until the proper time (e.g., until t_j), and because future elements of the hash chain cannot be calculated at the current time by an outside entity, only that particular entity could have produced that specific piece of evidence at time t_i. In some embodiments, the evidence produced can be in the form of the hash chain element itself or some derived value that cannot be forged (e.g., F(y_j^m)). As opposed to conventional transaction systems, in which security is contingent on multiple parties managing shared secrets (e.g., credit card numbers, symmetric encryption keys) or on a public-key infrastructure (PKI) to support asymmetric cryptography, the disclosed type of hash-chain-based system is secure as long as participants in a transaction keep their hardware or software token(s) secure. Furthermore, because the system is timestamp-synchronized to a transaction log, the need for or reliance on a trusted server to timestamp signatures, as in a PKI, is eliminated. Turning toFIG.3, framework300is depicted, which presents a non-limiting example of a system architecture according to some embodiments of the present disclosure. Framework300, which is an expanded view of framework100ofFIG.1, includes signing device302(which is associated with a participant1-M, as discussed above in relation toFIG.1), coordinator entity304, transaction log entity306and witness entity308. It should be understood that framework300is an example of the disclosed systems and methods, as each entity can represent more than one entity; for example, coordinator entity304can represent a plurality of coordinators. However, for purposes of explanation, a single coordinator entity304is depicted. Transaction log entity306serves as the source of truth about which transactions have occurred as well as the reference to which all other entities are timestamp-synchronized.
As shown inFIG.3, the transaction log306contains a globally-readable, append-only log file with entries L_i and has three functions, as follows: A first function is "Accumulate-Append" (also referred to as "Accumulate"), which includes instructions for receiving transaction submissions from coordinator entity304for recording in the log file, and replying with the timestamp, t_i, that corresponds to the log entry where they were stored, along with evidence, e_i, that can be used to prove they were recorded in log entry L_i. A second function is "Publish", which includes instructions to broadly disseminate the latest log entry, L_i, and timestamp, t_i. A third function is "Read", which includes instructions that allow any entity to either request the current log entry and current timestamp, or a specific past entry by providing a historical timestamp. According to some embodiments, transaction log entity306operates by having its Accumulate-Append function collect transactions submitted by coordinator entity304and append them to the log file such that log entry L_i includes all transactions the Accumulate-Append function received since it wrote log entry L_{i−1}. According to some embodiments, the transaction log entity306can be synchronized as follows: when the transaction log entity306writes log entry L_i, the current system time advances to timestamp t_i and all entities advance their understanding of the current time to t_i. According to some embodiments, the type of evidence the transaction log entity306provides can depend on its implementation. The type of evidence can be in reference to whether evidence is provided and/or in what form (e.g., a timestamp related to a log entry or a summary of all transactions). An implementation can relate to a type of transaction, the type of users involved, the type of data being exchanged, and the like, or some combination thereof. For example, if L_i contains a list of the transactions submitted, the evidence e_i can be omitted, since only the index where the transactions are stored is required to prove a transaction was recorded. In some embodiments, a summary of all transactions can be stored, and this can be based on the type of implementation. In such embodiments, the summary can be stored in L_i, and the evidence e_i can include additional details to prove the transaction was included in the summary stored in L_i. According to some embodiments, transaction log entity306comprises an append-only log file that can be accessed by coordinator entity304and witness entity308. According to some embodiments, the transaction log entity306can be configured in a distributed manner with multiple nodes such that no one node needs to be trusted by any entity. According to some embodiments, witness entity308acts as the trusted entity that assures a signing device(s)302that it is safe to reveal their hash chain elements. As shown inFIG.3, the witness entity308contains a hash chain with elements y_i^w and has two functions, as follows: A first function is "Observe", which includes instructions to view the entries of the transaction log entity306as they are published and record the current system time. A second function is "Attest", which includes instructions to confirm its understanding of the current system time by responding to a request with y_k^w. According to some embodiments, witness entity308can have a secure communications channel to the transaction log entity306, which enables witness entity308to view legitimate log entries.
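As a rough illustration of the Accumulate-Append and Read behaviors described above (the witness discussion continues in the next paragraph), a toy append-only log might look like the following sketch. It is not from the specification; it assumes the simple variant in which L_i stores the submitted transactions directly, so no separate evidence e_i is required, and all names are invented for the example.

```python
# Illustrative sketch (not from the specification): a toy append-only transaction
# log with the Accumulate-Append and Read behaviours described above. The whole
# submission list is stored in L_i, so no separate evidence e_i is needed here.

class ToyTransactionLog:
    def __init__(self):
        self.entries = []      # entries[i] corresponds to timestamp t_i
        self.pending = []      # submissions accumulated since the last entry

    def accumulate(self, authorization_vector):
        """Coordinator submits an authorization vector A for recording."""
        self.pending.append(authorization_vector)
        return len(self.entries)          # the timestamp t_i it will be written under

    def append_entry(self):
        """Write L_i from everything accumulated since L_{i-1}; time advances to t_i."""
        self.entries.append(list(self.pending))
        self.pending.clear()
        return len(self.entries) - 1      # current timestamp t_i

    def read(self, timestamp=None):
        """Return the current entry, or a past entry for a historical timestamp."""
        if timestamp is None:
            timestamp = len(self.entries) - 1
        return timestamp, self.entries[timestamp]
```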
According to some embodiments, the channel between the witness entity308and the transaction log entity306is the only secure channel the framework300requires. According to some embodiments, witness entity308allows for the elimination of the need for a signing device302to have a secure communications channel to the transaction log entity306. Eliminating this requirement means the signing device(s)302can be implemented with lower-cost hardware form factors. According to some embodiments, the hash chain elements of the witness entity308remain confidential before system time reaches their corresponding timestamps, as discussed above in relation toFIG.2. The minimal requirements associated with witness entity308enable it to embody a hardened device (e.g., a device with applied security features, such as, for example, binary hardening techniques, kernel patches, firewalls, and the like) with a minimized attack surface (e.g., access points or vectors where unauthorized access to a device is obtainable via compromised credentials, poor encryption, ransomware, phishing, brute force attacks, malware, and the like), therefore ensuring its hash chain data is securely maintained and hidden until the appropriate time. According to some embodiments, each participant (e.g., participants1-M fromFIG.1) is associated with a signing device302that contains a hash chain with elements y_i^m. In some embodiments, since y_0^m is a public value, it can be used as a unique identifier for the hash chain. According to some embodiments, signing device302stores the initial public value y_0^w for each trusted witness entity308. As shown inFIG.3, signing device302uses two functions during transactions, as follows: A first function is "Sign", which includes instructions to label (or authorize) a received {F(T), t_j} pair (where F(T) is the one-way function F applied to a particular transaction T, and t_j represents the evidentiary timestamp) for a transaction a participant intends to approve, by responding with F(T), y_0^m and a^m, where a^m=F(F(T)∥y_j^m) is an authorization code. A second function is "Confirm", which includes instructions to verify a transaction with evidentiary timestamp t_j by confirming that y_k^w is from a hash chain of a trusted witness entity308and that t_k>t_j, and then returning F(T), y_0^m and y_j^m if both conditions hold. According to some embodiments, determining how a participant will review T to ensure it is a transaction it wishes to authorize is a critical implementation decision. To maintain the framework300's security properties, a participant must not use their signing device302to approve transactions they do not want to authorize because, once they do, they can only de-authorize the transaction via some implementation-specific, out-of-band mechanism. According to some embodiments, one approach to addressing this is adding display functionality to a signing device302(e.g., implementing it as a smartphone app) and having a coordinator entity304send T to the signing device302instead of F(T). In some embodiments, another approach is for coordinator entity304to display or send T directly to a participant via an out-of-band mechanism. For example, in a retail transaction, the point-of-sale (POS) terminal can visually display the purchase information to a participant making a purchase, or a merchant could text the purchase information to the participant. The primary consideration with this second approach is ensuring the T viewed by the participant corresponds to the F(T) used by the Sign function of the signing device302.
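Before continuing with the second-device workflow, the Sign and Confirm behaviors just described can be sketched as follows. This is an illustrative toy, not the specification's implementation: SHA-256 stands in for F(·), the class and method names are invented, and a real signing device would run on far more constrained hardware.

```python
# Illustrative sketch (not from the specification): a toy signing device holding a
# hash chain y_0^m..y_N^m and exposing the Sign and Confirm behaviours described
# above, with SHA-256 standing in for the one-way function F.
import hashlib

def F(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

class ToySigningDevice:
    def __init__(self, seed: bytes, chain_length: int, trusted_witness_y0: bytes):
        chain = [seed]                      # built backwards from y_N^m
        for _ in range(chain_length):
            chain.append(F(chain[-1]))
        self.chain = list(reversed(chain))  # self.chain[i] == y_i^m
        self.y0 = self.chain[0]             # public identifier y_0^m
        self.trusted_witness_y0 = trusted_witness_y0

    def sign(self, f_of_t: bytes, t_j: int):
        """Return (F(T), y_0^m, a^m) with a^m = F(F(T) || y_j^m)."""
        a_m = F(f_of_t + self.chain[t_j])
        return f_of_t, self.y0, a_m

    def confirm(self, f_of_t: bytes, t_j: int, y_k_w: bytes, t_k: int):
        """Release y_j^m only if y_k^w chains back to a trusted witness and t_k > t_j."""
        candidate = y_k_w
        for _ in range(t_k):                # F applied t_k times should yield y_0^w
            candidate = F(candidate)
        if candidate == self.trusted_witness_y0 and t_k > t_j:
            return f_of_t, self.y0, self.chain[t_j]
        return None
```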
The second approach involves the usage by the participant of a second, more capable device (e.g., a smartphone) that can receive and display T, calculate and display F(T), and establish a trusted communications channel to the participant's signing device302. After the signing device302receives F(T) from the coordinator entity304, it can send that value to the second device, where the participant can ensure it corresponds to the T it received. Because the channel between the signing device302and the second device is trusted, it is known that the value signed by the signing device302is legitimate; and, because the participant controls both devices, the secure channel can be established in a low-cost manner. The benefit of this approach over the first is that the signing device302can be implemented in a lower-capability form factor. In contrast to the Sign function, the participant's transaction approval cannot be contingent on the Confirm function. In some embodiments, this may be because all y_i^m eventually become public information, and therefore y_i^m cannot be secured (e.g., kept confidential) indefinitely. In some embodiments, with regard to the Confirm function, a participant may only need to ensure the transaction log entity306has written an entry, L_k, where t_k>t_j. Therefore, if the participant does not want to trust a third-party witness entity (e.g., a bank, the government, and the like), the participant can establish a secure channel directly to the Read function of the transaction log entity306. This option comes at the expense of a potentially more complex implementation of the signing device302or of viewing L_k via an out-of-band mechanism. In some embodiments, coordinator entity304can adjust the timing of its communications or create decoy communications to prevent attackers from interfering with other transactions occurring within a similar time period. According to some embodiments, the hash chain elements of the signing device302remain confidential before system time reaches the corresponding timestamps. Since the hardware requirements on a signing device302are minimal, signing devices can be implemented using a variety of form factors that are optimized for a particular type of transaction by trading off between size, functionality and physical hardening. For example, smart cards, Universal Serial Bus (USB) sticks and Near-Field Communication (NFC) fobs are all capable of generating a random y_N^m, calculating one-way functions, validating that two hash chain values are equal, and storing hash chains. According to some embodiments, coordinator entity304acts as the orchestrator of a transaction. In some embodiments, a coordinator entity304is configured to shut down (e.g., stop operating or halt a transaction entirely) should it detect that it, or other entities within framework300, are compromised. This prevents compromised coordinators from executing fraudulent transactions. In some embodiments, coordinator entity304can have temporary storage for the different pieces of transaction information that it collects as a transaction executes. In some embodiments, however, it may not have permanent storage for the collected information, as when a transaction is complete and the transaction log entity306is updated, the information can be purged or deleted from coordinator entity304.
As illustrated inFIG.3, coordinator entity304uses the following four functions to orchestrate a transaction: A first function is "Initiate", which includes instructions to commence a transaction by compiling the transaction record and evidentiary timestamp (T and t_j, respectively), storing the values, identifying participants that should authorize the transaction and sending F(T) and t_j to the identified participants' signing devices302. The instructions further include composing the responses from the signing devices302into an authorization vector A={a^0, . . . , a^M} and an authorizers vector Y={y_0^0, . . . , y_0^M}, storing Y and A, and then calling the Submit function after all required signing devices302have responded. A second function is "Submit", which includes instructions for sending A to the transaction log entity306for recording, and storing the reply t_i and e_i. A third function is "Complete", which includes instructions for finalizing a transaction by requesting that the transaction's participants have their signing devices302send a confirmation, compiling the responses into a confirmation vector C={y_j^0, . . . , y_j^M}, and then storing C after all required signing devices302have responded. A fourth function is "Distribute", which includes instructions for sending a transaction receipt message R={t_i, t_j, e_i, T, Y, A, C} to required entities (or entities that need/request it). In some embodiments, coordinator entity304must be confident that t_j is sufficiently far in the future so that transaction log entity306records A at a timestamp t_i<t_j. This requirement ensures a signing device(s)302is providing future information, which, as discussed above, cannot be compromised. Therefore, if an a^m of a signing device302corresponds to an evidentiary time t_j, the transaction log entity306must record it in an entry prior to entry L_j to prove a^m was from the future. In some embodiments, coordinator entity304can use a variety of mechanisms to select a t_j that is in the future (e.g., get the current timestamp from the transaction log entity306). According to some embodiments, how far into the future to select t_j can either be a system parameter or a decision by coordinator entity304(which involves tradeoffs between robustness to denial-of-service (DoS) attacks and user-perceived responsiveness). As mentioned above, while coordinator entity304is depicted as a single entity, this should not be construed as limiting, as any number of coordinators can be used without departing from the scope of the instant disclosure. Moreover, the functions and storage capabilities of coordinator entity304can also be split up and distributed across any number of entities on the network. Turning toFIG.4, Process400provides a non-limiting example of transaction operations performed by framework300. The illustrated embodiment of Process400displays only one signing device; however, this should not be construed as limiting, as any number of signing devices can be implemented by participating entities without departing from the scope of the instant disclosure, as discussed above in relation toFIGS.1and3. Process400begins with Step402where the coordinator entity304calls its Initiate function to create T and t_j, temporarily store the two values, and send {F(T), t_j} to the signing device(s)302of the participant(s) for their authorization.
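A rough sketch of the Initiate and Submit behavior described above is given below, reusing the toy signing device and toy log from the earlier sketches. The lookahead policy, the dictionary layout, and all names are illustrative assumptions, not the specification's implementation.

```python
# Illustrative sketch (not from the specification): a coordinator running Initiate
# and Submit for a single transaction, using the ToySigningDevice and
# ToyTransactionLog sketched earlier. SHA-256 stands in for F.
import hashlib

def F(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def initiate_and_submit(transaction: bytes, signing_devices, log, lookahead: int = 3):
    f_of_t = F(transaction)
    t_j = len(log.entries) + lookahead      # pick an evidentiary timestamp in the future
    A, Y = [], []
    for device in signing_devices:          # each participant signs the {F(T), t_j} pair
        _, y0, a_m = device.sign(f_of_t, t_j)
        A.append(a_m)
        Y.append(y0)
    t_i = log.accumulate(A)                 # Submit: record A; must satisfy t_i < t_j
    return {"T": transaction, "t_j": t_j, "t_i": t_i, "Y": Y, "A": A}
```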
In Step404, upon the signing device302receiving a transaction initiation request, a decision is made regarding whether to authorize the transaction (e.g., a participant(s) decides whether or not to authorize the transaction). If approved, the participant's signing device302calls its Sign function to generate and respond with F(T), y_0^m and a^m. If the transaction is not approved, the request can be ignored, and will time out after a threshold amount of time (e.g., a time period expires). In some embodiments, the transaction is then nullified. In some embodiments, y_0^m can be included in the response in Step404because some instances exist where the coordinator entity304may know how to communicate with a signing device302but not know an identifier of the signing device302. For example, in a POS transaction, the POS terminal identifies the smart card presented as the signing device302, but it does not know the smart card's ID until after the transaction is initiated. F(T) is included in the response so the coordinator entity304can associate it with the correct transaction without having to maintain separate state for each transmitted message. In Step406, when coordinator entity304receives authorization codes from all required signing devices302, it forms and temporarily stores Y and A. Then, in Step408, coordinator entity304calls its Submit function to send A to the transaction log entity306for recording. In Step410, the Accumulate function of the transaction log entity306receives A and appends it to the log file entry L_i (corresponding to timestamp t_i). In Step412, t_i and e_i are sent back to the coordinator entity304for temporary storage. After system time has progressed to a time t_k that is beyond t_j, in Step414, coordinator entity304calls its Complete function to send y_k^w and t_j to the signing devices302in Y for transaction confirmation. In Step416, upon receiving the completion request, signing device302calls its Confirm function to verify that y_k^w is legitimate and that t_k>t_j, and replies with F(T), y_0^m, and y_j^m if both conditions are true. In some embodiments, F(T) is included in the response so the coordinator entity304can associate it with the correct transaction without having to maintain separate state for each message. In Step418, after receiving all required confirmations, coordinator entity304calls its Complete function by compiling the confirmations into a confirmation vector C={y_j^0, . . . , y_j^M}, and C is stored. In some embodiments, the coordinator entity304can then call its Distribute function to generate R={t_i, t_j, e_i, T, Y, A, C}, which, as discussed above, serves as portable evidence that can be used to definitively prove the participants authorized the transaction. Turning toFIG.5, Process500provides a non-limiting exemplary flow diagram of the transaction operations performed by framework300as discussed above in relation to Process400ofFIG.4. In some embodiments, any modification of the data and/or values exchanged between the different entities in framework300, temporarily stored by the coordinator entity304, and/or contained in the transaction receipt (e.g., the elements of T, A, C) can cause the validation protocol to fail. The values from a transaction cannot be used to spoof a different transaction, since A is tied to a particular T via a one-way function, and C is only released once its values are public.
Therefore, the only way an attacker can craft an illegitimate transaction that passes the disclosed validation protocol is by tricking or forcing a participant into using their signing device302to sign a transaction they did not wish to authorize. According to some embodiments, as mentioned above, DoS attacks can and may happen; however, framework300is configured to reduce the risk of them being successful through a minimized number of intermediaries. In some embodiments, the coordinator entity304and/or witness entity308can be incorporated into other system entities, thereby providing an option to eliminate separate entities (e.g., eliminating a separate witness when the coordinator and witness are combined under the operations of a single entity). In some embodiments, as mentioned above, a value of t_j can be selected to trade off between robustness to DoS attacks and a longer time needed to complete a transaction. To make the framework300more robust, the coordinator entity304can select a t_j further into the future to give the system ample time to respond to an attack and yet still record the transaction's A in the transaction log entity306before t_j. Thus, framework300's robustness to DoS attacks can be measured by the number of timestamps it has to recover from an attack, Δ=t_j−t_i, where t_i is the current time. Therefore, in some embodiments, Δ is also a measure of the framework's responsiveness and can be tuned for particular types of transactions. Process500begins with Step502where the framework300is initialized. In some embodiments, this can involve identifying the participants, the signing devices, how many coordinators there are and where they are located, the witness(es), the transaction log, a transaction type, a transaction amount, and the like, or some combination thereof. In some embodiments, initialization involves identification of which signing devices are involved, and how to assign signing devices to participants in the transaction given each device's secret y_N^m and trusted y_0^w. In some embodiments, this can involve either pre-populating the values and using a trusted shipping method, or using a signature device with a random number generator (to generate y_N^m) and read-only memory (for pre-populating y_0^w). The former approach requires participants to trust more entities. The latter approach requires participants to have a more complex device. In some embodiments, initialization involves identifying a database that maps the signature devices' IDs, y_0^m, to the real-world entity to which each device corresponds. For example, in a system for property transfers, the government must associate a y_0^m with a person or other legal entity. Depending on the type of transaction, the database could be public or private. How and whether it is populated for either device initialization or revocation also depends on the transaction type and is tied to the distribution of signature devices discussed above. In some embodiments, Step502involves receiving a request from a participant's signing device to execute a transaction of type n, where the request identifies another signing device or entity to which data must be securely transferred. In some embodiments, the coordinator entity304can receive this request and orchestrate the transaction accordingly. The following steps of Process500provide non-limiting example embodiments of Process400ofFIG.4, and proceed according to the steps of Process400discussed above. For example, Steps504-520correspond to Steps402-418ofFIG.4, respectively.
In Step504, the coordinator entity304calls its Initiate function to create T and t_j, temporarily store the two values and send {F(T), t_j} to the identified signing devices302of the participants of the transaction. Step504is performed in a similar manner as Step402ofFIG.4, as discussed above. In Step506, upon the signing devices302of the participants receiving a transaction initiation request, a decision is made regarding whether to authorize the transaction (e.g., a participant(s) decides whether or not to authorize the transaction). Step506is performed in a similar manner as Step404ofFIG.4, as discussed above. In Step508, when coordinator entity304receives authorization codes from the signing devices302of the participants, it forms and temporarily stores Y and A. Then, in Step510, coordinator entity304calls its Submit function to send A to the transaction log entity306for recording. Steps508-510are performed in a similar manner as Steps406-408ofFIG.4, respectively. In Step512, transaction log entity306performs a record operation through the Accumulate function of the transaction log entity306receiving A and appending it to the log file entry L_i (corresponding to timestamp t_i). In Step514, transaction log entity306sends t_i and e_i back to the coordinator entity304for temporary storage. Steps512-514are performed in a similar manner as Steps410-412ofFIG.4, respectively. Process500proceeds to Step516where time is monitored until it has reached a timestamp, t_k, beyond t_j, upon which Step516enables the coordinator entity304to call its Complete function to send y_k^w and t_j to the signing device(s)302in Y for transaction confirmation. In Step518, upon receiving the completion request, signing device302calls its Confirm function to verify that y_k^w is legitimate and that t_k>t_j, and replies with F(T), y_0^m and y_j^m if both conditions are true. Steps516-518are performed in a similar manner as Steps414-416ofFIG.4, respectively. In Step520, after receiving all required confirmations, coordinator entity304forms C by calling the Complete function, and temporarily stores C. In some embodiments, Step520can involve the coordinator entity304calling the Distribute function to generate transaction receipt R={t_i, t_j, e_i, T, Y, A, C}. In Step522, transaction receipt R is validated. According to some embodiments, such validation involves determining (or validating): i) that t_j>t_i (e.g., validates that all evidence came from a future timestamp); ii) that there is at least one y_j^m in C, for each y_0^m in Y, such that y_0^m=F^j(y_j^m) (e.g., validates that C came from the signing devices in Y); iii) that there is at least one a^m in A, for each y_0^m in Y, such that a^m=F(F(T)∥y_j^m) (e.g., validates that A came from the signing devices in Y and that the signing devices authorized T); and iv) that A, using e_i, is recorded in the transaction log at L_i (e.g., validates that A was created at timestamp t_i<t_j). If any of the determinations/validations of Step522fails, then the transaction is not legitimate. In some embodiments, a coordinator entity304may contact a signature device302at the conclusion of Step522to confirm that the signing device302used for the transaction was not lost or stolen before timestamp t_i. According to some embodiments, detailed below is an example use case embodiment that provides a non-limiting example implementation of the disclosed systems and methods. For example, in a retail transaction, a consumer and a merchant are executing a transaction where the consumer is allowing the merchant to draw funds from the consumer's bank account in exchange for goods.
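Before the retail example continues, the four Step522checks can be sketched as follows. This is an illustrative toy, not the specification's implementation: it assumes a receipt held as a Python dict with the fields of R, that Y, A and C are aligned by participant, that the log entry L_i stores the submitted authorization vectors directly (so e_i is not needed), and that SHA-256 stands in for F.

```python
# Illustrative sketch (not from the specification): the four receipt checks of
# Step 522, under the simplifying assumptions stated above. SHA-256 stands in for F.
import hashlib

def F(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def validate_receipt(R: dict, log) -> bool:
    t_i, t_j = R["t_i"], R["t_j"]
    f_of_t = F(R["T"])
    if not t_j > t_i:                         # (i) evidence must come from a future timestamp
        return False
    for y0, y_j, a_m in zip(R["Y"], R["C"], R["A"]):
        candidate = y_j                       # (ii) F applied t_j times must give y_0^m
        for _ in range(t_j):
            candidate = F(candidate)
        if candidate != y0:
            return False
        if a_m != F(f_of_t + y_j):            # (iii) a^m must equal F(F(T) || y_j^m)
            return False
    _, entry = log.read(t_i)                  # (iv) A must actually be recorded in L_i
    return R["A"] in entry
```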
The consumer possesses a signing device issued by their bank. In some embodiments, the bank can bootstrap the system by deciding how the signing device will be distributed to the consumer, deciding how y_0^m and y_0^w get placed on the signing device, and maintaining the mapping of y_0^m to the consumer's bank account. In some embodiments, the merchant can act as the coordinator. In some embodiments, for a transaction where it will take time for the merchant to deliver the goods, it may also be required (e.g., the consumer may decide) that the merchant use a signing device to authorize the transaction. This prevents the merchant from later denying authorizing the transaction when the merchant is acting as the coordinator entity. In some embodiments, the bank can act as the witness, since funds will be taken from the consumer's bank account and their bank has liability for properly maintaining the bank account. In some embodiments, the transaction log can be implemented using any ledger technology the bank feels is robust enough to lower the risk of fraud to a level with which they are comfortable. In terms of transaction execution, the merchant and consumer have multiple options and are only limited by the communications channels the signing device(s) supports. For example, if the signing device of the consumer is a secure smartphone, the transaction can be completed via text messages. In another non-limiting example, if the signing device is a smartcard, the transaction can be completed with a traditional POS system. From a business perspective, the primary benefits are that the bank reduces the number of resources it spends on antifraud efforts, and the consumer and merchant can execute the transaction with less friction since there are no intermediaries needed between the merchant and the bank once the merchant possesses receipt R. FIG.6is a block diagram of an example network architecture according to some embodiments of the present disclosure. In the illustrated embodiment, user equipment (UE)602accesses a data network608via an access network604and a core network606. In the illustrated embodiment, UE602comprises any computing device capable of communicating with the access network604. As examples, UE602may include mobile phones, tablets, laptops, sensors, Internet of Things (IoT) devices, autonomous machines, and any other devices equipped with a cellular or wireless or wired transceiver. One example of a UE is provided inFIG.7. In the illustrated embodiment, the access network604comprises a network allowing over-the-air network communication with UE602. In general, the access network604includes at least one base station that is communicatively coupled to the core network606and wirelessly coupled to zero or more UE602. In some embodiments, the access network604comprises a cellular access network, for example, a fifth-generation (5G) network or a fourth-generation (4G) network. In one embodiment, the access network604and UE602comprise a NextGen Radio Access Network (NG-RAN). In an embodiment, the access network604includes a plurality of next Generation Node B (gNodeB) base stations connected to UE602via an air interface. In one embodiment, the air interface comprises a New Radio (NR) air interface. For example, in a 5G network, individual user devices can be communicatively coupled via an X2 interface. In the illustrated embodiment, the access network604provides access to a core network606to the UE602.
In the illustrated embodiment, the core network may be owned and/or operated by a mobile network operator (MNO) and provides wireless connectivity to UE602. In the illustrated embodiment, this connectivity may comprise voice and data services. At a high-level, the core network606may include a user plane and a control plane. In one embodiment, the control plane comprises network elements and communications interfaces to allow for the management of user connections and sessions. By contrast, the user plane may comprise network elements and communications interfaces to transmit user data from UE602to elements of the core network606and to external network-attached elements in a data network608such as the Internet. In the illustrated embodiment, the access network604and the core network606are operated by an MNO. However, in some embodiments, the networks (604,606) may be operated by a private entity and may be closed to public traffic. For example, the components of the network606may be provided as a single device, and the access network604may comprise a small form-factor base station. In these embodiments, the operator of the device can simulate a cellular network, and UE602can connect to this network similar to connecting to a national or regional network. In some embodiments, the access network604, core network606and data network608can be configured as a multi-access edge computing (MEC) network, where MEC or edge nodes are embodied as each UE602, and are situated at the edge of a cellular network, for example, in a cellular base station or equivalent location. In general, the MEC or edge nodes may comprise UEs that comprise any computing device capable of responding to network requests from another UE602(referred to generally as a client) and is not intended to be limited to a specific hardware or software configuration a device. FIG.7is a block diagram illustrating a computing device showing an example of a client or server device used in the various embodiments of the disclosure. The computing device700may include more or fewer components than those shown inFIG.7, depending on the deployment or usage of the device700. For example, a server computing device, such as a rack-mounted server, may not include audio interfaces752, displays754, keypads756, illuminators758, haptic interfaces762, GPS receivers764, or cameras/sensors766. Some devices may include additional components not shown, such as graphics processing unit (GPU) devices, cryptographic co-processors, artificial intelligence (AI) accelerators, or other peripheral devices. As shown inFIG.7, the device700includes a central processing unit (CPU)722in communication with a mass memory730via a bus724. The computing device700also includes one or more network interfaces750, an audio interface752, a display754, a keypad756, an illuminator758, an input/output interface760, a haptic interface762, an optional global positioning systems (GPS) receiver764and a camera(s) or other optical, thermal, or electromagnetic sensors766. Device700can include one camera/sensor766or a plurality of cameras/sensors766. The positioning of the camera(s)/sensor(s)766on the device700can change per device700model, per device700capabilities, and the like, or some combination thereof. In some embodiments, the CPU722may comprise a general-purpose CPU. The CPU722may comprise a single-core or multiple-core CPU. The CPU722may comprise a system-on-a-chip (SoC) or a similar embedded system. 
In some embodiments, a GPU may be used in place of, or in combination with, a CPU722. Mass memory730may comprise a dynamic random-access memory (DRAM) device, a static random-access memory device (SRAM), or a Flash (e.g., NAND Flash) memory device. In some embodiments, mass memory730may comprise a combination of such memory types. In one embodiment, the bus724may comprise a Peripheral Component Interconnect Express (PCIe) bus. In some embodiments, the bus724may comprise multiple busses instead of a single bus. Mass memory730illustrates another example of computer storage media for the storage of information such as computer-readable instructions, data structures, program modules, or other data. Mass memory730stores a basic input/output system (“BIOS”)740for controlling the low-level operation of the computing device700. The mass memory also stores an operating system741for controlling the operation of the computing device700. Applications742may include computer-executable instructions which, when executed by the computing device700, perform any of the methods (or portions of the methods) described previously in the description of the preceding Figures. In some embodiments, the software or programs implementing the method embodiments can be read from a hard disk drive (not illustrated) and temporarily stored in RAM732by CPU722. CPU722may then read the software or data from RAM732, process them, and store them to RAM732again. The computing device700may optionally communicate with a base station (not shown) or directly with another computing device. Network interface750is sometimes known as a transceiver, transceiving device, or network interface card (NIC). The audio interface752produces and receives audio signals such as the sound of a human voice. For example, the audio interface752may be coupled to a speaker and microphone (not shown) to enable telecommunication with others or generate an audio acknowledgment for some action. Display754may be a liquid crystal display (LCD), gas plasma, light-emitting diode (LED), or any other type of display used with a computing device. Display754may also include a touch-sensitive screen arranged to receive input from an object such as a stylus or a digit from a human hand. Keypad756may comprise any input device arranged to receive input from a user. Illuminator758may provide a status indication or provide light. The computing device700also comprises an input/output interface760for communicating with external devices, using communication technologies, such as USB, infrared, Bluetooth™, or the like. The haptic interface762provides tactile feedback to a user of the client device. The optional GPS transceiver764can determine the physical coordinates of the computing device700on the surface of the Earth, which typically outputs a location as latitude and longitude values. GPS transceiver764can also employ other geo-positioning mechanisms, including, but not limited to, triangulation, assisted GPS (AGPS), E-OTD, CI, SAI, ETA, BSS, or the like, to further determine the physical location of the computing device700on the surface of the Earth. In one embodiment, however, the computing device700may communicate through other components, provide other information that may be employed to determine a physical location of the device, including, for example, a MAC address, IP address, or the like. 
The present disclosure has been described with reference to the accompanying drawings, which form a part hereof, and which show, by way of non-limiting illustration, certain example embodiments. Subject matter may, however, be embodied in a variety of different forms and, therefore, covered or claimed subject matter is intended to be construed as not being limited to any example embodiments set forth herein; example embodiments are provided merely to be illustrative. Likewise, a reasonably broad scope for claimed or covered subject matter is intended. Among other things, for example, subject matter may be embodied as methods, devices, components, or systems. Accordingly, embodiments may, for example, take the form of hardware, software, firmware or any combination thereof (other than software per se). The following detailed description is, therefore, not intended to be taken in a limiting sense. Throughout the specification and claims, terms may have nuanced meanings suggested or implied in context beyond an explicitly stated meaning. Likewise, the phrase “in some embodiments” as used herein does not necessarily refer to the same embodiment and the phrase “in another embodiment” as used herein does not necessarily refer to a different embodiment. It is intended, for example, that claimed subject matter include combinations of example embodiments in whole or in part. In general, terminology may be understood at least in part from usage in context. For example, terms, such as “and”, “or”, or “and/or,” as used herein may include a variety of meanings that may depend at least in part upon the context in which such terms are used. Typically, “or” if used to associate a list, such as A, B or C, is intended to mean A, B, and C, here used in the inclusive sense, as well as A, B or C, here used in the exclusive sense. In addition, the term “one or more” as used herein, depending at least in part upon context, may be used to describe any feature, structure, or characteristic in a singular sense or may be used to describe combinations of features, structures or characteristics in a plural sense. Similarly, terms, such as “a,” “an,” or “the,” again, may be understood to convey a singular usage or to convey a plural usage, depending at least in part upon context. In addition, the term “based on” may be understood as not necessarily intended to convey an exclusive set of factors and may, instead, allow for existence of additional factors not necessarily expressly described, again, depending at least in part on context. The present disclosure has been described with reference to block diagrams and operational illustrations of methods and devices. It is understood that each block of the block diagrams or operational illustrations, and combinations of blocks in the block diagrams or operational illustrations, can be implemented by means of analog or digital hardware and computer program instructions. These computer program instructions can be provided to a processor of a general purpose computer to alter its function as detailed herein, a special purpose computer, ASIC, or other programmable data processing apparatus, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, implement the functions/acts specified in the block diagrams or operational block or blocks. In some alternate implementations, the functions/acts noted in the blocks can occur out of the order noted in the operational illustrations. 
For example, two blocks shown in succession can in fact be executed substantially concurrently or the blocks can sometimes be executed in the reverse order, depending upon the functionality/acts involved. For the purposes of this disclosure, a non-transitory computer readable medium (or computer-readable storage medium/media) stores computer data, which data can include computer program code (or computer-executable instructions) that is executable by a computer, in machine readable form. By way of example, and not limitation, a computer readable medium may comprise computer readable storage media, for tangible or fixed storage of data, or communication media for transient interpretation of code-containing signals. Computer readable storage media, as used herein, refers to physical or tangible storage (as opposed to signals) and includes without limitation volatile and non-volatile, removable and non-removable media implemented in any method or technology for the tangible storage of information such as computer-readable instructions, data structures, program modules or other data. Computer readable storage media includes, but is not limited to, RAM, ROM, EPROM, EEPROM, flash memory or other solid state memory technology, optical storage, cloud storage, magnetic storage devices, or any other physical or material medium which can be used to tangibly store the desired information or data or instructions and which can be accessed by a computer or processor. To the extent the aforementioned implementations collect, store, or employ personal information of individuals, groups, or other entities, it should be understood that such information shall be used in accordance with all applicable laws concerning the protection of personal information. Additionally, the collection, storage, and use of such information can be subject to the consent of the individual to such activity, for example, through well known “opt-in” or “opt-out” processes as can be appropriate for the situation and type of information. Storage and use of personal information can be in an appropriately secure manner reflective of the type of information, for example, through various access control, encryption, and anonymization techniques (for especially sensitive information). In the preceding specification, various example embodiments have been described with reference to the accompanying drawings. However, it will be evident that various modifications and changes may be made thereto, and additional embodiments may be implemented without departing from the broader scope of the disclosed embodiments as set forth in the claims that follow. The specification and drawings are accordingly to be regarded in an illustrative rather than restrictive sense. | 51,198 |
11943211 | The same reference number represents the same element or the same type of element on all drawings. It should be appreciated by those skilled in the art that any block diagrams herein represent conceptual views of illustrative systems embodying the principles of the present subject matter. Similarly, it will be appreciated that any flow charts, flow diagrams, state transition diagrams, pseudo code, and the like represent various processes which may be substantially represented in computer readable medium and so executed by a computer or processor, whether or not such computer or processor is explicitly shown. DESCRIPTION OF EMBODIMENTS The figures and the following description illustrate specific exemplary embodiments of the invention. It will thus be appreciated that those skilled in the art will be able to devise various arrangements that, although not explicitly described or shown herein, embody the principles of the invention and are included within the scope of the invention. Furthermore, any examples described herein are intended to aid in understanding the principles of the invention, and are to be construed as being without limitation to such specifically recited examples and conditions. As a result, the invention is not limited to the specific embodiments or examples described below, but by the claims and their equivalents. FIG.1illustrates a network scenario in accordance with at least some embodiments of the present invention. As shown inFIG.1, the network scenario may comprise a set of communication devices CD. Some of the communication devices may have the required functionality to act as a network manager NM or as a network controller NC. A communication device CD can be a corporate, authority, and/or user device, such as a server device, a desktop/tablet/laptop computer, smartphone, a machine-to-machine (M2M) device, a set-top box or other suitable electronic device. The communication devices CD can be virtual machines, for example implementing compute and storage functions. In an IoT environment, a communication devices CD can be any low-powered device such as a light-bulb. A communication device CD may need to use a service via a connection to a network (like a home network) that is controlled by a network manager. To that end, the communication device CD is able to send a request related to this service to the network manager that allows the service to be performed. A communication device CD can be considered as a network manager NM if it has the required functionality. A network manager NM is responsible to “bind” with a given communication device (i.e. to give connections, either intranet or internet ones) and to report on a blockchain network BN about its current behaviour as compared to a reference behaviour. The reference behaviour is signed by the manufacturer of the given communication device and shared with the network manager NM during the “bind” process. The reference behaviour related to a communication device describes how the communication device should behave. For example, a part of the reference behaviour could be “TERM=connect to French IPs”. Based on TERM, a network manager can then recognize if the communication device is good or bad behaving. In this example, if the communication device only connects to French IPs it will good-behave, and if it connects to other IPs from other countries then it will bad-behave. 
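As a toy illustration of how a network manager might compare a request against a reference behaviour such as "TERM=connect to French IPs", consider the sketch below. The term format, the geolocation helper and the verdict strings are assumptions for illustration rather than anything mandated by the description; a real manager would use a proper GeoIP database.

```python
# Hypothetical sketch: a network manager checking one request against a
# reference-behaviour term of the form "TERM=connect to <country> IPs".

def parse_term(term: str) -> str:
    # e.g. "TERM=connect to French IPs" -> "French"
    return term.split("connect to ", 1)[1].split(" IPs")[0]

def country_of(ip: str) -> str:
    """Placeholder geolocation lookup; a real manager would query a GeoIP DB."""
    demo = {"5.39.0.1": "French", "114.114.114.114": "Chinese"}
    return demo.get(ip, "Unknown")

def assess_behaviour(reference_term: str, destination_ip: str) -> str:
    allowed = parse_term(reference_term)
    observed = country_of(destination_ip)
    return "good-behaving" if observed == allowed else "bad-behaving"

print(assess_behaviour("TERM=connect to French IPs", "5.39.0.1"))         # good-behaving
print(assess_behaviour("TERM=connect to French IPs", "114.114.114.114"))  # bad-behaving
```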
In the network scenario, only one network manager NM can be chosen by a communication device during a setup phase with a “bind” process and a network manager NM can be counted also as a network controller as explained later. The network manager NM allows communication devices to access a telecommunication network TN. For example, a network manager NM is an internet gateway. In one embodiment, the “bind” process is the process with which the communication device allows a network manager to start reporting about the behaviour of the communication device. This “bind” process can rely on a request sent by the communication device CD to the network manager NM to join the blockchain network. A network controller NC does not act as gateway/proxy for the internet (data) connection, unlike a network manager. The role of a network controller is to review previous requests being sent to the network manager NM and to upload them to a blockchain as witnesses of what happened. In one embodiment of the network scenario, it is assumed that a communication device should have at least 2f+1 network manager and network controllers NC in order to defend against f malicious network entities (network manager and network controllers). The network manager and the network controllers are equipped with an application able to connect to the blockchain network, in order to report a content of a request of a communication device. Data packets (e.g., traffic and/or messages sent between the network managers) may be exchanged among the network manager, the network controllers and the blockchain network BN using predefined network communication protocols such as certain known wired protocols, wireless protocols, or other shared-media protocols where appropriate. In this context, a protocol consists of a set of rules defining how the nodes interact with each other. Innovative decentralized data storage solutions, such as blockchains, enable to provide provenance and also to avoid the need to rely on third parties to regulate information and data systems. In addition, blockchain platforms can host “smart contracts” which could replace established methods based on human witnesses with logical software-implemented protocols. “Smart contracts” may be defined as computer programs designed to automate the execution of the terms of a machine-readable contract or agreement. Unlike a traditional contract which would be written in natural language, a smart contract is a machine executable program which comprises rules that can process inputs in order to produce results, which can then cause actions to be performed dependent upon those results. A blockchain network BN relies on a blockchain architecture that is a replicated computing architecture where every network node executes and records the same transactions in the same order. Only one transaction at a time is globally accepted and all those transactions create the blockchain data-set which is kept replicated across the whole network. This is achieved without the need of a central authority since each and every transaction (as well as its order within the global set of transactions) needs to be accepted and broadcasted by a fraction of the network (each blockchain implementation has its own fraction's size). The blockchains can work in different ways, as well as in different scales. The nodes of a blockchain network may comprise corporate, authority, and/or user devices, such as a server, a desktop/tablet/laptop computer, smartphone or other suitable electronic device. 
The system may comprise an administrator or management node, a relay or other kind of intermediate device for connecting a node to further networks or services, such as another distributed or centralized computing system or a cloud service. The nodes are mutually addressable in a suitable way, for example, they may be connected to an internet protocol, IP, network. Messages released into the IP network with a recipient address are routed by the network to the recipient node identified by the recipient address. IP is not the only suitable networking technology used, for example, other peer-to-peer networking models are also suitable. The blockchain state information shared by the nodes may store all the transactions and history carried out in the network. The blockchain state information is stored in or as a blockchain ledger. Each node comprises the ledger whose content is in sync with other ledgers. The nodes may validate and commit transactions in order to reach consensus. Each node may have their own copy of the ledger and is permission-controlled, so participants see only appropriate transactions. Application of blockchain technology and the ledger enable a way to track the unique history of transactions by the individual nodes in the network. A network manager provides a service of reporting on the blockchain a behavior of communication devices for which a bind process has been established. A blockchain begins with the creation of a ‘genesis’ block. Each subsequent block then includes a hash of the previous block in the blockchain. This has two effects: 1.) modifying an existing block would also require regenerating each block after it, which is highly impractical from a computational standpoint and prevents malicious changes and 2.) the hashing mechanism provides an ordering to the blocks that traces all the way back to the genesis block, allowing devices to track changes in the system. The actual data content of the blocks can also vary. For example, the data in the blocks typically include a listing of exchanges/transactions and can include any information. A block of the blockchain may comprise at least header fields and a set of transactions that forms an actual transaction data of the block. In terms of the present invention, the transactions may comprise top hash entries, optionally with their timestamps, provided for storage into the block chain. The transactions may also comprise different kinds of transactions, as the block chain need not be dedicated to one single type of transaction. Blockchain systems typically implement a peer-to-peer system based on some combination of encryption, consensus algorithms, and proof-of-X, where X is some aspect that is difficult to consolidate across the network, such as proof-of-work, proof-of-stake, proof-of-storage, etc. Typically, those actors on a network having proof-of-X arrive at a consensus regarding the validation of peer-to-peer transactions. Some private blockchains do not implement proof-of-X consensus, e.g., where the computing hardware implementing the blockchain is controlled by trusted parties. Chained cryptographic operations tie a sequence of such transactions into a chain that once validated, is typically prohibitively computationally expensive to falsify. The blockchain network can be public or private. 
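Before turning to public versus private deployments, the block structure just described (a genesis block followed by blocks that each embed the hash of their predecessor, with header fields and a set of transactions) can be sketched as follows. The particular header fields and the hashing of a canonical JSON form are assumptions for illustration, not a prescribed format.

```python
import hashlib, json, time

def block_hash(block: dict) -> str:
    # Hash the block's canonical JSON form (illustrative, not a real consensus rule)
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def make_block(prev_hash: str, transactions: list) -> dict:
    return {
        "header": {"prev_hash": prev_hash, "timestamp": time.time()},
        "transactions": transactions,   # e.g. behaviour reports, top hash entries
    }

def verify_chain(chain: list) -> bool:
    """Each block must reference the hash of the block before it."""
    for prev, block in zip(chain, chain[1:]):
        if block["header"]["prev_hash"] != block_hash(prev):
            return False
    return True

genesis = make_block(prev_hash="0" * 64, transactions=[])
block_1 = make_block(block_hash(genesis), [{"device": "X", "behaviour": "good"}])
assert verify_chain([genesis, block_1])
```

Because every block carries its predecessor's hash, altering any historical block would require regenerating every block after it, which is the property the prose above relies on.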
A public blockchain is a blockchain that anyone can read, send transactions and expect to see them included if they are valid, and anyone can participate in a consensus process for determining what blocks get added to the chain. Different kinds of private blockchain may be distinguished. A fully private blockchain is a blockchain where write permissions are kept to one organization. Read permissions may be public or restricted to certain participants. A consortium blockchain is a blockchain where a consensus process is controlled by a pre-selected set of nodes, for example, a consortium of 15 financial institutions, each of which operates a node and of which 10 must sign every block in order for the block to be valid. The right to read the blockchain may be public or restricted to the participants. A semi-private blockchain is run by a single company who grants access to any user who qualifies, and it typically targets business-to-business users. Examples of semi-private blockchains could include ones for government entities for record-keeping, land titles, public records, etc. The network manager or the network controller is an apparatus that may be any suitable physical hardware configuration such as: a network gateway, one or more server(s), blades consisting of components such as processor, memory, network interfaces or storage devices. In some of these embodiments, the apparatus may include cloud network resources that are remote from each other. In some embodiments, the apparatus may be a virtual machine. In some of these embodiments, the virtual machine may include components from different machines or be geographically dispersed. The apparatus may comprise one or more network interfaces (e.g., wired, wireless, etc.), at least one processor, and a memory interconnected by a system bus and powered by a power source (e.g., one or more batteries or other charge storage devices, a power line, etc.). The network interface(s) contain the mechanical, electrical, and signaling circuitry for communicating data over links coupled to the blockchain network. The network interfaces may be configured to transmit and/or receive data using a variety of different communication protocols according to the blockchain network. The memory comprises a plurality of storage locations that are addressable by the processor and the network interfaces for storing software programs and data structures associated with the embodiments described herein. The processor may comprise hardware elements or hardware logic adapted to execute software programs. An operating system, portions of which are typically resident in memory and executed by the processor, functionally organizes the device by, inter alia, invoking operations in support of software processes and/or services executing on the apparatus. With reference toFIG.2, a method monitoring behavior of a communication device according to one embodiment of the invention comprises steps S1to S11. A communication network, like a local network, comprises a set of communication devices, including at least one communication device acting as a network manager. In a setup process corresponding to steps S1and S2, a communication device willing to connect to the Internet via a network manager for a given service sends a broadcast message to all communication devices within wireless distance advertising its presence. 
With all active communication devices that reply, a standard key-pair exchange protocol takes place for the communication devices to communicate in a secure/private way. In step S1, the communication device CD sends a broadcast message MesB within a wireless range of the communication device CD. The broadcast message MesB contains an identifier IdCD of the communication device, a public key Pk-CD of the communication device and a query for available network controllers and network managers. For example, the identifier IdCD of the communication device is a MAC address of the communication device. In step S2, when the other communication devices are contacted by the communication device CD, i.e. when they receive the broadcast message, they choose whether they want to be a network controller NC or a network manager NM or nothing by sending such information to the communication device. Such other communication device can be a network controller or network manager if it has the required functionality and an authorization (for example via an agreement of the manufacturer of such other communication device). At the initiative of said other communication device, session cryptographic keys are exchanged between the communication device and each of said other communication devices being available as network manager or network controller. By receiving information that said other communication devices are available as network manager or network controller, the communication device further receives the public key Pk-N of said other communication devices. For the sharing of such cryptographic keys, a standard key-pair exchange protocol can take place for the devices to communicate in a secure/private way, being able to encrypt all future messages. When a contacted device chooses to be a network controller, it exchanges the key-pair with the communication device CD for future communications, which means it will behave like a network controller for said communication device CD by means of the identifier IdCD of the communication device and the exchanged cryptographic keys. In step S3, the communication device selects one network manager, if there are many network managers, and a number of network controllers. In one embodiment, where the communication device CD knows that there are N devices within the same network (this can be determined via PING or other HELLO protocols), the communication device CD can decide to select 2f+1 devices as network manager and network controllers among said N devices, f being any number such that "f<=N/2−1". This number f is set to defend against f malicious devices. In another embodiment, which could be considered a best-practice scenario, the communication device can select the maximum number of network controllers among the other communication devices having replied to the broadcast message. In another embodiment, which could be considered a worst-case scenario, the communication device can select at least 3 network controllers among the other communication devices having replied to the broadcast message. In one example, the CD may select the NM and NCs automatically using predetermined parameters or by selecting the NM and NCs in a list shown in a user interface of the CD. In another embodiment, the number f or the number of chosen network controllers can be decided/defined in many ways.
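One concrete way to realize the Step S3 selection just described (choose f so that f <= N/2 − 1, then pick 2f + 1 devices in total: one network manager plus 2f network controllers) is sketched below, before the further selection options discussed next. The random choice, the helper names and the assumption of at least four replies are illustrative only.

```python
import random

def choose_f(n_replies: int) -> int:
    """Largest f satisfying f <= N/2 - 1, and at least 1 (assumes N >= 4 replies)."""
    return max(1, n_replies // 2 - 1)

def select_entities(replies: list):
    """Pick 2f + 1 devices in total: one network manager plus 2f network controllers.

    replies: identifiers of devices that answered the broadcast and offered to act
    as manager or controller. Selection here is random for illustration; the
    description also allows user choice or predetermined parameters.
    """
    f = choose_f(len(replies))
    chosen = random.sample(replies, 2 * f + 1)
    return chosen[0], chosen[1:]          # (network manager, network controllers)

nm, ncs = select_entities([f"device-{i}" for i in range(8)])
# With N = 8 replies: f = 3, so one manager and six controllers are selected.
```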
It could be the user/owner of the communication device CD that selects said number, for example via an application on the communication device CD or it can be decided by a specific protocol or even by the manufacture of the communication device. As an example, the manufacturer could sell a set of communication devices CD that support “f=3” and those devices will only start to work when they receive “2f+1=7” replies from other communication devices acting as network controllers. In another embodiment, if such number is not pre-defined, the communication device CD can select to the majority of communication devices acting as network controllers as to provide the best service in terms of security. This configuration can remain fixed or changed according to a policy of the communication device. The more the configuration changes and the more devices are selected as network controllers, the more secure is the protocol. In step S4, the communication device CD sends a service request ReqS to the network manager NM. For example, the service request is a data-driven logical request related to the upload/download of data to/from the Internet, a socket access, a firmware upgrade or any service operable by the communication device. Furthermore, the communication device CD caches the sent service request and the hash of the service request for future validation use. The service request ReqS is encrypted and signed to guarantee integrity and authenticity. In step S5, the network manager NM receives the service request ReqS and performs the service related to the service request ReqS. As the network manager NM has the functionality to route the traffic, it is able to analyze the content of the service request and to allow the communication device to access a telecommunication network by routing the service request to the telecommunication network TN. The telecommunication network may further comprise one or more other communications networks, like Internet, for example. The network manager NM stores a copy of the service request to be sent later to other network controllers. It is assumed that the copy of the service request has the same content as the service request, eventually adapted to a desired format. In step S6, the network manager NM analyzes the content of the service request and determines the behavior of the communication device in view of an expected behavior. In some embodiments, the network manager NM can retrieve (or has already retrieved) a set of specifications of the communication device by interrogating an entity able to provide information about the expected behavior of the communication device, like the manufacturer of the communication device. The expected behavior can be described in a document of type terms of use. The determined behavior is for example a good behavior or a bad behavior in view of an interpretation of the content of the service request against the terms of use of the communication device, the determined behavior contributing to the evolution of the reputation of the communication device. The network manager NM sends a first report Rpt to the blockchain network BN, the report containing the content of the service request and the determined behavior of the communication device. Such first report Rpt about behavior is not final yet (not written within the blockchain) and has to be validated later (in order to be written in the blockchain) by miners of the blockchain network by means of other reports. 
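A device-side sketch of Step S4 (building, authenticating, hashing and caching the service request) might look as follows. The JSON serialization, the use of an HMAC under the Step S2 session key as a stand-in for "encrypted and signed", and the cache layout are assumptions; the description leaves the exact signature and encryption scheme open.

```python
import hashlib, hmac, json

def build_service_request(device_id: str, payload: dict, session_key: bytes) -> dict:
    """Sketch of Step S4: serialize, authenticate and hash a service request.

    The HMAC under the session key exchanged in Step S2 stands in for
    'encrypted and signed'; the actual scheme is left open by the description.
    """
    body = json.dumps({"device": device_id, "payload": payload},
                      sort_keys=True).encode()
    return {
        "body": body,
        "tag": hmac.new(session_key, body, hashlib.sha256).hexdigest(),
        "hash": hashlib.sha256(body).hexdigest(),
    }

# Device-side cache kept for later validation (used when controllers ask in Step S9).
request_cache = {}
req = build_service_request("webcam-X", {"upload_to": "server.example.fr"}, b"session-key")
request_cache["latest"] = {"request": req["body"], "hash": req["hash"]}
```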
In step S7, the network manager NM sends a broadcast message MesB′ to all communication devices in the local network (communication devices acting as network controllers or not). The broadcast message MesB′ contains the copy of the service request ReqS, the hash of the service request and a query to report the service request retrieved from the communication device in order to validate the behavior of the communication device. In step S8, each network controller NC that has received the broadcast message MesB′ establishes an ad-hoc connection with the communication device CD, which is identified for example by means of the identifier IdCD of the communication device. The ad-hoc connection is for example a direct WiFi connection. The ad-hoc connection is encrypted by means of the public keys of the communication device and of the network controller. In step S9, each network controller NC asks the communication device CD for the latest request sent to the network manager, i.e. the service request ReqS sent in step S4. The communication device CD retrieves the service request ReqS and the hash of the service request previously stored in cache. The communication device CD sends the service request ReqS and the hash of the service request to each network controller NC having an ad-hoc connection with the communication device. The cached requests are hash-sized, which means a few kilobytes, and they may be deleted as soon as the request is validated (there is no reason to keep a history of past, accepted requests). As such, any "smart" IoT device sold nowadays could support this. A program or application may be used to delete cached requests in the IoT device. In step S10, each network controller NC creates its own report based on the service request as a blockchain transaction and sends it to the blockchain network BN. In one embodiment, the network controller NC creates its own report only if the hash sent by the communication device corresponds to the hash sent by the network manager. The hash received from the CD must be the same as the hash received from the NM. If not, the request is rejected by the NC and eventually also by the blockchain, since the NM won't be supported by any NC. The hash identifies the service request being created by the communication device but not the communication device itself. In one example, the CD's identity is not provided for privacy reasons, but information on the device "class", such as webcams, temperature sensors, smoke/gas sensors, etc., for example, is provided. In one embodiment, this is needed to verify whether the communication device and the network manager are in agreement about the service request made by the communication device CD. Two cases are possible: 1) the hashes sent by the communication device and the network manager match, in which case they agree on the service request of the communication device and the report transaction can be created by the network controllers as well, or 2) the hashes do not match, which means that either the communication device is denying the service request made to the network manager or the network manager is forging fake requests on behalf of the communication device. If the hashes do not match, the service request is not forwarded by the network controller and as such it won't be accepted within the blockchain. To reach consensus, for example in a blockchain, votes are needed to agree.
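Before moving on to how the votes are counted, the controller-side check of Steps S8-S10 (fetch the cached request from the device, compare hashes with what the manager broadcast, and only then create a report carrying the device class rather than its identity) could be sketched as follows; the dictionary layouts and field names are assumptions.

```python
import hashlib

def controller_report(nm_broadcast: dict, cd_reply: dict, device_class: str):
    """Sketch of Steps S8-S10: a controller only reports when both hashes agree.

    nm_broadcast: {'request_copy': bytes, 'hash': str} received from the manager.
    cd_reply:     {'request': bytes, 'hash': str} fetched from the device itself.
    """
    recomputed = hashlib.sha256(cd_reply["request"]).hexdigest()
    if cd_reply["hash"] != nm_broadcast["hash"] or recomputed != nm_broadcast["hash"]:
        return None   # mismatch: the request is rejected and no report is forwarded

    # The report carries the request content and the device *class*, not its identity
    return {
        "device_class": device_class,        # e.g. "webcam"
        "request_hash": nm_broadcast["hash"],
        "request": nm_broadcast["request_copy"],
    }
```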
In this regard, the number of network controllers exceeds the number of network managers in an example embodiment, such that the votes of the network controllers will control the validation process to the exclusion of the network manager(s) if the network controllers vote in a uniform manner. If the NCs do not forward the request due to a mismatch in the hash, the NM vote will be the only one and the consensus will not be reached, for example. The reports sent to the blockchain network BN are blockchain transactions and can be under the form of a smart contract to be executed by miners of the blockchain. The content of the reports sent by the network controllers is in the same format as the first report sent by the network manager, i.e. should correspond to the content of the service request, except that the network manager adds the determined behavior of the communication device (as the network manager has the terms of use of the communication device and the network controllers do not have them). When a new communication device sends a service request for the first time, the new communication device is validated by using the terms of use, e.g. at least the manufacturer's settings (such as an address domain indicating that the location of the server is in a given country, region or territory) provided to the NM, for example. Further, the network manager may broadcast that information to the network controllers. In one example, the terms of use may be stored in the NM and they are used for the service requests. Updates of the terms may be possible through software updates, for example to change the valid country, region or territory. In step S11, the report transactions are executed on the blockchain network BN according to the type of blockchain and to a specific consensus in order to validate or invalidate the first report Rpt previously sent by the network manager. The network manager NM and the network controllers NCs send the reports to the same entity, like a validation entity. The entity may have the same network address. The network manager and the network controllers may have their IDs known by the entity. The reports may be identified by the identifier IdCD of the communication device and the time of the service requests, or in any other way, for example using identifiers of the network manager NM and the network controllers NCs or any combination of these. The entity may be the blockchain network BN as described in this patent application as an example. In FIG. 3, a flow chart of a method implemented by a communication device, in accordance with at least some examples, is illustrated. The flow chart begins at 30. The communication device is configured to send a broadcast message. The communication device sends it, at least, to get knowledge of the other devices in the local network. The flow chart continues at 32, where the communication device receives a reply or replies from other devices, such as from one or a plurality of network controllers and from a network manager. At 34, the devices that function as network controllers and as a network manager are determined from the reply or replies from the other devices. At 36, cryptographic keys are exchanged between the device and the determined devices, like the network controllers and the network manager. At 38, the devices encrypt future messages. For cryptographic keys, for example, a standard key-pair exchange protocol can take place for the communication device, network controllers and network manager to communicate in a secure/private way, being able to encrypt all future messages.
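The "standard key-pair exchange protocol" referenced at 36-38 is not named by the description; one common possibility is an ephemeral Diffie-Hellman exchange, sketched below with the third-party cryptography package (X25519 plus HKDF). This is purely an illustrative assumption, not the protocol the description mandates, and the derived key would then protect the CD-NC or CD-NM messages.

```python
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
from cryptography.hazmat.primitives.kdf.hkdf import HKDF
from cryptography.hazmat.primitives import hashes

# Each side generates a key pair and sends its public key to the peer.
cd_private = X25519PrivateKey.generate()
nc_private = X25519PrivateKey.generate()

def session_key(own_private, peer_public) -> bytes:
    """Derive a 32-byte session key from the X25519 shared secret."""
    shared = own_private.exchange(peer_public)
    return HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
                info=b"cd-nc session").derive(shared)

# Both sides derive the same key, which can then encrypt all future messages.
k_cd = session_key(cd_private, nc_private.public_key())
k_nc = session_key(nc_private, cd_private.public_key())
assert k_cd == k_nc
```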
In FIG. 4, a flow chart illustrating a method implemented by a communication device in accordance with at least some examples is illustrated. The flow chart begins at 40, where the communication device is configured to send a service request, depicted as ReqS, as one example. The flow chart continues at 42, where the communication device may cache, or store, information on the sent service request and the hash of it. At 44, the device sends information on the service request and the hash of the service request stored in cache to the determined network controllers. FIG. 5 is a flow chart illustrating a method implemented by a network manager in accordance with at least some examples. The flow chart begins at 50. The network manager receives a service request from the communication device. At 52, the network manager retrieves the expected behavior of the communication device. The flow chart continues at 54, where the network manager analyzes the content of the service request. At 56, the access to a telecommunication network is determined and compared against the expected behavior of the communication device. The network manager may be configured to receive an updated version of the expected behavior, and then the updated version will become the expected behavior. At 58, the network manager is configured to store a copy of information on the service request received from the communication device. At 59, the network manager determines a behavior of the communication device. The behavior may be determined as good or bad. If bad, the service request may be determined to be cancelled, for example. The network manager may generate a first report for validation. FIG. 6 is a flow chart illustrating a method implemented by a network controller in accordance with at least some examples. The flow chart begins at 60, where the network controller is configured to receive from the network manager a message to report the service request retrieved from the communication device. At 62, upon reception of the message, the network controller is configured to establish a connection with the communication device. At 64, the network controller is configured to ask the communication device for information on the service request sent to the network manager. At 66, the network controller is configured to receive from the communication device information on the service request and the hash of the service request sent to the network manager. At 68, the network controller may produce a report. The report may comprise the result of a comparison of the information received from the communication device and the network manager. The determined network controllers are configured to operate similarly. At 69, the network controller is configured to send the report for validation. The first report and the report(s) may be sent to the same entity and validated there. In a general manner, if the first report Rpt is validated, it yields the creation of a transaction written in a block of the blockchain. In one embodiment, said block contains a set of reports about the communication device and is valid if and only if, for the communication device associated with a specific first report, there are at least X transactions that confirm its validity, where X is the majority of the network controllers and the network manager communicating with the communication device CD. As such, for each assessed communication device, there can be a set Ci of network controllers where Ci={C1i, C2i, . . . , Cki}.
The “majority of transactions supporting a specific behavior” is then expressed by the formula Maj=⌈|Ci|/2⌉. By the usage of the blockchain technology and this new protocol for validating the first reports, malicious network managers can be prevented from reporting fake/malicious behavior of communication devices. Indeed, since both the network manager and the network controllers are providing feedback on the communication device's requests, if the majority of them is honest (an important assumption for any blockchain-based solution) then malicious, fake or other reports, supported by only a minority of network managers or network controllers, will be discarded. In an illustrative example described hereinafter, whenever an IoT device (or any other dumb device not able to manage the blockchain itself) wants to be part of a blockchain ecosystem/solution, it has to rely on somebody else for the creation of blockchain transactions. In this case, the IoT devices rely on a network manager (i.e. a modem/router) for the creation and broadcast of blockchain transactions, as the IoT devices would not be able to do that themselves. This has the big advantage of making low-powered dumb devices capable of joining a blockchain solution, but it also has the big drawback of putting all the trust in the network manager. Indeed, even if the IoT device (like the communication device CD in this case) is behaving as expected, a compromised network manager could report it as acting in a malicious way. The above issue is due to the fact that nowadays there is no way for other blockchain nodes to verify if the transactions being created by said network manager are trusted or not. Indeed, they are not directly connected to the IoT device and cannot verify what it is doing. To solve this issue, a network scenario according to one embodiment does not have one single network manager, but it is required that each transaction (related to a service request from an IoT device) is also reported by many network controllers. As such, other network managers or backend systems can verify whether each transaction is supported by a required number of network controllers. In an illustrative example described hereinafter, a network scenario comprises a smart TV, a security camera, a laptop and a set-top box (acting as modem/router). Usually, the first three devices are connected to the internet through the fourth one (the set-top box). As such, all the "data requests" (service requests) would go through the set-top box as well. As a toy example, it is desired to monitor whether the webcam is behaving as expected. The webcam is supposed (according to its terms of use) to upload images to a certified server in France, so some content of the service request would be of the form: webcam→images→set-top box→French server. Then, at some point, the network manager reports that the webcam is starting to upload images to a Chinese server, and the reported service request becomes: webcam→images→set-top box→Chinese server. The last service request indicates a behavior that is not expected and has to be reported via the blockchain, such that everybody in the world gets to know that a specific webcam model is not acting as expected (thus lowering its reputation). As such, the report transaction created by the set-top box could be as follows: "webcam→images→set-top box→Chinese server; webcam ID=X misbehaving". The above process requires the miners to validate the transaction before storing it within the blockchain.
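A sketch of how miners could apply the majority rule to such reports is shown below, interpreting Maj as a strict majority of all reporters (manager plus controllers), which matches the three-reporter example; the report fields are assumptions, and the sample data anticipates the laptop and smart TV reports described in the next paragraphs.

```python
from collections import Counter

def accept_manager_report(manager_report: dict, controller_reports: list) -> bool:
    """A manager's report is accepted only if a strict majority of all reporters
    (manager plus controllers) agree with the request content it reported."""
    observations = [manager_report["server"]] + [r["server"] for r in controller_reports]
    majority = len(observations) // 2 + 1
    most_common, count = Counter(observations).most_common(1)[0]
    return count >= majority and most_common == manager_report["server"]

# The set-top box (manager) claims a Chinese server, while the laptop and the
# smart TV (controllers) both saw a French one.
set_top_box = {"device": "webcam ID=X", "server": "Chinese", "behaviour": "misbehaving"}
laptop = {"device": "webcam ID=X", "server": "French"}
smart_tv = {"device": "webcam ID=X", "server": "French"}

print(accept_manager_report(set_top_box, [laptop, smart_tv]))  # False: report discarded
```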
According to some solutions, miners only control whether the set-top box has been previously "paired" with the webcam (which indeed happened, since the webcam is interacting with the set-top box) but cannot verify the trustworthiness of the transaction content since they cannot interact with the webcam themselves. Thanks to the network scenario, the laptop and the smart TV can act as network controllers and can check the real service request (especially by means of the hash of the service request) and provide the service request as a report to the blockchain. In an illustrative manner including interpretation of the behavior, these final results have to be analyzed by the miners:
set-top box→blockchain transaction (webcam ID=X; Chinese server; misbehaving)→blockchain miners→blockchain
laptop→blockchain transaction (webcam ID=X; French server)→blockchain miners→blockchain
smart TV→blockchain transaction (webcam ID=X; French server)→blockchain miners→blockchain
The miners still cannot verify the content of the transaction themselves, i.e. verify whether the webcam is uploading images to a Chinese or a French server. However, they can analyze the reported service requests and detect that one device (the set-top box as network manager) reported that the webcam is communicating with China, while two others (the laptop and the smart TV as network controllers) reported that the camera is communicating with France. As such, the miners will discard the transaction related to "misbehaving". Indeed, the network controllers cannot verify what the communication device is doing, as they do not have the related terms of use, and as such they only act as validators of what the network manager is reporting. As a result, a compromised set-top box cannot report fake behaviors on devices. In another specific illustrative example, the miners see the following transactions sent to the blockchain: either the following one
set-top box→blockchain transaction (webcam ID=X; Chinese server; misbehaving)→blockchain miners
or the following two
set-top box→blockchain transaction (webcam ID=X; Chinese server; misbehaving)→blockchain miners
laptop→blockchain transaction (webcam ID=X; French server)→blockchain miners
In all cases, the report of the set-top box will not be accepted and written within the blockchain, since the miners do not have a "majority" of service requests that corroborate the report of the set-top box (there is only one request in the first case and two contradicting requests in the second one). As long as there is not a majority of network controllers reporting on the same service request as the one reported by the set-top box, the report of the set-top box will not be mined (i.e. accepted) within the blockchain, with the final result that the reputation of the communication device will not change. An embodiment comprises a communication device under the form of an apparatus comprising one or more processor(s), I/O interface(s), and a memory coupled to the processor(s). The processor(s) may be implemented as one or more microprocessors, microcomputers, microcontrollers, digital signal processors, central processing units, state machines, logic circuitries, and/or any devices that manipulate signals based on operational instructions. The processor(s) can be a single processing unit or a number of units, all of which could also include multiple computing units. Among other capabilities, the processor(s) are configured to fetch and execute computer-readable instructions stored in the memory.
The functions realized by the processor may be provided through the use of dedicated hardware as well as hardware capable of executing software in association with appropriate software. When provided by a processor, the functions may be provided by a single dedicated processor, by a single shared processor, or by a plurality of individual processors, some of which may be shared. Moreover, explicit use of the term “processor” should not be construed to refer exclusively to hardware capable of executing software, and may implicitly include, without limitation, digital signal processor (DSP) hardware, network processor, application specific integrated circuit (ASIC), field programmable gate array (FPGA), read only memory (ROM) for storing software, random access memory (RAM), and non volatile storage. Other hardware, conventional and/or custom, may also be included. The memory may include any computer-readable medium known in the art including, for example, volatile memory, such as static random access memory (SRAM) and dynamic random access memory (DRAM), and/or non-volatile memory, such as read only memory (ROM), erasable programmable ROM, flash memories, hard disks, optical disks, and magnetic tapes. The memory includes modules and data. The modules include routines, programs, objects, components, data structures, etc., which perform particular tasks or implement particular abstract data types. The data, amongst other things, serves as a repository for storing data processed, received, and generated by one or more of the modules. As used in this application, the term “circuitry” may refer to one or more or all of the following: (a) hardware-only circuit implementations (such as implementations in only analog and/or digital circuitry) and (b) combinations of hardware circuits and software, such as (as applicable): (i) a combination of analog and/or digital hardware circuit(s) with software/firmware and (ii) any portions of hardware processor(s) with software (including digital signal processor(s)), software, and memory(ies) that work together to cause an apparatus, such as a mobile phone or server, to perform various functions) and (c) hardware circuit(s) and or processor(s), such as a microprocessor(s) or a portion of a microprocessor(s), that requires software (e.g., firmware) for operation, but the software may not be present when it is not needed for operation.” This definition of circuitry applies to all uses of this term in this application, including in any claims. As a further example, as used in this application, the term circuitry also covers an implementation of merely a hardware circuit or processor (or multiple processors) or portion of a hardware circuit or processor and its (or their) accompanying software and/or firmware. The term circuitry also covers, for example and if applicable to the particular claim element, a baseband integrated circuit or processor integrated circuit for a mobile device or a similar integrated circuit in server, a cellular network device, or other computing or network device. A person skilled in the art will readily recognize that steps of the methods, presented above, can be performed by programmed computers. Herein, some embodiments are also intended to cover program storage devices, for example, digital data storage media, which are machine or computer readable and encode machine-executable or computer-executable programs of instructions, where said instructions perform some or all of the steps of the described method. 
The program storage devices may be, for example, digital memories, magnetic storage media, such as magnetic disks and magnetic tapes, hard drives, or optically readable digital data storage media.
11943212 | Embodiments of the present disclosure and their advantages are best understood by referring to the detailed description that follows. It should be appreciated that like reference numerals are used to identify like elements illustrated in one or more of the figures, wherein showings therein are for purposes of illustrating embodiments of the present disclosure and not for purposes of limiting the same. DETAILED DESCRIPTION Provided are methods utilized for authentication through multiple pathways based on device capabilities and user requests. Systems suitable for practicing methods of the present disclosure are also provided. A device may include one or more device authentication profiles that limit use of associated processes for the communication device and require matching user data to authenticate the user for each of the authentication profiles and allow access to the processes corresponding to each of the authentication profiles. In this regard, authentication profiles may be made for processes of the communication device. Processes of the communication device may include access to the device and/or the device's operating system, as well as features and capabilities of the device and/or device operating system. Thus, the processes of the device may allow for unlocking the device and operating the device. The processes may also include use of device hardware and/or software, such as a messaging and communication module (e.g., network interface), a camera, device applications (e.g., messaging and/or payment applications), and other types of executable processes by the hardware and/or software of the communication device. In other embodiments, the device processes may also correspond to connected devices with the communication device, such as remote sensors, databases or other storage resources, etc. One or more users of the communication device may wish to protect the processes of the communication device from unauthorized use. Thus, the user(s) may utilize an authentication module, which may establish the authentication profile(s) that prevent access and use of one or more processes associated with that authentication profile unless user data collected during current use of the communication device satisfies the requirements of the authentication profile. The authentication module may therefore create an authentication profile for one or more processes, such as unlocking the communication device or utilizing an application of the communication device. The authentication profile may also be specific to certain processes within the application, for example, by allowing access and use of some features of the application but preventing or limiting use of other features. The authentication profiles may allow access and/or prevent access for one user or a group of users. In this regard, an exemplary authentication profile may be associated with a payment module of the communication device and limit use of the payment module to certain users as well as place limits on an amount of payment/transfer using the payment module. The authentication module may be requested to establish authentication profiles for certain process by a user (e.g., owner, operator, and/or administrator) of the communication device. Thus, the user may specify the process(es) that require authentication to use. The user may also establish the required user data to be met in order to allow authenticate a user and allow access to the specified process(es). 
For example, the user may set the parameters for the user data that are required to be met to utilize the process. However, in other embodiments, the authentication profile may determine what user data is required to match the authentication profile and allow access to the device process(es). In such embodiments, the authentication module may determine user data for a user when the user utilizes the communication device. The authentication module may then associate this user data with a specific user. Thus, when the user data is replicated in the future, the authentication module may identify the user and authenticate that user for the processes for authentication profiles matching that user data. Moreover, each authentication profile having required user data to match the authentication profile and unlock the associated process(es) may have multiple pathways for authentication by requiring different user data in order to verify an identity of the user who wishes to access those associated process(es) when using the communication device. In order to establish authentication profiles, the authentication module may profile the communication device to determine what device components are available to collect and determine user data. In this regard, the authentication module may review the communication device system and determine what device components are available to the authentication module. The device components may correspond to one or more of a network interface module, a communication module connected to an external device or external sensor, a keypad, a mouse, a touchscreen interface, a camera, a microphone, an accelerometer, a motion detector, an environmental detector (e.g., barometer, altimeter, GPS sensor, etc.), and/or a biometric sensor. Thus, user data determined by the device component(s) may include data about use of the device by the user (e.g., applications accessed/used, emails sent, messages sent, etc.), input actions by the user (e.g., typing, scrolling, mouse actions, touchscreen movements, etc.), motions of the user while using the device (e.g., hand the user holds the device in, angle of holding the device, whether the user utilizes the device while walking or moving, etc.), information about the user (e.g., height of the user, facial recognition and/or imaging, biometric readings, clothing of the user, etc.), and/or real-world/environmental conditions for an environment that the device is used within (e.g., location, temperature, humidity, time of day, light levels, noise, etc.). An authentication profile may therefore require user data collected and/or determined by a device component to match one of these set parameters for required user data for the authentication profile in order to unlock and/or allow access to process(es) associated with that authentication profile. Moreover, the authentication profile may further require a password to be added to the authentication profile with the required user data to meet that profile. After setting the required user data to be met for an authentication profile, during use of the communication device, the authentication module may actively and/or passively collect and determine user data during use of the communication device by a user. 
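By way of illustration only, one possible representation of such an authentication profile and its required user data parameters is sketched below in Python; the class and field names are hypothetical and are not prescribed by the present disclosure. In this sketch, a profile matches only when every required parameter is satisfied and, where configured, the password also matches.

```python
# Illustrative sketch only; class and field names are hypothetical.
from dataclasses import dataclass
from typing import Optional

@dataclass
class AuthenticationProfile:
    """A profile that guards one or more device processes."""
    protected_processes: list             # e.g., ["payment_module", "camera"]
    required_user_data: dict              # parameter name -> set of acceptable values
    required_password: Optional[str] = None

    def matches(self, observed: dict, password: Optional[str] = None) -> bool:
        # Every required parameter must be present and acceptable.
        for name, acceptable in self.required_user_data.items():
            if observed.get(name) not in acceptable:
                return False
        # If a password is part of the profile, it must also match.
        return self.required_password is None or password == self.required_password

# Example: limit a payment module to known locations during the day.
payment_profile = AuthenticationProfile(
    protected_processes=["payment_module"],
    required_user_data={"location": {"home", "office"}, "time_of_day": {"day"}},
)
print(payment_profile.matches({"location": "office", "time_of_day": "day"}))  # True
print(payment_profile.matches({"location": "cafe", "time_of_day": "day"}))    # False
```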
Based on the user data determined during the current session of use of the communication device, the authentication module may perform matching of that user data to authentication profiles to determine whether the user data (and therefore, the user utilizing the device) matches any of the authentication profiles. Based on the matching authentication profile(s), the authentication module may automatically (e.g., without user input) authenticate the user for the device process(es) associated with the matching authentication profile(s) and allow access to those process(es). The user may therefore seamlessly use such processes that the user is authorized to use based on their user data and the matching authentication profile(s). However, the user may be prevented from using device process(es) where the user has the incorrect or insufficient user data to meet the requirements of the associated authentication profile. If the user attempts to use such process(es), the user may be informed that their authentication is lacking. In various embodiments, the user may be further informed of how to satisfy the requirements of the authentication profile, such as what other user input to provide to meet the authentication profile (e.g., being required to enter a biometric reading). Moreover, the authentication module may establish authentication profiles for device processes without user input requesting establishment of the authentication profile to protect associated device processes. For example, the authentication module may execute in the background of the operating system of the communication device and may profile the device and the user of the device by determining components of the device available to the authentication module and collecting user data about past use of the communication device by the user. The authentication module may associate this past user data with processes used by the user concurrently with the user data (e.g., processes used when the user data was collected, active, and/or current). Thus, during future uses of the communication device and/or requests to access those processes, the authentication module may require user data to match the requirements (e.g., required user data/parameters) in the generated authentication profile in order for the user to access those processes. In this regard, the authentication module may also protect the device processes. The authentication module may also establish each authentication profile to authenticate a user using an authentication profile based on available user data according to multiple pathways in order to provide access to the associated processes. Each pathway may correspond to required user data by the pathway in the authentication profile in order to authenticate the user and may require different user data. For example, access to a payment module may require one of two different authentication pathways in an authentication profile. The first authentication pathway may require a user fingerprint biometric, a location of the user, and a time of day. However, a second authentication pathway may require a certain motion to be performed by the user, detection of the user's touch inputs to a touch screen, access of certain applications by the user, and height detected of the user or of the communication device while being held by the user. Thus, based on available user data and device components able to detect the user data, the user may be authenticated for an authentication profile using more than one pathway. 
In various embodiments, more than two pathways may be available. Moreover, fallback mechanisms may be used in the case of device malfunctioning and/or malfunctioning of one or more device components in order to allow for the user to access processes of the device. The fallback mechanisms may correspond to general multifactor authentication. Thus, different authentication pathways (or ways to authenticate a user) may be used to provide access to a user trying to obtain a certain authentication or to provide access to different authentication levels or access. For the former, when a first authentication pathway or first type of authentication method is unsuccessful (such as due to difficulty in entering a PIN/password, faulty sensor, etc.), a second different authentication pathway or second different type of authentication may be used, both for accessing the same level or providing the same level of authentication. If the second type of authentication fails, a third one may be used. As a result, a user device may be able to leverage different ways the device can authenticate a user to provide numerous advantages, including more flexibility for the user and more control of device access. This enables a processor or other computing device to operate more efficiently because the processor may not need to process multiple attempts to authenticate through a faulty or inefficient method, but may instead recognize this and provide one or more alternative authentication methods, which may allow less authentication attempts and thus less time processing authentication requests. FIG.1is a block diagram of a networked system100suitable for implementing the processes described herein, according to an embodiment. As shown, system100may comprise or implement a plurality of devices, servers, and/or software components that operate to perform various methodologies in accordance with the described embodiments. Exemplary device and servers may include device, stand-alone, and enterprise-class servers, operating an OS such as a MICROSOFT® OS, a UNIX® OS, a LINUX® OS, or other suitable device and/or server based OS. It can be appreciated that the devices and/or servers illustrated inFIG.1may be deployed in other ways and that the operations performed and/or the services provided by such devices and/or servers may be combined or separated for a given embodiment and may be performed by a greater number or fewer number of devices and/or servers. One or more devices and/or servers may be operated and/or maintained by the same or different entities. System100includes a user102, a communication device110, a payment provider server130, and a service provider server140in communication over a network150. User102may utilize communication device110to utilize the various features available for communication device110. Thus, user102may wish to access one or more processes of communication device100, which may be protected and require authentication to access the processes. Communication device110may determine user data for user102using device components. Such user data may also be determined from payment provider server130and/or service provider server140. Using the user data, communication device110may determine whether the user data matches one or more required user data for authentication of processes associated with one or more authentication profiles. Communication device110may then authorize access to the associated processes of the matching authentication profiles. 
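The multiple-pathway and fallback behavior described above may be sketched, purely for illustration, as an ordered evaluation of pathways; the pathway contents and function names below are assumptions and not part of the disclosed embodiments.

```python
# Hypothetical sketch of ordered authentication pathways with fallback;
# pathway contents and names are illustrative, not taken from the disclosure.
def authenticate_with_fallback(pathways, observed_user_data):
    """Return the index of the first pathway fully satisfied by the observed
    user data, or None if every pathway fails (the caller may then fall back
    to general multifactor authentication)."""
    for index, required in enumerate(pathways):
        try:
            satisfied = all(
                observed_user_data.get(name) in acceptable
                for name, acceptable in required.items()
            )
        except Exception:
            # If observed_user_data is backed by live sensor reads, a faulty
            # component may raise; treat that pathway as failed and continue.
            satisfied = False
        if satisfied:
            return index
    return None

pathways = [
    {"fingerprint": {"match"}, "location": {"home"}, "time_of_day": {"evening"}},
    {"gesture": {"circle"}, "touch_pattern": {"known"}, "height_cm": {170, 171, 172}},
]
observed = {"gesture": "circle", "touch_pattern": "known", "height_cm": 171}
print(authenticate_with_fallback(pathways, observed))  # 1 -> the second pathway matched
```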
In various embodiments, one or more of the processes may be associated with payment provider server130and/or service provider server140, such as a payment using a payment module of communication device110. Communication device110, payment provider server130, and service provider server140may each include one or more processors, memories, and other appropriate components for executing instructions such as program code and/or data stored on one or more computer readable mediums to implement the various applications, data, and steps described herein. For example, such instructions may be stored in one or more computer readable media such as memories or data storage devices internal and/or external to various components of system100, and/or accessible over network150. Communication device110may be implemented as a communication device that may utilize appropriate hardware and software configured for wired and/or wireless communication with payment provider server130and/or service provider server140. For example, in one embodiment, communication device110may be implemented as a personal computer (PC), a smart phone, laptop/tablet computer, wristwatch with appropriate computer hardware resources, eyeglasses with appropriate computer hardware (e.g. GOOGLE GLASS®), other type of wearable computing device, implantable communication devices, and/or other types of computing devices capable of transmitting and/or receiving data, such as an IPAD® from APPLE®. Although a communication device is shown, the communication device may be managed or controlled by any suitable processing device. Although only one communication device is shown, a plurality of communication devices may function similarly. Communication device110ofFIG.1contains an authentication module120, a data collection component112, other applications114, a database116, and a communication module118. Authentication module120and other applications114may correspond to executable processes, procedures, and/or applications with associated hardware. In other embodiments, communication device110may include additional or different hardware and software as required. Authentication module120may correspond to one or more processes to execute modules and associated devices of communication device110to establish authentication profiles to prevent unauthorized access to processes of communication device110, access determined user data including past user data for establishment of the authentication profiles, determine whether any authentication profiles match current user data when user102utilizes communication device110, and provide authorization and access to the corresponding processes for the matching authentication profiles to the user data. In this regard, authentication module120may correspond to specialized hardware and/or software utilized by communication device110to first establish authentication profiles for processes of communication device110. Each authentication profile may be associated with at least one process of communication device110, such as access to communication device110and/or an operating system of communication device110, use of a hardware or software component of communication device110(e.g., an optical camera, communication module, database or memory, a phone module, an audio component, and/or associated applications and application features including messaging, payment, and/or social networking applications). 
The processes may also correspond to features and sub-processes of a main process, such as access and/or authorization rights within an application or associated with a hardware component, for example, messaging rights, payment and transfer limits, phone privileges, etc. When establishing the processes to be protected by one or more authentication profiles, user102or another owner/administrator of communication device110may select the processes to protect. However, in other embodiments, authentication module120may instead automatically select one or more processes to protect from unauthorized use with an authentication profile. Authentication module120may select processes to protect based on a security requirement of the process, sensitive material associated with the process (e.g., personal or financial information), requirement by the process (e.g., set with an administrative entity for the process, such as a payment provider for a payment module), available types of authentication methods on the device, or other known information about the process. Once the process(es) are selected to protect with an authentication profile, authentication module120may then set parameters that may be required to authenticate a user and satisfy the authentication profile so that the user associated with such parameters may access and utilize the processes protected by the authentication profile. The parameters may correspond to required user data that must be met to authenticate a user (e.g., user102) with the authentication profile. The required user data may be set by user102or the owner/administrator of communication device110when creating the authentication profile. For example, user102may require a biometric, time, and location for access to a payment application and/or for payments over $100 in the payment application. However, in other embodiments, user102or the owner/administrator may allow authentication module120to select the required user data, such as based on service provider authentication/risk models. Moreover, in embodiments where authentication module120automatically creates one or more authentication profiles to protect communication device110, authentication module120may also set the required user data. In embodiments where authentication module120selects the required user data for an authentication profile, authentication module120may select the required user data for collected and/or determined user data at the time that an authorized user was utilizing the process. For example, authentication module120may detect a user height, location, user input characteristics (e.g., a way the user types, moves the mouse, utilizes a touch screen, scrolls through menus, etc.), and/or other user data when the authorized user is utilizing a payment application (e.g., not fraudulently). Authentication module120may require this user data for future uses of the payment application. Authentication module120may allow for different authentication pathways for an authentication profile to allow use of the corresponding processes. For example, if a biometric is not available for the user, authentication module120may allow for the time of day and location to match known user data for authorized users in order to satisfy the required user data for the authentication profile. Moreover, authentication module120may adapt and change required user data over time to accommodate changing conditions, such as a move to a new address, change in person's information, use of the process, etc. 
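As a non-limiting illustration of how required user data might be derived from sessions already known to be authorized, and later adapted as circumstances change, the following sketch keeps only parameter values that recur in most authorized sessions; the threshold, window, and field names are assumptions.

```python
# Hypothetical sketch: derive required user data from authorized sessions.
from collections import Counter

def derive_required_user_data(authorized_sessions, min_share=0.8):
    """Keep a parameter value as 'required' only if it recurs in most sessions;
    re-running this over a sliding window lets the requirements adapt over time."""
    counts = {}
    for session in authorized_sessions:
        for name, value in session.items():
            counts.setdefault(name, Counter())[value] += 1
    required = {}
    total = len(authorized_sessions)
    for name, counter in counts.items():
        value, seen = counter.most_common(1)[0]
        if seen / total >= min_share:
            required[name] = {value}
    return required

sessions = [
    {"location": "office", "holding_hand": "right", "time_of_day": "day"},
    {"location": "office", "holding_hand": "right", "time_of_day": "day"},
    {"location": "home",   "holding_hand": "right", "time_of_day": "day"},
]
print(derive_required_user_data(sessions))
# {'holding_hand': {'right'}, 'time_of_day': {'day'}} -- location varies, so it is dropped
```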
Once authentication profiles are established by authentication module120, the associated processes of communication device110may require corresponding authentication profiles to be established to allow access and use of those processes. Thus, unauthorized user who cannot satisfy the required user data may be prevented from accessing and utilizing such processes. In order to determine whether a user is authorized to use one or more processes, authentication module120may access current user data during use of communication device110by user102. In this regard, user102may generate current user data based on user102's use of communication device110. Thus, user data may include biometrics, environment factors, user information, user actions, user input, etc. For example, applications used by user102and/or actions within the applications may be included within the user data, as well as current biometrics for user102. A height and/or position user102holds or utilizes communication device110or a device component of communication device110(e.g., a mouse, keyboard, touch interface, etc.) may correspond to user data. User data may include ambient light, noise, location, pressure, humidity, and/or other environmental information. Moreover, user data may be captured by cameras, breathalyzers, scanners, or other types of sensors to determine user data (e.g., facial/body recognition). In various embodiments, user data may also correspond to data received over network150, for example, from payment provider server130and/or service provider server140. For example, user data may correspond to social networking interactions by user102on service provider server140, payments made and/or received using payment provider server130, or other online actions over network150with another device or server. In various embodiments, user data may be receives from a device connected to communication device110, for example, using short range wireless communications (e.g., a pedometer or other wearable device tracking user biometrics). Communication device110and the connected device may communicate over near field communication, Bluetooth, Bluetooth Low Energy, radio, infrared, LTE Direct, or other communication protocol. Once the current user data is accessed by authentication module120, authentication module120may determine authorizations for user102based on the user data and the authentication profiles in order to allow user102to access one or more processes. Authentication module120may receive a request to utilize one or more processes during use of communication device110, for example, from user102when user102attempts to access those processes. However, in other embodiments, authentication module120may constantly process current user data in order to determine authentications to allow access to processes by user102. Authentication module120may then determine if user102may access and use a process based on the corresponding authentication profile. As discussed herein, user102may be authenticated for a process based on one or more pathways within an authentication profile, providing a fallback mechanism to allow user102to access the process even where user data required by one pathway for authentication is unavailable or incorrect (e.g., device malfunction, different user data but authorized user, change in circumstances causing new user data, etc.). Once authenticated for an authentication profile, authentication module120may allow user102to access and use the corresponding process(es) for the authentication profile. 
Authentication module120may further continue tracking user data so that if the user data changes to no longer be compliant with an authentication profile, authentication module120may then lock or otherwise prevent access and use of the process(es) for the authentication profile until the required user data is again met. Data collection component112may correspond to one or more processes and/or specialized hardware of communication device110to collect and/or determine past user data for an authorized user of an authentication profile and current user data for user102during user102's use of communication device110. In this regard, data collection component112may correspond to specialized hardware and/or software that may collect and/or determine user data during a session of use of communication device110by utilizing one or more device components for communication device110. Data collection component112may correspond to one or more of a network interface module, a communication module connected to an external device or external sensor, a keypad, a mouse, a touchscreen interface, a camera, a microphone, an accelerometer, a motion detector, an environmental detector (e.g., barometer, altimeter, GPS sensor, etc.), and/or a biometric sensor. Thus, user data determined by data collection component112may include data about use of communication device110by user102(e.g., applications accessed/used, emails sent, messages sent, etc.), input actions by user102(e.g., typing, scrolling, mouse actions, touchscreen movements, etc.), motions of user102while using communication device110(e.g., hand user102holds communication device110in, angle of holding communication device110, whether user102utilizes communication device110while walking or moving, etc.), information about user102(e.g., height of user102, facial recognition and/or imaging, biometric readings, clothing of user102, etc.), and/or real-world/environmental conditions for an environment that communication device110is used within (e.g., location, temperature, humidity, time of day, light levels, noise, etc.). Data collection module112may store the determined user data to database116for use with authentication profiles. Authentication module120may further determine a device profile for communication device110based on the capabilities of data collection component112, for example, what user data can be collected and/or determined using data collection component112. The device profile may be used to determine an authentication pathway for an authentication profile. Thus, based on capabilities of data collection component112and the user data determined by data collection component112, authentication module120may determine authentication profiles and a device profile for available authentication mechanisms through required user data determined by data collection component112for that authentication mechanism. In various embodiments, communication device110includes other applications114as may be desired in particular embodiments to provide features to communication device110. For example, other applications114may include security applications for implementing client-side security features, programmatic client applications for interfacing with appropriate application programming interfaces (APIs) over network150, or other types of applications. Other applications114may also include email, texting, voice and IM applications that allow a user to send and receive emails, calls, texts, and other notifications through network150. 
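A hypothetical sketch of the device profiling described above is shown below: the device profile lists the available data collection components, and only pathways whose required components are all present are treated as feasible. The component and pathway names are illustrative assumptions.

```python
# Hypothetical sketch: build a device profile from available data-collection
# components and keep only the authentication pathways the device can satisfy.
AVAILABLE_COMPONENTS = {"touchscreen", "accelerometer", "gps", "camera"}  # no fingerprint sensor

PATHWAY_REQUIREMENTS = {
    "biometric_path": {"fingerprint_sensor", "gps"},
    "behavioral_path": {"touchscreen", "accelerometer"},
    "visual_path": {"camera", "gps"},
}

def feasible_pathways(available, pathway_requirements):
    """Return pathways whose required components are all present on the device."""
    return [name for name, needed in pathway_requirements.items()
            if needed.issubset(available)]

print(feasible_pathways(AVAILABLE_COMPONENTS, PATHWAY_REQUIREMENTS))
# ['behavioral_path', 'visual_path'] -- the fingerprint pathway is excluded
```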
In various embodiments, other applications114may include financial applications, such as banking, online payments, money transfer, or other applications associated with a payment provider, such as a payment application, which may be limited in use by authentication profiles requiring authentication through user data for use of various processes. As previously discussed, other applications may include social networking applications and/or merchant applications. Other applications114may include device interfaces and other display modules that may receive input from user102and/or output information to user102. For example, other applications114may contain software programs, executable by a processor, including a graphical user interface (GUI) configured to provide an interface to the user. Communication device110may further include database116stored to a transitory and/or non-transitory memory of communication device110, which may store various applications and data and be utilized during execution of various modules of communication device110. Thus, database116may include, for example, identifiers such as operating system registry entries, cookies associated with authentication module120and/or other applications114, identifiers associated with hardware of communication device110, or other appropriate identifiers, such as identifiers used for payment/user/device authentication or identification. Database116may include authentication profiles as well as authentication profile data, such as associated processes protected by the authentication profile and require user data to perform authentication for the profiles. Additionally, user data collected by data collection component112may be stored to database116for use with the authentication profiles. Communication device110includes at least one communication module118adapted to communicate with payment provider server130and/or service provider server140. In various embodiments, communication module118may include a DSL (e.g., Digital Subscriber Line) modem, a PSTN (Public Switched Telephone Network) modem, an Ethernet device, a broadband device, a satellite device and/or various other types of wired and/or wireless network communication devices including microwave, radio frequency, infrared, Bluetooth, and near field communication devices. Communication module118may communicate directly with nearby devices using short range communications, such as Bluetooth Low Energy, LTE Direct, WiFi, radio frequency, infrared, Bluetooth, and near field communications. Payment provider server130may be maintained, for example, by an online payment service provider, which may provide payment services and/or processing for financial transactions on behalf of users. In this regard, payment provider server130includes one or more processing applications which may be configured to interact with communication device110, service provider server140, and/or another device/server to facilitate payment for a transaction. In one example, payment provider server130may be provided by PAYPAL®, Inc. of San Jose, CA, USA. However, in other embodiments, payment provider server130may be maintained by or include a credit provider, financial services provider, financial data provider, and/or other service provider, which may provide payment services to user102. Payment provider server130ofFIG.1includes payment account module132, other applications144, a database146, and a network interface component138. 
Payment account module132and other applications144may correspond to executable processes, procedures, and/or applications with associated hardware. In other embodiments, payment provider server130may include additional or different modules having specialized hardware and/or software as required. Payment account module132may correspond to one or more processes to execute modules and associated specialized hardware of payment provider server130to receive and/or transmit information from communication device110for establishment of payment accounts and processing and completion of one or more transactions initiated by user102using the payment accounts. In this regard, payment account module132may correspond to specialized hardware and/or software to establish payment accounts, which may be utilized to send and receive payments and monetary transfers and engage in other financial transactions. User102may establish a payment account with payment account module132by providing personal and/or financial information to payment provider server130and selecting an account login, password, and other security information. The payment account may be accessed through a browser application and/or dedicated payment application executed by communication device110. Thus, communication device110may protect and limit use of the payment account or other payment services offered by payment provider server130using authentication profiles, as discussed herein. Payment module132may further process a received transaction from communication device110by receiving the transaction from communication device110with a payment request for a payment for the transaction. The payment request may correspond to a payment token, including a payment instrument and identification of the transaction, and may be encrypted prior to transmission to payment account module132to prevent unauthorized receipt of a payment instrument. The payment token may include information corresponding to user identifiers, user financial information/identifiers, transaction information and/or other identifiers. Additionally, the payment token may include a payment amount and terms of payment for the transaction. Once received, payment account module132may utilize a payment account or financial information (e.g., a payment instrument such as a credit/debit card, bank account, etc.) of user102to render payment for the transaction. Payment account module132may receive purchase authorizations, in certain embodiments, and process payments for transaction in accordance with the purchase authorizations. Payment may be made to a merchant device or another user device using the payment instrument and the terms of the payment request. Additionally, payment account module132may provide transaction histories, including receipts, to communication device110. In various embodiments, payment provider server130includes other applications134as may be desired in particular embodiments to provide features to payment provider server134. For example, other applications134may include security applications for implementing server-side security features, programmatic client applications for interfacing with appropriate application programming interfaces (APIs) over network150, or other types of applications. Other applications134may contain software programs, executable by a processor, including a graphical user interface (GUI), configured to provide an interface to user102when accessing payment provider server134. 
In various embodiments where not provided by payment account module132, other applications134may include connection and/or communication applications, which may be utilized to communicate information to over network150. Additionally, payment provider server130includes database146. As previously discussed, user102and/or the merchant corresponding to merchant location130/merchant server140may establish one or more payment accounts with payment provider server130. Payment accounts in database146may include user/merchant information, such as name, address, birthdate, payment/funding information, additional user financial information, and/or other desired user data. User102and/or the merchant may link to their respective payment accounts through a user, merchant, and/or device identifier. Thus, when an identifier is transmitted to payment provider server130, e.g. from communication device110, merchant devices132, and/or merchant server140, a payment account belonging to user102and/or the merchant may be found. Payment amounts may be deducted from one payment account and paid to another payment account. In other embodiments, user102and/or the merchant may not have previously established a payment account and may provide other financial information to payment provider server130to complete financial transactions, as previously discussed. In various embodiments, payment provider server130includes at least one network interface component138adapted to communicate communication device110, merchant devices132, and/or merchant server140over network150. In various embodiments, network interface component138may comprise a DSL (e.g., Digital Subscriber Line) modem, a PSTN (Public Switched Telephone Network) modem, an Ethernet device, a broadband device, a satellite device and/or various other types of wired and/or wireless network communication devices including microwave, radio frequency (RF), and infrared (IR) communication devices. Service provider server140may be maintained, for example, by a service provider entity, which may provide services to user102, which may provide data and/or information to communication device110for determining user data and authorizing user102with communication device110. In this regard, service provider server140includes one or more processing applications which may be configured to interact with communication device110and/or payment provider server130over network150to provide information to communication device110. In one example, service provider server140may be provided by EBAY®, Inc. of San Jose, CA, USA and/or STUBHUB®, Inc, of San Francisco, CA, USA. However, in other embodiments, service provider server140may be maintained by or include another type of service provider, such as a social networking, messaging, location services, travel, biometric analysis, and/or other type of service provider. Service provider server140ofFIG.1includes a service module142, other applications144, a database146, and a communication module148. Service module142and other applications144may correspond to executable processes, procedures, and/or applications with associated hardware. In other embodiments, service provider server140may include additional or different modules having specialized hardware and/or software as required. 
Service module142may correspond to one or more processes to execute modules and associated specialized hardware of service provider server140to provide a service to user102, which may generate data and/or information used to determine user data for user102, and provide such information to communication device110. In this regard, service module142may correspond to specialized hardware and/or software to host and/or provide a service to user102, which may include or be associated with calendaring, social networking, messaging, mapping and travel routing, location, biometrics, shopping, and/or other types of services. Thus, service module142may host a website offering the aforementioned services or provide such services through a dedicated application available for use with a communication device (e.g., an email server accessible through a website/application, a social networking server, etc.). Service module142may facilitate the generation of data/information used to determine user data for user102by generating the data/information related to actions by user102using the service. The information may include exchanged messages, shopping actions, locations, etc. Service module142may also provide actions and/or interactions by user102using services. A map, location, and/or travel route by user102may be used to determine a location for user102. Service module142may also provide biometrics used to determine user data. For example, biometrics may be used to determine when user102is exercising, when user102sleeps, a user's heart rate, a user's fingerprint or facial recognition information, etc. Service module142may also provide shopping information, which may be used to determine a user's common actions. In various embodiments, service provider server140includes other applications144as may be desired in particular embodiments to provide features to service provider server140. For example, other applications144may include security applications for implementing server-side security features, programmatic client applications for interfacing with appropriate application programming interfaces (APIs) over network150, or other types of applications. Other applications144may contain software programs, executable by a processor, including a graphical user interface (GUI), configured to provide an interface to user102when accessing service provider server140. In various embodiments where not provided by service module142, other applications144may include connection and/or communication applications, which may be utilized to communicate information over network150. Additionally, service provider server140includes database146. Database146may be utilized to store information utilized by one or more modules and/or applications of service provider server140, including service module142and/or other applications144. In this regard, database146may include received and/or determined information, including identifiers and other identification information. Database146may also include information and data generated by user102using service provider server140for use in determining user data for user102. In various embodiments, service provider server140includes at least one communication module148adapted to communicate with communication device110and/or payment provider server130over network150. 
In various embodiments, communication module148may comprise a DSL (e.g., Digital Subscriber Line) modem, a PSTN (Public Switched Telephone Network) modem, an Ethernet device, a broadband device, a satellite device and/or various other types of wired and/or wireless network communication devices including microwave, radio frequency (RF), and infrared (IR) communication devices. Network150may be implemented as a single network or a combination of multiple networks. For example, in various embodiments, network150may include the Internet or one or more intranets, landline networks, wireless networks, and/or other appropriate types of networks. Thus, network150may correspond to small scale communication networks, such as a private or local area network, or a larger scale network, such as a wide area network or the Internet, accessible by the various components of system100. FIG.2is an exemplary device having device components for determining user data and a user interface displaying protected device processes requiring the user data to match one or more authentication profiles for use, according to an embodiment. Environment200includes communication device110from environment100ofFIG.1executing modules and processes discussed in reference to environment100. Thus, communication device110includes a device interface1000that may be used to display one or more processes that may be protected by one or more authentication profiles from unauthorized use. Device interface1000may correspond to a display interface where user102may view processes executed by communication device110and interact with such processes through user input. Thus, device interface1000may include a device operating system1100, which may require a current user authentication1102for use of various processes of device operating system1100by a user while the user utilizes communication device110. For example, device operating system1100may execute a requested device process1104but require current user authentication to match a required authentication1106for requested device process1104. In this regard, communication device110may make an authentication determination1108using user data1110, which may be used to determine current user authentication1102, which may include an authorization to use requested device process1104. Moreover, based on user data1110, other authentication paths1112may be selected based on the capabilities of communication device110and/or available data in user data1110. In order to collect user data1110, communication device110may utilize one or more device components to collect and/or determine user data1110. Thus, communication device110may include input/output modules1002, such as a mouse, keyboard, touchscreen interface, etc., which may collect input from a user utilizing communication device110and output information to the user. Moreover, communication device110may include audio modules, such as a microphone and speaker, which may detect a user's voice, voice input, and/or environmental noise. Additionally, communication device110may detect user data through a biometric sensor1006and/or accelerometer1008, including fingerprints, facial recognition information, heartbeat, motions, height, etc. Communication device110may detect ambient light levels, atmospheric pressure, humidity, etc., through environmental sensor1010. Communication device110may also receive user data over a network connection or through short range wireless communications with a nearby device using communication module1012. 
In various embodiments, communication device110may include further or different sensors and device components. FIG.3is an exemplary device database storing authentication profiles for device processes and determined user data during use of the device by a user, according to an embodiment. Environment300ofFIG.3includes communication device110from environment100ofFIG.1executing modules and processes discussed in reference to environment100. Communication device110executes an authentication module corresponding generally to the specialized hardware and/or software modules and processes described in reference to authentication module120ofFIG.1. In this regard, database116stores information discussed in reference to environment100ofFIG.1, such as authentication profiles1200and user data1300. Authentication profiles1200includes a profile one1202, a profile two1210, device capabilities1218, and a fallback mechanism1220. Profile one1202may correspond to an authentication profile that may allow access and use of one or more processes when user data matches the requirements of profile one1202. Thus, profile one1202includes required user data1204, which may include user data required to be authenticated under profile one1202. When user data1300matches required user data1204, communication device110may allow access to authorized processes1206. In various embodiments, required user data1204may include more than one authentication pathway, each requiring different user data for authentication under profile one1202. Profile one1202may also include establishment information1208generated during establishment of profile one1202, which may be used to update profile one1202when required user data1204changes based on changed circumstances. Similarly, profile two1210also includes required user data1212, authorized processes1214, and establishment information1216. The authentication module may also profile communication device110to determine device capabilities1218, which may correspond to available device components used to determine user data and authorize a user under profile one1202and/or profile two1210. Moreover, a fallback mechanism1220may be used to authorize a user in the event that one or more device components are faulty. Database116also stores user data1300determined by device components of communication device110. User data1300may include collected data1302, which may also be processed to determine further user data1300(e.g., detection of a motion used to determine a height of a user). Collected data1302includes sensor data1304collected from device sensor components (e.g., a biometric sensor, camera, etc.). Collected data1302may also include application data1306determined from applications executed by communication device110and network data1308received over a network connection by communication device110. In various embodiments, collected data1302may also include data received from connected device1310. FIG.4is a flowchart of an exemplary process for authentication through multiple pathways depending on device capabilities and user requests, according to an embodiment. Note that one or more steps, processes, and methods described herein may be omitted, performed in a different sequence, or combined as desired or appropriate. At step402, user data collected by the communication device for a user in possession of the communication device is accessed, by a user authentication module of a communication device comprising at least one hardware processor. 
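For purposes of illustration only, the database contents described in reference to FIG.3 might be laid out in memory as follows; the disclosure does not prescribe a storage format, and the keys and example values below are assumptions.

```python
# One possible in-memory layout for the stored profiles, device capabilities,
# fallback mechanism, and collected user data; format and values are assumed.
device_database = {
    "authentication_profiles": {
        "profile_one": {
            "required_user_data": [                      # multiple pathways allowed
                {"fingerprint": "match", "location": "home"},
                {"time_of_day": "evening", "location": "home"},
            ],
            "authorized_processes": ["device_unlock"],
            "establishment_info": {"created": "2015-06-01", "created_by": "owner"},
        },
        "profile_two": {
            "required_user_data": [{"gesture": "circle", "height_cm": 171}],
            "authorized_processes": ["payment_module"],
            "establishment_info": {"created": "2015-06-02", "created_by": "module"},
        },
    },
    "device_capabilities": ["touchscreen", "accelerometer", "gps", "camera"],
    "fallback_mechanism": "general_multifactor_authentication",
    "user_data": {
        "sensor_data": {"height_cm": 171, "location": "home"},
        "application_data": {"recent_apps": ["mail", "browser"]},
        "network_data": {"social_interactions": 2},
        "connected_device_data": {"heart_rate": 62},
    },
}
print(device_database["authentication_profiles"]["profile_one"]["authorized_processes"])
```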
The user data may be collected by a device component, such as one or more of a network interface module, a communication module connected to an external device or external sensor, a keypad, a mouse, a touchscreen interface, a camera, a microphone, an accelerometer, a motion detector, an environmental detector, and a biometric sensor. The communication device may further determine the user data using a communication device application. For example, the device application may comprise one of a messaging application, a social networking application, a payment application, a shopping application, an email application, a media sharing or editing application, and an imaging application associated with a camera device corresponding to the communication device. At step404, a plurality of authentication profiles for the communication device is accessed, by the user authentication module, wherein each of the authentication profiles allows access to at least one process executed by the communication device. The authentication profiles may be determined based on user actions. The user actions may comprise past actions by the user when using the communication device during at least one past use of the communication device by the user. The communication device may collect or determine the past actions during use of the at least one process associated with the at least one matching profile. The user actions may comprise at least one of use of the communication device, input actions with the communication device, motions while using the communication device, and real-world conditions of an environment for the communication device during the use of the communication device. In various embodiments, each of the authentication profiles may be different for at least one of a type of the at least one process associated with the each of the authentication profiles and a use of the at least one process associated with the each of the authentication profiles by the user. The use of the at least one process may be determined based on the user's request within the at least one process. The at least one process may comprise access to utilize the communication device or the communication device's operating system. In other embodiments, the at least one process may correspond to an application executed by the communication device. The application may comprise a payment application of the communication device utilizing a payment provider. At least one matching profile using the user data and the plurality of authentication profiles is determined, by the user authentication module, at step406. The at least one process associated with the at least one matching profile may correspond to a payment application executed by the communication device. Use of the payment application may be limited by the at least one matching profile. In various embodiments, the user may determine the authentication profiles based on requirements for device and application security. Thus, at step408, access to at least one process associated with the at least one matching profile is authorized, by the user authentication module. Moreover, a change in the user data may be received, and the authorization may further determine if the change in the user data is compliant with the at least one process associated with the at least one matching profile and currently authorized by the user authentication module. 
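The following compact sketch, provided only as an illustration, walks through steps 402-408 and the re-evaluation performed when the user data changes; the profile layout and helper names are assumptions rather than a definitive implementation.

```python
# Hypothetical sketch of steps 402-408 and re-evaluation on changed user data.
def matches(profile, user_data):
    return all(user_data.get(k) in v for k, v in profile["required_user_data"].items())

def authorized_processes(user_data, profiles):
    """Steps 402-406: access user data and profiles and find matches; step 408:
    the returned processes are the ones access to which may be authorized."""
    granted = set()
    for profile in profiles:
        if matches(profile, user_data):
            granted.update(profile["authorized_processes"])
    return granted

profiles = [
    {"required_user_data": {"location": {"home"}},
     "authorized_processes": ["device_unlock"]},
    {"required_user_data": {"location": {"home"}, "fingerprint": {"ok"}},
     "authorized_processes": ["payment_module"]},
]

granted = authorized_processes({"location": "home"}, profiles)
print(granted)  # {'device_unlock'}

# A change in user data triggers re-evaluation: processes whose profiles no longer
# match would be revoked, and newly matching profiles are granted in addition.
changed = authorized_processes({"location": "home", "fingerprint": "ok"}, profiles)
print(changed)  # {'device_unlock', 'payment_module'} (set order may vary)
```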
Additionally, the user authentication module may further determine whether the change further matches additional profiles in the authentication profiles. If so, access to at least one process associated with the additional profiles may be authorized. FIG.5is a block diagram of a computer system suitable for implementing one or more components inFIG.1, according to an embodiment. In various embodiments, the communication device may comprise a personal computing device (e.g., smart phone, a computing tablet, a personal computer, laptop, a wearable computing device such as glasses or a watch, Bluetooth device, key FOB, badge, etc.) capable of communicating with the network. The service provider may utilize a network computing device (e.g., a network server) capable of communicating with the network. It should be appreciated that each of the devices utilized by users and service providers may be implemented as computer system500in a manner as follows. Computer system500includes a bus502or other communication mechanism for communicating information data, signals, and information between various components of computer system500. Components include an input/output (I/O) component504that processes a user action, such as selecting keys from a keypad/keyboard, selecting one or more buttons, images, or links, and/or moving one or more images, etc., and sends a corresponding signal to bus502. I/O component504may also include an output component, such as a display511and a cursor control513(such as a keyboard, keypad, mouse, etc.). An optional audio input/output component505may also be included to allow a user to use voice for inputting information by converting audio signals. Audio I/O component505may allow the user to hear audio. A transceiver or network interface506transmits and receives signals between computer system500and other devices, such as another communication device, service device, or a service provider server via network150. In one embodiment, the transmission is wireless, although other transmission mediums and methods may also be suitable. One or more processors512, which can be a micro-controller, digital signal processor (DSP), or other processing component, processes these various signals, such as for display on computer system500or transmission to other devices via a communication link518. Processor(s)512may also control transmission of information, such as cookies or IP addresses, to other devices. Components of computer system500also include a system memory component514(e.g., RAM), a static storage component516(e.g., ROM), and/or a disk drive517. Computer system500performs specific operations by processor(s)512and other components by executing one or more sequences of instructions contained in system memory component514. Logic may be encoded in a computer readable medium, which may refer to any medium that participates in providing instructions to processor(s)512for execution. Such a medium may take many forms, including but not limited to, non-volatile media, volatile media, and transmission media. In various embodiments, non-volatile media includes optical or magnetic disks, volatile media includes dynamic memory, such as system memory component514, and transmission media includes coaxial cables, copper wire, and fiber optics, including wires that comprise bus502. In one embodiment, the logic is encoded in a non-transitory computer readable medium. 
In one example, transmission media may take the form of acoustic or light waves, such as those generated during radio wave, optical, and infrared data communications. Some common forms of computer readable media include, for example, floppy disk, flexible disk, hard disk, magnetic tape, any other magnetic medium, CD-ROM, any other optical medium, punch cards, paper tape, any other physical medium with patterns of holes, RAM, PROM, EEPROM, FLASH-EEPROM, any other memory chip or cartridge, or any other medium from which a computer is adapted to read. In various embodiments of the present disclosure, execution of instruction sequences to practice the present disclosure may be performed by computer system500. In various other embodiments of the present disclosure, a plurality of computer systems500coupled by communication link518to the network (e.g., such as a LAN, WLAN, PSTN, and/or various other wired or wireless networks, including telecommunications, mobile, and cellular phone networks) may perform instruction sequences to practice the present disclosure in coordination with one another. FIGS.6A-6Eare exemplary authentication profiles built from user data collected from device components, according to an embodiment. For example,FIG.6Ashows a plurality of authentication profiles having various required user data for authentication. InFIG.6B, data may be output based on analysis of the data, for example, to recognize an entity.FIG.6Cshows multiple available types of user data, which may be used to build an authentication profile for a device. InFIG.6D, an exemplary flowchart shows how to determine a device authentication profile. Moreover, inFIG.6E, authentication profile chains may be built for authenticating users across various devices based on available attributes and authentication profiles (security profiles). Where applicable, various embodiments provided by the present disclosure may be implemented using hardware, software, or combinations of hardware and software. Also, where applicable, the various hardware components and/or software components set forth herein may be combined into composite components comprising software, hardware, and/or both without departing from the spirit of the present disclosure. Where applicable, the various hardware components and/or software components set forth herein may be separated into sub-components comprising software, hardware, or both without departing from the scope of the present disclosure. In addition, where applicable, it is contemplated that software components may be implemented as hardware components and vice-versa. Software, in accordance with the present disclosure, such as program code and/or data, may be stored on one or more computer readable mediums. It is also contemplated that software identified herein may be implemented using one or more general purpose or specific purpose computers and/or computer systems, networked and/or otherwise. Where applicable, the ordering of various steps described herein may be changed, combined into composite steps, and/or separated into sub-steps to provide features described herein. The foregoing disclosure is not intended to limit the present disclosure to the precise forms or particular fields of use disclosed. As such, it is contemplated that various alternate embodiments and/or modifications to the present disclosure, whether explicitly described or implied herein, are possible in light of the disclosure. 
Having thus described embodiments of the present disclosure, persons of ordinary skill in the art will recognize that changes may be made in form and detail without departing from the scope of the present disclosure. Thus, the present disclosure is limited only by the claims. | 59,020 |
11943213 | DETAILED DESCRIPTION OF THE DRAWINGS In the following, the embodiments of the present invention are explained in detail with reference to the drawings. First Embodiment FIG.1shows an apparatus for mediating configuration of authentication information for a service provided over an IP network according to the first embodiment of the present invention. The apparatus100is capable of communicating with the IoT device110over a computer network, not limited to a cellular network and including an IP network, and is also capable of communicating over an IP network with a service provider apparatus120that provides a service used by the IoT device110. The overall picture is a system comprising a service provider apparatus120that provides a service on an IP network, an IoT device110that uses the service, and an apparatus100that mediates configuration of authentication information for a connection between the IoT device110and the service provider apparatus120. The IoT device110has a SIM110-1for connecting to a cellular network, and the SIM110-1stores an identification number, such as IMSI, and secret information, such as a K-value. In this embodiment, the SIM110-1is provided by the operator of the apparatus100or its affiliate, and its identification number and secret information are also stored in the apparatus100or in a storage medium or storage apparatus accessible by the apparatus100. The apparatus100verifies the credibility of the IoT device110through a SIM authentication process using the identification number and secret information. SIM authentication can be performed in the same way as the conventional process based on the MILENAGE algorithm, etc. using HLR/HSS, but with some features specific to the present invention. This point will be discussed later in the second embodiment. The service provider apparatus120also has a credential for communication with the apparatus100based on the trust that the operator of the apparatus100has somehow formed in the service provider. Here, a “credential” is a generic term for a piece of information used for authentication, including IDs and passwords. While service providers can securely share credentials with the operator of the apparatus100using known methods, when sharing with IoT devices110is considered, there are risks of leakage during manufacturing of each of IoT devices110, and in order to minimize the risk the cost will inevitably increase. In a situation where there is no shared authentication information between the IoT device110and the service provider apparatus120, the present invention easily enables remote configuration of authentication information by having the apparatus100, which is capable of authentication for legitimate access with both sides, act as an intermediary apparatus, thereby accelerating the practical spread of IoT systems. In more detail, in this embodiment, the cipher key (CK) or the key corresponding to the cipher key stored in the apparatus100and the IoT device110as a result of SIM authentication is used as the master key (parent key) for the IoT device110to use various services. By generating an application key (key with a specific use) specific to the service used by the IoT device110based on the master key in the apparatus100and the IoT device110, and transmitting the application key from the apparatus100to the service provider apparatus120through a secure connection, a common key can be configured as authentication information for the IoT device110and the service provider apparatus120. 
In this embodiment, the cipher key CK will be used as an example, but it is also conceivable that the integrity key (IK) or the key corresponding to the integrity key stored in the apparatus100and the IoT device110as a result of SIM authentication may be used as the master key. In the present Specification, an integrity key can be understood as an example of a key corresponding to a cipher key. The apparatus100exists within the core network of a cellular network. It can be a communication apparatus of an MNO (Mobile Network Operator), or it can be a communication apparatus of an MVNO (Mobile Virtual Network Operator) which provides a wireless communication service by connecting to an MNO's communication infrastructure. The SIM110-1can be a SIM card provided by an MNO or MVNO. Between an MNO and an MVNO, there may be an intervening MVNE (Mobile Virtual Network Enabler) that provides support services for the smooth operation of the MVNO, and the MVNE may have a communication infrastructure that connects to the MNO's communication infrastructure to provide a wireless communication service. In this case, the apparatus100can be a communication apparatus of the MVNE and the SIM110-1can be a SIM card provided by the MVNE. All or part of the apparatus100may be an instance on a cloud, whether public or private. Herein, the term “cloud” refers to a system that can dynamically provision and provide computing resources such as a CPU, a memory, a storage, and a network bandwidth on demand over a network. For example, clouds are available from providers such as AWS. The term “public cloud” refers to a cloud that can be used by multiple tenants. The SIM110-1of the IoT device110can be a physical SIM card, but it can also be a semiconductor chip embedded in the IoT device110(also referred to as an “eSIM”). Alternatively, software can be installed in a secure area within a module of the IoT device110so that an identification number and secret information are stored in the software. Various arrangements by which the IoT device110maintains the values and programs necessary for SIM authentication can be considered. FIG.2shows an overview of the method for mediating configuration of authentication information according to the present embodiment. First, the IoT device110transmits a bootstrap request (initial configuration request) to the service provider apparatus120(not shown). The initial configuration to be performed includes configuration on the IoT device110and the service provider apparatus120of the application key for using a service, and may further include configuration of the connection information required for the IoT device110to establish a connection when using the service. The bootstrap request can be transmitted by installing a bootstrap agent as a piece of software or a program for that purpose on the IoT device110so that the agent is activated when the IoT device110is turned on for the first time, or, as described below, it can be transmitted in response to the expiration of the authentication information. The destinations of the initial configuration requests may be, for example, stored within respective pieces of client software which are installed on the IoT device110for services available on the IoT device110, or a list of one or more destinations of the initial configuration requests for services available on the IoT device110may be stored, or they may be specified directly or indirectly by an administrator of the IoT device110from a console for a user of the intervening apparatus100.
Although the bootstrap agent and the client software do not necessarily have to be separate programs, in this embodiment, the bootstrap agent can obtain the destinations of the initial configuration requests. The initial configuration request includes, as a part of the initial configuration information, a key Id for specifying the master key (also referred to as the “first key”) and a signature based on the master key and, where necessary, a timestamp. When a timestamp is used in the generation of the signature, the timestamp is also included in the initial configuration information. As will also be discussed later, in the SIM authentication process, when the generated master key is stored in the apparatus100and the IoT device110, the key Id can also be generated and stored in association with the master key. Next, the apparatus100receives the initial configuration information from the service provider apparatus120(S201). The apparatus100authenticates the service provider apparatus120that has transmitted the initial configuration information (S202), and if the authentication result is positive, the apparatus100obtains a master key based on the key Id and verifies the signature (S203). The order of the authentication of the service provider apparatus120and the verification of the signature may be reversed. The apparatus100then generates a nonce and calculates an application key (also referred to as a “second key”) based on the master key and the nonce (S204). The application key and the nonce are then transmitted to the service provider apparatus120(S205). The IoT device110can receive the nonce from the service provider apparatus120and use it to calculate the application key using the same algorithm as the apparatus100, as explained further below, so that the IoT device110and the service provider apparatus120can configure a common key together as authentication information. Here, although nonce generation is performed on the apparatus100, it is possible to perform it on the service provider apparatus120or the IoT device110, and to perform the necessary transmissions and receptions. If the nonce generation is performed on the service provider apparatus120or the IoT device110, the management by the intervening apparatus100may not always be sufficient, and there is a possibility that a weak, easily guessable nonce may be generated. In such a case, although it is desirable to generate different application keys for different services, they may end up being identical, or an attacker may be able to guess the logic of the nonce generation. The intervening apparatus100comprises a communication unit101-1such as a communication interface, a processing unit101-2such as a processor or CPU, and a storage unit101-3including a storage apparatus or storage medium such as a memory and a hard disk. The intervening apparatus100can realize each of the processes explained below by executing, on the processing unit101-2, a program for performing each of the processes described above and below, the program being stored in the storage unit101-3or in a storage apparatus or medium accessible from the intervening apparatus100. As shown inFIG.1, the intervening apparatus100can be separated into a first apparatus101and a second apparatus102depending on the processing contents, but these can be made into a single apparatus or further separated. The other devices can be realized by similar hardware.
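As a non-limiting sketch of the device side of this exchange (the JSON field names, the use of SHA-256 and the UNIX-time timestamp are assumptions for illustration; the description above only requires that the request carry the key Id, an optional timestamp, and a signature based on the master key), a bootstrap agent might assemble the initial configuration request as follows:

```python
import hashlib
import json
import time

def build_bootstrap_request(key_id: str, master_key: bytes, use_timestamp: bool = True) -> dict:
    """Assemble the initial configuration request sent by the bootstrap agent."""
    timestamp = str(int(time.time())) if use_timestamp else ""
    # Signature over the master key (and the timestamp, when one is used); the
    # apparatus can recompute the same digest because it also stores the master key.
    signature = hashlib.sha256(master_key + timestamp.encode()).hexdigest()
    request = {"keyId": key_id, "signature": signature}
    if use_timestamp:
        request["timestamp"] = timestamp
    return request

# Hypothetical key Id and master key stored on the device as a result of SIM authentication.
print(json.dumps(build_bootstrap_request("key-0001", b"\x00" * 16)))
```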
The program to be executed by each apparatus may include one or more programs, and may be stored on a computer-readable storage medium to form a non-transitory program product. FIG.3shows a specific example of the method of mediating configuration of authentication information according to the present embodiment. First, the IoT device110transmits initial configuration information such as a key Id to the destination of the service provider apparatus120specified by “example.com/v1/path/to/something/” to make an initial configuration request. Although parameters such as {keyId} are shown inFIG.3, not all parameters to be sent and received are illustrated. The service provider apparatus120makes a request to the apparatus100for the generation of an application key based on the initial configuration information received. At the apparatus100, in a variable order, verification of the signature included in the initial configuration information, authentication of the service provider apparatus120, and, if necessary, confirmation of whether an access authority to the specified key Id is given to the service provider apparatus120are performed. The apparatus100can provide credentials to respective service providers. It can also configure a master key or its Id for which respective service providers can provide services as an access authority. In more detail, it is possible to specify services or service providers that can access the master key or its key Id generated as a result of authentication in a SIM authentication request in which the AUTS described in the second embodiment is specified. The signature can be, for example, a hash value or digest value of the concatenated value of the master key and a timestamp added as necessary, and the same calculation can be performed on the apparatus100to verify the signature by match or mismatch of the hash values or digest values. SHA-256 can be cited as an example of a hash function for obtaining hash values. The apparatus100then generates a nonce required to calculate the application key. The nonce can be a sequence of numbers generated from random numbers or pseudo-random numbers, such as [23, 130, 4, 247, . . . ]. The calculation of the application key is then performed using the generated nonce and the master key that can be obtained by the received key Id. As a specific example, it can be a hash value of the concatenated value of these values. The application key is transmitted to the service provider apparatus120in a secure communication channel between the apparatus100and the service provider apparatus120. The service provider apparatus120configures the received application key as authentication information and transmits connection information needed for the IoT device110to use the service to the IoT device110. The IoT device110which received the information makes the required configuration. The connection information can include destination information, and the URL or the IP address of the destination are examples. In addition, the service provider apparatus120also performs necessary configuration, if any, in addition to the configuration of the application key. In this embodiment, the connection information includes the above-mentioned nonce. The IoT device110uses the received nonce to calculate for itself an application key that is identical to the application key set on the service provider apparatus120. 
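The apparatus-side counterpart of the specific example above can be sketched as follows (a non-limiting illustration: the in-memory stores, the credential check and the 16-byte nonce length are assumptions, while the signature verification and the application key calculation follow the SHA-256-over-concatenation construction described above):

```python
import hashlib
import secrets

MASTER_KEYS = {"key-0001": b"\x00" * 16}          # key Id -> master key (hypothetical store)
SP_CREDENTIALS = {"sp-example": "sp-password"}     # service provider -> credential (hypothetical)

def handle_application_key_request(sp_id: str, sp_credential: str,
                                   key_id: str, signature: str, timestamp: str = "") -> tuple:
    # Authenticate the service provider apparatus (the order relative to the
    # signature verification may be reversed, as noted above).
    if SP_CREDENTIALS.get(sp_id) != sp_credential:
        raise PermissionError("service provider authentication failed")
    # Verify the signature by recomputing the digest over the master key and timestamp.
    master_key = MASTER_KEYS[key_id]
    if hashlib.sha256(master_key + timestamp.encode()).hexdigest() != signature:
        raise PermissionError("signature verification failed")
    # Generate a nonce and calculate the application key from the master key and nonce.
    nonce = secrets.token_bytes(16)
    application_key = hashlib.sha256(master_key + nonce).digest()
    # Both values are returned for transmission to the service provider apparatus over
    # the secure channel; the nonce is then forwarded to the IoT device in the
    # connection information.
    return application_key, nonce
```

The use of a cryptographically secure random source for the nonce, centralized on the intervening apparatus, is what avoids the weak-nonce concern discussed above.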
As an example, if client software for the service to be used is installed on the IoT device110, the connection information for the service may be read by the software so that the IoT device110can automatically use the service. In this case, the software is capable of communicating using the application key. The IoT device110can be any device that has the necessary communication functions and is capable of performing SIM authentication and initial configuration. To that end, it is required to be capable of running software in a programming language such as C or Java (registered trademark). Extension to different programming languages (Ruby, Go, Javascript (registered trademark), etc.) by means of wrapping the implementation in C is also possible. For example, the IoT device110can be a device on which an OS, such as Linux (registered trademark) or Android (registered trademark), is installed. It is to be noted that if the term “only” is not written, such as in “based only on x”, “in response to x only”, or “in the case of x only”, in the present specification, it is assumed that additional information may also be taken into account. In addition, as a caveat, even if there are characteristics of a method, a program, a terminal, an apparatus, a server or a system (hereinafter referred to as “method, etc.”) that perform operations different from those described herein, each aspect of the invention is intended to perform the same operation as one of the operations described herein, and the existence of an operation different from those described herein does not mean that the method, etc. is outside the scope of each aspect of the invention. Second Embodiment The SIM authentication of the SIM of the IoT device110by the intervening apparatus100described in the first embodiment can be performed in the same way as the conventional process using HLR/HSS, but with the improvements described below. As shown inFIG.1, the intervening apparatus100can be divided into the first apparatus101and the second apparatus102depending on the processing contents. In this embodiment, the second apparatus102is mainly responsible for generating the parameters required in SIM authentication. This corresponds at least partially to the function of an AuC of a communication carrier. The second apparatus102stores the identification number, such as the IMSI stored in the SIM110-1, and secret information, such as the K-value, and also stores an SQN that is synchronized between the SIM110-1and the second apparatus102. The SQN is usually incremented synchronously in the SIM and in the AuC with a SIM authentication request specifying an identification number such as an IMSI. Since the generation of the cipher key CK referred to in the first embodiment can be performed by communication between the IoT device110and the intervening apparatus100or the second apparatus102via a computer network that is not necessarily limited to a cellular network, there may be a situation where the SQN managed by the second apparatus102is incremented by a bad request from a device without a legitimate SIM that has somehow obtained the identification number such as the IMSI, so that the SQN deviates from the SQN of the IoT device110. This embodiment suppresses this type of attack on SQNs by bad requests.
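A deliberately simplified toy model (not part of the present disclosure, and omitting the masking and acceptance-window rules of real 3GPP SQN handling) illustrates the problem: every authentication request that reaches the AuC-side record advances the stored SQN, so repeated requests from an attacker who knows only the IMSI push the network-side counter ahead of the counter held in the legitimate SIM110-1.

```python
# Toy model of the AuC-side subscriber record kept by the second apparatus.
auc_record = {"imsi": "440101234567890", "k": b"\x2b" * 16, "sqn": 32}
sim_sqn = 32   # counter held inside the legitimate SIM

def auc_handle_auth_request(record: dict) -> int:
    # A conventional AuC advances the stored SQN for every authentication
    # vector it generates, whether or not the requester actually holds the SIM.
    record["sqn"] += 1
    return record["sqn"]

for _ in range(5):                      # attacker replays requests with a stolen IMSI
    auc_handle_auth_request(auc_record)

print(auc_record["sqn"], sim_sqn)       # 37 vs. 32: the counters have drifted apart
```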
InFIG.4, the IoT device110and the SIM110-1that the IoT device110has are shown as separate elements. The purpose of this is to distinguish processes performed by executing a program on the IoT device110from processes performed inside the SIM110-1in response to the access from the IoT device110to the SIM110-1, as being different in nature. It is added that both processes can be understood as processes performed on the IoT device110. When the IoT device110is used as the subject, it can be understood to refer to the program for SIM authentication running on the IoT device110. In addition, inFIG.4, the second apparatus102is shown separately from the first apparatus101. However, it should be noted that this can also be regarded as processes performed on the intervening apparatus100. It should also be noted thatFIG.4does not illustrate all the parameters that are transmitted and received, althoughFIG.4will be used as a reference in the following. First, the IoT device110requests an IMSI from the SIM110-1. The SIM110-1returns the value of the IMSI to the IoT device110. The IoT device110which received the IMSI requests SIM authentication from the first apparatus101. The first apparatus101requests the second apparatus102to generate an authentication vector including an AUTN, a RAND, a CK, an IK, and an XRES. This authentication vector generation request sets an SQN to an invalid value in order to suppress the bad request described above, so that the AUTN determined based on the SQN is a value that causes SIM authentication to fail. If the value of the SQN is set to 0, the SQNs will not match once SIM authentication has been performed even once for the SIM110-1. In addition, instead of specifying an invalid value for the SQN at the first apparatus101, the value may be specified at the second apparatus102. More specifically, the value may be set so that it is below the correct SQN stored at the second apparatus102. Also, in light of its purpose in this embodiment, only the AUTN, RAND, etc. required for subsequent processes may be generated and returned to the first apparatus101, without generating the entire authentication vector. After receiving the authentication vector from the second apparatus102, the first apparatus101transmits the AUTN and the RAND to the IoT device110, and the IoT device110passes the AUTN and the RAND to the SIM110-1to request key calculation. As an error handling process, since the SQNs do not match, the SIM110-1generates an AUTS required for a Resync request to synchronize SQNs and provides it to the IoT device110. The IoT device110specifies the IMSI, the RAND and the AUTS and requests SIM authentication again from the first apparatus101. Here, the AUTS is a parameter that cannot be calculated if the secret information stored in the SIM110-1is not known. The first apparatus101makes a Resync request for resynchronization to the second apparatus102in response to the AUTS and RAND being specified. The second apparatus102generates an authentication vector by specifying the SQN of the SIM110-1included in the AUTS in a masked manner and returns it to the first apparatus101. The first apparatus101generates a key Id to identify the received cipher key CK, and stores them in association with each other. It also stores the XRES in association with them. The first apparatus101sends the key Id, the AUTN and the RAND to the IoT device110, and the IoT device110requests the SIM110-1to make a key calculation using the AUTN and the RAND.
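The control flow of the first apparatus101in this exchange can be sketched as follows (a non-limiting illustration: the interface of the second apparatus102is represented by placeholder methods, and the key Id is generated as a UUID purely by way of example):

```python
import uuid

KEY_STORE = {}   # key Id -> {"ck": ..., "xres": ...} kept by the first apparatus

def handle_sim_auth_request(imsi: str, second_apparatus, auts: bytes = None, rand: bytes = None):
    if auts is None:
        # Phase 1: request a challenge with a deliberately invalid SQN (e.g. 0) so
        # that the SIM's AUTN check fails and the SIM answers with an AUTS.
        autn, rand = second_apparatus.generate_challenge(imsi, sqn=0)
        return {"autn": autn, "rand": rand}
    # Phase 2: a Resync request; only a legitimate SIM can compute a valid AUTS.
    vector = second_apparatus.resync_and_generate_vector(imsi, rand, auts)
    key_id = str(uuid.uuid4())
    KEY_STORE[key_id] = {"ck": vector["ck"], "xres": vector["xres"]}
    return {"keyId": key_id, "autn": vector["autn"], "rand": vector["rand"]}
```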
Using the received AUTN and RAND together with the secret information K stored in its own storage medium or apparatus, the SIM110-1calculates a CK and an RES, and passes them to the IoT device110. The IoT device110sends the key Id and the RES to the first apparatus101and requests the verification of the generated CK. The first apparatus101performs the verification by obtaining an XRES based on the key Id and comparing it with the received RES, and if there is a match, it flags the CK identified by the key Id as verified. Then, the first apparatus101transmits a success response of the SIM authentication to the IoT device110with the expiration date of the CK if necessary, and the IoT device110stores the CK in association with the key Id. In this case, the CK may be stored in association with the IMSI as well. This embodiment enables the application of the CK or a key corresponding to the CK as a master key for the IoT device110to use various services in the subsequent processes, so that the application of a key agreed upon by SIM authentication is made possible without frequent SIM authentication requests. In addition, in order to deter bad SIM authentication requests from a device without a legitimate SIM, the SIM authentication process according to the present embodiment intentionally fails SQN synchronization, which is a prerequisite for key calculation, and triggers Resync, which requires secret information that can only be accessed by a legitimate SIM. Then, the above-mentioned bad attacks can be disabled by making the success of Resync a condition for the execution of subsequent processes. The scope of protection of the invention is not limited to the examples given hereinabove. The invention is embodied in each novel characteristic and each combination of characteristics, which includes every combination of any features which are stated in the claims, even if this feature or combination of features is not explicitly stated in the examples. REFERENCE SIGNS LIST
100 intervening apparatus
101 first apparatus
101-1 communication unit
101-2 processor unit
101-3 storage unit
102 second apparatus
110 IoT device
110-1 SIM
120 service provider apparatus
11943214 | DETAILED DESCRIPTION OF PREFERRED EMBODIMENTS BEST MODE In order to further elucidate the technical means and efficacy of the present disclosure for achieving the intended purpose of the disclosure, the present disclosure will be described in more detail with reference to the accompanying drawings and preferred embodiments as follow. The identity recognition method for an office platform provided by various embodiments of the present invention may be applied in an application environment as shown inFIG.1, to implement identity recognition. As shown inFIG.1, the application environment includes a user terminal100and a server300. The user terminal100and the server300are located in a wireless network or a wired network, and the user terminal100and the server300exchange data mutually through the wireless network or the wired network. Among them, the user terminal100may be a computer terminal device such as a personal computer (PC), an all-in-one computer, a laptop portable computer, or a mobile terminal device such as a vehicle terminal, and a smart phone, a smart TV, a TV box, a tablet computer, and an e-book reader, a MP3 player (Moving Picture Experts Group Audio Layer III), a MP4 (Moving Picture Experts Group Audio Layer IV). The server300may be a server, or a server cluster composed of several servers, or a cloud computing service center. FIG.2is a diagram illustrating of a terminal of the present disclosure. The structure shown inFIG.2can be applied to the user terminal100. As shown inFIG.2, the terminal10includes a memory102, a memory controller104, one or more (only one shown in theFIG.2) processor106, a peripheral interface108, a radio frequency module110, a positioning module112, and a camera module114, an audio module116, a screen118, and a button module120. These components communicate with each another via one or more communication buses/signal lines122. It can be understood that the structure shown inFIG.2is only an illustration, and the terminal10may further include more or less components than those shown inFIG.2, or have a different configuration from that shown inFIG.2. The components shown inFIG.2may be implemented using hardware, software or a combination thereof. The memory102may be used to store software programs and modules, such as program instructions/modules corresponding to the identity recognition method and system for the office platform in an embodiment of the present invention. The processor106runs software programs and modules stored in the memory controller104, to execute various functional applications and data processing, to implement the above-mentioned identity recognition method and system for the office platform. The memory102may include a high-speed random access memory, and may also include a non-volatile memory such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory. In some embodiments, the memory102can further include remote memories set with the processor106remotely. These remote memories can be connected to the terminal10through a network. The above network includes, but not limited to, Internet, intranet, LAN, mobile communication network and their combination. The processor106and other possible components access to memory102can be controlled under the storage controller104. The peripheral interface108couples various input/output devices to the CPU and the memory102. 
The processor106executes various software programs and instructions within the memory102to perform various functions of the terminal10and to process data. In some embodiments, the peripherals interface108, the processor106, and the memory controller104may be implemented in a single chip. In other embodiments, they can be implemented by separate chips. The radio frequency module110is used for receiving and transmitting electromagnetic waves, and realizes the mutual conversion of electromagnetic waves and electric signals so as to communicate with a communication network or other devices. The radio frequency module110may include various existing circuit elements for performing these functions, such as antennas, radio frequency transceivers, digital signal processors, encryption/decryption chips, subscriber identity module (SIM) cards, memory, etc. The radio frequency module110can communicate with various networks such as the Internet, corporate intranets, and wireless networks, or communicate with other devices through a wireless network. The above wireless network may include a cellular telephone network, a wireless local area network, or a metropolitan area network. The above wireless network may use various communication standards, protocols and technologies, including but not limited to Global System for Mobile Communication (GSM), Enhanced Data GSM Environment (EDGE), Wideband Code Division Multiple Access (W-CDMA), Code Division Multiple Access (CDMA), Time Division Multiple Access (TDMA), Bluetooth, Wireless Fidelity (WiFi) (e.g., IEEE 802.11a, IEEE 802.11b, IEEE 802.11g, and/or IEEE 802.11n), Voice over Internet Protocol (VoIP), Worldwide Interoperability for Microwave Access (Wi-Max), other protocols for e-mail, instant messaging, and SMS, and any other suitable communication protocol, and may even include protocols that have not yet been developed. The positioning module112is configured to acquire the current position of the terminal10. Examples of the positioning module112include, but are not limited to, global positioning system (GPS), wireless local area network or mobile communication network based positioning technology. The camera module114is used to take photos or videos. A captured photo or video may be stored in the memory102and may be transmitted through the radio frequency module110. The audio module116provides a user with an audio interface that may include one or more microphones, one or more speakers, and an audio circuit. The audio circuit receives sound data from the peripheral interface108, converts the sound data into electrical information, and transmits the electrical information to the speaker. The speakers convert the electrical information into sound waves that the human ear can hear. The audio circuit also receives electrical information from the microphone, converts the electrical signal to voice data, and transmits the voice data to the peripheral interface108for further processing. Audio data may be obtained from the memory102or through the radio frequency module110. In addition, the audio data may also be stored in the memory102or transmitted through the RF module110. In some embodiments, the audio module116may also include a headphone jack for providing an audio interface to a headset or other device. The screen118provides an output interface between the terminal10and the user. Specifically, the screen118displays video output to the user.
The content of the video output may include text, graphics, video, and any combination thereof. Some output results correspond to some user interface objects. As can be appreciated, the screen118can also include a touch screen. The touch screen simultaneously provides an output and input interface between the terminal10and the user. In addition to displaying video output to the user, the touch screen also receives user input such as user's gestures such as clicking and swiping, so that the user interface object responds to the user's input. The technology for detecting the user input can be based on resistive, capacitive, or any other possible touch detection technology. Specific examples of touch screen display units include but are not limited to liquid crystal displays or light emitting polymer displays. The button module120also provides an interface for the user to input to the terminal10, and the user can press different buttons to make the terminal10perform different functions. FIG.3is a diagram illustrating of a server of the present disclosure. As shown inFIG.3, the server includes: a memory301, a processor302, and a network module303. It can be understood that the structure shown inFIG.3is only an illustration, and the server may also include more or fewer components than those shown inFIG.3, or have a different configuration from that shown inFIG.3. The components shown inFIG.3may be implemented using hardware, software, or a combination thereof. In addition, the server in the embodiment of the present invention may also include a plurality of servers with different specific functions. The memory301may be used to store software programs and modules, such as program instructions/modules corresponding to the identity recognition method and system for the office platform in an embodiment of the present invention. The processor302runs software programs and modules stored in the memory301, to execute various functional applications and data processing, i.e., to implement the above-mentioned identity recognition method and system for the office platform. The memory301may include a high-speed random access memory, and may also include a non-volatile memory such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory. In some embodiments, the memory301can further include remote memories set with the processor302remotely. These remote memories can be connected to the server through a network. Further, the above software program and module may further include: an operating system321and a service module322. The operating system321, for example, may be LINUX, UNIX, WINDOWS, which may include various software components and/or drivers for managing system tasks (e.g., memory management, storage device control, power management, etc.), and may be communicate with Hardware or software components, to provide the operating environment for other software components. The service module322runs on the basis of the operating system321, and listens for requests from the network through the network service of the operating system321, completes corresponding data processing according to the request, and returns processing results to the terminal. That is, the service module322is used to provide network services to the terminal. The network module303is used to receive and send network signals. The above network signal may include a wireless signal or a wired signal. In one embodiment, the above network signal is a wired network signal. 
At this time, the network module303may include elements such as a processor, a random access memory, a converter, a crystal oscillator, and so on. A First Embodiment FIG.4is a schematic process view of a first embodiment of an identity recognition method of the present disclosure. This embodiment may be identity recognition method for the office platform performed by the server300through the network. The office platform may be, but not limited to an integrated platform including a mailbox system, an approval flow system, and a social platform system, but the invention is not limited thereto. As shown inFIG.4, the identity recognition method for the office platform in this embodiment may include the following steps: step S401: receiving registration information of a first user, wherein the registration information includes an identification of a post of the first user, an identity of the first user and a first login password; Specifically, the post may, but not limited to, includes both authority and responsibilities. Users with a same post have a same responsibilities and authorities. Specifically, the identification of the post may be, but is not limited to a title such as a manager, a leader, etc., but may also be, but not limited to, a unique identification that can identify the identification of the post of the first user, such as a combination of a title and a character, such as a first manager, a first leader, etc. The identity of the first user may be the identification of the post of the first user, and may also include a unique identification that can identify the identity of the first user, such as at least one of the first user's identification number, name, and phone number, and so on. step S402: binding the first login password to a first account corresponding to the post; step S403: receiving a login request sent by a user terminal, wherein the login request includes the first login password; Specifically, in one embodiment, it may, but not limited to regard the identification of the post of the first user as the identification of the first account, so that, the login request may, but not limited to further include the identification of the post of the first user. Of course, when there is a binding relationship between the identity of the first user and the identification of the first account, the identification of the first account may be, but also not limited to the identity of the first user such as name, phone number, identification number, etc. Therefore the login request may also, but is not limited to include the identity of the first user such as the name, the phone number, the identification number, and so on. step S404: responding to the login request, and sending data information of the first account binding with the first login password to the user terminal, to make the user terminal display the data information. Specifically, the data information of the first account is information stored in the first database corresponding to the first account. The data information may, but not limited to, include operation information about contents of the first account, such as an editing operation, a favorite operation, and/or the like, and/or interaction information between the first account and other accounts, and so on. Specifically, the identity recognition method may, but not limited to include: sending share-data information of an associated account of the first account to the user terminal, when receiving a read request. 
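A minimal, non-limiting sketch of steps S401 through S404 on the server side is given below (the in-memory dictionaries, field names and the use of the post identification as the account identification are assumptions introduced for illustration; the read-request and share-data handling mentioned immediately above, and elaborated below, is omitted from the sketch):

```python
# Hypothetical in-memory stores standing in for the server's databases.
ACCOUNTS = {}        # post identification -> {"passwords": set(), "identities": set(), "data": []}

def register(post_id: str, user_identity: str, login_password: str) -> None:
    # S401/S402: receive the registration information and bind the first login
    # password to the first account corresponding to the post.
    account = ACCOUNTS.setdefault(post_id, {"passwords": set(), "identities": set(), "data": []})
    account["passwords"].add(login_password)
    account["identities"].add(user_identity)

def login(post_id: str, login_password: str) -> list:
    # S403/S404: respond to a login request carrying the first login password by
    # returning the data information of the bound account for display at the terminal.
    account = ACCOUNTS.get(post_id)
    if account is None or login_password not in account["passwords"]:
        raise PermissionError("login failed")
    return account["data"]

register("first manager", "user-0001", "pw-secret")
ACCOUNTS["first manager"]["data"].append("historical approval record")
print(login("first manager", "pw-secret"))
```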
The read request includes an identification of the first account. Wherein the associated account of the first account may, but is not limited to, include a system-default account, and a post corresponding to the system-default account with the post corresponding to the first account belong to a same post type, for example, it may, but may not be limited to define some or all posts in a same department belong to the same post type. The associated account of the first account may also include an account that response to the first account after receiving an association request. Wherein, the share-data information that sent to the first account of the user terminal is stored in a shared database, for example, accounts associated of the first account include an A account and a B account, and then the share-data information of the first account, the A account, and the B account are stored in the shared database. Each of the first account, the A account, and the B account can obtain the share-data information in the shared database by sending the read request. Wherein, the share-data information that sent to the first account of the user terminal may, but not limited to include some documents edited and/or sent and/or received by a system-default-associated account, and/or some data information sent by the associated account to a shared account or a designated account. Format of the documents may be excel, and so on. According to the identity recognition method of the present disclosure, the first account is corresponding to the post, thus when the login password set by a post successor is bound to the first account corresponding to the post, the successor of the post can browse historical-data information corresponding to the first account through the login password. Thusly, the user experience is good. A Second Embodiment FIG.5is a schematic process view of a second embodiment of the identity recognition method of the present disclosure. This embodiment may be identity recognition method for the office platform performed by the server300through the network. The office platform may be, but not limited to an integrated platform including a mailbox system, an approval flow system, and a social platform system. But the disclosure is not limited thereto, the office platform also can include a third-party-social platform, and so on. When the office platform is the integrated platform, the login password can, but not limited to be used to login all system of the office platform. Wherein, one account of the office platform is corresponding to one post. Property information of each post can be, but not limited to included: an identification of the post, a fixed telephone number of a post manager, a job template corresponding to the post, responsibilities of the post, organizational relationship of the post, annual task of the post, and so on. It can, but not limited to, send the property information to a user terminal corresponding to an account when an user login the office platform through a login password, so as to facilitate the user to understand properties of the post. As shown inFIG.5, the identity recognition method for the office platform in this embodiment may include the following steps: step S501: receiving registration information of a first user, wherein the registration information includes an identification of a post of the first user, an identity of the first user and a first login password; Specifically, the post may, but not limited to, include both authority and responsibilities. 
Users with the same post have the same responsibilities and authorities. Specifically, the identification of the post may be, but is not limited to, a title such as a manager, a leader, etc., but may also be, but is not limited to, a unique identification that can identify the post of the first user, such as a combination of a title and a character, such as a first manager, a first leader, etc. The identity of the first user may be the identification of the post of the first user, and may also include a unique identification that can identify the identity of the first user, such as at least one of the first user's identification number, name, and phone number, and so on. In one embodiment, the registration information of the first user may also, but not limited to, include other property information of the first user such as the first user's gender, the first user's home address, the first user's emergency contact, the first user's third-party-social account, and so on, besides the identity of the first user. step S504: binding the first login password to a first account corresponding to the post; Specifically, before executing step S504: binding the first login password to a first account corresponding to the post, the identity recognition method includes step S502: determining whether the post is a new post according to the identification of the post; if the post is a new post, enter step S503: creating the first account corresponding to the post, binding the first login password to the first account; if the post is not a new post, enter step S504directly: binding the first login password to the first account. In one embodiment, if the post is not a new post, the first user is a successor of the post, so the original login password can be unbound from the first account automatically, and step S504can then be entered directly: binding the first login password to a first account corresponding to the post, when the first account corresponding to the post has a binding relationship. Of course, it can, but not limited to, make the original login password unbind from the first account when receiving an unbinding request, and then enter step S504: binding the first login password to a first account corresponding to the post, when the first account corresponding to the post has the binding relationship. Of course, the present disclosure is not limited to this, and it can also allow the first account to bind to both the original login password and the first login password within a preset time such as a month. Specifically, it may, but not limited to, save all of the data information stored in a database corresponding to the first account, when receiving the unbinding request including the original login password. It also may, but is not limited to, remove part of the data information stored in the database automatically, such as the data information corresponding to the third-party-social platform, and so on. The present disclosure is not limited to this. In one embodiment, before entering step S504: binding the first login password to a first account corresponding to the post, the identity recognition method may further include: sending an approval request for the registration information to a user terminal corresponding to an upper account of the first account; and receiving approval information sent by the user terminal corresponding to the upper account of the first account.
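A non-limiting sketch of the decision path of steps S502 through S504 is shown below (the in-memory store is an assumption; the single-password replacement shown corresponds to the automatic-unbinding variant, whereas, as noted above, both passwords may instead coexist for a preset time, and the approval-request variant just mentioned is not shown):

```python
ACCOUNTS = {}   # post identification -> {"passwords": set(), "data": []}

def bind_password_to_post(post_id: str, first_login_password: str,
                          replace_existing: bool = True) -> None:
    account = ACCOUNTS.get(post_id)
    if account is None:
        # S502/S503: the post is a new post, so create the first account for it.
        account = ACCOUNTS[post_id] = {"passwords": set(), "data": []}
    elif replace_existing and account["passwords"]:
        # The post already exists, so the first user is a successor: unbind the
        # original login password automatically before binding the new one.
        account["passwords"].clear()
    # S504: bind the first login password to the first account.
    account["passwords"].add(first_login_password)

bind_password_to_post("first manager", "old-password")
ACCOUNTS["first manager"]["data"].append("report.xlsx edited by predecessor")
bind_password_to_post("first manager", "new-password")      # successor takes over the post
assert "old-password" not in ACCOUNTS["first manager"]["passwords"]
assert ACCOUNTS["first manager"]["data"]                     # historical data remains available
```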
Wherein a post corresponding to the upper account is an upper of the post corresponding to the first account, for example, if the post corresponding to the first account is an engineer of a first group, then the post corresponding to the upper account may be a leader of the first group, such as a manager of the first group, etc. Of course, the upper account may also be a management account of human resources, etc. In one embodiment, the identity recognition method for the office platform may, but not limited to further include: step S505: binding the identity of the first user to the first account corresponding to the post; Specifically, the identity of the first user may, but is not limited to include at least one of a unique identification that can identify the identity of the first user, such as the first user's identification number, name, and phone number. It should be noted that, sequence of step S504and step S505is not limited to this. It may bind the identity of the first user with the first account corresponding to the post first, and then bind the first login password with the first account. It may also that binding the identity of the first user and the first login password with the first account corresponding to the post first simultaneously. In one embodiment, the identity recognition method for the office platform may, but not limited to further include: step S506: determining whether the first user is a new user according to the identity of the first user; if the first user is not a new user, enter in step S507: finding an original account binding with the first user, and linking the first account corresponding to the post to a database corresponding to the original account. Specifically, after the first account linked to the original account, the first user may operate such as access and/or edit all of data information or partial of information of a database corresponding to the original account through the first account. In one embodiment, the first user may have access rights and/or edit permissions such as replicating, forwarding, etc., for all of the data information in the database of the original account, or only have operate rights for the data information within a certain period of time. step S508: receiving a login request, wherein the login request includes the first login password. Specifically, in one embodiment, it may, but not limited to regard the identification of the post of the first user as the identification of the first account, so that, the login request may, but not limited to further include the identification of the post of the first user. Of course, when there is a binding relationship between the identity of the first user and the identification of the first account, the identification of the first account may be, but also not limited to the identity of the first user such as name, phone number, identification number, etc. Therefore the login request may also, but is not limited to include the identity of the first user such as the name, the phone number, the identification number, and so on. step S509: responding to the login request, and sending data information of the first account binding with the first login password to the user terminal, to make the user terminal display the data information. Specifically, the data information of the first account is information stored in the first database corresponding to the first account. 
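Steps S506 and S507 can likewise be sketched as follows (a non-limiting illustration; the dictionaries and the link representation are assumptions, and the access or edit permissions discussed above would be enforced on top of such a link):

```python
ACCOUNTS_BY_IDENTITY = {"id-0001": "original-account"}       # identity -> original account
DATABASES = {"original-account": ["drafts from previous post"]}
LINKS = {}                                                     # first account -> linked database owner

def link_if_existing_user(first_account: str, user_identity: str) -> None:
    # S506: determine whether the first user is a new user according to their identity.
    original_account = ACCOUNTS_BY_IDENTITY.get(user_identity)
    if original_account is None:
        return                                                # new user: nothing to link
    # S507: link the first account to the database of the original account so the
    # user can reach (and, depending on policy, edit) its data through the new account.
    LINKS[first_account] = original_account

link_if_existing_user("first manager", "id-0001")
print(DATABASES[LINKS["first manager"]])                      # data reachable via the post account
```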
The data information may, but not limited to, include operation information about contents of the first account, such as an editing operation, a favorite operation, and/or the like, and/or interaction information between the first account and other accounts, and so on. In one embodiment, the identity recognition method for the office platform in this embodiment may, but not limited to include the following steps: step S510: sending at least two items of the identity of the first user, the identification of the post, and the identification of the first account corresponding to the post, to a user terminal corresponding to a second user. In one embodiment, step S510can, but not limited to include send the identity of the first user and other property information of the first user, such as at least one item of the gender of the first user and the age of the first user to the user terminal corresponding to the second user. In addition, it may be, but not limited to, send at least one item of the identification of the post and other property information of the post, such as a fixed telephone number of the post manager, responsibilities of the post, and organizational relationship, to the user terminal corresponding to the second user. Among them, the second user can be, but not limited to, the in-service employees who have registered on the office platform and saved in a communication record of the office platform, or customers stored in the communication record of the office platform, etc. If the first user is a new user, can enter in the step of S508directly: receiving a login request, wherein the login request includes the first login password, or enter in other following steps such as the step S510: sending at least two items of the identity of the first user, the identification of the post, the identification of the first account corresponding to the post to a user terminal corresponding to a second user directly. In one embodiment, the identity recognition method for the office platform in this embodiment may, but not limited to include the following steps: receiving an unbind request, wherein the unbind request includes an identity of a former user; finding a second account binding to the former user according to the identity of the former user, and unbinding the second account to the former user. Among them, former users refer to those who no longer hold the same post, such as those who have been promoted or resigned, etc. According to the identity recognition method of the present disclosure, the first account is corresponding to the post, thus when the login password set by the successor of the post is bound to the first account corresponding to the post, the successor of the post can browse historical-data information corresponding to the first account through the login password. Thusly, the user experience is good. Third Embodiment FIG.6is a schematic process view of a third embodiment of the identity recognition method of the present disclosure. The identity recognition method for the office platform provided by the third embodiments of the present invention may be applied between the user terminal100and the server300as shown inFIG.1. 
In the embodiment, the identity recognition method for the office platform includes following steps: step S601: the user terminal sending registration information of a first user, wherein the registration information includes an identification of a post of the first user, an identity of the first user and a first login password; step S602: the server binding the first login password to a first account corresponding to the post; step S603: the user terminal sending a login request to the server, wherein the login request includes the first login password; step S604: the server responding to the login request, and sending data information of the first account binding with the first login password to the user terminal; step S605: the user terminal displaying the data information. In one embodiment, the step S602: the server binding the first login password to a first account corresponding to the post includes: determining whether the post is a new post according to the identification of the post; If the post is a new post, creating the first account corresponding to the post, binding the first login password to the first account; if the post is not a new post, enter in the step of the server binding the first login password to the first account directly. In one embodiment, the step S602: the server binding the first login password to a first account corresponding to the post further includes: the server binding the identity of the first user to the first account corresponding to the post. In one embodiment, the identity recognition method may, but not limited to further includes: the server determining whether the first user is a new user according to the identity of the first user; if the first user is not a new user, the server finding an original account binding with the first user, and linking the first account corresponding to the post to a database corresponding to the original account; sending at least two items of the identity of the first user, the identification of the post, the identification of the first account corresponding to the post, to a user terminal corresponding to a second user directly; If the first user is a new user, enter in the step of sending at least two items of the identity of the first user, the identification of the post, the identification of the first account corresponding to the post to a user terminal corresponding to a second user directly. According to the identity recognition method of the present disclosure, the first account is corresponding to the post, thus when the login password set by a post successor is bound to the first account corresponding to the post, the successor of the post can browse historical-data information corresponding to the first account through the login password. Thusly, the user experience is good. Fourth Embodiment FIG.7is a schematic structural view of a fourth embodiment of an identity recognition apparatus70of the present disclosure. As shown inFIG.7, the identity recognition apparatus70includes a first receiving module701, a first binding module702, a second receiving module703, a first sending module704. The first receiving module is configured to receive registration information of a first user. The registration information includes an identification of a post of the first user, an identity of the first user and a first login password. The first binding module702is configured to bind the first login password to a first account corresponding to the post. 
The second receiving module703is configured to receive a login request sent by a user terminal, wherein the login request comprises the first login password. The first sending module704is configured to respond to the login request, and send data information of the first account bound with the first login password to the user terminal, to make the user terminal display the data information. In one embodiment, the first sending module704is further configured to send share-data information of an associated account of the first account to the user terminal when receiving a read request. Wherein, the read request includes an identification of the first account. Among them, the associated account of the first account may include, but is not limited to, a system-default account whose post belongs to the same post type as the post corresponding to the first account; for example, but not limited to, some or all posts in the same department may be defined as belonging to the same post type. The associated account of the first account may also include an account that receives a response after sending an association request to the first account. Wherein, the share-data information sent to the user terminal of the first account is stored in a shared database; for example, if the accounts associated with the first account include an A account and a B account, then the share-data information of the first account, the A account, and the B account is stored in the shared database. Each of the first account, the A account, and the B account can obtain the share-data information in the shared database by sending the read request. Wherein, the share-data information sent to the user terminal of the first account may, but is not limited to, include some documents edited and/or sent and/or received by a system-default-associated account, and/or some data information sent by the associated account to a shared account or a designated account. The format of the documents may be Excel, and so on. Wherein, the first binding module702includes a first determination unit712, a first creating unit722, and a first binding unit732. The first determination unit712is configured to determine whether the post is a new post according to the identification of the post. The first creating unit722is configured to create the first account corresponding to the post when the post is a new post. The first binding unit732is configured to bind the first login password to the first account. Wherein, the first binding module702further includes a second binding unit742. The second binding unit742is configured to bind the identity of the first user to the first account corresponding to the post. Wherein, the identity recognition apparatus70further includes a first determination module705and a linking module706. The first determination module is configured to determine whether the first user is a new user according to the identity of the first user. The linking module706is configured to find an original account bound with the first user, and link the first account corresponding to the post to a database corresponding to the original account when the first user is not a new user. Wherein, the identity recognition apparatus70further includes a second sending module707. The second sending module707is configured to send at least two items of the identity of the first user, the identification of the post, and the identification of the first account corresponding to the post to a user terminal corresponding to a second user.
Wherein, the identity recognition apparatus70further includes a third receiving module708and a first canceling module709. The third receiving module708is configured to receive an unbind request, wherein the unbind request includes an identity of a former user. The first canceling module709is configured to find a second account bound to the former user according to the identity of the former user, and unbind the second account from the former user. An identity recognition system of the present disclosure includes the identity recognition apparatus. The identity recognition apparatus includes a first receiving module, a first binding module, a second receiving module, and a first sending module. The first receiving module is configured to receive registration information of a first user. The registration information includes an identification of a post of the first user, an identity of the first user and a first login password. The first binding module is configured to bind the first login password to a first account corresponding to the post. The second receiving module is configured to receive a login request sent by a user terminal, wherein the login request includes the first login password. The first sending module is configured to respond to the login request, and send data information of the first account bound with the first login password to the user terminal, to make the user terminal display the data information. According to the identity recognition method, apparatus, system and server of the present disclosure, the first account corresponds to the post; thus, when the login password set by a post successor is bound to the first account corresponding to the post, the successor of the post can browse historical-data information corresponding to the first account through the login password. Thus, the user experience is good. It should be noted that the embodiments in the specification are described in a progressive manner. The description of any embodiment focuses on the difference compared with other embodiments. The same or similar elements of the respective embodiments may refer to each other. The embodiments of the devices and the embodiments of the corresponding method may refer to each other, so as to omit the duplicated description. It should be noted that the relational terms herein such as first and second are used only to differentiate one entity or operation from another entity or operation, and do not require or imply any actual relationship or sequence between these entities or operations. Conditional language used herein, such as, among others, “can,” “could,” “might,” “may,” “e.g.,” and the like, unless specifically stated otherwise, or otherwise understood within the context as used, is generally intended to convey that certain embodiments include, while other embodiments do not include, certain features, elements, and/or steps. Thus, such conditional language is not generally intended to imply that features, elements, and/or steps are in any way required for one or more embodiments or that one or more embodiments necessarily include logic for deciding, with or without author input or prompting, whether these features, elements, and/or steps are included or are to be executed in any particular embodiment. The terms “comprising,” “including,” “having,” and the like are synonymous and are used inclusively, in an open-ended fashion, and do not exclude additional elements, features, acts, operations, and so forth.
Also, the term “or” is used in its inclusive sense (and not in its exclusive sense) so that when used, for example, to connect a list of elements, the term “or” means one, some, or all of the elements in the list. A person of ordinary skill in the art can understand that all or part of the steps in the above method can be completed by hardware, or by a program instructing related hardware, and the program can be stored in a computer-readable storage medium, such as a read-only memory, a magnetic disk, or an optical disc. The above are merely preferred embodiments of the present invention and are not intended to limit the present invention in any form. Although the present invention has been disclosed by the preferred embodiments as mentioned above, the preferred embodiments are not intended to limit the present invention. A person skilled in the art may make many possible variations and modifications to the technical solutions of the present invention, or modify them into equivalent embodiments, by using the methods and technical contents disclosed above, without departing from the scope of the technical solutions of the present invention. Therefore, any simple modifications, equivalent changes, and modifications made to the above embodiments according to the technical essence of the present invention, without departing from the contents of the technical solutions of the present invention, shall fall within the scope of protection of the technical solutions of the present invention.
11943215 | DETAILED DESCRIPTION Example System Architecture In example architectures for the technology, while each server, system, and device shown in the architecture is represented by one instance of the server, system, or device, multiple instances of each can be used. Further, while certain aspects of operation of the technology are presented in examples related to the figures to facilitate enablement of the claimed invention, additional features of the technology, also facilitating enablement of the claimed invention, are disclosed elsewhere herein. FIG.1is a block diagram depicting a system to validate object identities, in accordance with certain examples. As depicted inFIG.1, the architecture100includes an authorization system120, one or more objects110, a service provider device130, a certificate authority140, a vault150, and a virtual directory160connected by communications network99. Each network, such as communication network99, includes a wired or wireless telecommunication mechanism and/or protocol by which the components depicted inFIG.1can exchange data. For example, each network99can include a local area network (“LAN”), a wide area network (“WAN”), an intranet, an Internet, a mobile telephone network, storage area network (SAN), personal area network (PAN), a metropolitan area network (MAN), a wireless local area network (WLAN), a virtual private network (VPN), a cellular or other mobile communication network, Bluetooth, NFC, Wi-Fi, or any combination thereof or any other appropriate architecture or system that facilitates the communication of signals or data. Throughout the discussion of example embodiments, it should be understood that the terms “data” and “information” are used interchangeably herein to refer to text, images, audio, video, or any other form of information that can exist in a computer-based environment. The communication technology utilized by the components depicted inFIG.1may be similar to network technology used by network99or an alternative communication technology. Each component depicted inFIG.1includes a computing device having a communication application capable of transmitting and receiving data over the network99or a similar network. For example, each can include a server, desktop computer, laptop computer, tablet computer, a television with one or more processors embedded therein and/or coupled thereto, smart phone, handheld or wearable computer, personal digital assistant (“PDA”), other wearable device such as a smart watch or glasses, wireless system access point, or any other processor-driven device. In the example embodiment depicted inFIG.1, the object110is a stand-alone device without an operator to perform the functions herein, the authorization system120is operated by an authorization system operator, the service provider device130is operated by a service provider operator or other user, a certificate authority140is operated by a certificate authority operator, an vault150is operated by a vault system operator, and a virtual directory160is operated by a virtual directory operator. As shown inFIG.1, the object110includes a data storage unit (not shown) accessible by a communication application115. The object110may be any type of non-human object operating on a computerized operating system or processor, such as a machine, an application, a service, a computing device, a network device or any other type of object that communicates with service provider devices130or other devices. 
In an example, the object110is an appliance, such as a washing machine or a refrigerator. In another example, the object110is an application on a user device, such as a stock trading and reporting application on a smart phone. In another example, the object110is a server or other computing device that performs tasks for users. Any other type of non-human object110may perform the methods herein. The communication application115on the object110may be, for example, a web browser application or a stand-alone application, to view, download, upload, or otherwise access instructions, updates, databases, documents, or web pages via the networks99. The communication application115can interact with web servers or other computing devices connected to the network99, such as the vault150, the certificate authority140, the authorization system120, or any of the systems described herein. In some embodiments, the user associated with an object110can install an application and/or make a feature selection to obtain the benefits of the techniques described herein. As shown inFIG.1, the authorization system120includes a server125. The server125or one or more other suitable devices are used to perform the computing functions of the authorization system120. The authorization system120and/or the server125represent any computing system that may be used to receive passwords or certificates, verify object identities, generate and provide access tokens, fetch password hashes, negotiate mutual transport layer security protocol with the object110, or perform any other suitable tasks for validating object identities. Any other computing or storage function required by the authorization system120may be performed by the server125. The server125may represent any number of servers, cloud computing devices, or other types of device for performing the tasks described herein. In an example, the authorization system120is an OAuth server system that provides OAuth tokens. In another example, the authorization system120is any other type of authorization process that provides access tokens to objects110or users. As shown inFIG.1, the service provider device130includes a data storage unit (not shown) accessible by a communication application135. The communication application135on the service provider device130may be, for example, a web browser application or a stand-alone application, to view, download, upload, or otherwise access instructions, updates, databases, documents or web pages via the networks99. The communication application135can interact with web servers or other computing devices connected to the network99, such as the vault150, the certificate authority140, the authorization system120, or any of the systems described herein. The service provider device130may be any type of non-human object, such as a machine, an application, a service, a computing device, or any other type of object that communicates with objects110or other devices. In an example, the service provider device130is a database management system. In another example, the service provider device130is an application on a server, such as a stock trading and reporting application. In another example, the service provider device130is a server or other computing device that performs tasks for users. In another example, the service provider device130is a device that provides updates for appliances or other devices. Any other type of non-human service provider device130may perform the methods herein. 
As shown inFIG.1, the certificate authority140may be, for example, a computing device, a server, a web browser application, or a stand-alone application, to provide certificates to objects110or other computing devices. The certificate authority140can interact with web servers or other computing devices connected to the network99. The certificate authority140may be an application or a device associated with the authorization system120, a financial institution, a third party identity verification system, a mobile device service provider, or any other suitable system. In an example, the certificate authority140provides certificates to objects110and other computing devices. The certificate authority140issues digital certificates that certifies the ownership of a public key in the name of the object110. The public key allows the service provider device130to rely upon assertions made about a private key that corresponds to the certified public key. The vault150is a secure online platform where objects110and other users collect and maintain digital assets, passwords, and logins. The vault150allows the object110to share access with trusted systems and devices, such as the authentication system120. The vault150may create, store, and dispense passwords to the object110without human interaction or configuration, limiting the opportunity for a malicious actor to gain access to the password. The virtual directory160is a device or application that delivers a single access point for identity management applications and service platforms. The virtual director160receives queries from the authorization system120or others and directs the query to the appropriate data sources by abstracting and virtualizing data. The virtual directory160integrates identity data from multiple heterogeneous data stores and presents the data as though it were coming from virtual directory160. In the examples, the virtual directory160validates the credential by fetching the encrypted client password hash from a system for accessing and maintaining distributed directory information services over an IP network. In example embodiments, the network computing devices and any other computing machines associated with the technology presented herein may be any type of computing machine such as, but not limited to, those discussed in more detail with respect toFIG.4. Furthermore, any functions, applications, or components associated with any of these computing machines, such as those described herein or any other others (for example, scripts, web content, software, firmware, hardware, or modules) associated with the technology presented herein may by any of the components discussed in more detail with respect toFIG.4. The computing machines discussed herein may communicate with one another, as well as with other computing machines or communication systems over one or more networks, such as network99. The network99may include any type of data or communications network, including any of the network technology discussed with respect toFIG.4. EXAMPLE EMBODIMENTS Reference will now be made in detail to embodiments of the invention, one or more examples of which are illustrated in the accompanying drawings. Each example is provided by way of explanation of the invention, not as a limitation of the invention. Those skilled in the art will recognize that various modifications and variations can be made in the present invention without departing from the scope or spirit of the invention. 
For example, features illustrated or described as part of one embodiment can be used in another embodiment to yield a still further embodiment. Thus, the present invention covers such modifications and variations that come within the scope of the invention. The technology for embodiments of the invention may employ methods and systems to allow a network of machines, services, or other objects to use authorization tokens to verify object identities without human input. The examples for embodiments of the invention may employ computer hardware and software, including, without limitation, one or more processors coupled to memory and non-transitory computer-readable storage media with one or more executable computer application programs stored thereon, which instruct the processors to perform such methods. The example methods illustrated inFIGS.2-3are described hereinafter with respect to the components of the example communications and processing architecture100. FIG.2is a block flow diagram depicting a method200to validate object identities using a password, in accordance with certain examples. In block210, an object110retrieves a configured password associated with the object from a password vault150. The object110may be any type of non-human object, such as a machine, an application, a service, a computing device, or any other type of object that communicates with service provider devices130or other devices. In an example, the object110is an appliance, such as a washing machine or a refrigerator. The object110may be configured to require periodic updates to the software operating the object110. The object110is programmed to communicate with a service provider device130to download the update. In conventional systems, the object110and the service provider device130communicate via a point to point protocol or require a human created and entered password to verify the identity of the object110. This communication is insecure and does not provide a single password or credential to access any other needed systems. In another example, the object110is an application on a user computing device, such as a stock trading and reporting application, a social media application, a gaming application, or any other suitable application. In the example, the object110is configured to communicate with a service provider device130that provides data, such as stock quote information, to display to a user of the computing device on which the object110operates. The object110desires to communicate directly with the service provider device130securely without human interaction to obtain data, such as to monitor a stock price. In conventional systems, the object110may require a user to provide a password to the service provider device130to allow the object110to obtain information. In another example, the object110is a server or other computing device that performs tasks for users without human interaction. In another example, the object110is a network device that receives and transmits communications to and from other network devices, such as in a cellular network or a data communications network. In another example, the object110is a passenger vehicle or an industrial vehicle that receives updates and communications from a central server or from a cloud computing device regarding traffic or operational instructions. 
In another example, the object110is a personal technology device, such as a smartphone, smart watch, tablet, headphones, or any other technology device that receives updates and communications from a central server or from a cloud computing device. Any other type of non-human object110may perform the methods herein. The object110may be configured to request a communication with another device, such as the service provider device130. The communication may request or provide data, an update, a check-in, a location update, a status, or any other suitable communication. The communication may be initiated by the object110, the service provider device130, a user, or any other suitable party. For example, a service provider device130may have received a software update for distribution to a group of appliances operating a certain operating system. The service provider device130provides a communication to the group of appliances requesting a secure communication. When the object110determines that a secure communication with the service provider device130is needed or imminent, the object110identifies a need for a password for identification and authorization. The object110communicates a request to a vault150. The vault150is a secure online platform in which objects110and other users collect and maintain digital assets, passwords, and logins. The vault150may create, store, and dispense passwords to the object110without human interaction or configuration, limiting the opportunity for a malicious actor to gain access to the password. In an example, the object110communicates a request to the vault150for a current password for the object110. The object110provides a username for the object110with which the password is associated. For example, the username for the object110is “object1.” The vault150accesses a database of passwords and selects the password associated with the username of the object110. The vault150communicates the password to the object110. The communication may be via any suitable technology, such as an Internet communication over the communications network99. In certain examples, the communication is encrypted or otherwise protected from outside observation. In block220, the object110communicates the password to an authorization server125of the authorization system120. The communication may be via any suitable technology, such as an Internet communication over the communications network99. In certain examples, the communication is encrypted or otherwise protected from outside observation. The object110communicates the password with a request to receive in return an access token. The communication may include the username, password, and the service provider device130with which the object110is attempting communication. For example, the object110provides a username and password and details of a potential communication session with a particular service provider device130. In block230, the object110requests an access token from the authorization server125to be used with a service provider device130. The communication in the example requests that the authorization system120create or provide an access token that is specific to the particular service provider device130. An access token may be requested that is effective only with the particular service provider device130and not any other provider device. In block240, the authorization server125validates the password by fetching the encrypted password hash from a virtual directory160.
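Before the virtual-directory lookup of block240is described in more detail, the client-side portion of blocks210-230can be illustrated with a minimal sketch. The endpoint paths, parameter names, and JSON fields below are assumptions made only for the example; the disclosure does not prescribe a particular vault or authorization-server API.

```python
import requests

VAULT_URL = "https://vault.example.internal"   # assumed address of the vault 150
AUTHZ_URL = "https://authz.example.internal"   # assumed address of the authorization server 125

def fetch_configured_password(username: str) -> str:
    """Block 210: the object retrieves its configured password from the vault."""
    resp = requests.get(f"{VAULT_URL}/passwords/{username}", timeout=10)
    resp.raise_for_status()
    return resp.json()["password"]

def request_scoped_token(username: str, password: str, target_service: str) -> str:
    """Blocks 220-230: send the password and request a token scoped to one
    particular service provider device only."""
    resp = requests.post(
        f"{AUTHZ_URL}/token",
        json={"username": username, "password": password, "audience": target_service},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()["access_token"]

# Example use by an appliance identified as "object1" (the username example of block 210).
password = fetch_configured_password("object1")
token = request_scoped_token("object1", password, "https://updates.example.com")
```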
The virtual directory160is a device or application that delivers a single access point for identity management applications and service platforms. The virtual directory160receives queries from the authorization system120or others and directs the query to the appropriate data sources by abstracting and virtualizing data. The virtual directory160integrates identity data from multiple heterogeneous data stores and presents the data as though it were coming from the virtual directory160. In the examples, the virtual directory160validates the credential by fetching the encrypted client password hash from a system for accessing and maintaining distributed directory information services over an IP network. In the example, if the password provided by the object110matches the password provided by the virtual directory160, then the object identification is validated and authorized. In block250, the authorization server125communicates to the object110an access token that is associated with a service provider device130. The authorization server125creates or accesses an access token. The access token is associated with the service provider device130such that the access token is scoped for the object110. The authorization server125passes the access token as a credential to the object110to be provided to the service provider device130. The passed token informs the service provider device130that the bearer of the token has been authorized to access the service provider device130and perform specific actions specified by the scope that was granted during authorization. The configured scope defines the specific functions and protected resources that the service provider device130is allowed to share with the object110. The authorization server125scopes the token such that only the information that is needed is provided to or from the object110. The authorization server125signs the access token with an authorization provider signature to validate the access token to a service provider device130. The authorization provider signature may be compared by the service provider device130to a stored authorization provider signature to verify the authenticity of the access token. In block260, the object110communicates the access token to the service provider device130. The communication may be via any suitable technology, such as an Internet communication over the communications network99. The object110provides the scoped access token and a request to provide access to the needed resources. For example, the request may be for a device update, data from a database, access to an application, access to a device, access to a directory, third party information, a communication with a third party, a download, or any other suitable access request. The access token is scoped to allow access to the requested resources, identify the object110, identify the service provider device130, or perform any other suitable tasks to support the resource request. In block270, the service provider device130validates the access token with the authorization server125signature. The service provider device130, after receiving the access token and the request, verifies that the authorization provider signature associated with the access token is valid. The authorization provider signature may be compared by the service provider device130to a stored authorization provider signature to verify the authenticity of the access token.
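Before continuing with the service provider's handling of the token, one way the authorization server125could implement the hash comparison of block240and the scoped, signed token of blocks250-260is sketched below using only the Python standard library. The token layout (a base64-encoded JSON payload plus an HMAC signature) and the hashing parameters are illustrative assumptions; the disclosure itself does not mandate a particular token format or hash scheme.

```python
import base64, hashlib, hmac, json, time

SIGNING_KEY = b"authorization-provider-secret"   # assumed signing material for the provider signature

def password_matches(presented: str, stored_hash_hex: str, salt: bytes) -> bool:
    """Block 240: compare the presented password against the encrypted hash
    fetched from the virtual directory (stored_hash_hex stands in for that value)."""
    candidate = hashlib.pbkdf2_hmac("sha256", presented.encode(), salt, 100_000)
    return hmac.compare_digest(candidate.hex(), stored_hash_hex)

def issue_scoped_token(object_name: str, audience: str, scope: list) -> str:
    """Blocks 250-260: mint a token scoped to one service provider device and
    sign it with the authorization provider signature."""
    payload = {
        "sub": object_name,              # the object the token identifies
        "aud": audience,                 # the only service provider device it is valid for
        "scope": scope,                  # functions and resources the provider may share
        "exp": int(time.time()) + 300,   # short lifetime; the token later expires
    }
    body = base64.urlsafe_b64encode(json.dumps(payload).encode())
    signature = hmac.new(SIGNING_KEY, body, hashlib.sha256).hexdigest()
    return body.decode() + "." + signature
```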
If the authorization provider signature is verified, then the service provider device130determines that the access token is authentic, and access may be granted to the object110. In block280, the service provider device130optionally communicates a session token to the object110. Depending on the implementation of the service provider device130, a session token may be communicated to the object110to identify the session and allow the service provider device130to access the session and all user information available for the session. The session token, also known as a sessionID, is an encrypted, unique string that identifies the specific session instance. The session token may be provided in a cookie or in any other suitable communication. In block290, the object110accesses resources from the service provider device130. The object110requests the needed information or data from the service provider device130. If the session token, access token, or other instructions provide authorization for the service provider device130to allow access to the requested data, then the service provider device130communicates the data to the object110. For example, if the object110is an appliance that requires a software update, the object110and the service provider device130will negotiate to identify the correct version of the software update needed. The service provider device130verifies that the authority to provide the software update is provided. The service provider device130communicates the software update to the object110. In another example, the service provider device130communicates a file to download an application to the object110. In another example, the service provider device130communicates data from a database to the object110. Any other requested and authorized data may be communicated to the object110. Additionally, any data from the object110to the service provider device130may be provided. In an example, if an object110requests to upload data history, the service provider device130receives and stores that data. For example, an application that is requesting a software update may send a communication specifying the version of the software that is currently operating on the object110. In another example, an application communicates user history of an application to the service provider device130for storage. In another example, a communication network device provides updates to the service provider device130regarding third party devices that are attempting to connect to the communication network device. Any other suitable communication may be sent from the object110to the service provider device130. After the communications are completed, the session token and/or the access token may expire. If the object110desires future communications, the object110may be required to repeat the steps of the method200to obtain new tokens to communicate with the service provider device130. FIG.3is a block flow diagram depicting a method300to validate an object identity using a certificate, in accordance with certain examples. In block310, a certificate authority140communicates a certificate to the object110. The certificate authority140may be, for example, a computing device, a server, a web browser application, a stand-alone application, or other device or service that provides certificates to objects110or other computing devices. The certificate authority140can interact with web servers or other computing devices connected to the network99. 
The certificate authority140may be an application or a device associated with the authorization system120, a financial institution, a third party identity verification system, a mobile device service provider, or any other suitable system. In an example, the certificate authority140provides certificates to objects110and other computing devices. The certificate authority140issues digital certificates that certifies the ownership of a public key in the name of the object110. The public key allows the service provider device130to rely upon assertions made about a private key that corresponds to the certified public key. In an example, the object110is an appliance, an application on a user computing device, a server or other computing device that performs tasks for users without human interaction, a network device that receives and transmits communications to and from other network devices, or any other type of device as described herein with respect toFIG.2. The objects110may be configured to perform tasks as described in block210ofFIG.2. The object110may be configured to request a communication with another device, such as the service provider device130. The communication may request or provide data, an update, a check-in, a status, a location update, or any other suitable communication. The communication may be initiated by the object110, the service provider device130, a user, or any other suitable party. When the object110determines that a secure communication with the service provider device130is needed or imminent, the object110identifies a provided certificate from the certificate authority140. In an example, the certificate is requested by the object110and received at the time of the determination. In another example, the certificate is provided by the certificate authority140in anticipation of a future need. The certificate may be provided at the time that the object is first configured, at each request, on a configure schedule, when certain events occur, or at any suitable time. The certificate is stored at any suitable location, such as a data storage unit of the object110or in a cloud storage location. In an example, the object110communicates a request to certificate authority140for a current certificate for the object110. In an example, the object110provides a username for the object110with which the password is associated. For example, the username for the object110is “object1.” In an example, the certificate authority140accesses a database of certificates and selects the current certificate associated with the username of the object110. In another example, the certificate authority140creates a new certificate for the object110. The certificate authority140communicates the certificate to the object110. The communication may be via any suitable technology, such as an Internet communication over the communications network99. In certain examples, the communication is encrypted or otherwise protected from outside observation. In block320, the object110negotiates a mutual security protocol with an authorization server125and communicates the certificate to the authorization server125. The communication may be via any suitable technology, such as an Internet communication over the communications network99. In certain examples, the communication is encrypted or otherwise protected from outside observation. For example, in certificate-based mutual transport layer security protocol, the system requires the authorization server125to be authenticated to the object110. 
The authentication of the object110to the authorization server125is managed by the application layer. The protocol offers the ability for the authorization server125to request that the object110send the certificate to prove the identity of the object110. The mutual transport layer security ensures that both parties are authenticated via certificates. In block330, the object110requests an access token from the authorization server125to be used with a service provider device130. The communication in the example requests that the authorization system120create or provide an access token that is specific to the particular service provider device130. An access token may be requested that is effective only with the particular service provider device130and not any other provider device. In block340, the authorization server125validates the subject domain name and the issuer domain name of the certificate to establish the object110identity via a virtual directory160. The virtual directory160is a device or application that delivers a single access point for identity management applications and service platforms. The virtual directory160receives queries from the authorization system120or others and directs the query to the appropriate data sources by abstracting and virtualizing data. The virtual directory160integrates identity data from multiple heterogeneous data stores and presents the data as though it were coming from the virtual directory160. In the examples, the virtual directory160validates the credential by comparing the subject domain name and the issuer domain name of the certificate. If a match is found, then the object identity is validated. In block350, the authorization server125communicates to the object110an access token that is associated with a service provider device130. The authorization server125creates or accesses an access token. The access token is associated with the service provider device130such that the access token is scoped for the object110. The authorization server125passes the access token as a credential to the object110to be provided to the service provider device130. The passed token informs the service provider device130that the bearer of the token has been authorized to access the service provider device130and perform specific actions specified by the scope that was granted during authorization. The configured scope defines the specific functions and protected resources that the service provider device130is allowed to share with the object110. The authorization server125scopes the token such that only the information that is needed is provided to or from the object110. The authorization server125signs the access token with an authorization provider signature to validate the access token to a service provider device130. The authorization provider signature may be compared by the service provider device130to a stored authorization provider signature to verify the authenticity of the access token. In block360, the object110communicates the access token to the service provider device130. The communication may be via any suitable technology, such as an Internet communication over the communications network99. The object110provides the scoped access token and a request to provide access to the needed resources. For example, the request may be for a device update, data from a database, access to an application, access to a device, access to a directory, third party information, a communication with a third party, a download, or any other suitable access request.
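Before the scoping of the token is described further, a hedged sketch of the certificate-based flow of blocks320-340follows. The mutual-TLS request presents the client certificate and key issued by the certificate authority140, and the subject/issuer check mirrors block340; the file names, URL, and the use of the `requests` and `cryptography` packages are assumptions for the example only, not part of the disclosure.

```python
import requests
from cryptography import x509

AUTHZ_TOKEN_URL = "https://authz.example.internal/token"   # assumed token endpoint

def request_token_over_mtls(cert_file: str, key_file: str, ca_bundle: str, audience: str) -> str:
    """Blocks 320-330: negotiate mutual TLS with the authorization server and
    request a token scoped to one particular service provider device."""
    resp = requests.post(
        AUTHZ_TOKEN_URL,
        json={"audience": audience},
        cert=(cert_file, key_file),   # client certificate proves the object's identity
        verify=ca_bundle,             # the object verifies the authorization server in turn
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()["access_token"]

def names_match_directory(cert_pem: bytes, expected_subject: str, expected_issuer: str) -> bool:
    """Block 340: compare the certificate's subject and issuer domain names with
    the values returned by the virtual directory."""
    cert = x509.load_pem_x509_certificate(cert_pem)
    return (cert.subject.rfc4514_string() == expected_subject
            and cert.issuer.rfc4514_string() == expected_issuer)
```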
The access token is scoped to allow access to the requested resources, identify the object110, identify the service provider device130, or perform any other suitable tasks to support the resource request. In block370, the service provider device130validates the access token with the authorization server125signature. The service provider device130, after receiving the access token and the request, verifies that the authorization provider signature associated with the access token is valid. The authorization provider signature may be compared by the service provider device130to a stored authorization provider signature to verify the authenticity of the access token. If the authorization provider signature is verified, then the service provider device130determines that the access token is authentic, and access may be granted to the object110. In block380, the service provider device130optionally communicates a session token to the object110. Depending on the implementation of the service provider device130, a session token may be communicated to the object110to identify the session and allow the service provider device130to access the session and all user information available for the session. The session token, also known as a sessionID, is an encrypted, unique string that identifies the specific session instance. The session token may be provided in a cookie or in any other suitable communication. In block390, the object110accesses resources from the service provider device130. The object110requests the needed information or data from the service provider device130. If the session token, access token, or other instructions provide authorization for the service provider device130to allow access to the requested data, then the service provider device130communicates the data to the object110. For example, if the object110is an appliance that requires a software update, the object110and the service provider device130will negotiate to identify the correct version of the software update needed. The service provider device130verifies that the authority to provide the software update is provided. The service provider device130communicates the software update to the object110. In another example, the service provider device130communicates a file to download an application to the object110. In another example, the service provider device130communicates data from a database to the object110. Any other requested and authorized data may be communicated to the object110. Additionally, any data from the object110to the service provider device130may be provided. In an example, if an object110requests to upload data history, the service provider device130receives and stores that data. For example, an application that is requesting a software update may send a communication specifying the version of the software that is currently operating on the object110. In another example, an application communicates user history of an application to the service provider device130for storage. In another example, a communication network device provides updates to the service provider device130regarding third party devices that are attempting to connect to the communication network device. Any other suitable communication may be sent from the object110to the service provider device130. After the communications are completed, the session token and/or the access token may expire.
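The service-provider-side checks of blocks270/370and the optional session token of blocks280/380, including the expiry just noted, could look like the following sketch. It assumes the same illustrative body-plus-HMAC token layout used in the earlier authorization-server sketch and a stored copy of the authorization provider's signing secret; neither assumption comes from the disclosure, and a production deployment would more likely verify a standard signed token format.

```python
import base64, hashlib, hmac, json, secrets, time

AUTH_PROVIDER_KEY = b"authorization-provider-secret"   # assumed stored signature material
MY_AUDIENCE = "https://updates.example.com"            # identity of this service provider device

def validate_access_token(token: str) -> dict:
    """Blocks 270/370: check the authorization provider signature, the audience
    scoping, and the expiry before granting access to the object."""
    body, presented_sig = token.rsplit(".", 1)
    expected_sig = hmac.new(AUTH_PROVIDER_KEY, body.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(presented_sig, expected_sig):
        raise PermissionError("authorization provider signature is not valid")
    payload = json.loads(base64.urlsafe_b64decode(body))
    if payload["aud"] != MY_AUDIENCE:
        raise PermissionError("token was scoped to a different service provider device")
    if payload["exp"] < time.time():
        raise PermissionError("token has expired")
    return payload

def issue_session_token() -> str:
    """Blocks 280/380: optionally hand back an opaque session identifier (sessionID)."""
    return secrets.token_urlsafe(32)
```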
If the object110desires future communications, the object110may be required to repeat the steps of the method300to obtain new tokens to communicate with the service provider device130. While examples provided herein are directed to using passwords and certificates to authenticate the objects110, other systems may also be used. For example, an alternative to certificates that uses a proprietary authentication system may be used to validate the identity of the object110to the authentication server125. Example Systems FIG.4depicts a computing machine2000and a module2050in accordance with certain examples. The computing machine2000may correspond to any of the various computers, servers, mobile devices, embedded systems, or computing systems presented herein. The module2050may comprise one or more hardware or software elements configured to facilitate the computing machine2000in performing the various methods and processing functions presented herein. The computing machine2000may include various internal or attached components, for example, a processor2010, system bus2020, system memory2030, storage media2040, input/output interface2060, and a network interface2070for communicating with a network2080. The computing machine2000may be implemented as a conventional computer system, an embedded controller, a laptop, a server, a mobile device, a smartphone, a set-top box, a kiosk, a vehicular information system, one more processors associated with a television, a customized machine, any other hardware platform, or any combination or multiplicity thereof. The computing machine2000may be a distributed system configured to function using multiple computing machines interconnected via a data network or bus system. The processor2010may be configured to execute code or instructions to perform the operations and functionality described herein, manage request flow and address mappings, and to perform calculations and generate commands. The processor2010may be configured to monitor and control the operation of the components in the computing machine2000. The processor2010may be a general purpose processor, a processor core, a multiprocessor, a reconfigurable processor, a microcontroller, a digital signal processor (DSP), an application specific integrated circuit (ASIC), a graphics processing unit (GPU), a field programmable gate array (FPGA), a programmable logic device (PLD), a controller, a state machine, gated logic, discrete hardware components, any other processing unit, or any combination or multiplicity thereof. The processor2010may be a single processing unit, multiple processing units, a single processing core, multiple processing cores, special purpose processing cores, co-processors, or any combination thereof. According to certain examples, the processor2010along with other components of the computing machine2000may be a virtualized computing machine executing within one or more other computing machines. The system memory2030may include non-volatile memories, for example, read-only memory (ROM), programmable read-only memory (PROM), erasable programmable read-only memory (EPROM), flash memory, or any other device capable of storing program instructions or data with or without applied power. The system memory2030may also include volatile memories, for example, random access memory (RAM), static random access memory (SRAM), dynamic random access memory (DRAM), and synchronous dynamic random access memory (SDRAM). Other types of RAM also may be used to implement the system memory2030. 
The system memory2030may be implemented using a single memory module or multiple memory modules. While the system memory2030is depicted as being part of the computing machine2000, one skilled in the art will recognize that the system memory2030may be separate from the computing machine2000without departing from the scope of the subject technology. It should also be appreciated that the system memory2030may include, or operate in conjunction with, a non-volatile storage device, for example, the storage media2040. The storage media2040may include a hard disk, a floppy disk, a compact disc read only memory (CD-ROM), a digital versatile disc (DVD), a Blu-ray disc, a magnetic tape, a flash memory, other non-volatile memory device, a solid state drive (SSD), any magnetic storage device, any optical storage device, any electrical storage device, any semiconductor storage device, any physical-based storage device, any other data storage device, or any combination or multiplicity thereof. The storage media2040may store one or more operating systems, application programs and program modules, for example, module2050, data, or any other information. The storage media2040may be part of, or connected to, the computing machine2000. The storage media2040may also be part of one or more other computing machines that are in communication with the computing machine2000, for example, servers, database servers, cloud storage, network attached storage, and so forth. The module2050may comprise one or more hardware or software elements configured to facilitate the computing machine2000with performing the various methods and processing functions presented herein. The module2050may include one or more sequences of instructions stored as software or firmware in association with the system memory2030, the storage media2040, or both. The storage media2040may therefore represent examples of machine or computer readable media on which instructions or code may be stored for execution by the processor2010. Machine or computer readable media may generally refer to any medium or media used to provide instructions to the processor2010. Such machine or computer readable media associated with the module2050may comprise a computer software product. It should be appreciated that a computer software product comprising the module2050may also be associated with one or more processes or methods for delivering the module2050to the computing machine2000via the network2080, any signal-bearing medium, or any other communication or delivery technology. The module2050may also comprise hardware circuits or information for configuring hardware circuits, for example, microcode or configuration information for an FPGA or other PLD. The input/output (I/O) interface2060may be configured to couple to one or more external devices, to receive data from the one or more external devices, and to send data to the one or more external devices. Such external devices along with the various internal devices may also be known as peripheral devices. The I/O interface2060may include both electrical and physical connections for operably coupling the various peripheral devices to the computing machine2000or the processor2010. The I/O interface2060may be configured to communicate data, addresses, and control signals between the peripheral devices, the computing machine2000, or the processor2010. 
The I/O interface2060may be configured to implement any standard interface, for example, small computer system interface (SCSI), serial-attached SCSI (SAS), fiber channel, peripheral component interconnect (PCI), PCI express (PCIe), serial bus, parallel bus, advanced technology attached (ATA), serial ATA (SATA), universal serial bus (USB), Thunderbolt, FireWire, various video buses, and the like. The I/O interface2060may be configured to implement only one interface or bus technology. Alternatively, the I/O interface2060may be configured to implement multiple interfaces or bus technologies. The I/O interface2060may be configured as part of, all of, or to operate in conjunction with, the system bus2020. The I/O interface2060may include one or more buffers for buffering transmissions between one or more external devices, internal devices, the computing machine2000, or the processor2010. The I/O interface2060may couple the computing machine2000to various input devices including mice, touch-screens, scanners, electronic digitizers, sensors, receivers, touchpads, trackballs, cameras, microphones, keyboards, any other pointing devices, or any combinations thereof. The I/O interface2060may couple the computing machine2000to various output devices including video displays, speakers, printers, projectors, tactile feedback devices, automation control, robotic components, actuators, motors, fans, solenoids, valves, pumps, transmitters, signal emitters, lights, and so forth. The computing machine2000may operate in a networked environment using logical connections through the network interface2070to one or more other systems or computing machines across the network2080. The network2080may include wide area networks (WAN), local area networks (LAN), intranets, the Internet, wireless access networks, wired networks, mobile networks, telephone networks, optical networks, or combinations thereof. The network2080may be packet switched, circuit switched, of any topology, and may use any communication protocol. Communication links within the network2080may involve various digital or analog communication media, for example, fiber optic cables, free-space optics, waveguides, electrical conductors, wireless links, antennas, radio-frequency communications, and so forth. The processor2010may be connected to the other elements of the computing machine2000or the various peripherals discussed herein through the system bus2020. It should be appreciated that the system bus2020may be within the processor2010, outside the processor2010, or both. According to certain examples, any of the processor2010, the other elements of the computing machine2000, or the various peripherals discussed herein may be integrated into a single device, for example, a system on chip (SOC), system on package (SOP), or ASIC device. Examples may comprise a computer program that embodies the functions described and illustrated herein, wherein the computer program is implemented in a computer system that comprises instructions stored in a machine-readable medium and a processor that executes the instructions. However, it should be apparent that there could be many different ways of implementing examples in computer programming, and the examples should not be construed as limited to any one set of computer program instructions. Further, a skilled programmer would be able to write such a computer program to implement an example of the disclosed examples based on the appended flow charts and associated description in the application text. 
Therefore, disclosure of a particular set of program code instructions is not considered necessary for an adequate understanding of how to make and use examples. Further, those skilled in the art will appreciate that one or more aspects of examples described herein may be performed by hardware, software, or a combination thereof, as may be embodied in one or more computing systems. Additionally, any reference to an act being performed by a computer should not be construed as being performed by a single computer as more than one computer may perform the act. The examples described herein can be used with computer hardware and software that perform the methods and processing functions described previously. The systems, methods, and procedures described herein can be embodied in a programmable computer, computer-executable software, or digital circuitry. The software can be stored on computer-readable media. For example, computer-readable media can include a floppy disk, RAM, ROM, hard disk, removable media, flash memory, memory stick, optical media, magneto-optical media, CD-ROM, etc. Digital circuitry can include integrated circuits, gate arrays, building block logic, field programmable gate arrays (FPGA), etc. The systems, methods, and acts described in the examples presented previously are illustrative, and, in alternative examples, certain acts can be performed in a different order, in parallel with one another, omitted entirely, and/or combined between different example examples, and/or certain additional acts can be performed, without departing from the scope and spirit of various examples. Accordingly, such alternative examples are included in the scope of the following claims, which are to be accorded the broadest interpretation so as to encompass such alternate examples. Although specific examples have been described above in detail, the description is merely for purposes of illustration. It should be appreciated, therefore, that many aspects described above are not intended as required or essential elements unless explicitly stated otherwise. Modifications of, and equivalent components or acts corresponding to, the disclosed aspects of the examples, in addition to those described above, can be made by a person of ordinary skill in the art, having the benefit of the present disclosure, without departing from the spirit and scope of examples defined in the following claims, the scope of which is to be accorded the broadest interpretation so as to encompass such modifications and equivalent structures. | 47,803 |
11943216 | Like reference symbols in the various drawings indicate like elements. DETAILED DESCRIPTION A web-based resource can be provided to users in an RBI host if the user is authorized according to an identity provider. An identity provider can send an HTTP redirect in response to login that contains a URL of an RBI service. The identity provider can provide an assertion to the machine of the user specifying their permission, and this assertion can contain an original URL of a cloud application. Then the client device can redirect to the address of an RBI host and provide the assertion to the RBI host. The RBI host can then launch a remote browser, read the originally intended application login URL from the assertion object, and inject that URL into the remote browser with the assertion inside of it, logging in the user. The end user then sees an image of the remote browsing session, with the RBI service providing a graphic user interface (GUI) of the network service to the client device and receiving GUI input from the client device to be applied to the network service. FIG.1is a block diagram of an example system100that can be used for managing access to a network-provided service. In the system100, a client device102is, for example, a desktop computer or mobile phone used by a user (i.e., a human that is using the client device102) and that receives user-input from and provides user-output to the user. This user can have an identity that is managed by an identity provider104. For example, an organization such as a business or school may use the identity provider104to maintain authorization information for the network provided service. In other examples, the user is not associated with any organization for the identity provider104and instead uses the identity provider104to enable single sign on (SSO) to various different service providers without the need to manage different credentials for each of the service providers, with the identity provider104maintaining authorization information for the network provided service. A service provider106provides one or more services for authorized users, in some cases referred to as cloud services or cloud apps. For example, the service provider106can provide browser-based services for email, cloud storage of data, image or video editing, document creation, etc. To manage the authorization of users to use the service(s) provided by the service provider106, the service provider106can work with the identity provider104. For some or all users of the service(s) of the service provider106, this authentication and authorization functionality can be off-loaded to the identity provider104, and possibly other identity providers (not shown). An RBI host108can instantiate RBI sessions for network services. RBI sessions can include controlled execution or running of network services. These RBI sessions can (but do not need to) include security services that can protect user devices and/or service providers when a user is using the network service. For example, the client device102may never locally store any data from the service provider106, allowing the user to access secure information without risk to the service provider106that the data will be exfiltrated by a compromised client device102. Similarly, the client device102can be protected from malicious services because the network service need never run on the client device102.
The RBI session can include sandboxing operations that monitor activity within the RBI and prevent unapproved (e.g., data exfiltration) or unsecure (e.g., remote code execution) operations. In some examples, to provide the user with the network-provided service, the client device102can request110the service or authorization for the service from the identity provider. In some other examples, the client device102can send a request112to the service provider106and be redirected to the identity provider104. The identity provider104can verify that the user of the user device102is authenticated (e.g., is the person they assert to be) and is authorized to use the network-provided service (e.g., permitted). The identity provider104can provide to the client device102an assertion data object that includes information recording the identity provider's104determination that the user of the user device102is authenticated and authorized, and also contains information to allow a web browser of the client device102to redirect to the network location of the RBI host108. The RBI host108can then broker the assertion and instantiate114an RBI instance to host the network-provided service. Each of the elements102-108described here can be implemented in appropriate computing hardware. For each, they may be a single device or multiple devices working together. The elements102-108can each include one or more hardware elements such as processors, memory, etc. The elements102-108can communicate through one or more data networks. These networks can include the internet and can also or alternatively include local networks. That is to say, the elements102-108may be remote from each other and communicate through the internet, may all be hosted by the same organization on the same local or virtual private network, or a mix (e.g. with the identity provider104and service provider106on an organization's network, and the client device102and the RBI host108being remote from the organization's network). FIG.2is a schematic diagram of example network architecture200that can be used for managing access to a network-provided service. In the architecture200, a device layer208contains user devices such as desktops, laptops, tablets, servers, and Internet of Things (IoT) devices that may operate as the client devices102. An access network layer206contains access networks by which the devices of the layer208access network resources, including mobile networks and intranet networks at various physical locations. A security layer204contains hardware210running security services212, including one or more RBI hosts108. A service layer contains hardware running services, including service provider106and identity provider104. Examples of the services212of the security layer include, but are not limited to, proxy services, data loss prevention, firewall services, intrusion prevention services, reporting services, private access to data, cloud access security brokering, malware detection, and packet capture. The hardware210can be collections of nodes (e.g., virtual machines, hosted applications, physical servers). In the example shown, three different network service providers are used, each supplying three datacenters with virtual machines, hosted applications, and physical servers. However, other arrangements are possible. FIG.3Ais a swimlane diagram of an example process300that can be used for managing access to a network-provided service.
The process300can be performed by, for example, the system100and as such will be described with reference to elements of the system100. However, another system or systems can be used to perform the process300or a similar process. The RBI host108can include an RBI interface302to handle communication tasks and an RBI environment304for instantiating RBI instances and hosting network provided services. The client device102sends304, to the identity provider104, credentials for the client-user. For example, the user of the client device102can open a web browser, navigate to the web address of the identity provider's104webpage, and log in with a username, password, 2-factor code, etc. The identity provider104verifies306the identity and permissions of the client-user. For example, the identity provider104may send a request for the user's credentials, may determine that the client device102is storing a cookie signed by the identity provider104, may determine that the client device102is on a virtual private network with permissions to access the network-provided service, etc. The identity provider104sends308, to the client device102, a dashboard to be rendered with elements that, when selected by the client-user, cause the client device to send, to the identity provider, the access-request. For example, the dashboard may be part of a webpage that, when rendered, shows icons for various network-provided services that the user is authorized to access. The client device102sends310, to the identity provider104, an access-request to access the network-provided service that is served by the service provider. For example, the client may click, using a mouse or touchpad, the rendered icon of a network-provided service, and the browser may send a message to the identity provider104requesting the selected network-provided service. The identity provider104receives, from the client device, the access-request and generates312a permission-object that i) specifies that the client-user is an authorized user of the network-provided service; and ii) comprises an access-override field that specifies a network address of the RBI frontend302. The identity provider104can send, to the client device102, the permission-object and a redirect message with the URL of the RBI interface302. For example, the identity provider104may look up in memory data needed to complete a permission-object from a template. Such information can include the network address of the RBI interface302. The identity provider may add that network address to the permission-object in the access-override field. In an alternative example, instead of providing a dashboard308, the identity provider104can respond to the verification306by sending312, to the client device102, the permission-object. For example, this may be a desirable implementation when the identity provider is managing identities for only a single network-provided service. The client device102can redirect314from the identity provider104to the RBI interface302by receiving, from the identity provider104, the permission-object and redirect message; and sending, to the RBI interface302, the permission-object. For example, the browser of the client device102may, transparently to the user, redirect from the identity provider104to the RBI interface302. The RBI interface302receives and processes316the permission-object. For example, the browser may send, as part of the redirect, the permission object to the RBI interface.
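A simplified sketch of blocks312-314follows: the identity provider builds a permission-object that carries both the access-override address of the RBI interface302and the originally requested application address, then redirects the browser to the RBI interface with that object attached. Real SAML assertions are schema-validated and digitally signed; the bare XML layout, attribute names, and redirect format here are assumptions made only to illustrate the flow.

```python
import base64
import xml.etree.ElementTree as ET
from urllib.parse import urlencode

RBI_INTERFACE_URL = "https://rbi.example.net/start"   # assumed network address of the RBI interface 302

def build_permission_object(user: str, app_login_url: str) -> str:
    """Block 312: an assertion stating the user is authorized, plus an
    access-override field naming the RBI interface and the original app URL."""
    assertion = ET.Element("Assertion", {"subject": user, "authorized": "true"})
    ET.SubElement(assertion, "Attribute",
                  {"name": "access-override", "value": RBI_INTERFACE_URL})
    ET.SubElement(assertion, "Attribute",
                  {"name": "original-url", "value": app_login_url})
    return base64.urlsafe_b64encode(ET.tostring(assertion)).decode()

def redirect_location(permission_object: str) -> str:
    """Block 314: the HTTP redirect Location the client browser follows to the RBI interface."""
    return RBI_INTERFACE_URL + "?" + urlencode({"assertion": permission_object})
```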
The RBI environment304can instantiate an RBI instance and access318the network-provided service. For example, the RBI environment may look up in memory a web address of the service provider106and request the network-provided service from the web address. The service provider106can serve320the network-provided resource to the RBI environment304. The RBI environment304runs322the network-provided service in an isolation environment to generate a graphical user interface (GUI). For example, the RBI environment may run a web browser in a sandbox environment with heightened security settings compared to a default web browser. This execution can generate a GUI that may normally be displayed locally when not run in a sandbox environment. The RBI environment304provides324a visual reproduction of the GUI to the client device102. For example, the RBI environment304(or RBI interface302, etc.) can generate a plurality of tiles from the GUI, and serve each tile as an image file to the client device. The client device102displays326the GUI to the user and receives input from the user. For example, the client device102can reassemble the tiles to be displayed by the web browser to the user of the client device102. In response, the user can click a button, drag a scroll bar, or otherwise interact with interface elements shown in the display of the GUI. The RBI environment receives browser-input from the client device102and applies the browser-input to the running network-provided service. For example, the client device102can send an instruction to the RBI interface302that records, for example, the location and type of interaction that the user initiated. The RBI environment can translate this message into a command to the sandbox environment that simulates the user input, and can run the network-provided service with this input. Then, as the GUI is updated, the RBI environment304and client device102can continue elements322,324, and326as the user continues to interact with the displayed GUI and the service continues to run. FIG.3Bis a swimlane diagram of an example process328that can be used for managing access to a network-provided service. The process328can be performed by, for example, the system100and as such will be described with reference to elements of the system100. However, another system or systems can be used to perform the process328or a similar process. The RBI host108can include an RBI interface302to handle communication tasks and an RBI environment304for instantiating RBI instances and hosting network-provided services. The client device102sends330, to the service provider106, an access-request to access the network-provided service that is served by the service provider. The service provider106receives, from the client device102, the access-request. For example, the user of the client device102can open a web browser and navigate to the web address of the service provider106and request the network-provided service. The service provider106sends332an authentication-request. For example, instead of serving the network-provided service to the client device102, the service provider106can redirect the client device's102browser to the identity provider. The identity provider104receives the authentication request and determines that the client-user is an authorized user of the network-provided service.
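Referring back to the tiling described for operation324above, the following is a minimal Python sketch of splitting a rendered GUI frame into image tiles that the client can reassemble by position. The tile size, function name, and use of the Pillow library are assumptions for illustration only; the patent does not prescribe any particular tiling implementation.

# Hedged sketch of operation 324's tiling; TILE_SIZE and tile_gui are illustrative, not from the patent.
from io import BytesIO
from PIL import Image

TILE_SIZE = 256  # pixels per tile edge (assumed)

def tile_gui(frame: Image.Image):
    # Split a rendered GUI frame into PNG tiles that can be served to the client device.
    width, height = frame.size
    for top in range(0, height, TILE_SIZE):
        for left in range(0, width, TILE_SIZE):
            box = (left, top, min(left + TILE_SIZE, width), min(top + TILE_SIZE, height))
            buffer = BytesIO()
            frame.crop(box).save(buffer, format="PNG")
            yield {"x": left, "y": top, "png": buffer.getvalue()}

if __name__ == "__main__":
    demo_frame = Image.new("RGB", (800, 600), "white")   # stand-in for the sandboxed browser's output
    tiles = list(tile_gui(demo_frame))
    print(f"{len(tiles)} tiles generated")               # the client reassembles these by (x, y)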
To determine that the client-user is an authorized user of the network-provided service, the identity provider104sends334, to the client device102, a credential request; receives, from the client device102, credentials for the client-user; and verifies338authentication of the client-user. The client device102provides336the credentials. For example, the identity provider104may serve a log-in webpage and receive the user's username, password, 2-factor code, etc. The process328can then continue with operations312-326. FIG.3Cis a swimlane diagram of an example process340that can be used for managing access to a network-provided service. The process340can be performed by, for example, the system100and as such will be described with reference to elements of the system100. However, another system or systems can be used to perform the process340or a similar process. The RBI host108can include an RBI interface302to handle communication tasks and an RBI environment304for instantiating RBI instances and hosting network-provided services. In the process340, to determine that the client-user is an authorized user of the network-provided service, the identity provider104determines342that the client-user is already authenticated. For example, the web browser of the client device102can store a cookie that was previously served by the identity provider104. This cookie can store a cryptographic signature or secret data that the identity provider104can read and recognize as being issued from a previous single-sign-on event. FIG.4is a schematic diagram of a code snippet400of an example permission object. In this example, the permission object is a Security Assertion Markup Language (SAML) object, though other examples can include Extensible Markup Language (XML) data objects, OAuth tokens, etc. The permission object can conform to a schema published and accessible by the identity provider104, service provider106, RBI host108, etc. This schema can define the data fields of the permission object. One such data field defined in the schema is an access-override field402. This access-override field402can be created to store the network address at which the RBI host108can request the network-provided service from the service provider106. In some cases, this network address is unique to the user of the client device102, the organization of the user, etc. In some cases, this network address is common to all users and consistent for a plurality of the authorized users. That is to say, all users authenticated by the identity provider104would get the same address in their permission object in such a scheme. As can be seen, the access-override field402can be (but does not need to be) free of user-specific characters such as a hash of the user's identity, a cryptographic signature, etc. However, other fields in the permission object may contain such user-specific characters. FIG.5shows an example of a computing device500and an example of a mobile computing device that can be used to implement the techniques described here. The computing device500is intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The mobile computing device is intended to represent various forms of mobile devices, such as personal digital assistants, cellular telephones, smart-phones, and other similar computing devices.
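Relating back to the permission object ofFIG.4described above, the following is a minimal Python sketch of attaching an access-override field402to a SAML-style assertion. The element names, attribute name, and address are illustrative assumptions; the actual schema referred to in the description is not reproduced here.

# Hedged sketch of adding an access-override attribute; element and attribute names are assumed.
import xml.etree.ElementTree as ET

def add_access_override(assertion: ET.Element, rbi_address: str) -> None:
    # Attach an access-override attribute holding the RBI address to an assertion element.
    statement = ET.SubElement(assertion, "AttributeStatement")
    attribute = ET.SubElement(statement, "Attribute", Name="access-override")
    value = ET.SubElement(attribute, "AttributeValue")
    value.text = rbi_address   # may be common to all authorized users; no user-specific characters required

if __name__ == "__main__":
    assertion = ET.Element("Assertion", ID="_example")
    add_access_override(assertion, "https://rbi.example.com/broker")
    print(ET.tostring(assertion, encoding="unicode"))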
The components shown here, their connections and relationships, and their functions, are meant to be exemplary only, and are not meant to limit implementations of the inventions described and/or claimed in this document. The computing device500includes a processor502, a memory504, a storage device506, a high-speed interface508connecting to the memory504and multiple high-speed expansion ports510, and a low-speed interface512connecting to a low-speed expansion port514and the storage device506. Each of the processor502, the memory504, the storage device506, the high-speed interface508, the high-speed expansion ports510, and the low-speed interface512, are interconnected using various busses, and can be mounted on a common motherboard or in other manners as appropriate. The processor502can process instructions for execution within the computing device500, including instructions stored in the memory504or on the storage device506to display graphical information for a GUI on an external input/output device, such as a display516coupled to the high-speed interface508. In other implementations, multiple processors and/or multiple buses can be used, as appropriate, along with multiple memories and types of memory. Also, multiple computing devices can be connected, with each device providing portions of the necessary operations (e.g., as a server bank, a group of blade servers, or a multi-processor system). The memory504stores information within the computing device500. In some implementations, the memory504is a volatile memory unit or units. In some implementations, the memory504is a non-volatile memory unit or units. The memory504can also be another form of computer-readable medium, such as a magnetic or optical disk. The storage device506is capable of providing mass storage for the computing device500. In some implementations, the storage device506can be or contain a computer-readable medium, such as a floppy disk device, a hard disk device, an optical disk device, or a tape device, a flash memory or other similar solid state memory device, or an array of devices, including devices in a storage area network or other configurations. A computer program product can be tangibly embodied in an information carrier. The computer program product can also contain instructions that, when executed, perform one or more methods, such as those described above. The computer program product can also be tangibly embodied in a computer- or machine-readable medium, such as the memory504, the storage device506, or memory on the processor502. The high-speed interface508manages bandwidth-intensive operations for the computing device500, while the low-speed interface512manages lower bandwidth-intensive operations. Such allocation of functions is exemplary only. In some implementations, the high-speed interface508is coupled to the memory504, the display516(e.g., through a graphics processor or accelerator), and to the high-speed expansion ports510, which can accept various expansion cards (not shown). In the implementation, the low-speed interface512is coupled to the storage device506and the low-speed expansion port514. The low-speed expansion port514, which can include various communication ports (e.g., USB, Bluetooth, Ethernet, wireless Ethernet) can be coupled to one or more input/output devices, such as a keyboard, a pointing device, a scanner, or a networking device such as a switch or router, e.g., through a network adapter. 
The computing device500can be implemented in a number of different forms, as shown in the figure. For example, it can be implemented as a standard server520, or multiple times in a group of such servers. In addition, it can be implemented in a personal computer such as a laptop computer522. It can also be implemented as part of a rack server system524. Alternatively, components from the computing device500can be combined with other components in a mobile device (not shown), such as a mobile computing device550. Each of such devices can contain one or more of the computing device500and the mobile computing device550, and an entire system can be made up of multiple computing devices communicating with each other. The mobile computing device550includes a processor552, a memory564, an input/output device such as a display554, a communication interface566, and a transceiver568, among other components. The mobile computing device550can also be provided with a storage device, such as a micro-drive or other device, to provide additional storage. Each of the processor552, the memory564, the display554, the communication interface566, and the transceiver568, are interconnected using various buses, and several of the components can be mounted on a common motherboard or in other manners as appropriate. The processor552can execute instructions within the mobile computing device550, including instructions stored in the memory564. The processor552can be implemented as a chipset of chips that include separate and multiple analog and digital processors. The processor552can provide, for example, for coordination of the other components of the mobile computing device550, such as control of user interfaces, applications run by the mobile computing device550, and wireless communication by the mobile computing device550. The processor552can communicate with a user through a control interface558and a display interface556coupled to the display554. The display554can be, for example, a TFT (Thin-Film-Transistor Liquid Crystal Display) display or an OLED (Organic Light Emitting Diode) display, or other appropriate display technology. The display interface556can comprise appropriate circuitry for driving the display554to present graphical and other information to a user. The control interface558can receive commands from a user and convert them for submission to the processor552. In addition, an external interface562can provide communication with the processor552, so as to enable near area communication of the mobile computing device550with other devices. The external interface562can provide, for example, for wired communication in some implementations, or for wireless communication in other implementations, and multiple interfaces can also be used. The memory564stores information within the mobile computing device550. The memory564can be implemented as one or more of a computer-readable medium or media, a volatile memory unit or units, or a non-volatile memory unit or units. An expansion memory574can also be provided and connected to the mobile computing device550through an expansion interface572, which can include, for example, a SIMM (Single In Line Memory Module) card interface. The expansion memory574can provide extra storage space for the mobile computing device550, or can also store applications or other information for the mobile computing device550. Specifically, the expansion memory574can include instructions to carry out or supplement the processes described above, and can include secure information also. 
Thus, for example, the expansion memory574can be provided as a security module for the mobile computing device550, and can be programmed with instructions that permit secure use of the mobile computing device550. In addition, secure applications can be provided via the SIMM cards, along with additional information, such as placing identifying information on the SIMM card in a non-hackable manner. The memory can include, for example, flash memory and/or NVRAM memory (non-volatile random access memory), as discussed below. In some implementations, a computer program product is tangibly embodied in an information carrier. The computer program product contains instructions that, when executed, perform one or more methods, such as those described above. The computer program product can be a computer- or machine-readable medium, such as the memory564, the expansion memory574, or memory on the processor552. In some implementations, the computer program product can be received in a propagated signal, for example, over the transceiver568or the external interface562. The mobile computing device550can communicate wirelessly through the communication interface566, which can include digital signal processing circuitry where necessary. The communication interface566can provide for communications under various modes or protocols, such as GSM voice calls (Global System for Mobile communications), SMS (Short Message Service), EMS (Enhanced Messaging Service), or MMS messaging (Multimedia Messaging Service), CDMA (code division multiple access), TDMA (time division multiple access), PDC (Personal Digital Cellular), WCDMA (Wideband Code Division Multiple Access), CDMA2000, or GPRS (General Packet Radio Service), among others. Such communication can occur, for example, through the transceiver568using a radio frequency. In addition, short-range communication can occur, such as using a Bluetooth, WiFi, or other such transceiver (not shown). In addition, a GPS (Global Positioning System) receiver module570can provide additional navigation- and location-related wireless data to the mobile computing device550, which can be used as appropriate by applications running on the mobile computing device550. The mobile computing device550can also communicate audibly using an audio codec560, which can receive spoken information from a user and convert it to usable digital information. The audio codec560can likewise generate audible sound for a user, such as through a speaker, e.g., in a handset of the mobile computing device550. Such sound can include sound from voice telephone calls, can include recorded sound (e.g., voice messages, music files, etc.) and can also include sound generated by applications operating on the mobile computing device550. The mobile computing device550can be implemented in a number of different forms, as shown in the figure. For example, it can be implemented as a cellular telephone580. It can also be implemented as part of a smart-phone582, personal digital assistant, or other similar mobile device. Various implementations of the systems and techniques described here can be realized in digital electronic circuitry, integrated circuitry, specially designed ASICs (application specific integrated circuits), computer hardware, firmware, software, and/or combinations thereof.
These various implementations can include implementation in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which can be special or general purpose, coupled to receive data and instructions from, and to transmit data and instructions to, a storage system, at least one input device, and at least one output device. These computer programs (also known as programs, software, software applications or code) include machine instructions for a programmable processor, and can be implemented in a high-level procedural and/or object-oriented programming language, and/or in assembly/machine language. As used herein, the terms machine-readable medium and computer-readable medium refer to any computer program product, apparatus and/or device (e.g., magnetic discs, optical disks, memory, Programmable Logic Devices (PLDs)) used to provide machine instructions and/or data to a programmable processor, including a machine-readable medium that receives machine instructions as a machine-readable signal. The term machine-readable signal refers to any signal used to provide machine instructions and/or data to a programmable processor. To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to the user and a keyboard and a pointing device (e.g., a mouse or a trackball) by which the user can provide input to the computer. Other kinds of devices can be used to provide for interaction with a user as well; for example, feedback provided to the user can be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user can be received in any form, including acoustic, speech, or tactile input. The systems and techniques described here can be implemented in a computing system that includes a back end component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front end component (e.g., a client computer having a graphical user interface or a Web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back end, middleware, or front end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include a local area network (LAN), a wide area network (WAN), and the Internet. The computing system can include clients and servers. A client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. Although a few implementations have been described in detail above, other modifications are possible. In addition, the logic flows depicted in the figures do not require the particular order shown, or sequential order, to achieve desirable results. Other steps may be provided, or steps may be eliminated, from the described flows, and other components may be added to, or removed from, the described systems. Accordingly, other implementations are within the scope of the following claims. | 31,026 |
11943217 | DETAILED DESCRIPTION The example embodiments presented herein are directed to systems, methods, and non-transitory computer-readable medium products for associating a target device with credentials of a source device based on an identifier broadcast by the target device. This is for convenience only and is not intended to limit the application of the present invention. After reading the following description, it will be apparent to one skilled in the relevant art how to implement the following disclosure in alternative embodiments. As used herein, “credentials” are data usable by a device to at least access content. For instance, credentials are usable for authentication. In another instance, credentials are usable to gain access to particular services. In some of the example embodiments described below, credentials are associated with a first account (e.g., the credentials usable to initiate a login process for the first account to access content associated with the account). In an example, using credentials to access content includes performing an authentication process with an application or server, such as using OAUTH 2.0, OPENID CONNECT (maintained by the OPENID FOUNDATION), SAML (maintained by OASIS of Burlington, MA), or other standards, protocols, or techniques. Other uses for, and examples of, credentials will be apparent to one of skill in the art. In some examples, the credentials are representative of a username and password for a first account. In some instances, the credentials are use-limited or time-limited, such as one-session-use credentials or credentials that are valid for a limited amount of time. As used herein, “target device” refers to a computing device with which credentials are to be associated. As used herein, “source device” refers to a computing device with which the credentials are currently associated. In an example, a target device enters an association mode. The target device connects to an association server to obtain an identification code. The target device broadcasts the identification code without regard to particular devices nearby. A source device detects the broadcast. The source device extracts the identification code from the broadcast. The source device provides a prompt via a user interface to obtain confirmation of associating a source device account with the target device. The source device receives confirmation over the user interface. In response to receiving the confirmation, the source device provides the identification code to the association server. The association server then provides credentials associated with the source device account to the target device for use in accessing content associated with the source device account. System for Associating a Target Device with Source Device Credentials FIG.1illustrates an example system100for associating a target device150with a source device110. The system100includes the source device110, the target device150, and an association server190. The source device110, the target device150, and the association server190include various components usable to perform one or more of the operations described herein. The source device110is a computing device. The source device110stores source device credentials112. The source device credentials are associated with a source device account114. In an example, the source device account114is associated with particular content.
For instance, the source device110uses the source device credentials112to access the content associated with the source device account114. The content is stored at the association server190or a system associated therewith, in some examples. The target device150is a computing device to which the source device credentials112are to be provided. In many examples, the target device150is already associated with target device credentials152associated with a target device account154, such as by storing the target device credentials152. The target device150is able to become associated with different credentials, such as the source device credentials112. The target device150becomes associated with the source device credentials112as part of a guest mode via which the source device credentials112are associated with the target device150, for instance. The association server190is a computing device remote from the source device110and the target device150. The association server190is connected to the source device110via a network. The association server190is connected to the target device150via a network. The association server190manages providing the source device credentials112to the target device150. For instance, the association server190manages the source device account114and the target device account154. In some examples, the association server190provides the content that the source device credentials112are used to access. Process for Associating the Target Device with the Source Device Credentials FIG.2, which is made up ofFIG.2AandFIG.2B, illustrates a process200for associating a target device150with a source device110, such as source device credentials112thereof. The process200is performed by various components, such as the components described above in connection with system100. As illustrated, the process200includes various operations performed by the source device110, the target device150, and an association server190, though the process need not be so limited. More or fewer operations may be performed in some examples. One or more of the operations may be performed by different devices. One or more of the operations may be performed in response to one or more other operations completing. One or more of the operations may be performed concurrently or sequentially. The process200begins with operation202, at which the target device150enters an association mode. Entering the association mode includes preparing to become associated with a particular account or credentials, as will be described in more detail below. The target device150can enter the association mode in response to one or more of a variety of causes. In one example, the target device150enters the association mode responsive to attempting to operate without credentials. For instance, the target device150determines whether the target device150has credentials to use. In the example, the target device150begins operating in the association mode in response to determining that it lacks credentials to use. The target device150can lack credentials (e.g., target device credentials152) to use for a variety of reasons. In an example, the target device150lacks credentials when the target device150is powering up for a first time without being preconfigured with credentials. In another example, the target device150lacks credentials because the target device150finished executing a log-out process.
For instance, the target device150finished a log-out process in which the target device150disassociates itself from particular credentials. In another example, the target device150enters the association mode responsive to the target device150receiving a signal. For instance, the target device150receives an electronic signal from another device, such as a message from the association server190or the source device110. The target device150receives an acoustic signal from another device, for instance. In the example, the signal causes the target device150to enter the association mode. For instance, the target device150enters the association mode responsive to receiving the signal. In another instance, the signal is a user input signal (e.g., an utterance having a command for the target device150to enter the association mode) received over a voice-based user interface of the target device150. The target device150can take one or more of a variety of actions in response to entering the association mode. In an example, the target device150prepares to become associated with credentials responsive to entering the association mode, such as by disassociating itself from the target device credentials152. At operation204, the target device150obtains an identification code205. In many examples, the target device150obtains the identification code205responsive to entering the association mode. The target device150can obtain the identification code205in a variety of ways, such as: obtaining the identification code205locally from the target device150(e.g., as described in connection with operation206) or obtaining the identification code205from the association server190(e.g., as described in connection with operation208). At operation206, the target device150obtains the identification code205from storage local to the target device150(e.g., by accessing a data store of the target device150). In an example, the identification code205is a locally-stored identification code. For instance, the identification code205includes or is based on the device identifier of the target device150(e.g., the identification code205is a hash of the device identifier). At operation208, the target device150obtains the identification code205from the association server190. For example, obtaining the identification code205includes the target device150sending an identification code request to the association server190. In another example, the target device150obtains the identification code205from the association server190by logging into the association server190. For instance, the target device150logs into the association server190by providing a one-time-password to the association server190, such as an unused one-time-password stored in a one-time-password data store of the target device150. The target device150then provides the one-time-password to the association server190as part of the identification code request. In such an example, the identification code request includes the one-time-password. Where the identification code request is an application programming interface call, the call can include the one-time-password as a parameter. At operation210, the association server190receives the identification code request (e.g., from the target device150). The association server190can receive the identification code request in a variety of ways, such as over an application programming interface.
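For the locally-derived variant of operation206described above, the following is a minimal Python sketch of computing an identification code as a truncated hash of the device identifier. The hash algorithm, code length, and function name are assumptions chosen for illustration; the patent only states that the code includes or is based on the device identifier.

# Hedged sketch of operation 206; local_identification_code and the 8-character length are assumed.
import hashlib

def local_identification_code(device_identifier: str, length: int = 8) -> str:
    # Derive a locally-stored identification code from the target device's identifier.
    digest = hashlib.sha256(device_identifier.encode()).hexdigest()
    return digest[:length].upper()

if __name__ == "__main__":
    print(local_identification_code("target-device-0001"))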
The association server190can take one or more of a variety of actions in response to receiving the identification code request. Where the identification code request includes a one-time-password, the association server190determines whether the one-time-password is valid. Responsive to the one-time-password being valid, the association server190continues processing the identification code request. Responsive to the one-time-password being invalid, the association server190stops processing the request. At operation212, the association server190generates the identification code205. In an example, the association server190generates the identification code205by randomly or pseudo-randomly generating one or more portions of the identification code205. The generated identification code205can be limited, such as a time-limited identification code (e.g., an identification code205that expires after a predetermined amount of time, such as fifteen minutes) or a use-limited identification code (e.g., the identification code205expires after being used a certain number of times). In an example, the identification code205is stored in a data structure. In some examples, the association server190performs operation214in which the association server190generates an access code215. The access code215is a code used by the target device150to access the association server190. The access code215is used by the association server190to identify the target device150, for instance. In an example, the association server190performs one or more of the following actions: storing data in association with the access code215, using the access code215as a session identifier, and using the access code215to authenticate the target device150for a session, among other actions. In an example, the one-time-password that the target device150used to access the association server190to request the identification code may have expired, and the target device150uses the access code215to access one or more services of the association server190(e.g., the target device150provides the access code215in future requests). At operation216, the association server190responds to the identification code request, such as by sending an identification code request response to the target device150. In an example, the identification code request response includes the identification code205. In another example, the identification code request response also includes the access code215. Returning to operation208, in an example, obtaining the identification code205from the association server190further includes receiving the identification code request response from the association server190. In such examples, obtaining the identification code from the association server190includes extracting the identification code205from the identification code request response. Returning to operation204, the target device150obtained the identification code205(e.g., by obtaining the identification code205locally as in operation206, by obtaining the identification code205from the association server190as in operation208, or in other ways). Following operation204, the flow moves to operation222to provide a broadcast of the identification code205and, optionally, to operation218to send a status request to the association server190. At operation218, the target device150sends a status request to the association server190, such as by sending the status request to the association server190regarding the association mode.
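The server-side issuance described in operations210-216above can be sketched in Python as follows. The in-memory record structure, the fifteen-minute lifetime, and the function name are assumptions used only to make the time-limited identification code and the accompanying access code concrete.

# Hedged sketch of operations 210-216; storage layout and names are assumed, not from the patent.
import secrets
import time

CODE_LIFETIME_SECONDS = 15 * 60      # time-limited identification code (assumed lifetime)
pending_codes = {}                   # identification code -> record

def issue_identification_code(one_time_password: str, valid_otps: set) -> dict:
    # Validate the one-time-password, then return an identification code and an access code.
    if one_time_password not in valid_otps:
        raise PermissionError("invalid one-time-password")
    valid_otps.discard(one_time_password)            # the one-time-password is consumed
    identification_code = secrets.token_hex(4).upper()
    access_code = secrets.token_urlsafe(16)          # used by the target device in later requests
    pending_codes[identification_code] = {
        "access_code": access_code,
        "expires_at": time.time() + CODE_LIFETIME_SECONDS,
        "authorized_account": None,                  # filled in when a source device authorizes
    }
    return {"identification_code": identification_code, "access_code": access_code}

if __name__ == "__main__":
    otps = {"otp-123"}
    print(issue_identification_code("otp-123", otps))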
In an example, the target device150sends the status request to do one or both of the following: determine whether a source device110was detected and request a new identification code205(e.g., because the identification code205expired). For instance, the target device150determines that the identification code205expired and sends the status request in response thereto. In an example, responsive to sending the status request, the target device150receives from the association server190a new identification code205and the flow returns to operation204. In some examples, the target device150provides the access code215with the status request (e.g., the status request includes the access code215). In an example, the target device150sends the status request using an application programming interface of the association server190. For instance, the target device150uses the application programming interface and provides the access code215as a parameter. At operation220, the association server190responds to the status request. Responding to the status request can include first receiving the status request. After receiving the status request, the association server190determines whether the status request includes a valid access code215, for instance. Responsive to determining that the status request does not include a valid access code215, the association server190ignores the status request or returns an error. In an example, the association server190determines the identification code205associated with the access code215. The association server190can query a data structure using the access code215to determine whether an authorization was received for the identification code205(see operation230, below). In an example, the association server190determines whether the identification code205is valid (e.g., not expired due to time or use). Responsive to determining that the identification code205is invalid (e.g., expired due to time), the flow returns to operation212. The association server190also sends a message to the target device150to cause the target device150to exit the association mode, in some instances, responsive to determining that the identification code205is invalid. At operation222, the target device150broadcasts the identification code205, such as by providing a broadcast that includes the identification code205. The target device150can broadcast the identification code205in a variety of ways. In an example, the target device150broadcasts the identification code using one or more of the following technologies: a wireless personal area network technology, a low-energy technology, and a radio frequency broadcast technology, among other technologies. Where the target device150broadcasts using a radio frequency technology, the radio frequency broadcast can have one or more attributes: have a frequency of approximately 2.4 GHz, use a low-energy radiofrequency transmission protocol (e.g., BLUETOOTH LOW ENERGY maintained by the BLUETOOTH SPECIAL INTEREST GROUP), and use a WI-FI protocol, among other attributes. In an example, the identification code205can be broadcast as part of a service set identifier (SSID), such as a WI-FI SSID. In an example, the target device150broadcasts the identification code205using an audio signal (e.g., the target device150encodes the identification code205in the audio signal using audio steganography and plays the audio signal through a speaker). In an example, the target device150broadcasts the identification code205in a light signal.
In some instances, the target device150broadcasts the identification code205periodically or continuously. In an example, the target device150broadcasts the identification code205without one or more of the following actions: pairing with a device (e.g., without pairing with the source device110), encrypting the broadcast, responding to an answer from a device directly to the broadcast, and broadcasting with respect to a particular device (e.g., the broadcast is not directed to a particular device, such as directed to a device having a particular Internet protocol address), among other actions. For instance, the target device150broadcasts the identification code205to all nearby devices. In a further instance, the target device150is not receptive to responses directly to the broadcast, such as a pairing request in direct response to the broadcast. In an example, the target device150does not accept a connection responsive to the broadcast. For instance, the target device150broadcasts an SSID (e.g., a WI-FI SSID) but does not accept a connection request directed to the SSID from another device. In some examples, the target device150broadcasts the identification code205as a payload of a message. In an example, the broadcast has a format recognizable by the source device110as being an association mode broadcast. For instance, the identification code205has a format recognizable as being part of an association mode. In an example, the broadcast includes a name of the target device150. In an example, the broadcast includes a device type of the target device. At operation224, the source device110receives the broadcast, such as using an antenna. In an example the source device110executes an application that monitors for a broadcast (e.g., an application that continuously monitors for a broadcast). For instance, the application is a media-playback application. In an example, the source device110detects the broadcast. In an example, the source device110recognizes the broadcast as having a format associated with an association mode. For instance, the source device110recognizes the identification code205as having a format identifying the identification code205as part of an association mode. In an example, the source device110obtains, from the broadcast, (e.g., by parsing the content of the broadcast) one or more of: the identification code205, a name of the target device150, and a device type of the target device150from the broadcast. In examples, the source device110performs one or more operations in response to receiving the broadcast, such as operation226. At operation226, the source device110obtains authorization to associate with the target device150, such as in response to receiving the broadcast in operation224. In an example, the authorization is user authorization. For instance, the source device110provides a user interface to obtain user authorization to associate with the target device150. In an example, the user interface is a voice-based user interface, and the target device150obtains the user authorization with the voice-based user interface. For instance, the source device110verbally asks for user authorization over the voice-based user interface, and the source device110receives verbal authorization from the user over the voice-based user interface. In an example, the user interface includes a visual element, such as by rendering one or more visual user interface elements at a display of the source device110. 
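The broadcast and recognition described in operations222-224above can be sketched in Python as follows. The colon-delimited, prefix-marked payload is purely an assumed format for illustration; the patent says only that the broadcast has a format the source device110can recognize, for example carried in an SSID.

# Hedged sketch of operations 222-224; the ASSOC prefix and field layout are assumed.
ASSOCIATION_PREFIX = "ASSOC"   # assumed marker recognizable by the source device

def broadcast_payload(identification_code: str, device_name: str, device_type: str) -> str:
    # Compose the string the target device advertises; it is not directed at any particular device.
    return f"{ASSOCIATION_PREFIX}:{identification_code}:{device_name}:{device_type}"

def parse_broadcast(payload: str):
    # Return the identification code, name, and type if the payload has the association-mode format.
    parts = payload.split(":")
    if len(parts) != 4 or parts[0] != ASSOCIATION_PREFIX:
        return None                                   # not an association-mode broadcast; ignore it
    _, identification_code, device_name, device_type = parts
    return {"identification_code": identification_code, "name": device_name, "type": device_type}

if __name__ == "__main__":
    advert = broadcast_payload("A1B2C3D4", "Living Room Speaker", "smart-speaker")
    print(parse_broadcast(advert))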
For instance, the source device110receives a selection of a visual user interface element indicating that the user authorizes the association with the target device. In an example, the user interface includes a tactile element. For instance, the source device110receives authorization via a tactile user interface element (e.g., a button). In another example, the source device110receives the selection of one of the user interface elements via a virtual interface (e.g., a touch screen, a gesture sensor, etc.). In still other examples, the source device110automatically provides authorization (e.g., without needing to receive contemporaneous authorization from the user). For instance, the source device110automatically provides authorization based on determining that a parameter allows automatic authorization. Responsive to receiving an indication that the user does not authorize the association, the source device110takes no further action regarding the broadcast. Responsive to receiving an indication that the user does authorize the association, the flow moves to operation228in which the source device110sends the authorization to the association server190via a source device authorization message. At operation228, the source device110sends a source device authorization message to the association server190. In an example, the source device authorization message includes one or both of an identification of the source device account114and the identification code205. At operation230, the association server190receives the source device authorization message. In an example, the association server190extracts the identification code205from the source device authorization message. In an example, the association server190extracts an identification of the source device account114from the source device authorization message. At operation232, the association server190processes the source device authorization message. In an example, processing the source device authorization message includes determining a device associated with the identification code205(e.g., by looking up a device identifier in a data structure that stores a device identifier in association with an identification code). At operation234, the association server190requests target device authorization from the target device150. In an example, the association server190provides a target device authorization request to the target device150. For instance, the target device authorization request includes an identification of the source device110that provided the source device identification (e.g., the name of the source device110). At operation236, the target device150obtains authorization to associate with the source device110. In an example, the target device150obtains authorization to associate with the source device110, such as in response to receiving the target device authorization request. In an example, the authorization is user authorization. The authorization can be obtained in a variety of ways, such as by providing a user interface to obtain user authorization to associate with the source device110. In an example, the target device150provides one or both of a visual user interface or a voice-based user interface.
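The exchange in operations228-232above can be sketched in Python as follows. The message shape and the server-side lookup against the in-memory record structure from the earlier sketch are assumptions; they only show how a source device authorization message might be matched to a pending identification code.

# Hedged sketch of operations 228-232; message fields and record layout are assumed.
def source_authorization_message(identification_code: str, source_account_id: str) -> dict:
    # Built by the source device after the user approves the association.
    return {"identification_code": identification_code, "account": source_account_id}

def record_source_authorization(records: dict, message: dict) -> bool:
    # Server side: note which source device account authorized this identification code.
    record = records.get(message["identification_code"])
    if record is None:
        return False                                  # unknown or expired identification code
    record["authorized_account"] = message["account"]
    return True

if __name__ == "__main__":
    records = {"A1B2C3D4": {"access_code": "k", "expires_at": 1e12, "authorized_account": None}}
    msg = source_authorization_message("A1B2C3D4", "source-account-42")
    print(record_source_authorization(records, msg), records["A1B2C3D4"]["authorized_account"])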
The target device150then receives a selection of one of the user interface elements indicating that the user authorizes the association with the source device110in one or more of the following ways: via a tactile user input (e.g., via a button), via a virtual user input (e.g., via a touch screen or a gesture sensor), or via a verbal user input (e.g., via an utterance). Responsive to receiving an indication that the user does not authorize the association, the process200terminates. Responsive to receiving an indication that the user does authorize the association, the flow moves to operation238and the target device150sends the authorization to the association server190. At operation238, the target device150sends a target device authorization message to the association server190. In an example, sending the target device authorization message includes accessing an application programming interface. At operation240, the association server190receives the target device authorization message, such as by receiving the target device authorization message over an application programming interface. At operation242, the association server190processes the target device authorization message, such as by determining whether the target device authorization message was received within a threshold amount of time. At operation244, the association server190associates the source device110and the target device150, such as in response to receiving one or both of the target device authorization message and the source device authorization message. In some examples, though, the association server190associates the target device150and the source device110without receiving one or both of the target device authorization message and the source device authorization message. In an example, associating the source device110and the target device150includes performing credential management or account management based on the source device110and the target device150. For instance, the association server190updates one or more records to indicate that the source device110and the target device150are associated or vice versa. In an example, the association server190associating the source device110and the target device150includes the association server190providing credentials to the target device150, such as the source device credentials112. For instance, associating the source device110and the target device150includes the association server190obtaining source device credentials112in operation246and the association server190providing the source device credentials112in operation248. At operation246, the association server190obtains source device credentials112(e.g., the source device credentials112for the source device account114). For instance, the association server190selects the source device credentials112from a data store that stores the source device credentials112. The association server190selects the source device credentials112using an identifier of the source device110. In other instances, the association server190obtains the source device credentials112by generating new source device credentials112associated with the source device account114. At operation248, the association server190provides the source device credentials112to the target device150. In an example, the association server190sends a message to the target device150. The message includes the source device credentials112. At operation250, the target device150receives the source device credentials112.
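The association and credential hand-off of operations244-248above can be sketched in Python as follows. The record structure continues the assumptions of the earlier sketches, and the credential minting with a random token is illustrative only; the patent allows either selecting stored credentials or generating new ones.

# Hedged sketch of operations 244-248; complete_association and the token format are assumed.
import secrets

def complete_association(records: dict, identification_code: str, credential_store: dict) -> dict:
    # Look up (or mint) source device credentials for the authorizing account and hand them to the target.
    account_id = records[identification_code]["authorized_account"]
    credentials = credential_store.setdefault(account_id, {"token": secrets.token_urlsafe(24)})
    return {"account": account_id, "credentials": credentials}   # message sent to the target device

if __name__ == "__main__":
    records = {"A1B2C3D4": {"access_code": "k", "expires_at": 1e12, "authorized_account": "source-account-42"}}
    print(complete_association(records, "A1B2C3D4", credential_store={}))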
In many examples, the target device150stores the source device credentials112locally at the target device150. For instance, the source device credentials112are stored for use in accessing services (e.g., services provided by the association server190). In instances where the source device credentials112are for use in a guest mode, the source device credentials112can be stored in a temporary area (e.g., a temporary data structure in a data store on the target device150) or are stored in association with a limit (e.g., a time limit or use limit). In some examples, the association server190does not provide the source device credentials112. Instead, the association server190provides an identifier associated with the source device credentials112. For instance, the target device150uses the provided identifier in order to select credentials from among locally-stored credentials (e.g., the target device150already stores multiple different sets of credentials). In some examples, the association server190provides a decryption key with which to unlock or access locally-stored credentials. At operation252, the target device150accesses content using the source device credentials112. In an example, the target device150uses the received source device credentials112to access services, such as a media server application. Process for Guest and Primary Modes FIG.3illustrates a process300for operating the target device150in a primary mode312and a guest mode322. The process300begins with operation310. At operation310, the target device150operates in a primary mode312associated with the target device account154. The primary mode312is a mode in which the target device150remains indefinitely (e.g., the primary mode312is a default mode). Operating the target device150in the primary mode312associated with a target device account154includes the target device150using the target device account154to access content associated with the target device account154. For example, the target device150obtains one or more media content items using the target device account154. For instance, obtaining the one or more media content items includes accessing a library of media content items associated with the target device account154or obtaining the one or more media content items according to a taste profile associated with the target device account154. In an example, the target device150leaves the primary mode312responsive to receiving or executing a command, such as a log-out command, a switch account command, or an enter association mode command. In many examples, the primary mode312is a mode associated with an account of an owner of the target device150. In an example, while operating in the primary mode312, the target device150receives a guest mode command. The guest mode command is a command that causes the target device150to begin the process of entering a guest mode. For instance, the guest mode command causes the flow of the process300to transition to operation320. The target device150receives the guest mode command in any of a variety of ways. In an example, receiving the guest mode command includes receiving the guest mode command over a user interface. In an example, the target device150receives the guest mode command over a voice-based user interface. For instance, the target device150receives an utterance (e.g., “enter guest mode”). The utterance is analyzed (e.g., using natural language processing), and the target device150executes a guest mode process in response thereto.
In another example, the target device150receives the guest mode command over a touch screen user interface. For instance, the target device150detects that a virtual user interface element has been actuated. In an example, the target device150receives the guest mode command over a gesture-based user interface. In an example, the target device150receives the guest mode command over a tactile user interface (e.g., the target device150detects that a tactile button has been actuated and executes a guest mode process in response thereto). In an example, the target device150receives the guest mode command from another device. For instance, the target device150receives a guest mode message from the association server190. The target device150executes a guest mode process in response thereto. At operation320, the target device150operates in the guest mode322associated with an account other than the target device account154. As illustrated, the other account is the source device account114. Operating in the guest mode322takes various forms. Operating in the guest mode322includes, for instance, the target device150operating according to credentials associated with the source device account114. The credentials can be obtained using any of a variety of techniques described herein, including but not limited to those described in relation toFIG.2. In some instances, operating in the guest mode322includes the target device150operating with a limited set of permissions compared to the primary mode312. For instance, while operating in the guest mode322, the target device150may be unable to perform one or more of the following actions: downloading media content items to the target device150, modifying an equalizer of the target device150, accessing predetermined content, changing wireless settings of the target device150, changing security settings of the target device150, changing an ownership of the target device150, changing account management settings of the target device150, playing explicit tracks on the target device150, locking the target device150, and obtaining primary mode status on the target device150, among other actions. For example, while the target device150is operating in the guest mode322, the target device150receives user input associated with performing an action not permitted in the guest mode322, and the target device150does not perform the action. In an example, the target device150provides an error message in response thereto. For instance, the error message indicates that the action cannot be taken due to operating in the guest mode. In an example, the guest mode322is a temporary mode. For instance, the target device150operates in the guest mode322for a limited amount of time (e.g., one hour, one day, one week, or one month). For instance, the guest mode322being a temporary mode includes the target device150operating in the guest mode322until a certain number of media content items are played (e.g., playback of one, two, three, or more media content items). For instance, the guest mode322being a temporary mode includes the target device150operating in the guest mode322until the occurrence of a particular event (e.g., receiving a revert command, detecting the presence of a device associated with the target device account154). Once the temporary mode ends, the target device150reverts to the primary mode312. In an example, reverting to the primary mode312includes operating the target device150using the account associated with the primary mode312(e.g., the target device account154).
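The limited-permission behavior described above for the guest mode322can be sketched in Python as follows. The set of blocked action names and the function name are assumptions for illustration; they simply show an action being refused with an error message while the device operates in the guest mode.

# Hedged sketch of guest-mode restrictions; action names and handle_action are assumed.
BLOCKED_IN_GUEST_MODE = {
    "download_media",
    "change_wireless_settings",
    "change_security_settings",
    "change_ownership",
    "play_explicit_tracks",
    "lock_device",
}

def handle_action(action: str, mode: str) -> str:
    # Refuse actions that the guest mode does not permit and report why.
    if mode == "guest" and action in BLOCKED_IN_GUEST_MODE:
        return f"error: '{action}' cannot be performed while operating in the guest mode"
    return f"performed '{action}'"

if __name__ == "__main__":
    print(handle_action("play_track", "guest"))
    print(handle_action("change_ownership", "guest"))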
In another example, the device operates in the guest mode322until a reversion command is received. In an example, receiving the reversion command includes receiving user input associated with reverting to the primary mode312over a user interface (e.g., receiving an utterance over a voice-based user interface of the target device150or receiving an indication that a virtual or physical user interface element associated with reversion has been actuated). In an example, the target device150provides an interface via which an account operating in a primary mode312customizes the permissions of the guest mode322. In another example, the target device150operates in the guest mode322until a device power event occurs. For instance, the target device150operates in the guest mode until the target device150powers on, powers off, enters a sleep mode, enters an inactive state, enters an active state, wakes up, restarts, loses power, or gains power, among others. The target device150then reverts to the primary mode312with the target device account154. In some examples, operating in a guest mode322includes the target device150storing credentials of the account associated with the primary mode312. In an example, the target device150stores credentials associated with the primary mode in memory for later use. Storing the credentials in memory facilitates the target device150reverting from, for example, operating in the guest mode322associated with the source device account114to the primary mode312associated with the target device account154without re-receiving the credentials associated with the target device account154, which saves the user time and reduces resource consumption (e.g., by not requiring the user associated with the target device account154to re-log into the target device150). At operation330, the target device150reverts to the primary mode312associated with the target device account154from the guest mode322. For instance, the reversion is triggered by one or more of the conditions or criteria described in operation320. Reverting can include accessing credentials associated with the target device account154that are stored locally at the target device150. In some examples, reverting includes obtaining the credentials from the association server190(e.g., the target device150accessing an application programming interface associated with the association server190). Device Environment FIG.4, which is made up ofFIG.4AandFIG.4B, illustrates an example system400for association via a broadcast. The example system400illustrates the source device110, the target device150, and the association server190connected over a network406. In the illustrated example, the association server190is part of a media-delivery system404and this example system400is described in the context of media content item playback. This is for example purposes only. The techniques described herein can be used with a variety of systems. Source Device The source device110is a computing device. In some examples, the source device110is a computing device for playing media content items to produce media output. In some examples, the media content items are provided by the media-delivery system404and transmitted to the source device110using the network406. A media content item is an item of media content, including audio, video, or other types of media content, which may be stored in any format suitable for storing media content.
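The mode handling of operations310-330above, with the primary credentials kept in memory so no re-login is needed on reversion, can be sketched in Python as follows. The class name, methods, and credential shape are assumptions used only for illustration.

# Hedged sketch of operations 310-330; DeviceModes and its methods are assumed, not from the patent.
class DeviceModes:
    def __init__(self, primary_credentials: dict):
        self.primary_credentials = primary_credentials   # retained while the guest mode is active
        self.guest_credentials = None
        self.mode = "primary"

    def enter_guest_mode(self, guest_credentials: dict) -> None:
        # Switch to the guest mode using credentials obtained through the association flow.
        self.guest_credentials = guest_credentials
        self.mode = "guest"

    def revert_to_primary(self) -> dict:
        # Triggered by a reversion command, a time/use limit, or a device power event.
        self.guest_credentials = None
        self.mode = "primary"
        return self.primary_credentials                  # restored without contacting the server

if __name__ == "__main__":
    device = DeviceModes({"token": "primary-token"})
    device.enter_guest_mode({"token": "guest-token"})
    print(device.revert_to_primary())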
Non-limiting examples of media content items include songs, music, albums, audiobooks, music videos, movies, television episodes, podcasts, other types of audio or video content, and portions or combinations thereof. The source device110plays the media content item for a user. The media content item is selectable for playback with user input. The media content item is also selectable for playback without user input, such as by the source device110or the media-delivery system404. In an example, media content is selected for playback by the media-delivery system404based on a user taste profile stored in association with the source device account114. The source device110selects and plays media content and generates interfaces for controlling playback of media content items. In some examples, the source device110receives user input over a user interface, such as a touch screen user interface, an utterance-based user interface, tactile user interfaces, virtual user interfaces, or other user interfaces and plays a media content item based thereon. The source device110can include other input mechanisms including but not limited to a keypad and/or a cursor control device. The keypad receives alphanumeric characters and/or other key information. The cursor control device includes, for example, a handheld controller or mouse, a rotary input mechanism, a trackball, a stylus, and/or cursor direction keys. As noted above, in the example, the source device110plays media content items. In some examples, the source device110plays media content items that are provided (e.g., streamed, transmitted, etc.) by a system external to the media-playback device such as the media-delivery system404, another system, or a peer device. Alternatively, in some examples, the source device110plays media content items stored locally on the source device110. Further, in at least some examples, the source device110plays media content items that are stored locally as well as media content items provided by other systems. In some examples, the source device110is a computing device, handheld entertainment device, smartphone, tablet, watch, wearable device, or any other type of device capable of playing media content. In yet other examples, the source device110is a media playback appliance, such as an in-dash vehicle head unit, an aftermarket vehicle media playback appliance, a smart assistant device, a smart home device, a television, a gaming console, a set-top box, a network appliance, a BLU-RAY disc player, a DVD player, a media player, a stereo system, smart speaker, an Internet-of-things device, or a radio, among other devices or systems. In many examples, the source device110includes a user interface452, one or more source device processing devices116, and a source device memory device118. In an example, the source device110includes a content output device458. In an example, the source device110includes a movement-detecting device. In an example, the source device110includes a network access device462. In an example, the source device110includes a sound-sensing device464. Other examples may include additional, different, or fewer components. The location-determining device450is a device that determines the location of the source device110. 
In some examples, the location-determining device450uses one or more of the following technologies: Global Positioning System (GPS) technology that receives GPS signals from satellites, cellular triangulation technology, network-based location identification technology, WI-FI positioning systems technology, ultrasonic positioning systems technology, and combinations thereof. Examples of the location-determining device450further include altitude- or elevation-determining devices, such as barometers. The user interface452operates to interact with the user, including providing output and receiving input. In an example, the user interface452is a physical device that interfaces with the user (e.g., touch screen display). In an example, the user interface452is a combination of devices that interact with the user (e.g., speaker and microphone for providing an utterance-based user interface). In some examples, the user interface452includes a touch-screen-based user interface. A touch screen operates to receive an input from a selector (e.g., a finger, stylus, etc.) controlled by the user. In some examples, the touch screen operates as both a display device and a user input device. In some examples, the user interface452detects inputs based on one or both of touches and near touches. In some examples, the touch screen displays a user interface for interacting with the source device110. Some examples of the source device110do not include a touch screen. Examples of the user interface452include input control devices that control the operation and various functions of the source device110. Input control devices include any components, circuitry, or logic operative to drive the functionality of the source device110. For example, input control device(s) include one or more processors acting under the control of an application. While some examples of the source device110do not include a display device, where a source device110does include a display device, the source device110will often include a graphics subsystem and coupled to an output display. The output display uses various technologies, such as TFT (Thin Film Transistor), TFD (Thin Film Diode), OLED (Organic Light-Emitting Diode), AMOLED (active-matrix organic light-emitting diode) display, and/or liquid crystal display (LCD)-type displays. The displays can also be touch screen displays, such as capacitive and resistive-type touch screen displays. The one or more source device processing devices116include one or more processing units, such as central processing units (CPU), digital signal processors, and field-programmable gate arrays, among others. The source device memory device118operates to store data and instructions. In some examples, the source device memory device118stores instructions to perform one or more operations described herein. Some examples of the source device memory device118also include a media content cache. The media content cache stores media content items, such as media content items that have been previously received from the media-delivery system404. The media content items stored in the media content cache are storable in an encrypted or unencrypted format, and decryption keys for some or all of the media content items are also stored. The media content cache can also store metadata about media content items such as title, artist name, album name, length, genre, mood, or era. 
The media content cache can also store playback information about the media content items, such as the number of times the user has requested to playback the media content item or the current location of playback. The source device memory device118typically includes at least some form of computer-readable media. Computer-readable media includes any available media that can be accessed by the source device110. By way of example, computer-readable media include computer-readable storage media and computer-readable communication media. Computer-readable storage media includes volatile and nonvolatile, removable and non-removable media implemented in any device configured to store information such as computer-readable instructions, data structures, program modules, or other data. Computer-readable storage media includes, but is not limited to, random access memory, read only memory, electrically erasable programmable read only memory, flash memory and other memory technology, compact disc read only memory, BLU-RAY discs, DVD discs, other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium that can be used to store the desired information and that can be accessed by the source device110. In some examples, computer-readable storage media is non-transitory computer-readable storage media. Computer-readable communication media typically embodies computer-readable instructions, data structures, program modules or other data in a modulated data signal such as a carrier wave or other transport mechanism and includes any information delivery media. The term “modulated data signal” refers to a signal that has one or more of its characteristics set or changed in such a manner as to encode information in the signal. By way of example, computer-readable communication media includes wired media such as a wired network or direct-wired connection, and wireless media such as acoustic, radio frequency, infrared, and other wireless media. Combinations of any of the above are also included within the scope of computer-readable media. In an example, the source device110has the one or more source device processing devices116coupled to the source device memory device118storing source device instructions which when executed cause the one or more source device processing devices116to perform one or more operations described herein. The content output device458operates to output media content. In many examples, the content output device458provides media output for a user. In some examples, the content output device458provides media output to a target device150. Examples of the content output device458include a speaker assembly having one or more speakers, an audio output jack, a BLUETOOTH transmitter, a display panel, and a video output jack. Other examples are possible as well, such as transmitting a signal through the audio output jack or BLUETOOTH transmitter to reproduce an audio signal by a connected or paired device such as headphones, speaker system, or vehicle head unit. The network access device462operates to communicate with other computing devices over one or more networks, such as the network406. Examples of the network access device include one or more wired network interfaces and wireless network interfaces. Examples of wireless network interfaces include infrared, BLUETOOTH wireless technology, WI-FI, 802.11a/b/g/n/ac, and cellular or other radio frequency interfaces. 
In an example, the source device110further includes a broadcast receiver466. The broadcast receiver466is a component able to receive a broadcast (e.g., as in operation224). In many examples, the broadcast receiver466is not a component dedicated to receiving the broadcast. For example, the network access device462may operate as the broadcast receiver466. In another example, the sound-sensing device464operates as the broadcast receive466. In an example, the broadcast receiver466is a dedicated broadcast receiving device. In an example, the broadcast receiver includes an antenna. In an example, the antenna is a radiofrequency antenna. In an example, the radiofrequency antenna is preconfigured to receive short-wavelength ultra-high frequency radio waves. In an example, the radio waves are in the ISM (Industrial, Scientific and Medical) radio bands. In an example, the radio waves are approximately 2.4 GHz. In an example, the radio waves are associated with BLUETOOTH. In an example, the broadcast receiver466is able to receive broadcasts from a beacon. In an example, the beacon is a low-energy beacon. In an example, the beacon is a BLUETOOTH LOW ENERGY beacon. In an example, broadcast receiver466is preconfigured to receive broadcasts from a broadcast transmitter468of the target device150. In an example, the broadcast receiver466is a transceiver. In some examples, the source device110includes a movement-detecting device that senses movement of the source device110, acceleration of the source device110, determines an orientation of the source device110, or includes other detecting devices. In at least some examples, the detecting devices include one or more accelerometers or other motion-detecting technologies or orientation-detecting technologies. Network The network406is an electronic communication network that facilitates communication between the source device110, the media-delivery system404, and in some instances, the target device150. An electronic communication network includes a set of computing devices and links between the computing devices. The computing devices in the network use the links to enable communication among the computing devices in the network. The network406can include routers, switches, mobile access points, bridges, hubs, intrusion detection devices, storage devices, standalone server devices, blade server devices, sensors, desktop computers, firewall devices, laptop computers, handheld computers, mobile telephones, vehicular computing devices, and other types of computing devices. In various examples, the network406includes various types of links. For example, the network406includes wired and/or wireless links, including BLUETOOTH, ultra-wideband (UWB), 802.11, ZIGBEE, cellular, and other types of wireless links. Furthermore, in various examples, the network406is implemented at various scales. For example, the network406is implemented as one or more vehicle area networks, local area networks (LANs), metropolitan area networks, subnets, wide area networks (such as the Internet), or can be implemented at another scale. Further, in some examples, the network406includes multiple networks, which may be of the same type or of multiple different types. Target Device The target device150can include one or more of the components of the source device110. The aspects described herein are relevant to using the source device account114of the source device110on the target device150. In an example, the association is performed using a broadcast transmitter468of the target device150. 
In some examples, it is otherwise difficult for a user to provide account information to the target device150, such as by the target device150lacking a keyboard, touch screen, or other components that facilitate arbitrary input. In some examples, the target device150lacks a direct connection to the source device110over BLUETOOTH, WI-FI, or other electronic communication schemes. In an example, the target device150has one or more target device processing devices156coupled to a target device memory device158storing target device instructions which when executed cause the one or more target device processing devices156to perform one or more operations described herein. The one or more target device processing devices156include one or more processing units, such as central processing units (CPU), digital signal processors, and field-programmable gate arrays, among others. The target device memory device158operates to store data and instructions. The target device memory device158stores instructions to perform one or more operations described herein. Some examples of the target device memory device158also include a media content cache (e.g., a media content cache as described above in relation to the source device memory device118). The target device memory device158typically includes at least some form of computer-readable media (e.g., computer-readable media as described above in relation to the source device memory device118). The sound-sensing device464senses sounds proximate the target device150(e.g., sounds within a vehicle in which the target device150is located). In some examples, the sound-sensing device464comprises one or more microphones. In some examples, the sound-sensing device464includes multiple microphones in a sound-canceling arrangement to facilitate operation in a noisy environment (e.g., configured for use in a vehicle). The sound-sensing device464is able to capture sounds from proximate the target device150and create a representation thereof. These representations are analyzed by the target device150or the media-delivery system404. In some examples, the representations are used to provide an utterance-based user interface. In such examples, speech-recognition technology is used to identify words spoken by the user. The words are recognized as commands affect the behavior of the target device150(e.g., affecting playback of media content by the target device150). Natural language processing and/or intent-recognition technology are usable to determine appropriate actions to take based on the spoken words. Additionally or alternatively, the sound-sensing device464determines various sound properties about the sounds proximate the user such as volume, dominant frequency or frequencies, among other properties. These sound properties are usable to make inferences about the environment proximate to the target device150, such as whether the sensed sounds correspond to playback of a media content item. In some examples, the sound sensed by the sound-sensing device464are transmitted to media-delivery system404(or another external system) for analysis, such as using speech-recognition, intent-recognition, and media identification technologies, among others. In an example, the target device150further includes a broadcast transmitter468. The broadcast transmitter468is a component able to transmit a broadcast (e.g., as in operation222). In many examples, the broadcast transmitter468is not a component dedicated to transmitting the broadcast. 
For example, the network access device462may operate as the broadcast transmitter468. In another example, the content output device458operates as the broadcast transmitter468. In an example, the broadcast transmitter468is a dedicated broadcast transmitting device. In an example, the broadcast transmitter468includes an antenna. In an example, the antenna is a radiofrequency antenna. In an example, the radiofrequency antenna is preconfigured to transmit short-wavelength ultra-high frequency radio waves. In an example, the radio waves are in the ISM (Industrial, Scientific and Medical) radio bands. In an example, the radio waves are approximately 2.4 GHz. In an example, the radio waves are associated with BLUETOOTH. In an example, the broadcast transmitter468is able to transmit broadcasts as a beacon. In an example, the beacon is a low-energy beacon. In an example, the beacon is a BLUETOOTH LOW ENERGY beacon. In an example, broadcast receiver466is preconfigured to transmit broadcasts to the broadcast receiver466of the source device110. In an example, the broadcast transmitter468is a transceiver. Media-Delivery System The media-delivery system404includes one or more computing devices and provides media content items to the source device110, target device150, and, in some examples, other media-playback devices as well. In the illustrated example, the media-delivery system404includes a media content server480and the association server190. AlthoughFIG.4shows single instances of the media content server480and the association server190some examples include multiple servers. In these examples, each of the multiple servers may be identical or similar and may provide similar functionality (e.g., to provide greater capacity and redundancy, or to provide services from multiple geographic locations). Alternatively, in these examples, some of the multiple servers may perform specialized functions to provide specialized services (e.g., services to enhance media content playback during travel or other activities, etc.). Various combinations thereof are possible as well. The media content server480transmits stream media to media-playback devices, such as the source device110or target device150. In some examples, the media content server480includes a media server application484, one or more processing devices454, a memory device456, and a network access device462. In some examples, the media server application484streams music or other audio, video, or other forms of media content. The media server application484includes a media stream service494, a media data store496, and a media application interface498. The media stream service494operates to buffer media content such as media content items506,508, and510, for streaming to one or more streams500,502, and504. The media application interface498can receive requests or other communication from media-playback devices or other systems, to retrieve media content items from the media content server480. For example, the media application interface498receives communication from a media-playback engine. In some examples, the media data store496stores media content items512, media content metadata514, and playlists516. The media data store496may store one or more databases and file systems, such as the set of data structures600described in relation toFIG.5. As noted above, the media content items512may be audio, video, or any other type of media content, which may be stored in any format for storing media content. The account data store518is used to identify users. 
In an example, the account data store518is used to identify users of a media streaming service provided by the media-delivery system404. In some examples, the media-delivery system404authenticates a user via data contained in the account data store518and provides access to resources (e.g., media content items512, playlists516, etc.) to a device operated by a user. In some examples, different devices log into a single account and access data associated with the account in the media-delivery system404. User authentication information, such as a username, an email account information, a password, and other credentials, can be used for the user to log into his or her user account. A device can use stored credentials to log a user into the account on a device. The media data store496includes user tastes data520. The user tastes data520includes but is not limited to user preferences regarding media content items, such as media content items that the user likes/dislikes, media content item qualities that the user likes/dislikes, historical information about the user's consumption of media content, libraries of media content items, and playlists of media content items, among other user data. The media content metadata514operates to provide various information associated with the media content items512. In some examples, the media content metadata514includes one or more of title, artist name, album name, length, genre, mood, era, acoustic fingerprints, and other information. The playlists516operate to identify one or more of the media content items512and in some examples, the playlists516identify a group of the media content items512in a particular order. In other examples, the playlists516merely identify a group of the media content items512without specifying a particular order. Some, but not necessarily all, of the media content items512included in a particular one of the playlists516are associated with a common characteristic such as a common genre, mood, or era. The playlists516can include user-created playlists, which may be available to a particular user, a group of users, or to the public. In some examples, the media server application484or a dedicated access management server provides access management services. In examples, the media server application484exposes application programming interface endpoints usable by calling devices or functions to use access management services, such as services for logging in to an account, obtaining credentials associated with an account, generating credentials associated with an account, and other services. As described above, the association server190is a computing device remote from the source device110and the target device150. In an example, the association server190manages providing the source device credentials112to the target device150. In an example, the association server190has one or more association server processing devices196coupled to an association server memory device198storing association server instructions which when executed cause the one or more association server processing devices196to perform one or more operations described herein. The one or more association server processing devices196include one or more processing units, such as central processing units (CPU), digital signal processors, and field-programmable gate arrays, among others. The association server memory device198operates to store data and instructions. The association server memory device198stores instructions to perform one or more operations described herein. 
The association server memory device198typically includes at least some form of computer-readable media (e.g., computer-readable media as described above, such as in relation to the source device memory device118). Although inFIG.4only a single source device110, target device150, and media-delivery system404are shown, in accordance with some examples, the media-delivery system404supports the simultaneous use of devices, and the source device110and the target device150simultaneously access media content from multiple media-delivery systems404. Additionally, althoughFIG.4illustrates a streaming media-based system for media playback, other examples are possible as well. For example, in some examples, the source device110includes a media data store and the source device110selects and plays back media content items without accessing the media-delivery system404. Further in some examples, the source device110operates to store previously-streamed media content items in a local media data store (e.g., in a media content cache). In at least some examples, the media-delivery system404streams, progressively downloads, or otherwise communicates music, other audio, video, or other forms of media content items to the source device110or target device150for later playback. In accordance with an example, the user interface452receives a user request to, for example, select media content for playback on the source device110. Software examples of the examples presented herein may be provided as a computer program product, or software, that may include an article of manufacture on a machine-accessible or machine-readable medium having instructions. The instructions on the non-transitory machine-accessible machine-readable or computer-readable medium may be used to program a computer system or other electronic device. The machine- or computer-readable medium may include, but is not limited to, magnetic disks, optical disks, magneto-optical disks, or other types of media/machine-readable medium suitable for storing or transmitting electronic instructions. The techniques described herein are not limited to any particular software configuration. They may find applicability in any computing or processing environment. In some examples, there are one or more processors that operate as a particular program product or engine. In some examples, one or more processors are coupled to a memory storing instructions which when executed cause the one or more processors to operate in a particular manner. In some examples, the one or more processors include two or more sets of processors operating on different devices. The terms “computer-readable”, “machine-accessible medium” or “machine-readable medium” used herein shall include any medium that is capable of storing, encoding, or transmitting a sequence of instructions for execution by the machine and that causes the machine to perform any one of the methods described herein. Further, it is common in the art to speak of software, in one form or another (e.g., program, procedure, process, application, module, unit, logic, and so on), as taking an action or causing a result. Such expressions are merely a shorthand way of stating that the execution of the software by a processing system causes the processor to perform an action to produce a result. Some examples include a computer program product. 
The computer program product may be a storage medium or media having instructions stored thereon or therein which can be used to control, or cause, a computer to perform any of the procedures of the examples of the invention. The storage medium may include without limitation an optical disc, a ROM, a RAM, an EPROM, an EEPROM, a DRAM, a VRAM, a flash memory, a flash card, a magnetic card, an optical card, nanosystems, a molecular memory integrated circuit, a RAID, remote data storage/archive/warehousing, and/or any other type of device suitable for storing instructions and/or data. Stored on any one of the computer-readable medium or media, some implementations include software for controlling both the hardware of the system and for enabling the system or microprocessor to interact with a human user or other mechanism utilizing the results of the examples of the invention. Such software may include without limitation device drivers, operating systems, and user applications. Ultimately, such computer-readable media further include software for performing example aspects of the invention, as described above. Included in the programming and/or software of the system are software modules for implementing the procedures described above. Various operations and processes described herein can be performed by the cooperation of two or more devices, systems, processes, or combinations thereof. Set of Data Structures FIG.5illustrates an example set of data structures600storable in the media data store496. As illustrated, the set of data structures600includes an account table602, a media content item context data structure604, and a media content item data structure606. For each account record in the account table602, the set of data structures600includes a device data table610, a playback history table612, a favorite table614, and a playback state data structure616. It is noted that, where user data is used, it can be handled according to a defined user privacy policy and can be used to the extent allowed by the user. Where the data of other users is used, it can be handled in an anonymized matter so the user does not learn of the details of other users generally or specifically. In addition, the data contained in the set of data structures600is stored according to a defined security policy and in accordance with applicable regulations. As illustrated, each account record in the account table602has a relationship with a device data table610, a playback history table612, and a favorite table614. Each device record in the device data table610has a relationship with a playback state data structure616. Each device record in a favorite table614has a relationship with a media content item context data structure604. Each context data structure has a relationship with the media content item data structure606. The account table602stores one or more account records usable to identify accounts of the media-delivery system404. In an example, where a particular account is referred to in this disclosure (e.g., the target device account154), the account is associated with an entry stored in the set of data structures600. In an example, when the target device150accesses the media-delivery system404under the target device account154, the media-delivery system404locates an account record corresponding to the target device account154in the account table602. The media-delivery system404then uses the data associated with the account record in the account table602to provide services associated with that account. 
For instance, the media-delivery system404provides a media content item described in the favorite table614to the target device150and updates the playback history table612accordingly. The account table602references one or more other tables, and is referenced by one or more other tables. In an example, each account record of the account table602corresponds to an account. For instance, the target device account154corresponds to a target device account record in the account table602, and the source device account114corresponds to a source device account record in the account table602. Each account record of the account table602includes data associated with one or more fields of the account table602, such as an account ID field, a user ID field, a password field, and a type field. The account ID field stores an identifier of the account record, such as using a number. The user ID field stores an identifier of a user, such as the user's name. The password field stores data associated with a password of the user, such as a hashed and salted password. The type field identifies subscription types associated with the account record. Each account record identified in the account table602is associated with, and identifies, data for providing various services from the media-delivery system404. In some examples, the data includes the device data table610, the playback history table612, the favorite table614, and the playback state data structure616, among others. In the illustrated example, the tables610,612, and614are primarily described in association with a single record (e.g., the record having the Account ID: 71828). However, it is understood that, in other examples, the tables610,612, and614are structured to be associated with a plurality of accounts The device data table610identifies one or more devices associated with a particular account record of the account table602. The device data table610is referenced by the account table602or other tables. The device data table610can reference one or more other tables. In an example, each device record of the device data table610includes data associated with a device. For instance, a first device record of the device data table610corresponds to the source device110, and a second device record of the device data table610corresponds to the target device150once both devices110,150have been associated with the same account. Each device record of the device data table610includes data associated with one or more fields of the device data table610, such as a device ID field (e.g., storing device identifier data, such as an alphanumeric identifier), a name field (e.g., for storing a device name), a status field (e.g., for storing a status of the device, such as whether the device is currently active or inactive), a location field (e.g., for storing a last-known location of the device), and type field (e.g., for storing a type of the device, such as a phone device, a speaker device, or a vehicle head unit). The playback history table612describes the media content items played by the account by storing one or more playback records. The playback history table612can reference and be referenced by one or more other tables. In an example, each playback record of the playback history table612includes data associated with a media content item played by a respective account or device. 
Each playback record of the playback history table612includes data associated with one or more fields of the playback history table612, such as a device ID field (e.g., for storing an identifier of the device that caused playback of the playback record), an MCI (Media Content Item) ID field (e.g., for storing an identifier of the media content item that was played back), a start time field (e.g., for identifying the start time at which the media content item was played back), and a location field (e.g., for identifying the location of the device associated with the device ID when playback was initiated). The favorite table614describes information about favorite media content item contexts associated with the account by storing one or more favorite records. The favorite table614includes information about favorites associated with an account. The favorite table614can reference and be referenced by one or more other tables. In an example, each favorite record of the favorite table614includes data associated with a favorite media content item context (e.g., album or playlist). Each favorite record of the favorite table614includes data associated with one or more fields of the favorite table614, such as an ID field (e.g., for identifying the favorite record) and a context field (e.g., for identifying a media content item context associated with the favorite record). The context data structure604is a data structure (e.g., record of a table or other data structure) that contains data associated with a media content item context (e.g., album or playlist). The context data structure604can reference and be referenced by one or more tables or other data structures. The context data structure604stores data regarding a particular media content item context in one or more fields, such as an ID field (e.g., for identifying the context data structure604), a title field (e.g., a string naming the context data structure604), a type field (e.g., for describing the type of the media content item context, such as a playlist, album, or television season), and media content item field (e.g., for identifying one or more media content items of the context data structure604) The media content item data structure606is a data structure (e.g., record of a table or other data structure) that contains data associated with a media content item. The media content item data structure606can reference and be referenced by one or more tables or other data structures. The media content item data structure606stores data regarding a particular media content item in one or more fields, such as an ID field (e.g., storing an identifier of the media content item data structure606), a title field (e.g., storing a title of the media content item data structure606, such as a song title), a content field (e.g., storing the content of the media content item or a link to the content of the media content item data structure606, such as the audio content of a song), and an audio fingerprint field. In an example, the audio fingerprint field stores an audio fingerprint of the content of the media content item data structure606. The playback state data structure616is a data structure (e.g., a record of a table or other data structure) that contains data associated with a state of a device (e.g., a state associated with a device record of the device data table610). The playback state data structure616can reference and be referenced by one or more tables or other data structures. 
The playback state data structure616stores data regarding a particular playback state in one or more fields, such as a current context field (e.g., describing a current context from which a device is playing, such as by containing an identifier of the context), a current MCI (Media Content Item) (e.g., describing a current media content item that is playing, such as by containing an identifier of the media content item), a playback mode field (e.g., describing a playback mode of the device, such as shuffle or repeat), a playback speed field (e.g., describing a current playback speed), and a next MCI field (e.g., describing the next media content item to be played). Various operations and processes described herein can be performed by the cooperation of two or more devices, systems, processes, or combinations thereof. While various examples of the present invention have been described above, it should be understood that they have been presented by way of example, and not limitation. It will be apparent to persons skilled in the relevant art(s) that various changes in form and detail can be made therein. Thus, the present invention should not be limited by any of the above described example embodiments, but should be defined only in accordance with the following claims and their equivalents. Further, the Abstract is not intended to be limiting as to the scope of the example embodiments presented herein in any way. It is also to be understood that the procedures recited in the claims need not be performed in the order presented. | 77,835 |
11943218 | DETAILED DESCRIPTION OF EMBODIMENTS A description of embodiments of the present innovation will now be given with reference to the Figures. It is expected that the present innovation may be embodied in other specific forms without departing from its spirit or essential characteristics. The described embodiments are to be considered in all respects only as illustrative and not restrictive. Referring toFIG.1, an automated computer software operating system100in one embodiment is disclosed. In one embodiment, the system100is configured to automatically create an individual profile or multiple profiles based on the presence and interaction of multi-factor biometric sensors of a user device/hardware device or multi-factor biometric authentication system114that could be a combination of hardware and software in nature that further works with onboard storage, memory and networking modules among other features to enable all the current features as described above to be executed. In one embodiment, the onboard biometric sensors have the ability to test a blood sample, detect a fingerprint mark, a contactless finger print method of detection, a facial detection, an iris detection, and even through a password method of creating a user profile. In one embodiment, the system100is configured to automatically generate user profiles for individuals. In one embodiment, the system100comprises a computing device102having a processor106and a memory104having a software module executed by the processor106, wherein the software module is at least one of a plugin component and/or a browser extension. In one embodiment, the processor106is in communication with a server108via a network110and configured to execute a set of instructions stored in the memory104. In one embodiment, a database112in communication with the server108is configured to store data related user's profile, wherein the database112comprises one or more program modules, which are executed by the processor106to perform multiple operations include, but not limited to, creating a plurality of user profiles or personal accounts through the use of onboard biometric sensors on a hardware device or user device or a multi-factor biometric authentication system114, generating passwords automatically associated with the plurality of user profiles or personal accounts when the user profiles are successfully completed by a multi-factor biometric authentication system114on board the user device or hardware device, thereby enabling the user to login into log in to the system100to access the user profile or personal account using the user device. In one embodiment, the user device is at least any one of, but not limited to, a computer, a laptop, a smartphone, a personal digital assistant (PDA), a tablet, a credit card terminal, a point of sale terminal (POS), an entertainment device or TV, a medical terminal, and a travel terminal. The network110is at least any one of, but not limited to, Wi-Fi, Bluetooth®, wireless local area network (WLAN)/Internet connection, and radio communication. In one embodiment, the system100is further configured to upload and download data and information related to an auto created user profile from the server unless user manually stops. In one embodiment, the created user profile includes, but not limited to, images of the user, voice, touch-based biometric, an iris recognition (using the rear or front-facing camera), finger prints, a retinal scan, DNA sample, a palm print, a hand geometry, odor/scent, and gait. 
In one embodiment, the computing device102is at least any one of a computer, a laptop, a smartphone, a personal digital assistant (PDA), and a tablet. In one embodiment, the system100could allow the users to change the auto generated passwords associated with the user profile or personal account. In one embodiment, the system100could enable the user to automatically log in or log out based on presence or absence to the associated user profile or personal account on that user device as needed from time to time. In one embodiment, the system100is further configured to detect and store the user device's information in association with or along with the auto created user profile. In one embodiment, the system100is further configured to automatically categorize the generated information or data associated with the created user profile on the computing device102. In one embodiment, the system100is further configured to automatically store the user generated information or data associated with the created user profiles from multiple devices through the use of networking technologies, memory, and server and/or storage modules of the computing device remotely. In one embodiment, the system100is further configured to automatically download any user profile from the server108to any other computing device for further use on that local device. In one embodiment, the system100is further configured to continuously upload and download to and from the server108in an infinite loop unless a user manually stops such auto created information related to the user, user device, and the data created by the user. In one embodiment, the system100is further configured to enable the user to check the user device and its associated data or information stored on the server108and/or the computing device on any local device102, thereby using the stored data or information for further use by the user. Referring toFIG.2, a block diagram of the system100configured to automatically detect the features and configuration of the local device in one embodiment is disclosed. In one embodiment, the system100is configured to be able to automatically detect the features and configuration of that local device and install itself on it as required for further use116and save that same information along with the auto created user profile onboard that local device. The local device information is captured by the automated operating system (AOS) and stored with the individual profiles. Referring toFIG.3, a block diagram of the system100configured to automatically categorize the generated information or data associated with the created user profile on the computing device102in one embodiment is disclosed. In one embodiment, the system100is able to enable the user to create or generate new data based on the features of the local device and this information is automatically categorized by its type in various categories and saved under the same auto created user profile of this present invention. In an exemplary embodiment, at one step118, the onboard biometric sensors on the hardware device or multi-factor biometric authentication system114could be able to test and check a blood drop. At other steps120and122, the AOS checks the blood and displays the results on the screen (LCD) of the device and also creates the user profile based on the blood type and test. Further, at another step124, the AOS stores the created user profile with the blood test locally and remotely with the device information. 
Further at another step126, every time a blood test is made or other tests are done with health related or other features, all results are automatically grouped into the same user profile and stored and retrieved from the server108or a remote server by a local device's multi-factor biometric authentication system114of the hardware device, for example, a computer or a laptop. Referring toFIG.4, the block diagram of the system100configured to automatically transmit the data created on the local device to the server or remote data storage server108in one embodiment is disclosed. In one embodiment, the system100is configured to automatically transmit the data created on the local device to the server or remote data storage server (cloud)108via the network, for example, a wireless communication or a wired communication and with such feature on the local device itself and store it automatically under the uniquely created user profile on the local device but also on the remote computer or server108. Referring toFIGS.5A-5G, a block diagram of the system100configured to automatically save unlimited amounts of devices' information and their associated user generated information under the same unique user profiles that were created automatically on the local device through the aid of multifactor biometric sensors in one embodiment is disclosed. In one embodiment, the system100could automatically save unlimited amounts of devices' information and their associated user generated information under the same unique user profiles that were created automatically on the local device through the aid of multifactor biometric sensors. In an exemplary embodiment, the system100could use blood samples for automatically creating the user profile. In an exemplary embodiment, the blood test results could display on the screen (LCD) of the device128, for example, a blood test apparatus. The system100further creates the user profile based on the blood type and test. In yet another exemplary embodiment, the system100could use a car radio interface132for checking face or finger prints or by password for automatically creating the user profile. In one embodiment, at step130, the AOS could check face or finger prints or by password through the car radio interface132and check if the user account exists on the server108or a remote server (cloud). If the user account exists then it downloads the appropriate apps and features that are appropriate to the car radio interface for further use. If the user account does not exist then the AOS creates a local user account and uploads to the server108or remote server and also uploads the user profile and device information based on the multi-factor biometric authentication system114. In an exemplary embodiment, the features and apps could be displayed on the screen of the car radio interface132, for example, a blood test apparatus. In yet another exemplary embodiment, the system100can check face or a finger print or by a password entry through a smartphone's interface136for automatically creating a user profile. In one embodiment, at step134, the AOS can check face or finger print or by password through the smartphone's interface136and checks if a user account exists locally or on the server or a remote server108. If the account already exists then it downloads appropriate apps and features that are appropriate to the smartphone136for further use. 
It also continuously backs up newly created data on the smartphone136automatically for future download and usage as needed on the same device or another smartphone. If the user account does not exist then the AOS creates a local user account and uploads to the server or a remote server108and also uploads the user profile and device information based on the multi-factor biometric authentication system automatically114. In an exemplary embodiment, the features and apps could be displayed on the screen of the smartphone136. In yet another embodiment, the system100could check face or finger print or by password through a computer's interface140for automatically creating the user profile. In one embodiment, at step138, the AOS could check face or finger print or by password through a computer's interface140and checks if a user account exists locally or on the server or a remote server108. If the user account already exists then it downloads appropriate apps and features that are appropriate to the computer140for further use. It also continuously backs up newly created data on the computer140automatically for future download and usage as needed on the same device or another device or computer. If the user account does not exist, the AOS creates a local user account and uploads to the server or a remote server108and also uploads the user profile and device information based on the multi-factor biometric authentication system114. In an exemplary embodiment, the features and apps could be displayed on the screen of the computer or device140. In yet another embodiment, the system100could check face or finger print or by password through a credit card terminal's interface144for automatically creating the user profile. In one embodiment, at step138, the AOS could check face or finger print or by password through the credit card terminal's interface144and checks if a user account exists locally or on the server or a remote server108. If it is there, it downloads the appropriate credit card accounts that are appropriate to the terminal for further use by the user. In yet another embodiment, the system100could check face or finger print or by password through a television (TV) or an entertainment system's interface148for automatically creating the user profile. In one embodiment, at step146, the AOS can check face or finger print or by password through the television (TV) or entertainment system's interface148and checks if a user profile or account exists locally or on the server or a remote server108. If it is there, it downloads the appropriate accounts that are appropriate to the TV or Entertainment system for further use by the user. If the user profile or account does not exist then the AOS will ask to create a user profile based on the multi-factor biometric log in system114and create and backup it up remotely along with any newly created user data of the TV/Entertainment system. Referring toFIG.6, the system100is configured to automatically download auto-created and remotely stored user profile(s) and its associated device information and associated user generated or created data or information in one embodiment is disclosed. 
In one embodiment, the system100is configured to automatically download auto-created and remotely stored user profile(s) and its associated device information and associated user generated or created data or information from a remote computer server108to any other local device156to that local device through the means of onboard modules that consist of networking features, memory and storage modules for further use by any user. This process of biometric based automatic downloading and uploading of user profiles and associated device data and user created data always continues in an infinite loop unless manually stopped by the user with a feature to enable such. At step154, all devices and their associated profiles and user generated information can be viewed on any capable device156to be able to monitor as needed. Referring toFIG.7, the system100is configured to automatically download the auto-created and remotely stored user profile(s) and its associated device information and associated user generated or created data or information through the means of onboard modules in one embodiment is disclosed. In one embodiment, the system100is configured to automatically download the created and remotely stored user profile(s) and its associated device information and associated user generated or created data or information from a remote computer server108to any another local device156to that local device through the means of onboard modules that consist of networking features, memory and storage modules for further use by any user. At step158, all devices and their associated profiles and user generated information is constantly updated, uploaded and downloaded automatically in an infinite loop unless stopped by the user. Preferred embodiments of this innovation are described herein, including the best mode known to the inventors for carrying out the innovation. It should be understood that the illustrated embodiments are exemplary only and should not be taken as limiting the scope of the innovation. The foregoing descriptions comprise illustrative embodiments of the present innovation. Having thus described exemplary embodiments of the present innovation, it should be noted by those skilled in the art that the disclosures are exemplary only, and that various other alternatives, adaptations, and modifications may be made within the scope of the present innovation. Merely listing or numbering the steps of a method in a certain order does not constitute any limitation on the order of the steps of that method. Many modifications and other embodiments of the innovation will come to mind to one skilled in the art to which this innovation pertains having the benefit of the teachings in the foregoing descriptions. Although specific terms may be employed herein, they are used only in a generic and descriptive sense and not for purposes of any limitations. Accordingly, the present innovation is not limited to the specific embodiments illustrated herein. | 16,636 |
11943219 | DETAILED DESCRIPTION Reference will now be made to the illustrative embodiments illustrated in the drawings, and specific language will be used here to describe the same. It will nevertheless be understood that no limitation of the scope of the claims or this disclosure is thereby intended. Alterations and further modifications of the inventive features illustrated herein, and additional applications of the principles of the subject matter illustrated herein, which would occur to one skilled in the relevant art and having possession of this disclosure, are to be considered within the scope of the subject matter disclosed herein. The present disclosure is described in detail with reference to embodiments illustrated in the drawings, which form a part here. Other embodiments may be used and/or other changes may be made without departing from the spirit or scope of the present disclosure. The illustrative embodiments described in the detailed description are not meant to be limiting of the subject matter presented here. FIG.1shows components of a distributed data processing and display system100. The system100may include user devices102, system servers104, and system databases106. The user devices102, the system servers104, and the system databases106are connected to each other through a network108. The examples of the network108may include, but are not limited to, private or public LAN, WLAN, MAN, WAN, and Internet. The network108may include both wired and wireless communications according to one or more standards and/or via one or more transport mediums. The communication over the network108between the user devices102, the system servers104, and the system databases106may be performed in accordance with various communication protocols such as Transmission Control Protocol and Internet Protocol (TCP/IP), User Datagram Protocol (UDP), and IEEE communication protocols. In one example, the network108may include wireless communications according to Bluetooth specification sets, or another standard or proprietary wireless communication protocol. In another example, the network108may also include communications over a cellular network, including, e.g., a GSM (Global System for Mobile Communications), CDMA (Code Division Multiple Access), and EDGE (Enhanced Data for Global Evolution) network. User devices102may be any computing and/or telecommunications device comprising a processor and capable of performing the various tasks and processes described herein, such as accessing a webserver and providing a GUI interface to a user to interact with a website and sensitive data hosted on the webserver. Non-limiting examples of the user device102may include a user computer (e.g., desktop, laptop, server, tablet), a telephone (e.g., smartphone), or any other telecommunications or computing device used to interact with various web services. For ease of explanation,FIG.1shows a single computer device functioning as the user device102. However, it should be appreciated that some embodiments may comprise any number of computing devices capable of performing the various tasks described herein. The user device102may be any computer allowing a user110to interact with a system server104via the webserver to access sensitive data. The user device102may execute an Internet browser or a local software browser application that access the webserver in order to issue requests or instructions to the system server104to access various components of the system100. 
The user device102may transmit credentials from inputs (user identification and/or authorization data) of the user110to the webserver, from which the webserver may authenticate the user110. One having skill in the art would appreciate that the user device102may comprise any number of input devices configured to receive any number of data inputs (e.g., mouse, keyboard, touchscreen, stylus), including various types of data inputs allowing for authentication, e.g., username, passwords, certificates, biometrics. One having skill in the art would also appreciate that the user device102may be any personal computer (PC) comprising a processor and non-transitory machine-readable storage medium allowing the user device102to perform the various tasks and processes described herein. The user device102may include one or more transmitter devices (transmitters) and one or more receiver devices (receivers). The transmitter may transmit or broadcast signals to the receiver. The transmitter and the receiver may be permanently integrated into the user device102, or the transmitter and the receiver may be detachably coupled to the user device102, which, in some cases, may result in a single integrated product or unit. As an example, the user device102may be placed into a protective sleeve comprising embedded transmitter and receiver that are detachably coupled to the user device102power supply input. Non-limiting examples of the integrated user device102may include laptops, tablets, among other types of the user device102. The user device102may further include embedded or associated cameras, sensors112(such as proximity sensors, image sensors, motion sensors, thermal sensors, and ambient light sensors), accelerometers, compasses, and/or gyroscopes, which may act as a data source for the transmitter to supplement data, as generated by various electronic devices physically associated with the transmitter. A transmitter may include or be associated with a processor, a communications component, and a sensor device/sensor112. The processor may control, manage, and otherwise govern the various processes, functions, and components of the transmitter. The processor may be configured to process and communicate various types of data (e.g., sensor and camera data). Additionally or alternatively, the processor of the transmitter may manage execution of various processes and functions of the transmitter, and may manage the components of the transmitter. For example, the processor may determine an interval at which a signal (such as Bluetooth or Infrared) may be broadcast by the communications component, to identify receivers (such as Bluetooth receiver) of a wearable device200(as shown in theFIG.2). In some cases, a single transmitter may comprise a single processor. However, it should be appreciated that, in some cases, a single processor may control and govern multiple transmitters. For example, the transmitters may be coupled to a system server104comprising a processor that executes software modules instructing the processor of the system server104to function as a transmitter processor capable of controlling the behavior of the various transmitters. Additionally or alternatively, a single transmitter may comprise multiple processors configured to execute or control specified aspects of the transmitter's behavior and components. 
For example, the transmitter may comprise a transmitter processor and a sensor processor, where the sensor processor is configured to manage a sensor and a camera, and generate sensor data and camera data, and where the transmitter processor is configured to manage the remaining functions of the transmitter. A communications component of a transmitter may effectuate wired and/or wireless communications to and from receivers of a wearable device200(as shown in theFIG.2). In some cases, the communications component may be an embedded component of the transmitter; and, in some cases, the communications component may be attached to the transmitter through any wired or wireless communications medium. In some embodiments, the communications component may be shared among a plurality of transmitters, such that each of the transmitters coupled to the communications component may use the data received within a communications signal, by the communications component. The communications component may comprise electromechanical components (e.g., processor) that allow the communications component to communicate various types of data with one or more receivers of a wearable device200(as shown in theFIG.2), transmitters of the wearable device200, and/or other components of the transmitter via communications signals. In some implementations, these communications signals may represent a distinct channel for hosting communications, independent from the sensor wave communication. The data may be communicated using the communications signals, based on predetermined wired or wireless protocols and associated hardware and software technology. The communications component of the transmitter may operate based on any number of communication protocols, such as Bluetooth®, Wireless Fidelity (Wi-Fi), Near-Field Communications (NFC), ZigBee, and others. However, it should be appreciated that the communications component of the transmitter is not limited to radio-frequency based technologies, but may include radar, infrared, and sound devices for sonic triangulation of any receiver. Using a communications signal, the transmitter may communicate data that may be used, e.g., to identify receivers of a wearable device200(as shown in theFIG.2), determine whether users110are authorized to access sensitive data, determine whether the user110wearing the wearable device200is authorized to access sensitive data, among other possible functions. Similarly, a communications component of a receiver of the wearable device200may use a communications signal to communicate data that may be used to, e.g., alert transmitters of a user device102that the receiver has entered or is about to enter a communication/transmission field of the transmitter. As an example, the communications component of the transmitter may communicate (i.e., send and receive) different types of data (e.g., authentication and identification data) containing various types of information. Non-limiting examples of the information may include a transmitter identifier (TX ID), a user device identifier (device ID) for the user device102, a user identifier (user ID) of the user110, the receiver's location in the communication field, the user device102location in the communication field, and other such information. A sensor (such as an imaging sensor or a camera)112may be physically associated with a transmitter and/or a user device102(i.e., connected to, or a component of). 
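By way of a non-limiting illustration of the identification data exchanged over the communications signal described above (the TX ID, device ID, user ID, and location information), the sketch below shows one hypothetical way such a payload could be organized and serialized. The field names and the JSON-over-short-range-link encoding are assumptions introduced only for this example and are not prescribed by this disclosure.

```python
from dataclasses import dataclass, asdict
import json

@dataclass
class IdentificationMessage:
    """Hypothetical payload a transmitter and receiver might exchange."""
    tx_id: str                          # transmitter identifier (TX ID)
    device_id: str                      # user device identifier (device ID)
    user_id: str                        # user identifier (user ID)
    receiver_location: tuple            # receiver's location in the communication field
    device_location: tuple              # user device location in the communication field

    def to_signal_payload(self) -> bytes:
        # Serialize to a compact byte payload suitable for a short-range link.
        return json.dumps(asdict(self)).encode("utf-8")

# Illustrative values only.
message = IdentificationMessage("TX-01", "DEV-1024", "USER-110", (1.5, 2.0), (1.6, 2.1))
payload = message.to_signal_payload()
```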
The sensor112may be configured to detect and identify various conditions of the system100and/or communication field, and a location and position of a user110with respect to a user device102. Sensor112is configured to generate sensor data (such as digital images), which may then be used by the user device102to determine various modes of operation. As detailed herein, the sensors112may transmit the sensor data collected during the sensor operations for subsequent processing by a transmitter processor of the transmitter and/or a processor of the user device102. Additionally or alternatively, one or more sensor processors may be connected to or housed within the sensor112. The sensor processors may comprise a microprocessor that executes various primary data processing routines, whereby the sensor data received at the transmitter processor or processor of the user device102has been partially or completely pre-processed as useable data for scrambling or unscrambling a screen and/or content displayed on the screen of the user device102. Hereinafter, the terms “scrambling” and “encrypting” may be used interchangeably. Also, the terms “unscrambling” and “decrypting” may be used interchangeably. In some configurations, the sensor device112may be a part of (e.g., communicatively coupled with) the user device102. For instance, the sensor device112may be an internal camera device installed and executing on the user device102such as a laptop device. The system server104may identify that the user device102comprises a camera and activate the camera in order to receive sensory data from the sensor device112. A user device102or a system server104may generate instructions or execute a scrambling algorithm/software program to scramble or unscramble content on the screen of the user device102. In some embodiments, the user device102or the system server104may generate the instructions or execute the scrambling algorithm/software program to scramble or unscramble the screen of the user device102. The execution and/or implementation of the scrambling algorithm/software program results in the image (containing text data) displayed on the screen having jumbled graphical components (e.g., text elements). For instance, scrambled text may comprise misplaced text characters (e.g., alphabet). In some embodiments, the execution and/or implementation of the scrambling algorithm/software program results in the image (containing text or visual data) displayed on the screen being divided into multiple segments. The segments may be squares, which can be tiled together to form the image. However, in some configurations, other types of segments can be formed out of other geometric shapes such as triangles and hexagons or any pre-determined shape not conforming to traditional geometric shapes. In some embodiments, the system server104may divide the display screen into a pre-determined number of segments of same or different sizes, for example, X×Y segments displaying unscrambled segmented text312(as shown in theFIG.3C). The system server104may then invert each of the screen segments, displaying scrambled segmented text314(as shown in theFIG.3D), where each segment of text314may be of a different size, so that when viewed by an unauthorized person, the content on the display screen is not readily identifiable. In some configurations, only when viewed through a wearable device200(as shown in theFIG.2) having a lens204of corresponding X×Y configuration will the images be seen in their original orientation.
When the lenses204of a wearable device200(as shown in theFIG.2) are placed together, the unscrambled image may then be formed on the side of the lens. In this manner, only the user110who wears the eyeglasses is able to view the unscrambled image on the screen of the user device102, and to all other individuals, the screen of the user device102appears to be a distorted compilation of individual texts (i.e., a scrambled screen). In some embodiments, execution and/or implementation of the scrambling algorithm/software program results in inversion of the multiple segments displaying scrambled segmented text314(as shown in theFIG.3D) on the screen of the user device102. In some embodiments, execution and/or implementation of the scrambling algorithm/software program results in scrambling of pixels316,318on the screen of the user device102(as shown in theFIG.3EandFIG.3F). In some embodiments, the execution and/or implementation of the scrambling algorithm/software program results in making the multiple segments appear backwards, making the multiple segments appear smaller than their regular size, and rotating the multiple segments about a central point. Various other methods may be employed in which a screen and/or an image (containing text or visual data) on the screen of the user device102is distorted so that a specific lens of a wearable device200can correct the distortion and make the content displayed on the screen readable. The arrangement of the distorted multiple segments is such that the compilation of the individual distorted multiple segments is sufficiently different from the original content image and prevents unauthorized users from comprehending the content image on the screen of the user device102. In some embodiments, a sensor112associated with the user device102may transmit sensor data to the system server104via the user device102. Although described in the exemplary embodiment as raw sensor data, it is intended that the sensor data is not limited to raw sensor data and can include data that is processed by a processor associated with the sensor112, processed by a processor associated with the user device102, processed by a processor associated with the system server104, or any other processor. The sensor data can include information derived from the sensor112of the user device102, and processed sensor data can include determinations based upon the sensor data. For example, a gyroscope of a receiver of a wearable device200(as shown in theFIG.2) may provide data such as an orientation in the X, Y, and Z planes, and processed data from the gyroscope may include a determination of the location of the receiver based upon the orientation of the receiver. In another example, data from an infrared sensor of the transmitter may provide thermal imaging information, and processed data may include an identification of the user110based upon the thermal imaging information. As used herein, any reference to the sensor data or the raw sensor data can include data processed at the sensor112, the imaging device, or another device. In some implementations, a gyroscope and/or an accelerometer of the receiver of the wearable device200, or of the user device102associated with the receiver, may provide sensor data indicating the orientation of the wearable device200or the user device102with respect to the user110, which the user device102or the system server104may use to determine whether to scramble or unscramble the screen and/or content on the screen of the user device102.
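By way of a non-limiting illustration of the segment-based scrambling described with reference to FIG.3C and FIG.3D, the sketch below divides an image into an X×Y grid of tiles and inverts each tile, so that only an optical or software element applying the corresponding inverse transform restores the original orientation. This is a minimal sketch using NumPy, with the grid size as an assumed parameter; it is not the actual algorithm executed by the user device102or the system server104.

```python
import numpy as np

def scramble_segments(image: np.ndarray, x_segments: int, y_segments: int) -> np.ndarray:
    """Divide an image (H x W [x C] array) into x_segments * y_segments tiles
    and invert (rotate by 180 degrees) each tile in place of the original."""
    h, w = image.shape[:2]
    seg_h, seg_w = h // y_segments, w // x_segments
    out = image.copy()
    for row in range(y_segments):
        for col in range(x_segments):
            r0, c0 = row * seg_h, col * seg_w
            tile = image[r0:r0 + seg_h, c0:c0 + seg_w]
            out[r0:r0 + seg_h, c0:c0 + seg_w] = tile[::-1, ::-1]
    return out

def unscramble_segments(image: np.ndarray, x_segments: int, y_segments: int) -> np.ndarray:
    # Applying the same per-tile inversion a second time restores the original image.
    return scramble_segments(image, x_segments, y_segments)
```

Because each per-tile inversion is its own inverse, the same routine doubles as the unscrambling step, which loosely mirrors how a correspondingly configured lens could undo the distortion.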
A user device102or a system server104may make a determination as to whether to scramble or unscramble a screen and/or an image (containing text or video data) on the screen of the user device102based on sensor and/or camera data obtained from the sensor (such as an imaging sensor (camera) or other sensor)112directly or indirectly associated with the user device102and/or the system server104. If the user device102and/or the system server104chooses to unscramble the screen and/or the image on the screen of the user device102based on the sensor and/or camera data, then each screen and/or image segment is returned to its original orientation and the unscrambled screen and/or image will be visible on the screen of the user device102. If the user device102and/or the system server104unscrambles the screen and/or the image based on the sensor data (or after the screen and/or the image segments have been returned to their original orientation), a determination is made by the user device102and/or the system server104as to whether the execution of the scrambling algorithm/software program is to be stopped. In some configurations, if the user device102and/or the system server104terminates the execution of the scrambling algorithm/software program, then a determination is made by the user device102and/or the system server104on whether to change the configuration of the screen and/or image segments. In some configurations, the user110may be able to terminate the scrambling of the display screen. In some cases, a receiver may be embedded in or attached to a wearable device200(as shown in theFIG.2) comprising a gyroscope and/or an accelerometer that generates data indicating an orientation of the wearable device200. The receiver may transmit the data to a processor of a user device102via communications signals or waveforms. In such implementations, the processor may not scramble a screen and/or an image on the screen of the user device102until the processor receives, via communication waves, the data produced by the gyroscope and/or accelerometer, indicating that the receiver or the wearable device200is in motion or has an orientation suggesting that the wearable device200is in use. As an example, a receiver may be attached to or embedded within eyeglasses, which may include a gyroscope and an accelerometer. In this example, while the eyeglasses are being utilized by the user110, a processor of the user device102and/or the system server104may present unscrambled content on the screen of the user device102. But when the user110lifts the eyeglasses from his or her face, the accelerometer generates data indicating that the eyeglasses are in motion and the gyroscope generates data indicating that the eyeglasses have a planar orientation indicating that the eyeglasses are not against the user110's face. The processor of the user device102and/or the system server104may then determine from the data produced by the gyroscope and accelerometer that the eyeglasses are not against the user110's face, and thus the processor of the user device102and/or the system server104scrambles the screen and/or the content on the screen of the user device102. The processor of the user device102and/or the system server104may make this determination according to any number of preset threshold values regarding data produced by gyroscopes and/or accelerometers.
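As a simplified, non-limiting illustration of the threshold-based determination described above, the helper below compares assumed gyroscope and accelerometer readings against preset values to decide whether the eyeglasses appear to be in use. The field names, units, and threshold values are hypothetical and chosen only for this sketch.

```python
from dataclasses import dataclass

@dataclass
class WearableMotionSample:
    tilt_from_face_deg: float   # assumed gyroscope-derived angle; near 0 when worn against the face
    acceleration_g: float       # assumed accelerometer magnitude; near 1.0 g when at rest

# Illustrative preset thresholds (not values taken from this disclosure).
TILT_THRESHOLD_DEG = 30.0
MOTION_THRESHOLD_G = 1.3

def should_scramble(sample: WearableMotionSample) -> bool:
    """Scramble when the readings suggest the eyeglasses are lifted from the face or in motion."""
    lifted = sample.tilt_from_face_deg > TILT_THRESHOLD_DEG
    moving = sample.acceleration_g > MOTION_THRESHOLD_G
    return lifted or moving

# Example: eyeglasses lifted from the face, so the screen would be scrambled.
print(should_scramble(WearableMotionSample(tilt_from_face_deg=55.0, acceleration_g=1.6)))
```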
A sensor112directly or indirectly associated with a user device102and/or a system server104may be a device configured to emit sensor waves, which may be any type of wave that may be used to identify a user110in a transmission field of the sensor112. Non-limiting examples of the sensor technologies for the sensors112may include: infrared/pyro-electric, ultrasonic, laser, optical, Doppler, accelerometer, microwave, millimeter, face recognition, head movement, motion, imaging, and radio frequency standing-wave sensors. Other sensor technologies that may be well-suited to secondary and/or proximity-detection sensors may include resonant LC sensors, capacitive sensors, and inductive sensors. Based upon the particular type of the sensor waves used and the particular protocols associated with the sensor waves, the sensor112may generate sensor data. In some cases, the sensor112may include a sensor processor that may receive, interpret, and process sensor data, which the sensor112may then provide to a processor of the user device102and/or the system server104. A sensor112directly or indirectly associated with a user device102and/or a system server104may be a passive sensor, an active sensor, and/or a smart sensor. Passive sensors, such as tuned LC sensors (resonant, capacitive, or inductive) are a type of sensor112and may provide minimal but efficient object discrimination. The passive sensors may be used as secondary (remote) sensors that may be dispersed into a communication field and may be part of a receiver or otherwise independently capture raw sensor data that may be wirelessly communicated a sensor processor. Active sensors, such as infrared (IR) or pyro-electric sensors, may provide efficient and effective target discrimination and may have minimal processing associated with the sensor data produced by such active sensors. Smart sensors may be sensors having on-board digital signal processing (DSP) for primary sensor data (e.g., prior to processing by a processor of the user device102and/or the system server104). The processors are capable of fine, granular object (such as user110) discrimination and provide processors of the user device102and/or the system server104with pre-processed sensor data that is more efficiently handled by the processor when determining when to scramble and unscramble the screen and/or content on the screen of the user device102. A sensor112directly or indirectly associated with a user device102and/or a system server104may have a capability to operate and generate different types of sensor data, and may generate location-related information of a user110in various formats. Active and smart sensors may be categorized by sensor type, characteristic hardware and software requirements, and capabilities for distance calculation and motion detection of the user110. In some implementations, sensors112associated with a user device102may be configured for the user110recognition, and thus may discriminate the user110from other objects, such as furniture. Non-limiting examples of the sensor data processed by human recognition-enabled sensors may include: body temperature data, infrared range-finder data, motion data, activity recognition data, silhouette detection and recognition data, gesture data, heart rate data, portable devices data, and wearable device data (e.g., biometric readings and output, accelerometer data). 
In some embodiments, the sensors112associated with the user device102may be configured for a particular user110(for example, a first user) recognition, and thus may discriminate the first user from other users, such as a second user and a third user. The sensors112may recognize the first user based on one or more of body temperature data associated with the first user, infrared range-finder data associated with the first user, motion data associated with the first user, activity recognition data associated with the first user, silhouette detection and recognition data associated with the first user, gesture data associated with the first user, heart rate data associated with the first user, portable devices data associated with the first user, or wearable device data (e.g., biometric readings and output, accelerometer data) associated with the first user. In operation, sensors112directly or indirectly associated with a user device102and/or a system server104may detect whether objects, such as a user110(authorized or unauthorized user), enter a predetermined proximity (of a transmitter) of the user device102. In one configuration, the sensor112may then instruct a processor of the user device102and/or the system server104to execute various actions such as scrambling or unscrambling a screen and/or content on the screen of the user device102based upon the detected objects such as the user110(authorized or unauthorized user). In another configuration, the sensor112may transmit sensor data to the user device102and/or the system server104, and the user device102and/or the system server104may determine which actions to execute. For example, after the sensor112identifies that the user110has entered a pre-defined communication field (for example, a Bluetooth or NFC field) of the user device102, and the user device102and/or the system server104determines that the user110is within the predetermined proximity (for example, a predetermined distance of 5 to 10 meters) of the user device102based on the sensor data, the sensor112could provide the relevant sensor data to the user device102and/or the system server104, causing the user device102and/or the system server104to scramble or unscramble screen and/or content on the screen of the user device102. As another example, after identifying the user110entering the field and then determining that the user110has come within the predetermined proximity of the user device102based on the sensor data, the sensor112may provide the sensor data to the user device102and/or the system server104that causes the user device102and/or the system server104to scramble or unscramble screen and/or content on the screen of the user device102. In another example, the system100may comprise an alarm device (not shown), which may produce a warning, and/or may generate and transmit a digital message to the system server104and/or an administrative computing device (not shown) configured to administer operations of the system100. In this example, after the sensor112detects the user110entering the predetermined proximity of the user device102, or otherwise detects other unsafe or prohibited conditions of the system100, the sensor data may be generated and transmitted to a processor of the alarm device, which may activate the warning, and/or generate and transmit a notification to the system server104or the administrator device. A warning produced by the alarm device may comprise any type of sensory feedback, such as audio feedback, visual feedback, haptic feedback, or some combination. 
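One simplified way to express the proximity rule in the examples above is a helper that checks whether a detected user falls within the predetermined range (such as the 5 to 10 meter example) and returns the resulting action, optionally invoking an alarm callback for the warning path. The function name, range values, and callback are assumptions made for illustration.

```python
from typing import Callable, Optional

def on_user_detected(distance_m: float,
                     is_authorized: bool,
                     min_range_m: float = 5.0,
                     max_range_m: float = 10.0,
                     alarm: Optional[Callable[[str], None]] = None) -> str:
    """Return 'unscramble', 'scramble', or 'ignore' for a user detected at distance_m."""
    if not (min_range_m <= distance_m <= max_range_m):
        return "ignore"                                   # outside the predetermined proximity
    if is_authorized:
        return "unscramble"                               # authorized user nearby
    if alarm is not None:
        alarm("unauthorized user within proximity")       # optional warning path
    return "scramble"                                     # unauthorized user nearby

# Example: an unauthorized user detected 7 meters away triggers scrambling and a warning.
print(on_user_detected(7.0, is_authorized=False, alarm=lambda msg: print("ALERT:", msg)))
```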
In some embodiments, such as the exemplary system100, a sensor112may be a component of a user device102, housed within the user device102. In some embodiments, a sensor112may be external to the user device102and may communicate, over a wired or wireless connection, sensor data to one or more processors of the user device102. A sensor112, which may be external to the user device102or part of a single user device102, may provide sensor data to the one or more processors, and the processors may then use the sensor data to scramble or unscramble the screen and/or content on the screen of the user device102. Similarly, in some embodiments, multiple sensors112may share sensor data with multiple processors. In such embodiments, sensors112or the user device102may send and receive sensor data to and from other sensors (for example, sensors associated with a wearable device) in the system100. Additionally or alternatively, the sensors112and/or the user device102may transmit or retrieve sensor data to or from one or more memories. As an example, as seen inFIG.3AandFIG.3B, a first user device302may include a first sensor (not shown) that emits sensor waves and generates sensor data, which may be stored on the first user device302and/or a mapping memory. In this example, the user device302may comprise processors that may receive sensor data (such as captured images) from the sensors (such as cameras), and/or fetch stored sensor data from particular storage locations; thus, the sensor data produced by the respective sensor may be shared among the respective user device302. The processors of the user device302may then use the sensor data to scramble or unscramble the screen and/or content (text or visual data) on the screen of the user device302when a sensitive object (such as a user308) is detected. For instance, a processor of the user device302may display unscrambled content304on the screen of the user device302when, based on the processed and analyzed sensor data, the sensitive object, e.g., user308, is detected to be viewing the screen; otherwise, the processor of the user device302may display scrambled content306on the screen of the user device302. In some configurations, the system server104may scramble an otherwise unscrambled display of data when the sensitive object (such as the user308) is detected to be located away from the screen based on the processed and analyzed sensor data. For instance, when the user308walks away from his or her computing device, the system server104may scramble the display of data. Referring back toFIG.1, a user device102may also include, or otherwise be associated with, multiple sensors112from which the user device102may receive sensor data. As an example, the user device102may include a first sensor located at a first position of the user device102and a second sensor located at a second position on the user device102. In such an embodiment, the sensors112may be imaging or binary sensors that may acquire stereoscopic sensor data, such as the location of the user110relative to the first and the second sensors. In some embodiments, such binary or stereoscopic sensors may be configured to provide three-dimensional imaging capabilities, which may be transmitted to the user device102, an administrator's workstation, and/or a system server104. In addition, binary and stereoscopic sensors may improve the accuracy of location detection and displacement tracking for a receiver of a wearable device or the user110, which is useful, for example, in motion recognition and tracking.
In some implementations, a sensor112of a user device102may detect a user110within a sensor field of operation (for example, a range within which the sensor112may operate) that have been predetermined or tagged. In some cases, it may be desirable to avoid particular obstacles in the field, such as furniture or walls, regardless of whether a sensor112has identified a user110, entering within proximity to a particular obstacle. As such, an internal or external mapping memory may store mapping data and/or sensor112identifying the particular location of the particular obstacle, thereby effectively tagging the location of the particular location as being off-limits. Additionally or alternatively, the particular user110may be digitally or physically associated with a digital or physical tag that produces a signal or physical manifestation detectable by the sensor112, communications components, or other component of the user device102. For example, as part of generating sensor data for the user device102, the sensor112may access an internal mapping memory (i.e., internal to the user device102housing the sensor) that stores records of tagged obstacles to avoid, such as a table. Additionally or alternatively, in some implementations, a sensor112may detect a user110who has been tagged (i.e., previously recorded in an internal mapping memory or external mapping memory or received a digital or physical tag detectable by the sensors112). Under these circumstances, after detecting a tag or tagged user110, or otherwise determining that a tag or tagged user110is within a field, the sensor112may generate sensor data that causes the user device102to switch from scrambled screen (scrambled content on the screen) to unscramble screen (unscrambled content on the screen) or vice-versa. User device102may include an antenna array, which may be a set of one or more antennas configured to transmit and receive one or more signals (for example, identification data signals) from a receiver (in a wearable device). In some embodiments, an antenna array may include antenna elements, which may be configurable tiles comprising an antenna, and zero or more integrated circuits controlling the behavior of the antenna in that element, such as having predetermined characteristics (e.g., amplitude, frequency, trajectory, phase). An antenna of the antenna array may transmit a series of signals having the predetermined characteristics, such that the series of signals arrive at a given location within a field, and exhibit those characteristics. In some embodiments, a user device102may include receivers (along with transmitters), which may be an electrical device coupled to or integrated with the user device102. A receiver may comprise one or more antennas that may receive communication signals from (a transmitter of) a wearable device200(as shown inFIG.2). The receiver may receive the communication signals produced by and transmitted directly from the transmitter. The receiver directly or indirectly associated with the user device102may include a receiver-side communications component, which may communicate various types of data with a transmitter of a wearable device200(as shown inFIG.2) in real-time or near real-time, through a communications signal generated by the receiver's communications component. The data may include mapping data, such as device status data, status information for the receiver, status information for the user device102. 
In other words, the receiver may provide information to the transmitter regarding a current location information of the user device102and certain user identification information, among other types of information. As mentioned, in some implementations, a receiver may be integrated into a user device102, such that for all practical purposes, the receiver and the user device102would be understood to be a single unit or product, whereas in some embodiments, the receiver may be coupled to the user device102after production. It should be appreciated that the receiver may be configured to use the communications component of the user device102and/or comprise a communications component of its own. As an example, the receiver might be an attachable but distinct unit or product that may be connected to the user device102, to provide benefits to the user device102. In this example, the receiver may comprise its own communications component to communicate data with transmitters of a wearable device200(as shown inFIG.2). Additionally or alternatively, in some embodiments, the receiver may utilize or otherwise operate with the communications component of the user device102. For example, the receiver may be integrated into a laptop computer during manufacturing of the laptop or at some later time. In this example, the receiver may use the laptop's communication component (e.g., Bluetooth®-based communications component) to communicate data with transmitters of a wearable device200. A system server104may function as an interface for an administrator to set configuration settings or provide operational instructions to various components of a system100. The system server104may be any device comprising a communications component capable of wired or wireless communication with components of the system100and a microprocessor configured to transmit certain types of data to components of the system100. Non-limiting examples of the system server104may include a desktop computer, a server computer, a laptop computer, a tablet computer, and the like. For ease of explanation,FIG.1shows a single computer device functioning as the system server104. However, it should be appreciated that some embodiments may comprise any number of computing devices capable of performing the various tasks described herein. A system server104may be a device that may comprise a processor configured to execute various routines for tagging a receiver in a wearable device200(as shown inFIG.2) and a user device102, based upon a type of a technology employed. As mentioned herein, tagging receivers and other users110within a field may indicate to components of the system100that those components should or should not execute certain routines. As an example, the system server104may be a laser guidance device that transmits tagging data to a transmitter communication component of the user device102, sensor112of the user device102, mapping memory, or other device of the system100that is configured to receive and process the laser guidance-based tagging data. In this example, the tagging data may be generated whenever a user110interacts with an interface input, such as a push button on the wearable device200or graphical user interface (GUI) on the user device102, and a laser “tags” the desired user110. In some cases, the resulting tagging data is immediately transmitted to the transmitter or other device for storage into mapping data. In some cases, a sensor112having laser-sensitive technology may identify and detect the laser-based tag. 
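To picture how a detected tag might be resolved, the sketch below looks up a hypothetical tag record and maps it to an action, such as avoiding an off-limits obstacle or switching the scrambled state for a tagged user. The record layout, tag identifiers, and action labels are assumptions for illustration and do not reflect an actual mapping-memory format.

```python
# Hypothetical mapping-memory records: tag identifier -> description of the tagged object.
MAPPING_MEMORY = {
    "tag-table-01": {"kind": "obstacle", "label": "table", "off_limits": True},
    "tag-user-110": {"kind": "user", "label": "tagged user", "off_limits": False},
}

def action_for_tag(detected_tag_id: str) -> str:
    """Map a detected tag to a suggested action for the user device."""
    record = MAPPING_MEMORY.get(detected_tag_id)
    if record is None:
        return "unknown-object"
    if record["off_limits"]:
        return "avoid"                  # e.g., furniture or walls tagged as off-limits
    if record["kind"] == "user":
        return "toggle-scrambling"      # switch between scrambled and unscrambled presentation
    return "no-action"

print(action_for_tag("tag-user-110"))   # -> toggle-scrambling
```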
Although additional and alternative means of tagging objects such as users110and devices are described herein, one having ordinary skill in the art would appreciate that there are any number of guidance technologies that may be employed to tag a user110and generate or detect tagging data. A system server104may execute a software application associated with a system100, where the software application may include one or more software modules for generating and transmitting tagging data to various components of the system100. The tagging data may contain information useful for identifying the users110or current locations of the users110. That is, the tagging data may be used to instruct a sensor112that, when a particular sensory signature (e.g., infrared) is detected, the sensor112should generate certain sensor data, which would eventually inform the user device102whether to scramble or unscramble screen and/or content on the screen of the user device102. A system server104may be a server computer or other workstation computer that is directly or indirectly connected to a user device102. In such implementations, an administrator may provide tagging data directly to an external mapping memory117, which may be stored until needed by the user device102. AlthoughFIG.1shows the system server104as being a distinct device from the user device102, it should be appreciated that they may be the same devices and may function similarly. In other words, the user device102may function as the system server104; and/or the system server104may receive instructions through associated transmitters or receivers, embedded or coupled to the system server104. User device102may further be associated with one or more mapping-memories, which may be non-transitory machine-readable storage media configured to store mapping data, and which may be data describing aspects of fields associated with processors and sensors of the user device102. The mapping data may comprise processor data, camera data, location data, and sensor data. The sensor data may be generated by sensor processors to identify users110located in a field of a sensor112. Thus, sensor data stored in a mapping memory of the system100may include information indicating location of a receiver of a wearable device200(as shown inFIG.2), location of the users110, and other types of data, which can be used by a processor of the user device102and/or the system server104to scramble and unscramble screen and/or content on the screen of the user device102. The user device102and/or the system server104may query the mapping data stored in the records of a mapping memory, or the records may be pushed to the user device102and/or the system server104in real-time, so that the user device102and/or the system server104may use the mapping data as input parameters for determining whether to execute programs to scramble and unscramble screen and/or content on the screen of the user device102. In some implementations, the user device102and/or the system server104may update the mapping data of a mapping memory as new, up-to-date mapping data is received, from the processors governing the communications components or sensors112. A user device102may comprise non-transitory machine-readable storage media configured to host an internal mapping memory, which may store mapping data within the user device102. 
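Because the mapping data may be queried before deciding whether to scramble the screen, the query pattern could resemble the sketch below, which keeps hypothetical mapping records in an in-memory SQLite table and fetches the newest record per receiver for a given communication field. The table name, columns, and schema are illustrative assumptions only.

```python
import sqlite3

# Hypothetical schema for mapping data; a deployment would define its own.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE mapping_data (
    field_id TEXT, receiver_id TEXT, user_id TEXT,
    x REAL, y REAL, updated_at REAL)""")
conn.execute("INSERT INTO mapping_data VALUES ('field-A', 'rx-1', 'user-110', 1.2, 3.4, 1000.0)")
conn.execute("INSERT INTO mapping_data VALUES ('field-A', 'rx-1', 'user-110', 1.3, 3.5, 1010.0)")

def latest_positions(field_id: str):
    """Fetch the most recent mapping record per receiver in a communication field,
    as a device might do before deciding whether to scramble its screen."""
    cur = conn.execute(
        """SELECT receiver_id, user_id, x, y, MAX(updated_at)
           FROM mapping_data WHERE field_id = ? GROUP BY receiver_id""",
        (field_id,))
    return cur.fetchall()

print(latest_positions("field-A"))      # -> [('rx-1', 'user-110', 1.3, 3.5, 1010.0)]
```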
A processor of the user device102, such as a transmitter processor or a sensor processor, may update records of the internal mapping memory as new mapping data is identified and stored. In some embodiments, the mapping data stored in the internal mapping memory may be transmitted to additional devices of the system100, and/or the mapping data in the internal mapping memory may be transmitted and stored into an external mapping memory at a regular interval or in real-time. A system100may include an external mapping memory, which may be a system database106or a collection of machine-readable computer files, hosted by non-transitory machine-readable storage media of one or more system servers104. In such embodiments, the system database106may be communicatively coupled to the user device102and/or the system server104by any wired or wireless communications protocols and hardware. The system database106may contain mapping data for one or more communication fields that are associated with the user device102and/or the system server104. The records of the system database106may be accessed by each user device102, which may update the mapping data when scanning a communication field for receivers (of a wearable device such as deciphering eyeglasses) or users110; and/or query the mapping data when determining whether to scramble or unscramble screen and/or content on the screen of the user device102. System databases106may have a logical construct of data files that are stored in non-transitory machine-readable storage media, such as a hard disk or memory, controlled by software modules of a database program (for example, SQL), and a related database management system (DBMS) that executes the code modules (for example, SQL scripts) for various sensor data queries and other management functions generated by the system server104. In some embodiments, a memory of the system databases106may be a non-volatile storage device. The memory may be implemented with a magnetic disk drive, an optical disk drive, a solid-state device, or an attachment to a network storage. The memory may include one or more memory devices to facilitate storage and manipulation of program code, set of instructions, tasks, data, PDKs, and the like. Non-limiting examples of memory implementations may include, but are not limited to, a random access memory (RAM), a read only memory (ROM), a hard disk drive (HDD), a secure digital (SD) card, a magneto-resistive read/write memory, an optical read/write memory, a cache memory, or a magnetic read/write memory. In some embodiments, a memory of the system databases106may be a temporary memory, meaning that a primary purpose of the memory is not long-term storage. Examples of the volatile memories may include dynamic random access memories (DRAM), static random access memories (SRAM), and other forms of volatile memories known in the art. In some embodiments, the memory may be configured to store larger amounts of information than volatile memory. The memory may further be configured for long-term storage of information. In some examples, the memory may include non-volatile storage elements. Examples of such non-volatile storage elements include magnetic hard discs, optical discs, floppy discs, flash memories, or forms of electrically programmable memories (EPROM) or electrically erasable and programmable (EEPROM) memories. FIG.2illustrates a wearable device200, according to an exemplary embodiment. 
For ease of explanation, theFIG.2shows the wearable device200as eyeglasses; however, it should be appreciated that some embodiments may include any suitable wearable device200capable of performing the various tasks described herein. For example, the wearable device200may be a display device in the form of glasses, goggles, or any other structure comprising a frame202that supports and incorporates various components of the wearable device200, as well as serves as a conduit for electrical and other component connections. In some other embodiments, a software product running on a camera (e.g., an application executing on a mobile device enabled with a camera device) may also be used as the wearable device. Therefore, even though described as a wearable device200, the wearable device200may still be operational while not worn by the user. A wearable device200is configured for viewing and interacting with a real world item, such as text displayed on a user computing device (as described in theFIG.1), with a virtual display of imagery and/or text. For instance, the wearable device200may comprise augmented reality systems, which may be a form of virtual reality (VR) that unscrambles and layers virtual information (such as scrambled text displayed on a user computing device) over a live camera feed (using a camera attached on the wearable device200) into the wearable device200or through a smartphone or tablet device, giving a user of the wearable device200the ability to view three-dimensional and/or unscrambled text on display lenses (204aand204b) of the wearable device200. In some embodiments, the display lenses (204aand204b) may be a virtual retinal display (VRD). The VRD display is scanned directly onto the retina of the user's eye, which results in bright images displaying unscrambled text with high resolution and high contrast. The user sees a conventional display displaying unscrambled text floating in space. A wearable device200may include a lens unit having two or more display lenses (204aand204b) connected to the frame202. The frame202is an eyeglass frame adapted to be located on a head of a user. When the frame202is located on the head of the user, the display lenses204are located in front of the user's eyes. In an alternate embodiment, any suitable type of frame could be provided, such as a headset or helmet. In some embodiments, the wearable device200could comprise merely one display lens or more than two display lenses (204aand204b). Display lenses (204aand204b) may include one or more cameras, which may be devices for capturing a photographic image or recording a video. The one or more cameras may be placed on at least one of the display lenses (204aand204b). When the wearable device200is synchronized with a user computing device (as described in theFIG.1) and/or a user wearing the wearable device200is authorized, then scrambled text information displayed on the user computing device may be relayed to the user through the wearable device200as an overlay on the camera attached on the display lenses (204aand204b). Display lenses (204aand204b) may further include an LCD display. In some embodiments, the display lenses (204aand204b) may include an imaging system, which can be implemented with any number of micro display panels, lenses, and reflecting elements to display and project an image. The display panels, lenses, and/or reflecting elements of the imaging system can be implemented with various display technologies, such as a transparent LCD, or using a projection technology.
The projection technology can be implemented using LCD type displays with powerful backlights and high optical energy densities. Alternatively, a micro display and/or reflecting element can be implemented using a reflective technology, such as digital light processing (DLP) or liquid crystal on silicon (LCOS), that reflects external light, which is reflected and modulated by an optical material. A wearable device200may be implemented as an independent, portable device that further includes communication electronics, which may include transmitters, receivers, cameras, sensors, memory, software, a processor, and/or a power source. The transmitter and the receiver may use communications signals to communicate information relating to each other in the form of signals carrying digital data. The transmitter and the receiver may use communications signals to communicate information (such as location data and credentials) relating to the wearable device200in the form of signals carrying digital data to a user computing device (not shown). In addition, the wearable device200may be communicatively linked (using Bluetooth) to a controller such as a system server and/or a user computing device that includes any one or combination of the memory, software, processor, and/or power source, such as a battery unit. The system server and/or the user computing device can be implemented for wired or wireless communication with the wearable device200. The system server, the user computing device, and/or the wearable device200can also be implemented with any number and combination of differing components. For example, the system server, the user computing device, and/or the wearable device200includes a decipher/scrambler application implemented as computer-executable instructions, such as a software application, and executed by a processor to implement embodiments of the wearable device200. The execution of the software application results in configuration of the display lenses (204aand204b). The display lenses (204aand204b) then display an image from a screen of the user computing device transmitted by cable or wireless technology from the computing device. The display lenses (204aand204b) contain a processor to unscramble a transmitted image (for example, a scrambled screen image) from the computing device such that only the user wearing the eyeglasses200can see the unscrambled data in the screen image. A wearable device200may further include a detector, which may comprise hardware allowing the detector to receive Bluetooth or other communication signals originating from a user computing device. The detector may be used by users of the wearable device200to identify a location of the user computing device, so that users may determine a placement of a screen of the user computing device. In some embodiments, the detector may comprise an indicator light that indicates when the detector is wirelessly connected with the user computing device. For example, when a detector of the wearable device200is located within the signal range (Bluetooth range) generated by a Bluetooth transmitter of the user computing device, the detector may turn on its indicator light because the detector is receiving Bluetooth signals, whereas the indicator light of the detector is turned off when the detector is not receiving the Bluetooth signals from the transmitter of the user computing device.
FIG.3Aillustrates a user computing device302displaying unscrambled text content304based on a first position of a user308, andFIG.3Billustrates the user computing device302displaying scrambled text306based on a second position of the user308. The user computing device302may include an output component such as a display screen310, which may include one or more display components such as a cathode ray tube, a liquid crystal display, an OLED display, an AMOLED display, a super-AMOLED display, a plasma display, an incandescent light, a fluorescent light, a front or rear projection display, or a light emitting diode indicator. A user interface of the user computing device302may be connected to a processor of the user computing device302for entering data and commands in the form of text, touch input, gestures, etc. The user interface may be a touch screen device, but may alternatively be an infrared proximity detector or sensor or any input/output device combination capable of sensing gestures and/or touches including a touch-sensitive surface. In addition, the user interface may include one or more components, such as a video input component such as an optical sensor (for example, a camera or imaging technology), an audio input component such as a microphone, and a mechanical input component such as button or key selection sensors, a touch pad sensor, a touch-sensitive sensor, a motion sensor, and/or a pointing device such as a joystick, a touch pad, a touch screen, a fingerprint sensor, or a pad for an electronic stylus. One or more of these user interface devices may function in multiple modes. A user computing device302may include an authentication apparatus, such as a sensor device for facial, iris, retina, eye vein, and/or face vein recognition or other facial feature or facial component recognition, that captures images and/or emits sensor waves and generates sensor data associated with face detection, head movement, and/or other facial features of a user308, which may be stored on a database in the user device302and/or a mapping memory. The authentication apparatus may further draw upon stored information in the mapping memory, such as a look up table, to compare and contrast data of a new user with known users, including facial, iris, retina, and/or eye vein information, fingerprints, breath analysis, body odor, voice patterns, etc. A user computing device302may include one or more processors that may receive camera data and/or sensor data for facial, iris, retina, eye vein, and/or face vein recognition or other facial features from the sensors, and/or fetch stored sensor data such as a look up table from particular storage locations; thus, the sensor data produced by the respective sensor may be shared with the user computing device302. The processors of the user computing device302may then use currently captured sensor data to scramble or unscramble the screen and/or content on the screen of the user computing device302when the user308is detected within an operation range of the sensor of the user computing device302. For instance, in some embodiments, a user computing device302may be associated with an eye-tracking module that is implemented as a software module running on associated hardware, and configured to receive command data from a processor of a user computing device302, process the command data into hardware operation data, and provide the hardware operation data to an eye-tracking sensor module.
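As a non-limiting sketch of the look up table comparison performed by the authentication apparatus described above, the snippet below compares a captured feature vector with stored templates and accepts the nearest enrolled user only when the distance falls under a threshold. The feature representation, distance metric, and threshold are assumptions for illustration; an actual authentication apparatus would rely on a full biometric pipeline.

```python
import math
from typing import Dict, List, Optional

# Hypothetical look up table of enrolled users and stored biometric feature vectors.
ENROLLED: Dict[str, List[float]] = {
    "user-308": [0.12, 0.80, 0.33, 0.51],
    "user-404": [0.90, 0.10, 0.72, 0.05],
}

MATCH_THRESHOLD = 0.25   # assumed maximum distance for a successful match

def authenticate(captured: List[float]) -> Optional[str]:
    """Return the enrolled user whose template is nearest to the captured features,
    or None if no template is within the match threshold."""
    best_user, best_dist = None, float("inf")
    for user_id, template in ENROLLED.items():
        dist = math.dist(captured, template)
        if dist < best_dist:
            best_user, best_dist = user_id, dist
    return best_user if best_dist <= MATCH_THRESHOLD else None

# Example: features close to the stored template authenticate user-308.
print(authenticate([0.13, 0.79, 0.35, 0.50]))
```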
The eye-tracking module is configured to receive ocular sensor data from eye-tracking sensor module, processes the ocular sensor data to generate ocular engagement data, and provides the ocular engagement data to the processor. Further to these embodiments, ocular engagement data includes one or more metrics characterizing the level of engagement of user308with content being displayed via a screen of a user computing device302. In an example, the ocular engagement data includes data describing whether or not the gaze of the user308is directed toward the content displayed via the screen, a general level of interest in the content displayed via the screen as determined by the eye movements of the user308, and the like. In these embodiments, hardware operation data includes instructions for hardware operation, such as instructions to activate eye-tracking sensors, to begin to track the gaze of the user308, to stop tracking the gaze of user308, and the like. An eye-tracking sensor module is implemented as a software configured to control associated hardware, and configured to receive hardware operation data from the eye-tracking module, interact with the user308in order to generate ocular sensor data, and provide the ocular sensor data to the eye-tracking module. In these embodiments, ocular sensor data includes data describing the movement of the eyes of the user308. In one example, the eye-tracking sensor module is implemented as software configured to control a camera hardware (e.g., a camera pair, not shown) included within the user computing device302that is configured to determine the direction of the gaze of the user308. In this example, ocular sensor data includes the length of stare of the user308on one or more regions of content being displayed via the screen, whether or not the user308is looking at one or more portions of content being displayed via the screen, and the path of the gaze of the user308as the user308views content being displayed via the screen. The processors of the user computing device302may then use currently captured ocular sensor data, to scramble or unscramble screen and/or content on the screen of the user computing device302. As shown, the processor of the user computing device302displays the unscrambled content304on the display screen310of the user computing device302when the sensor detects and authenticates the user308viewing the screen based on the ocular sensor data, and the processor of the user computing device302displays the scrambled content306on the display screen310of the user computing device302when the sensor detects the user308facing away from the screen based on the ocular sensor data. A eye-tracking module may utilize sensor or camera data to determine the gaze of the user308. In one embodiment, a light (e.g., infrared) is reflected from the user308eye and a video camera or other sensor can receive the corneal reflection. The eye-tracking module analyzes the ocular sensor data of the user308to determine eye rotation of the user308from a change in the light reflection. A vector between a pupil center of the user308and the corneal reflections of the user308can be used to compute a gaze direction of the user308. Eye movement data of the user308may be based upon a saccade and/or a fixation, which may alternate. A fixation is generally maintaining a visual gaze on a single location, and it can be a point between any two saccades. 
A saccade is generally a simultaneous movement of both eyes of the user308between two phases of fixation in the same direction. In one implementation, the eye-tracking module can use a dark-pupil technique, whereby if the illumination source is offset from the optical path, then the pupil appears dark as the retro reflection from the retina of the user308is directed away from the camera. In another implementation, the eye-tracking module can use a bright-pupil technique, whereby if the illumination is coaxial with the optical path, then the eye of the user308acts as a retro reflector as the light reflects off the retina creating a bright pupil effect. In yet another implementation, a camera or sensor can track eye image features (e.g., retinal blood vessels) and follow the features as the eye of the user308rotates. It is preferable that the eye tracking data is obtained in a manner that is non-invasive. In yet another implementation, a camera or sensor can identify a location of an iris of the user308or pupil of the user308based on the circular shape or by detection an edge. The movement of the iris or pupil of the user308can then be detected. The processors of the user computing device302may then use currently captured iris/pupil data, to scramble or unscramble screen and/or content on the screen of the user computing device302. As shown, the processor of the user computing device302displays the unscrambled content304on the display screen310of the user computing device302when the sensor detects and authenticates the user308viewing the screen based on the iris/pupil data, and the processor of the user computing device302displays the scrambled content306on the display screen310of the user computing device302when the sensor detects the user308facing away from the screen based on the iris/pupil data. In some embodiments, a user computing device302may be associated with an expression processing module, which may be an eye-tracking processing module or a head tracking module. The expression processing module can use a coding system that recognizes eye movement and/or gaze direction of the user308and generates a score based on duration and direction. Eye movement or gazing may have a duration of about 1/25 of a second to 2 seconds or longer, so the expression processing module will receive a data feed of eye movements of the user308from a high speed camera having increments of less than one second to account for very quick changes. Some micro-eye movements occur so quickly that a human observer cannot detect or sense the shift in gaze or eye movement. In one implementation, supplemental content will be displayed when the eye movement of the user308meets a threshold value, when the gaze of the user308is directed away from the displayed content, or both. The processors of the user computing device302may then use currently captured eye movement data, to scramble or unscramble screen and/or content on the screen of the user computing device302. As shown, the processor of the user computing device302displays the unscrambled content304on the display screen310of the user computing device302when the sensor detects and authenticates the user308viewing the screen based on the eye movement data, and the processor of the user computing device302displays the scrambled content306on the display screen310of the user computing device302when the sensor detects the user308facing away from the screen based on the eye movement data. 
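A much reduced form of the pupil-to-corneal-reflection computation described above is sketched below: the gaze vector is taken as the difference between the two image positions, and content remains unscrambled only while that vector indicates the user is looking toward the screen. The coordinate convention, the crude angular proxy, and the threshold are assumptions introduced solely for this sketch.

```python
import math
from typing import Tuple

GAZE_ON_SCREEN_MAX_DEG = 15.0   # assumed tolerance for treating the gaze as on-screen

def gaze_vector(pupil_center: Tuple[float, float],
                corneal_reflection: Tuple[float, float]) -> Tuple[float, float]:
    """Vector between the pupil center and the corneal reflection (normalized image coordinates)."""
    return (pupil_center[0] - corneal_reflection[0],
            pupil_center[1] - corneal_reflection[1])

def choose_display_mode(pupil_center: Tuple[float, float],
                        corneal_reflection: Tuple[float, float]) -> str:
    """Return 'unscrambled' while the gaze stays near the camera axis, else 'scrambled'."""
    gx, gy = gaze_vector(pupil_center, corneal_reflection)
    angle_deg = math.degrees(math.atan2(math.hypot(gx, gy), 1.0))   # crude angular proxy
    return "unscrambled" if angle_deg <= GAZE_ON_SCREEN_MAX_DEG else "scrambled"

# Example: a nearly centered gaze keeps the content unscrambled.
print(choose_display_mode((0.02, 0.01), (0.0, 0.0)))
```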
In some embodiments, a user computing device302may be associated with a tracking sensor module such as a head tracking sensor module which is implemented as software configured to control associated hardware, and configured to receive hardware operation data from the head-tracking module, interact with the user308in order to generate head position data of the user308, and provide the head position data of the user308to the head tracking module. In these embodiments, the head position data of the user308includes data describing the movement of the head of the user308. In an example, head-tracking sensor module is implemented as software configured to control camera hardware (e.g., a camera pair, not shown) included within the user computing device302that is configured to determine the position of the head of the user308. In this example, head position data of the user308includes the position of the head of the user308with respect to one or more regions of content being displayed via the screen of the user computing device302, whether or not the user308is looking at one or more portions of content being displayed via the screen, and the path of the head movement of the user308as the user308views content being displayed via the screen. A head tracking module may utilize sensor or camera data to determine the initial head position of a user308and any subsequent change from the initial head position of the user308. In one embodiment, a light (e.g., infrared) is reflected from the user308head and a video camera or other sensor can receive the reflection from the user308head. The head tracking module analyzes the head position data of the user308to determine head movement of the user308from a change in the light reflection. A vector between a location on the user308head and the head reflections can be used to compute a change in head position or direction. Head position data of the user308may be based upon a movement and/or a fixation, which may alternate. A fixation is generally maintaining a head position in single location. A movement is generally any change in position of the head of the user308from an initial position. The processors of the user computing device302may then use currently captured head movement data, to scramble or unscramble screen and/or content on the screen of the user computing device302. As shown, the processor of the user computing device302displays the unscrambled content304on the display screen310of the user computing device302when the sensor detects and authenticates the user308viewing the screen based on the head movement data, and the processor of the user computing device302displays the scrambled content306on the display screen310of the user computing device302when the sensor detects the user308facing away from the screen based on the head movement data. In another example case, as depicted inFIG.3G, a user308may be using a device320such as a mobile phone, and when the user308and/or the device320comes into proximity of a user computing device302displaying scrambled content322, then an authentication apparatus of the user computing device302may authenticate the user308and/or the device320. Upon successful authentication of the user308and/or the device320, then the device320may determine a decryption technique to unscramble/decrypt the scrambled data/content322that is unreadable to a human. 
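As a hedged illustration of the head-tracking behavior described above, the sketch below compares a current head position against the initial fixation position and gates the displayed content accordingly; the displacement threshold, the coordinate units, and the function names are assumptions for illustration only.

```python
import numpy as np

def head_movement(initial_position, current_position):
    """Magnitude of head displacement from the initial fixation position
    reported by the head-tracking sensor module (units are assumed)."""
    return float(np.linalg.norm(np.asarray(current_position, dtype=float) -
                                np.asarray(initial_position, dtype=float)))

def screen_state(initial_position, current_position, facing_screen,
                 authenticated, movement_threshold=50.0):
    """Return the content state for the display.

    The user must be authenticated, roughly facing the screen, and must not
    have moved beyond an assumed threshold since the initial fixation.
    """
    moved_away = head_movement(initial_position, current_position) > movement_threshold
    if authenticated and facing_screen and not moved_away:
        return "unscrambled"
    return "scrambled"

# Example: small head movement while facing the screen keeps content readable.
print(screen_state((0, 0, 0), (4, -3, 1), facing_screen=True, authenticated=True))
```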
For instance, with respect to the scrambled content 322 of FIG. 3G, a first decryption technique may be applicable to unscramble a first type of scrambled content (for example, scrambled content such as the scrambled pixels 322 displayed on the user computing device 302), and a second decryption technique may be applicable to unscramble a second type of scrambled content (for example, jumbled alphabets). Thus, the device 320 may first determine a type of scrambled content 322 displayed on a screen of the user computing device 302, and upon determining the type of scrambled content 322, the device 320 may then identify a decryption technique applicable to the determined type of scrambled content 322 to unscramble the content. Upon analysis, the device 320 may determine that the scrambled content 322 comprises scrambled pixels, and thus that a first decryption technique is applicable based on records associated with decryption techniques. Upon identification of the first decryption technique, the device 320 may execute the first decryption technique, which may result in transmission and display of unscrambled content 324, which may be readable by humans, on a GUI of the device 320. In yet another example case, as depicted in FIG. 3H, a user computing device 302 displays encrypted content on its screen in the form of a machine-readable code 326. The machine-readable code 326 image may be a QR code image, barcode image, or other known code image for use with an optical scanner. In some embodiments, the machine-readable code 326 image may represent a code that is a string of alphanumeric characters generated by an algorithm contained within an application of the user computing device 302. A user 308 may be using a device 320, such as a mobile phone comprising an optical scanner, to scan the machine-readable code 326 image on the user computing device 302. After the machine-readable code 326 image is scanned by the optical scanner of the device 320, the user computing device 302 may first determine identification data associated with the device 320 to authenticate the device 320. The user computing device 302 may use the identification data associated with the device 320 to search a database comprising records of approved devices that are eligible to view content on the user computing device 302. The successful authentication of the device 320 by the user computing device 302 may then result in transmission and display of decrypted content 328 on a GUI of the device 320, which may be readable by humans. In alternate embodiments, upon the identification of the first decryption technique, the device 320 may transmit a notification regarding the first decryption technique to the user computing device 302, and the user computing device 302 may then execute the first decryption technique, which may result in transmission and display of unscrambled content 324, which may be text readable by a human, on a GUI of the device 320. FIG. 4 illustrates a user computing device 402 displaying scrambled text due to the presence of an unauthorized person. The user computing device 402 may include an imaging device 408, such as a sensor or camera, which may be used to scan an area (a zone within which content on a screen of the user computing device 402 is readable) and find all users that are present within said area. Then, in response to the user computing device 402 determining that an unauthorized user 406 has entered into viewable range of the screen, the screen and/or the content displayed on the screen may be automatically scrambled so that the content is permitted to be visible only to the authorized user 404. An imaging device 408 may include a camera.
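The selection of a decryption technique based on the detected type of scrambled content, as described above for FIG. 3G, can be pictured as a simple lookup of a handler per content type. The following sketch uses hypothetical type tags and placeholder handlers; it is not the actual decryption logic of the disclosed embodiments.

```python
def unscramble_pixels(payload):
    # Placeholder for a pixel-level descrambling routine (assumed).
    return f"<pixels descrambled: {payload}>"

def unjumble_letters(payload):
    # Placeholder for a letter-reordering routine (assumed).
    return f"<letters reordered: {payload}>"

# Records associating each detected content type with an applicable technique.
DECRYPTION_TECHNIQUES = {
    "scrambled_pixels": unscramble_pixels,    # e.g., the first decryption technique
    "jumbled_alphabets": unjumble_letters,    # e.g., the second decryption technique
}

def decrypt_for_display(content_type, payload):
    """Select and execute the decryption technique applicable to the detected
    type of scrambled content, returning content readable on the device's GUI."""
    technique = DECRYPTION_TECHNIQUES.get(content_type)
    if technique is None:
        raise ValueError(f"no decryption technique on record for {content_type!r}")
    return technique(payload)

print(decrypt_for_display("scrambled_pixels", "raw pixel buffer"))
```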
The camera is an optical instrument for recording or capturing images within the area, which may be stored locally, transmitted to another location, or both. The images may be individual still photographs or sequences of images constituting videos or movies of objects and users within the area. The camera may use an electronic image sensor, such as a charge coupled device or a CMOS sensor to capture images within the area, which may be transferred or stored in a memory or other storage inside the camera, a system server, or the user computing device402for processing. The raw images from an imaging device408are transmitted to a processor of a user computing device402or a system server, which segregates the images (based on content within it) and normalize the images. The user computing device402and system server may be connected to each other through a network to share data among each other. While processing the images captured within the area, the processor of the user computing device402may employ face recognition technology for processing the normalized image. The face recognition technology may use pattern recognition and facial expression analysis to recognize users captured within the images. In one method, the face recognition technology may detect facial area within the images using a neural network. In another method, the face recognition technology may detect facial area within the images using statistical features of facial brightness, which may be a principal component analysis of brightness within the captured images. In operation, in order to recognize user faces within images captured of an area, a user computing device402may employ an extracted face image as an input of a face recognition technology as a means of detecting the exact position of facial components or facial features in the extracted face region. In other words, in order to compare an input image with a face recognition model, face position extraction and a size normalizing process for compensating for differences in size, angle, and orientation of the facial image extracted from the input image relative to a facial image of the face recognition model template are performed. In some embodiments of the face recognition models, an eye area may be used as a reference facial component in the alignment and the normalizing processes since the feature of the eye area remain unchanged compared with those of other facial components, even if a change occurs in the size, expression, lighting, etc., of a facial image. One or more techniques may be employed for eye detection, which may normalize correlation at all locations within an input image by making eye templates of various sizes and forming a Gaussian pyramid image of the input image. In one technique, a matrix for eyes, nose, and mouth areas may be provided according to a size of a template, and features of interest are searched through comparison with an input image in all areas within the template image. In another technique, a template having two ellipses for detecting facial ellipses may be used to detect a facial location through evaluating a size of edge contours which may encircle a face in a region between the two ellipses. A user computing device402or a system server upon identifying users within an area using face recognition technology may then determine whether the users are authorized or unauthorized. 
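A minimal sketch of the brightness-based approach mentioned above (a principal component analysis of facial brightness used to compare a normalized face image against enrolled templates) is shown below. The image sizes, the number of components, and the distance threshold are illustrative assumptions, not values from the disclosure.

```python
import numpy as np

def brightness_features(face_images, n_components=4):
    """Project normalized grayscale face images onto their top principal
    components of brightness (an eigenface-style representation)."""
    X = np.array([img.ravel() for img in face_images], dtype=float)
    mean = X.mean(axis=0)
    _, _, vt = np.linalg.svd(X - mean, full_matrices=False)
    basis = vt[:n_components]
    return mean, basis, (X - mean) @ basis.T

def is_authorized(probe_image, mean, basis, enrolled_features, threshold=0.5):
    """Treat the probe as authorized only if its projection lies close to an
    enrolled template; otherwise the person is handled as unauthorized."""
    probe = (probe_image.ravel().astype(float) - mean) @ basis.T
    distances = np.linalg.norm(enrolled_features - probe, axis=1)
    return bool(distances.min() < threshold)

# Toy example with random 8x8 crops standing in for normalized face images.
rng = np.random.default_rng(0)
enrolled = [rng.random((8, 8)) for _ in range(5)]
mean, basis, feats = brightness_features(enrolled)
print(is_authorized(enrolled[2] + 0.01, mean, basis, feats))  # True: close to a template
```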
In some embodiments, the user computing device402or the system server may compare biometric or facial data of the users that has been identified with information in a biometric or facial feature database to determine the authorization of the identified users to view certain content on a screen of the user computing device402. Where the captured biometric or facial data from an identified user matches a template within the biometric or facial feature database, the user may be identified as being authorized. The identified user may be treated as an unauthorized person in the absence of authenticating the user as an authorized user. A user computing device402or a system server, upon identifying an unauthorized user406within a pre-defined area (when a face of user406doesn't matches a known template within a database), may determine a location of the unauthorized user406within the area. The user computing device402or the system server may use one or more motion sensors directly or indirectly associated with the user computing device402or the system server to determine exact location of the unauthorized user406within the area. In some embodiments, the user computing device402or the system server may use one or more location sensors directly or indirectly associated with the user computing device402or the system server to determine exact location of the unauthorized user406within the area. The one or more location sensors may detect the actual location of the unauthorized user406by generating an electromagnetic beam, such as an infrared or laser beam, and analyzing reflections from the electromagnetic beam to determine the position of the unauthorized user406based on the reflections. In some embodiments, any suitable location determination technique may be used by the user computing device402or the system server to determine the exact location of the unauthorized user406within the area. The user computing device402or the system server upon determining the location of the unauthorized user406may further determine whether a screen of the user computing device402is within viewable range of the unauthorized user406. The user computing device402may determine whether the screen is within the viewable range of the unauthorized user406depending on whether there is an unobstructed line of sight between one or both of the unauthorized user406eyes and the screen. In some embodiments, whether a screen of the user computing device402is within viewable range of the unauthorized user406may also depend on the distance between the unauthorized user406eyes and the screen. In some embodiments, whether a screen of the user computing device402is within viewable range of the unauthorized user406may also depend on the distance between the unauthorized user406and the screen. In some configurations, the user computing device402or the system server, upon identifying that the unauthorized user406is within the viewable range of the screen, may generate and execute software programs to lock the screen, scramble the screen, scramble the content on the screen such that content is not readable by a human, and/or hide sensitive data displayed on the screen (and only display insensitive data). 
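One plausible way to combine the distance and line-of-sight considerations described above into the scramble decision is sketched below; the coordinate system, the viewing-cone angle, the maximum distance, and the function names are assumed values used only for illustration.

```python
import math

def in_viewable_range(user_position, screen_position, screen_normal,
                      line_of_sight_clear, max_distance=4.0, max_angle_deg=60.0):
    """Rough check of whether a detected person could read the screen.

    user_position / screen_position are (x, y) floor coordinates in meters,
    screen_normal is a unit vector in the direction the screen faces, and
    line_of_sight_clear indicates no obstruction between the eyes and the screen.
    """
    dx = user_position[0] - screen_position[0]
    dy = user_position[1] - screen_position[1]
    distance = math.hypot(dx, dy)
    if not line_of_sight_clear or distance > max_distance:
        return False
    # Angle between the screen's facing direction and the direction to the viewer.
    dot = dx * screen_normal[0] + dy * screen_normal[1]
    angle = math.degrees(math.acos(max(-1.0, min(1.0, dot / (distance or 1e-9)))))
    return angle <= max_angle_deg

def protect_screen(unauthorized_positions, screen_position=(0.0, 0.0),
                   screen_normal=(1.0, 0.0)):
    """Scramble (or hide sensitive data) if any unauthorized person can view the screen."""
    exposed = any(in_viewable_range(p, screen_position, screen_normal, True)
                  for p in unauthorized_positions)
    return "scramble" if exposed else "unscramble"

print(protect_screen([(2.0, 0.5)]))   # within range and viewing cone -> scramble
print(protect_screen([(-3.0, 0.0)]))  # behind the screen -> unscramble
```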
The user computing device402or the system server may continuously monitor the location and/or movement of the unauthorized user406, and upon identifying that the unauthorized user406has moved away from the viewable range of the screen, may generate and execute software programs to unlock the screen, unscramble the screen, unscramble the content on the screen such that content is readable by a human, and/or display sensitive data displayed on the screen. FIG.5illustrates a user computing device502displaying scrambled text due to a current location of a user504. The user computing device502may include a transmitter that transmits connection signals to connect with a receiver of a wearable device (for example, eyeglasses) operated by the user504. The user computing device502and a system server may be connected to each other through a network to share data among each other. Non-limiting examples of the user computing device502may include laptops, mobile phones, smartphones, tablets, electronic watches, among other types of devices. A connection signal may serve as data input used by various communication elements responsible for controlling production of communication signals. The connection signal may be produced by the receiver of the wearable device or the transmitter of the user device502using an external power supply and a local oscillator chip, which in some cases may include using a piezoelectric material. The connection signal may be any communication medium or protocol capable of communicating data between processors of the user device502and the wearable device, such as Bluetooth®, RFID, infrared, near-field communication (NFC). The connection signal may be used to convey information between the transmitter of the user device502and the receiver of the wearable device used to adjust the connection signal, as well as contain information related to status, device identifier, geo-location, and other types of information. Initially, a wearable device establishes a wired or wireless connection or otherwise associates with a user device502. That is, in some embodiments, the user device502and the wearable device may communicate control data over using a wireless communication protocol capable of transmitting information between two processors of the user device502and the wearable device (e.g., Bluetooth®, Bluetooth Low Energy (BLE), Wi-Fi, NFC, ZigBee®). For example, in present embodiments implementing Bluetooth® or Bluetooth® variants, the user device502may scan for wearable device broadcasting advertisement signals or a wearable device may transmit an advertisement signal to the user device502. The advertisement signal may announce the wearable device's presence to the user device502, and may trigger an association between the user device502and the wearable device. As described herein, in some embodiments, the advertisement signal may communicate information that may be used by various devices (e.g., user device502, wearable device, sever computers, etc.) to execute and manage secure display of content on screen of the user device502. Information contained within the advertisement signal may include a device identifier (e.g., wearable device address) and a user identifier (e.g., user name). The user device502may use the advertisement signal transmitted to identify the wearable device (and the user504) and, in some cases, locate the wearable device (and the user504) in a two-dimensional space or in a three-dimensional space. 
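The advertisement-driven association described above could be structured roughly as follows. The payload fields, the trusted-device registry, the role table, and the signal-strength cutoff are hypothetical; actual Bluetooth advertisement handling would go through the platform's wireless stack rather than this sketch.

```python
from dataclasses import dataclass

@dataclass
class Advertisement:
    """Fields assumed to be carried in a wearable device's advertisement signal."""
    device_id: str   # e.g., wearable device address
    user_id: str     # e.g., user name
    rssi_dbm: int    # received signal strength, used as a crude range estimate

# Hypothetical registry of trusted wearables and the roles of their users.
TRUSTED_WEARABLES = {"AA:BB:CC:DD:EE:01": "alice"}
USER_ROLES = {"alice": "analyst"}

def handle_advertisement(adv, min_rssi_dbm=-70):
    """Associate with a trusted wearable that is close enough, and report the
    role used to decide which unscrambled content is permissible."""
    if adv.rssi_dbm < min_rssi_dbm:
        return ("ignore", None)        # too far away to associate
    if TRUSTED_WEARABLES.get(adv.device_id) != adv.user_id:
        return ("reject", None)        # unknown device or device/user mismatch
    return ("associate", USER_ROLES.get(adv.user_id, "guest"))

print(handle_advertisement(Advertisement("AA:BB:CC:DD:EE:01", "alice", -55)))
```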
Once the user device 502 identifies the wearable device used by a user 504 and/or the user 504 itself, the user device 502 may then establish a wireless connection with the wearable device and/or authorize the user 504, allowing the user device 502 and the wearable device to communicate control signals over a communication channel. In some cases, the user device 502 may use the advertisement signal to authenticate the user 504, determine a role of the user 504, and then display unscrambled content on the screen of the user device 502 based on the unscrambled content that is permissible for the role of the user. The user device 502 may use information contained in the advertisement signal of the wearable device, or in subsequent connection signals received from the wearable device, to determine what unscrambled content to display on the screen of the user device 502 and for how much time to display it. In some embodiments, when a user device 502 identifies and wirelessly connects with a wearable device, a system server, the user device 502, and/or the wearable device may then initiate steps to authenticate a user 504 using the wearable device, unlock a screen of the user device 502, and unscramble the screen and/or content on the screen, thereby allowing the user 504 to view unscrambled content on the unlocked screen of the user device 502. The system server, the user device 502, and/or the wearable device may authenticate the user 504 based on security mechanisms, which may use biometric identification of the user 504. For example, the security mechanisms may be biometric-based security processes that are based on, or include, use of a biometric component such as a fingerprint reader, an iris scanner, a voice recognition mechanism, an image analysis/facial detection mechanism, etc., that can be used to identify a particular user 504 using a particular wearable device. In some embodiments, the system server, the user device 502, and/or the wearable device may implement a pulse detection apparatus to authenticate the user 504, which may capture pulse waveform data of the user 504 and use the pulse waveform data to conduct biometric identification of the user 504. The pulse data measurements of the user 504 may be gathered using a variety of sensors of the pulse detection apparatus on the fingers, wrists, temples, or eyes of the user 504, or through other similar means. In some embodiments, during an enrollment process of a user 504 for the biometric-based security process, a biometric signature created by a system server to authenticate the user 504 may be generated from biometric profiles of the user 504. For example, an exemplary number of biometric profiles that may be averaged by the system server to create the biometric signature, as used herein, is two biometric profiles. However, any number of biometric profiles may be combined, each of which is created through an operation of the biometric profile creation session, which is a first part of the biometric-based security process that includes a presentation and biometric data capture portion, a biometric data pre-processing portion, a biometric data segmentation portion, and a biometric data feature extraction portion. Accordingly, one or more biometric profiles may be used to establish a biometric signature of the user 504.
In addition, during an authentication process, one or more biometric profiles of the user 504 may also be captured utilizing the biometric profile creation session previously used to capture the enrollment biometric profiles used to generate the biometric signature of the user 504 during the enrollment process for the biometric-based security process. A pulse detection apparatus may include various electronic components (such as sensors), and may be part of, or a separate component associated with, a system server, a user device 502, and/or a wearable device. In one example, the pulse detection apparatus that contains pulse sensors may be integrated into the wearable device to provide dynamic biometric-based measurements, for example, measurements of pulse wave data at one or more measurement points on the user 504. The measurements of the pulse wave data at the one or more measurement points on the user 504 are used to form a biometric signature for the user 504. In another example, the pulse detection apparatus integrated into the wearable device may obtain pulse data of the user 504 when the user 504 is wearing the wearable device, where inputs from the sensors providing the pulse data of the user 504 are utilized to form a biometric signature for the user 504, which may be used to perform biometric identification of the user 504. In another example, the pulse detection apparatus may be included in the user device 502 or any system server that obtains the pulse data of the user 504 to perform biometric identification, e.g., from pulse sensors disposed on or viewing a user 504. The pulse data of the user 504 may include pulse data that permits a conclusion as to the identity of the user 504. The pulse data of the user 504 may be collected at a plurality of points in order to offer a more accurate identification of the user 504. For example, two or more different blood vessels of the user 504 may be measured to obtain pulse data of the user 504 for each. The two or more measurements are combined or correlated with one another to further refine or improve the biometric identification. In some embodiments, one or more sensors may be used, e.g., on opposite sides of a wearable device, in order to obtain the pulse data of the user 504 at multiple locations. The pulse data for the multiple locations can be compared (as to time and magnitude, e.g., of a pulse wave) in order to form a biometric signature for the user 504. In another example, a camera of the user device 502 may sample or obtain image data of two or more different blood vessels in order to derive pulse data of the user 504, e.g., pulse wave data, for use in biometric identification of the user 504. During a validation session of the biometric-based security process, a system server, a user device 502, and/or a wearable device may capture biometric data of the user 504 and then compare it to the biometric signature of the user 504 to perform authentication of the user. For instance, the user device 502 and/or the wearable device may utilize the biometric pulse data of the user 504 to determine whether the biometric pulse data of the user 504 matches expected user biometric pulse data. In other words, the currently detected pulse data of the user 504 is compared to known user pulse data of a particular user in order to identify the particular user. The known user pulse data may be stored locally or accessed from a remote database. The known user pulse data may include a biometric signature or profile that has been generated based on historically detected user pulse data.
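A simplified sketch of forming a biometric signature from enrollment pulse profiles, including a comparison of the pulse wave across measurement points as to time and magnitude, is given below; the profile shapes, array layout, and function names are illustrative assumptions rather than the disclosed implementation.

```python
import numpy as np

def enrollment_signature(profiles):
    """Average two or more enrollment pulse profiles into a biometric signature.

    Each profile is an array of shape (points, samples): one pulse waveform per
    measurement point (e.g., wrist and temple), resampled to a common length.
    """
    stack = np.asarray(profiles, dtype=float)   # (n_profiles, points, samples)
    return stack.mean(axis=0)                   # (points, samples) signature

def multi_point_features(signature):
    """Compare measurement points against each other (relative timing and
    magnitude of the pulse wave) to enrich the signature."""
    lags = [int(np.argmax(w)) for w in signature]          # sample index of each peak
    amplitudes = [float(w.max() - w.min()) for w in signature]
    return {"peak_lags": lags, "amplitudes": amplitudes}

# Toy example: two enrollment profiles, two measurement points, 8 samples each.
p1 = [[0, 1, 3, 7, 4, 2, 1, 0], [0, 0, 1, 3, 7, 4, 2, 1]]
p2 = [[0, 1, 4, 8, 5, 2, 1, 0], [0, 1, 1, 4, 8, 5, 2, 1]]
sig = enrollment_signature([p1, p2])
print(multi_point_features(sig))   # the second point peaks one sample later
```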
In accordance with various aspects of the disclosed embodiments, during an authentication operation based on the matching operation of determined versus expected biometric data of the user 504, each point of the captured biometric pulse data of the user 504 may be compared to a respective point in the biometric signature using a matching algorithm, such as Euclidean distance, Hamming distance, etc., to evaluate whether the verification biometric pulse data matches the biometric signature at a given threshold. Accordingly, the profile of the user 504 with a biometric pulse data distribution does not have to be identical to the biometric signature. If the profile of the user 504 matches the biometric signature, then the user 504 is authenticated, and if there is no match, then authentication of the user 504 is denied. If the user 504 is identified using the pulse data of the user 504, that is, the currently detected pulse data of the user 504 is similar or equivalent to known pulse data of the user 504, the user 504 may be granted access to view unscrambled content on the screen of the user device 502, and may have continued access to the unscrambled content on the screen of the user device 502. If the user 504 is not identified using the detected pulse data, the lack of user identification may lead to a requirement for further authentication data and/or may result in reduced functionality of the user device 502. For example, if a user 504 is identified using the user pulse data, user-specific functionality may be provided by the user device 502 and the unscrambled screen will be displayed on the user device 502. In contrast, if the particular user 504 is not identified using the user pulse data, a temporary setting may be applied to the user device 502 and the scrambled screen will be displayed on the user device 502, subject to further identification being made, e.g., using certain information that the user 504 has knowledge of, such as a password or two-factor identification methods, or certain information which the user 504 has possession of, such as a token, or one or more physical characteristics of the user 504, such as the user's fingerprint profile. In some embodiments, when a wired or wireless connection between a wearable device used by a user 504 and a user device 502 is terminated because the wearable device or the user 504 wearing the wearable device is out of range of the user device 502, then a system server and/or the user device 502 may generate and execute software programs to lock a screen of the user device 502, scramble the screen, scramble content on the screen such that the content is not readable by a human, and/or hide sensitive data displayed on the screen (and only display insensitive data). The user device 502 or the system server may monitor the location and/or movement of the wearable device or the user 504 wearing the wearable device, and upon re-establishing the wired or wireless connection between the wearable device used by the user 504 and the user device 502 when the wearable device used by the user 504 and the user device 502 are in range of each other, the system server and/or the user device 502 may again initiate the authentication process of the user 504. Upon authentication of the user 504, the system server and/or the user device 502 may then generate and execute software programs and/or algorithms to unlock the screen, unscramble the screen, unscramble content on the screen such that the content is readable by a human, and/or display sensitive data on the screen.
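The threshold-based matching described above (here using Euclidean distance, one of the named options) might look like the following sketch; the waveform values and the threshold are assumptions for illustration, and the captured data need not be identical to the signature to match.

```python
import numpy as np

def verify_pulse(captured, signature, threshold=1.5):
    """Compare captured verification pulse data to the enrolled signature.

    Each point of the captured waveform is compared to the respective point of
    the signature via Euclidean distance; a match only requires the overall
    distance to fall within the assumed threshold.
    """
    captured = np.asarray(captured, dtype=float)
    signature = np.asarray(signature, dtype=float)
    distance = float(np.linalg.norm(captured - signature))
    return distance <= threshold, distance

signature = np.array([0, 1, 3.5, 7.5, 4.5, 2, 1, 0])
ok, d = verify_pulse([0, 1, 3.4, 7.3, 4.6, 2.1, 1, 0], signature)
print(ok, round(d, 3))   # True -> authenticated, unscrambled screen is shown
ok, d = verify_pulse([0, 2, 5.0, 2.0, 1.0, 6.0, 3, 1], signature)
print(ok, round(d, 3))   # False -> authentication denied, scrambled screen remains
```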
FIG.6shows execution of a method showing operations of a distributed data processing and display system, according to an exemplary method600. The exemplary method600shown inFIG.6comprises execution steps602,604,606, and608. However, it should be appreciated that other embodiments may comprise additional or alternative execution steps, or may omit one or more steps altogether. It should also be appreciated that other embodiments may perform certain execution steps in a different order; steps may also be performed simultaneously or near-simultaneously with one another. In addition, the exemplary method600ofFIG.6is described as being executed by a single server computer, referred to as a system server in this exemplary embodiment. However, one having skill in the art will appreciate that, in some embodiments, steps may be executed by any number of computing devices operating in a distributed computing environment. In some cases, a computer executing one or more steps may be programmed to execute various other, unrelated features, where such computer does not need to be operating strictly as a user computing device or a system server described herein. In a first step602, a user computing device may display on its graphical user interface (GUI) or a monitor screen, an image or a video content. The image and/or the video content may include textual or visual data/information. The screen may be an output device, which displays information such as the image or the video content in pictorial form. The screen may include a display device, circuitry, casing, and power supply. The display device may be a thin film transistor liquid crystal display, light-emitting diode display, or an organic light-emitting diode display. The screen may be connected to the user computing device via VGA, Digital Visual Interface (DVI), HDMI, Display Port, Thunderbolt, low-voltage differential signaling (LVDS) or other proprietary connectors and signals. Initially, when a user computing device is not being operated by any user, a screen of the user computing device may be scrambled or scrambled data/content may be displayed on the screen that is unreadable to a human. The term “scrambled” and “encrypted” may be interchangeably used. In some embodiments, the scrambled data may correspond to jumbled letters, which may not make any sense to the user. For example, the user computing device or a system server may randomly arrange words and letters, and put words or letters in a wrong order so that they do not make sense (even though maintaining the styles and numbers of letters in each word). The scrambling performed is random and may be undone using one or more unscrambling techniques. In some embodiments, scrambled data may correspond to a plurality of segments of an image displayed on a screen such that information within segmented image is unreadable to a human. A user computing device and/or a system server associated with the user computing device may generate and execute software programs and/or algorithms to divide the display screen and/or the image on the screen into multiple segments. Upon dividing the display screen and/or the image on the screen into the multiple segments, in one embodiment, the user computing device and/or the system server may orient each of the segment such that the data in the segmented image is unreadable to the human. 
In another embodiment, upon dividing the display screen and/or the image on the screen into the multiple segments, the user computing device and/or the system server compress each segment such that the data in the segmented image is unreadable to the human. In yet another embodiment, upon dividing the display screen and/or the image on the screen into the multiple segments, the user computing device and/or the system server overturn each segment such that the data in the segmented image is unreadable to the human. In a next step604, a user computing device may receive a request for a wired or wireless connection from a wearable device. The wearable device may be a display device in form of eyeglasses, goggles, or any other structure comprising a frame that supports and incorporates various components of the wearable device, as well as serves as a conduit for electrical and other component connections. A user computing device may transmit a request for the wired or the wireless connection to the wearable device when the wearable device is within a range of the user computing device. Each of the user computing device and the wearable device may include communication components, one or more transmitters, and one or more receivers. In one example, a transmitter of a user computing device may first identify and then transmit a request for connection to a receiver of a wearable device. In another example, a transmitter of a wearable device may first identify and then transmit a request for connection to a transmitter of a user computing device. A transmitter and a receiver may communicate to each other with or without communication components. The communications component may include electromechanical components (e.g., processor, antenna) that allow the communications component to communicate various types of data with the receivers, transmitters, and/or other components of the transmitters. In some implementations, communications signals between the transmitter and the receiver may represent a distinct channel for hosting communications. The data may be communicated using the communications signals, based on predetermined wired or wireless protocols and associated hardware and software technology. The communications component may operate based on any number of communication protocols, such as Bluetooth®, Wireless Fidelity (Wi-Fi), Near-Field Communications (NFC), ZigBee, and others. However, it should be appreciated that the communications component is not limited to these technologies, but may include radar, infrared, and sound devices as well. In a next step606, a user computing device may connect to a wearable device. The computing device may connect to the wearable device, in response to the user computing device determining that a set of purported credentials associated with the wearable device received from the wearable device through communications signals matches a set of credentials authenticating the wearable device that are stored in a system database. For example, after the communication channel between the user computing device and the wearable device is established, then the user computing device may generate a graphical user interface (GUI) on the user computing device containing a credentials prompt requesting a user of the wearable device to input a set of user credentials. 
In some cases, after the communication channel between the user computing device and the wearable device is established, then the user computing device may transmit to the wearable device the GUI containing the credentials prompt. The wearable may then transmit to the user computing device, the set of user credentials, in response to the credentials prompt. The user computing device may then match the set of user credentials received from the wearable device with a set of credentials authenticating the wearable device that are stored in a system database. Once the match is confirmed, then the wearable device and the user computing device may be authenticated and connected. In some embodiments, upon the user computing device receiving the set of user credentials from the wearable device, in response to the credentials prompt, the user computing device may transmit the set of user credentials to a system server, which may be directly or indirectly connected to the user computing device. The system server may then match the set of user credentials received from the wearable device with a set of credentials authenticating the wearable device that are stored in a system database. Once the match is confirmed, the system server may authenticate the wearable device and the user computing device, and connect them to each other. In some embodiments, during operation, a user computing device may receive a request from a wearable device to become a trusted wearable device for allowing a user using the wearable device access to content on a screen of the user computing device. The request may be generated in any suitable manner. For example, the user of the wearable device logs into a secure display application service installed on the user computing device and/or the wearable device where the request is generated. The user may log into the secure display application service by entering username and/or user ID of a user. When the user enters the login details, a request for authorizing the wearable device to become the trusted device may be generated, and then transmitted to a user computing device and/or a system server. Upon the receipt of the request by the user computing device and/or the server, the user computing device and/or the server may implement a series of security protocols in order to verify the wearable device and the user. For instance, in a first layer of security protocol implemented by the user computing device and/or the server, the user computing device and/or the server may generate a security code that may be transmitted to a phone number of a mobile device of the user, and the user may be requested to read and/or enter the code on an user interface of the user computing device. The code may include a secret token, which may be, for example, a globally unique identifier (GUID), such as for example but not limited to a unique string of characters including, but not limited to letters or numbers or both. In another example, the code may also include one or more Uniform Resource Locators (URLs). In some embodiments, the code may be associated with an expiry time. The expiry time may be included in the code. The user may then read and enter the code into an user interface of the user computing device to establish secure connection and synchronization between the user computing device and the wearable device. 
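A minimal sketch of issuing and redeeming the GUID-style security code with an expiry time, as described above, is shown below; the five-minute lifetime, the dictionary layout, and the function names are assumptions for illustration only.

```python
import time
import uuid

CODE_TTL_SECONDS = 300  # assumed five-minute expiry for the security code

def issue_security_code():
    """Generate a one-time security code containing a GUID-style secret token
    and an expiry time, to be sent to the user's mobile device."""
    return {"token": str(uuid.uuid4()), "expires_at": time.time() + CODE_TTL_SECONDS}

def redeem_security_code(entered_token, issued):
    """Accept the code only if it matches the issued token and has not expired;
    success establishes the wearable device as trusted for this user."""
    if time.time() > issued["expires_at"]:
        return "expired"
    return "trusted" if entered_token == issued["token"] else "rejected"

issued = issue_security_code()
print(redeem_security_code(issued["token"], issued))   # trusted
print(redeem_security_code("wrong-code", issued))      # rejected
```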
In a next step 608, once a wearable device and a user computing device are wirelessly connected to each other, the wearable device may determine a decryption technique to unscramble the scrambled data/content displayed on a screen of said user computing device that is unreadable to a human. In some embodiments, a first decryption technique may be applicable to unscramble a first type of scrambled content (for example, jumbled alphabets) and a second decryption technique may be applicable to unscramble a second type of scrambled content (for example, scrambled pixels). Thus, the wearable device may first determine a type of scrambled content displayed on the screen, and upon determining the type of scrambled content, may then identify a decryption technique applicable to the determined type of scrambled content to unscramble the content. Upon identification of the decryption technique applicable for the determined type of scrambled content, the wearable device may execute the decryption technique, which may result in transmission and display of unscrambled content on the wearable device. In alternate embodiments, upon the identification of the decryption technique applicable for the determined type of scrambled content, the wearable device may transmit a notification regarding the identified decryption technique to the user computing device, and the user computing device may then execute the decryption technique, which may result in transmission and display of unscrambled (or decrypted) content on the wearable device. While the user of the wearable device is able to view unscrambled content on the wearable device, the screen of the user computing device will continue to display scrambled content. In some embodiments, once a wearable device and a user computing device are wirelessly connected to each other, the user computing device may execute software programs/algorithms for unscrambling the scrambled data displayed on the screen such that the jumbled alphabets of the image are reconfigured and the information within the image makes sense when the screen of the user computing device is viewed through one or more lenses of the wearable device. In some embodiments, the user computing device may execute software programs/algorithms for unscrambling the scrambled data displayed on the screen such that a plurality of segments of the image are reconfigured to their original arrangement, and the information within the image is readable when the screen of the user computing device is viewed through one or more lenses of the wearable device. In some embodiments, once a wearable device and a user computing device are wirelessly connected to each other, the user computing device may transmit the scrambled data to the wearable device. The user computing device may also transmit configuration information of the plurality of segments of the scrambled data to the wearable device. In response to receipt of the configuration information of the plurality of segments of the scrambled data, a processor of the wearable device reconfigures the plurality of segments of the image to their original arrangement such that the data in the image is readable when viewed through the one or more lenses. In some embodiments, a wearable device may include an imaging sensor, which may receive the scrambled data from the user computing device.
The imaging sensor or a processor of the wearable device may then generate instructions to execute software programs/algorithms to unscramble the scrambled data. Subsequently, the processor of the wearable device may transmit the unscrambled data to the user computer device for display on the screen of the user computing device. In some cases, the processor of the wearable device may transmit the unscrambled data to a system server, and the system server may then transmit the unscrambled data to the user computer device for display on the screen of the user computing device. FIG.7shows execution of a method showing operations of a distributed data processing and display system, according to an exemplary method700. The exemplary method700shown inFIG.7comprises execution steps702,704,706, and708. However, it should be appreciated that other embodiments may comprise additional or alternative execution steps, or may omit one or more steps altogether. It should also be appreciated that other embodiments may perform certain execution steps in a different order; steps may also be performed simultaneously or near-simultaneously with one another. In addition, the exemplary method700ofFIG.7is described as being executed by a single server computer, referred to as a system server in this exemplary embodiment. However, one having skill in the art will appreciate that, in some embodiments, steps may be executed by any number of computing devices operating in a distributed computing environment. In some cases, a computer executing one or more steps may be programmed to execute various other, unrelated features, where such computer does not need to be operating strictly as user computing device or a server described herein. In a first step702, a user computing device may display on its graphical user interface (GUI) or a monitor screen, an image or a video content. The image and/or the video content may include textual or visual data/information. The screen may be an output device, which displays information such as the image or the video content in pictorial form. The screen may include a display device, circuitry, casing, and power supply. The display device may be a thin film transistor liquid crystal display, light-emitting diode display, or an organic light-emitting diode display. The screen may be connected to the user computing device via VGA, Digital Visual Interface (DVI), HDMI, Display Port, Thunderbolt, low-voltage differential signaling (LVDS) or other proprietary connectors and signals. Initially, when a user computing device is not being operated by any user, a screen of the user computing device may be scrambled or scrambled data/content may be displayed on the screen that is unreadable to a human. In some embodiments, the scrambled data may correspond to jumbled letters, which may not make any sense to the user. For example, the user computing device or a system server may randomly arrange words and letters, and put words or letters in a wrong order so that they do not make sense (even though maintaining the styles and numbers of letters in each word). The scrambling performed is random and may be undone using one or more unscrambling techniques. In some embodiments, scrambled data may correspond to a plurality of segments of an image displayed on a screen such that information within segmented image is unreadable by a human. 
A user computing device and/or a system server associated with the user computing device may generate and execute software programs and/or algorithms to divide the display screen and/or the image on the screen into multiple segments. Upon dividing the display screen and/or the image on the screen into the multiple segments, in one embodiment, the user computing device and/or the system server may orient each of the segment such that the data in the segmented image is unreadable by the human. In another embodiment, upon dividing the display screen and/or the image on the screen into the multiple segments, the user computing device and/or the system server compress each segment such that the data in the segmented image is unreadable by the human. In yet another embodiment, upon dividing the display screen and/or the image on the screen into the multiple segments, the user computing device and/or the system server overturn each segment such that the data in the segmented image is unreadable by the human. In a next step704, a user computing device may capture via one or more cameras directly or indirectly associated with the user computing device, a real-time facial image of a user adjacent to a user computing device. In some embodiments, a camera may be a thermal camera, which is configured to capture one or more facial images of a user that will only detect shape of a head of a user and will ignore the user accessories such as glasses, hats, or make up. The cameras may be used to capture a series of exposures to produce the panoramic image within a region of the user computing device. The camera includes a zoom lens for directing image light from a scene toward an image sensor, and a shutter for regulating exposure time. Both the zoom and the shutter are controlled by a microprocessor in response to control signals received from a system server including a shutter release for initiating image capture. A flash unit may be used to illuminate the scene when needed. The image sensor includes a discrete number of photosite elements or pixels arranged in an array to form individual photosites corresponding to the pixels of the image. The image sensor can be either a conventional charge coupled device (CCD) sensor or a complementary metal oxide semiconductor (CMOS) imager. The camera may be operable in a regular mode and a panoramic mode, and in different angles to create a 3D model of an image. In the regular mode, the camera captures and produces individual still digital images in a manner well known to those skilled in the art. In the panoramic mode, the camera captures a series of overlapping digital images to be used in constructing a panoramic image. The memory of the camera stores the instructions for the processor for implementing the panoramic mode. During operation, once images are captured, a user computing device may then determine users within the captured images. The user computing device may implement one or more techniques to identify the users within the captured images. Once the users are identified, then the user computing device may extract face recognition information from a facial image of each user. The face recognition information may correspond to information associated with a shape of a face. In some embodiments, the face recognition information may correspond to features on a surface of a face such as a contour of eye sockets, nose, and chin of a user. 
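Returning to the segment-based scrambling described at the start of this method, the following sketch divides an image into segments and rearranges them under a seeded permutation, with the permutation standing in for the configuration information that an authorized device would use to restore the original arrangement. The tile grid, seed, and function names are illustrative assumptions; an analogous approach applies to the jumbled-letter form of scrambling.

```python
import numpy as np

def split_into_segments(image, rows, cols):
    """Divide a (H, W) image into rows*cols equal segments (H, W assumed divisible)."""
    h, w = image.shape
    return (image.reshape(rows, h // rows, cols, w // cols)
                 .swapaxes(1, 2)
                 .reshape(rows * cols, h // rows, w // cols))

def scramble_segments(image, rows, cols, seed):
    """Randomly rearrange the segments so the displayed image is unreadable.

    The seed stands in for the configuration information shared only with an
    authorized wearable or device."""
    tiles = split_into_segments(image, rows, cols)
    perm = np.random.default_rng(seed).permutation(len(tiles))
    return tiles[perm], perm

def unscramble_segments(scrambled_tiles, perm, rows, cols):
    """Reconfigure the segments to their original arrangement (inverse permutation)."""
    restored = np.empty_like(scrambled_tiles)
    restored[perm] = scrambled_tiles
    th, tw = scrambled_tiles.shape[1:]
    return (restored.reshape(rows, cols, th, tw)
                    .swapaxes(1, 2)
                    .reshape(rows * th, cols * tw))

# Round trip on a toy 4x4 "image" split into four 2x2 segments.
img = np.arange(16).reshape(4, 4)
tiles, perm = scramble_segments(img, 2, 2, seed=7)
assert np.array_equal(unscramble_segments(tiles, perm, 2, 2), img)
print("segments restored to original arrangement")
```

Orienting, compressing, or overturning each segment, as described above, could be layered on top of the same permutation bookkeeping.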
In a next step706, a user computing device may track eye position of a user based on information retrieved from a real-time facial image of a user. The user computing device may execute eye position tracking technologies on the real-time facial image of the user to track eye position of the user. In one example, the user computing device may use an illuminator, a tracking camera, and an image processor to track the eye position of the user. The illuminator, which may be an infrared illuminator, generates an IR beam that illuminates a user's face. The user's eyes may generate a comparatively high level of reflection relative to other features of the user's face, which may be used to distinguish the position of the eyes from those other features. The tracking camera captures the reflected light from the user's cornea. The image processor locates the position of the user's eyes by examining the image captured by the tracking camera. The position of the user's eyes may be determined relative to the other parts of the user's body. In a next step708, a user computing device may determine whether a user is authorized to view readable data on a screen of the user computing device, in response to matching a set of purported identifications associated with a facial image received from cameras with a set of identifications authenticating the user that is stored in a system database. For example, the user computing device may compare and match contour of eye sockets, nose, or chin of a user with a template of face features of known users stored in the database. When there is a match between determined and stored face features, the user is then authenticated, and unscrambled readable data is then displayed on the screen. In some embodiments, a user computing device may monitor a current eye position of an authenticated user, and only when the current eye position of the authenticated user is determined to be in line of sight with the screen, then the user computing device may display unscrambled readable data on the screen. The user computing device may continuously monitor a current eye position of an authenticated user, and when the current eye position of the authenticated user is determined to not be in line of sight with the screen (i.e., the user is not viewing the screen), then the user computing device may display scrambled data on the screen. In some embodiments, a user computing device may monitor a head position of an authenticated user, and only when the head position of the authenticated user is determined to be in line of sight with the screen, then the user computing device may display unscrambled readable data on the screen. The user computing device may continuously monitor a current head position of an authenticated user, and when the current head position of the authenticated user is determined to not be in line of sight with the screen (i.e., the head of user is not towards the screen), then the user computing device may display scrambled data on the screen. In some embodiments, a user computing device may monitor a current eye position and a head position of an authenticated user, and only when the current eye position and the head position of the authenticated user is determined to be in line of sight with the screen, then the user computing device may display unscrambled readable data on the screen. 
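The gating of readable content on the monitored eye position and head position, as described above, reduces to a simple check such as the following sketch; the boolean inputs stand in for the tracking results and are assumptions for illustration.

```python
def content_state(authenticated, eye_on_screen, head_toward_screen,
                  require_head=True):
    """Return which content the screen should show for the monitored user.

    Unscrambled, readable data is displayed only while the authenticated user's
    eye position (and, if required, head position) is in line of sight with the
    screen; otherwise scrambled data is displayed.
    """
    if not authenticated:
        return "scrambled"
    in_line_of_sight = eye_on_screen and (head_toward_screen or not require_head)
    return "unscrambled" if in_line_of_sight else "scrambled"

# Re-evaluated on every tracking update from the cameras/sensors.
print(content_state(True, eye_on_screen=True, head_toward_screen=True))    # unscrambled
print(content_state(True, eye_on_screen=False, head_toward_screen=True))   # scrambled
```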
The user computing device may continuously monitor a current eye position and head position of an authenticated user, and when the current eye position and head position of the authenticated user is determined to not be in line of sight with the screen (i.e., the user is not viewing the screen), then the user computing device may display scrambled data on the screen. FIG.8shows execution of a method showing operations of a distributed data processing and display system, according to an exemplary method800. The exemplary method800shown inFIG.8comprises execution steps802,804,806, and808. However, it should be appreciated that other embodiments may comprise additional or alternative execution steps, or may omit one or more steps altogether. It should also be appreciated that other embodiments may perform certain execution steps in a different order; steps may also be performed simultaneously or near-simultaneously with one another. In addition, the exemplary method800ofFIG.8is described as being executed by a single server computer, referred to as a system server in this exemplary embodiment. However, one having skill in the art will appreciate that, in some embodiments, steps may be executed by any number of computing devices operating in a distributed computing environment. In some cases, a computer executing one or more steps may be programmed to execute various other, unrelated features, where such computer does not need to be operating strictly as user computing device or a server described herein. In a first step802, a user computing device may display on its graphical user interface (GUI) or a monitor screen, an image or a video content. The image and/or the video content may include textual or visual data/information. The screen may be an output device, which displays information such as the image or the video content in pictorial form. The screen may include a display device, circuitry, casing, and power supply. The display device may be a thin film transistor liquid crystal display, light-emitting diode display, or an organic light-emitting diode display. The screen may be connected to the user computing device via VGA, Digital Visual Interface (DVI), HDMI, Display Port, Thunderbolt, low-voltage differential signaling (LVDS) or other proprietary connectors and signals. Initially, when a user computing device is not being operated by any user, a screen of the user computing device may be scrambled or scrambled data/content may be displayed on the screen that is unreadable by a human. In some embodiments, the scrambled data may correspond to jumbled letters, which may not make any sense to the user. For example, the user computing device or a system server may randomly arrange words and letters, and put words or letters in a wrong order so that they do not make sense (even though maintaining the styles and numbers of letters in each word). The scrambling performed is random and may be undone using one or more unscrambling techniques. In some embodiments, scrambled data may correspond to a plurality of segments of an image displayed on a screen such that information within segmented image is unreadable by a human. A user computing device and/or a system server associated with the user computing device may generate and execute software programs and/or algorithms to divide the display screen and/or the image on the screen into multiple segments. 
Upon dividing the display screen and/or the image on the screen into the multiple segments, in one embodiment, the user computing device and/or the system server may orient each of the segment such that the data in the segmented image is unreadable by the human. In another embodiment, upon dividing the display screen and/or the image on the screen into the multiple segments, the user computing device and/or the system server compress each segment such that the data in the segmented image is unreadable by the human. In yet another embodiment, upon dividing the display screen and/or the image on the screen into the multiple segments, the user computing device and/or the system server overturn each segment such that the data in the segmented image is unreadable by the human. In a next step804, a user computing device, via one or more image sensors, associated with the user computing device may capture at least a portion of a face of a user adjacent to the user computing device. The image sensor may be used to capture a series of exposures to produce the panoramic image within a region of the image sensor. The image sensor may analyze portion of the face to identify biometric and facial features of the user such as shape of face, shape of eyes, shape of nose, and shape of other parts of the face. In a next step806, a user computing device may determine whether a user is authorized to view data on a screen of the user computing device, in response to matching a set of purported identifications associated with a portion of a face received from imaging sensors with a set of identifications authenticating the user that is stored in a system database. In one example, a user computing device may compare and match biometric features of a user with a template of biometric features of known users stored in the database. When there is a match between determined and stored biometric features, the user is then authenticated. In another example, the user computing device may compare and match contour of eye sockets, nose, or chin of a user with a template of such face features of known users stored in the database. When there is a match between determined and stored face features, the user is then authenticated. In a next step808, a user computing device may execute software programs and/or algorithms to unlock a screen, unscramble a screen, unscramble scrambled content on a screen such that content is readable by a human, and/or display sensitive data on the screen. In one example, upon the execution of the software programs and/or algorithms, a plurality of segments of segmented and scrambled image are reconfigured to make information within the unscrambled image readable. FIG.9shows execution of a method showing operations of a distributed data processing and display system, according to an exemplary method900. The exemplary method900shown inFIG.9comprises execution steps902,904,906,908,910, and912. However, it should be appreciated that other embodiments may comprise additional or alternative execution steps, or may omit one or more steps altogether. It should also be appreciated that other embodiments may perform certain execution steps in a different order; steps may also be performed simultaneously or near-simultaneously with one another. In addition, the exemplary method900ofFIG.9is described as being executed by a single server computer, referred to as a system server in this exemplary embodiment. 
However, one having skill in the art will appreciate that, in some embodiments, steps may be executed by any number of computing devices operating in a distributed computing environment. In some cases, a computer executing one or more steps may be programmed to execute various other, unrelated features, where such computer does not need to be operating strictly as user computing device or a server described herein. In a first step902, a user computing device may display on its graphical user interface (GUI) or a monitor screen, an image or a video content. The image and/or the video content may include textual or visual data/information. The screen may be an output device, which displays information such as the image or the video content in pictorial form. The screen may include a display device, circuitry, casing, and power supply. The display device may be a thin film transistor liquid crystal display, light-emitting diode display, or an organic light-emitting diode display. The screen may be connected to the user computing device via VGA, Digital Visual Interface (DVI), HDMI, Display Port, Thunderbolt, low-voltage differential signaling (LVDS) or other proprietary connectors and signals. Initially, when a user computing device is not being operated by any user, a screen of the user computing device may be scrambled or scrambled data/content may be displayed on the screen that is unreadable by a human. In some embodiments, the scrambled data may correspond to jumbled letters, which may not make any sense to the user. For example, the user computing device or a system server may randomly arrange words and letters, and put words or letters in a wrong order so that they do not make sense (even though maintaining the styles and numbers of letters in each word). The scrambling performed is random and may be undone using one or more unscrambling techniques. In some embodiments, scrambled data may correspond to a plurality of segments of an image displayed on a screen such that information within segmented image is unreadable by a human. A user computing device and/or a system server associated with the user computing device may generate and execute software programs and/or algorithms to divide the display screen and/or the image on the screen into multiple segments. Upon dividing the display screen and/or the image on the screen into the multiple segments, in one embodiment, the user computing device and/or the system server may orient each of the segment such that the data in the segmented image is unreadable by the human. In another embodiment, upon dividing the display screen and/or the image on the screen into the multiple segments, the user computing device and/or the system server compress each segment such that the data in the segmented image is unreadable by the human. In yet another embodiment, upon dividing the display screen and/or the image on the screen into the multiple segments, the user computing device and/or the system server overturn each segment such that the data in the segmented image is unreadable by the human. In a next step904, a user computing device may capture via one or more cameras associated with the user computing device, a real-time facial image of a first user adjacent to a user computing device. For example, a camera may be installed on the user computing device and is an optical instrument for recording or capturing images within an area, which may be stored locally, transmitted to another location, or both. 
The images may be individual still photographs or sequences of images constituting videos or movies of objects and users within the area. The images captured from the camera are fed to a processor of a user computing device or a system server which segregates the images (based on content within it) and normalize the images. While processing the images captured within the area, the processor of the user computing device may employ face recognition technology for processing the normalized image. The face recognition technology may use pattern recognition and facial expression analysis to recognize first user captured within the images. In one method, the face recognition technology may detect facial area within the images using a neural network. In another method, the face recognition technology may detect facial area within the images using statistical features of facial brightness, which may be a principal component analysis of brightness within the captured images. In a next step906, a user computing device may determine whether a first user is authorized to view image data on the screen, in response to matching a set of purported identifications associated with the facial image of the first user received from the one or more sensors with a set of identifications authenticating the first user that is stored in a system database. The set of purported identifications associated with the facial image of the first user comprises face recognition information. The face recognition information may correspond to information associated with a shape of the face. In some embodiments, the face recognition information may correspond to features on a surface of a face such as a contour of eye sockets, nose, and chin of a user. The user computing device may compare and match extracted face features of the user with a template of face features of known users stored in a database. When there is a match between determined and stored face features, the user is then authenticated. In a next step908, a user computing device may execute software programs and/or algorithms to unlock a screen, unscramble a screen, unscramble scrambled content on a screen such that content is readable from a naked eye of a first user, and/or display sensitive data on the screen. In one example, upon the execution of the software programs and/or algorithms, a plurality of segments of segmented and scrambled image are reconfigured to make information within the unscrambled image readable to the first user. In a next step910, a user computing device upon processing of images captured by one or more sensors and/or cameras may detect a second user with operation area of the one or more sensors and/or cameras. In some cases, when the user computing device determines that there exists the second user within the area of operation of the camera and/or sensor, then the user computing device may determine authorization and authorization status of the user. In some cases, when the user computing device determines that there exists the second user within the area of operation of the camera and/or sensor, then the user computing device may determine whether the second user is in line of sight of a screen. In some cases, when the user computing device determines that there exists the second user within the area of operation of the camera and/or sensor, then the user computing device may determine authorization and authorization status of the user as well as whether the second user is in line of sight of a screen. 
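One possible sketch of the template-matching comparison described for step 906 is shown below; the feature-extraction step is assumed to exist elsewhere (for example, a face-recognition model that returns a fixed-length feature vector for the captured face), and the cosine-similarity measure and the threshold value are illustrative assumptions rather than requirements of the method.

    from typing import Optional
    import numpy as np

    def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
        # Cosine similarity between two feature vectors (1.0 means identical direction).
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

    def match_face(features: np.ndarray,
                   templates: dict,
                   threshold: float = 0.9) -> Optional[str]:
        # Compare the purported identification (the extracted feature vector)
        # against every stored template and return the best-matching known user,
        # or None if no template is similar enough to authenticate the user.
        best_user, best_score = None, threshold
        for user_id, template in templates.items():
            score = cosine_similarity(features, template)
            if score >= best_score:
                best_user, best_score = user_id, score
        return best_user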
In operation, to determine whether a second user is in line of sight of a screen, a user computing device or a system server may determine a location of the second user. The user computing device or the system server may use one or more motion sensors directly or indirectly associated with the user computing device or the system server to determine exact location of the second user. The user computing device or the system server may use one or more location sensors directly or indirectly associated with the user computing device or the system server to determine exact location of the second user. The one or more location sensors may detect the actual location of the second user by generating an electromagnetic beam, such as an infrared or laser beam, and analyzing reflections from the electromagnetic beam to determine the position of the second user based on the reflections. In some embodiments, any suitable location determination technique may be used by the user computing device or the system server to determine the exact location of the second user within the area. The user computing device or the system server upon determining the location of the second user may further determine whether a screen of the user computing device is within viewable range of the second user based on eye position and/or head position of the second user. The user computing device may determine whether the screen is within the viewable range of the second user depending on whether there is an unobstructed line of sight between one or both of the second user eyes and the screen. In some embodiments, whether a screen of the user computing device is within viewable range of the second user may also depend on the distance between the second user eyes and the screen. In some embodiments, whether a screen of the user computing device is within viewable range of the second user may also depend on the distance between the second user and the screen. In a next step912, a user computing device or a system server upon identifying that the second user is within the viewable range of a screen may generate and execute software programs to lock the screen, scramble the screen, scramble content on the screen such that content is not readable from a naked eye of the second user, and/or hide sensitive data displayed on the screen (and only display insensitive data). The user computing device or the system server may continuously monitor the location and/or movement of the second user, and upon identifying that the second user has moved away from the viewable range of the screen, may generate and execute software programs to unlock the screen, unscramble the screen, unscramble the content on the screen such that content is readable from a naked eye, and/or display sensitive data displayed on the screen. FIG.10shows execution of a method showing operations of a distributed data processing and display system, according to an exemplary method1000. The exemplary method1000shown inFIG.10comprises execution steps1002,1004,1006,1008, and1010. However, it should be appreciated that other embodiments may comprise additional or alternative execution steps, or may omit one or more steps altogether. It should also be appreciated that other embodiments may perform certain execution steps in a different order; steps may also be performed simultaneously or near-simultaneously with one another. 
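As a rough sketch of the viewable-range determination used in step 912, the check below combines a distance threshold with a field-of-view cone around the second user's gaze direction; the threshold values and the simplified geometry (obstructions between the eyes and the screen are ignored) are assumptions for illustration only.

    import math

    def screen_in_viewable_range(user_pos, gaze_dir, screen_pos,
                                 max_distance=3.0, half_fov_deg=45.0):
        # The screen is treated as viewable when the second user is close enough
        # and the screen lies within a cone around the user's gaze direction.
        to_screen = [s - u for s, u in zip(screen_pos, user_pos)]
        distance = math.sqrt(sum(c * c for c in to_screen))
        if distance == 0.0 or distance > max_distance:
            return False
        gaze_norm = math.sqrt(sum(c * c for c in gaze_dir))
        cos_angle = sum(a * b for a, b in zip(to_screen, gaze_dir)) / (distance * gaze_norm)
        angle = math.degrees(math.acos(max(-1.0, min(1.0, cos_angle))))
        return angle <= half_fov_deg

    # Example: a bystander two meters away, looking roughly toward the screen.
    if screen_in_viewable_range((2.0, 0.0, 0.0), (-1.0, 0.0, 0.0), (0.0, 0.0, 0.0)):
        print("scramble the screen")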
In addition, the exemplary method1000ofFIG.10is described as being executed by a single server computer, referred to as a system server in this exemplary embodiment. However, one having skill in the art will appreciate that, in some embodiments, steps may be executed by any number of computing devices operating in a distributed computing environment. In some cases, a computer executing one or more steps may be programmed to execute various other, unrelated features, where such computer does not need to be operating strictly as user computing device or a server described herein. In a first step1002, a server may store records of pulse waveform data collected from known users in a database. The pulse waveform data may correspond to measurement of a pulse waveform transit time, blood pressure, respiratory rate, oxygen saturation, and stroke volume in the user. In some embodiments, the server may receive via one or more pulse sensors, the pulse waveform data collected from one or more measurement positions of a known user. In some embodiments, the server may receive via the one or more pulse sensors, the pulse waveform data collected from the one or more measurement positions of the known user while wearing eyeglasses. In some embodiments, the server may receive via the one or more pulse sensors, the pulse waveform data collected from the one or more measurement positions of the known user while wearing any appropriate wearable device. The one or more measurement positions may include a temple pulse position, a hand pulse position, an eye pulse position, a neck pulse position, or the like. The pulse waveform data is collected from the one or more measurement positions at one or more points on a body of the known user. The one or more points may include one or more blood vessel points of the known user. The one or more pulse sensors may be electronic devices for detecting the pulse wave of a user from reflected light or transmitted light by irradiating the site of a blood vessel with light having an infrared or near infrared range. In some embodiments, the pulse wave sensor may comprise a pair of a light emitting diode (LED) and a phototransistor (photo detector) is attached to a portion of a body to measure the heart rate by calculating the cycle (frequency) of pulse waves from the waveform of reflected light or transmitted light detected by the above photo detector. In some embodiments, a pulse sensor may be a piezoelectric sensor. The piezoelectric sensor may be a capacitive electromechanical transducer that generate electrical charge in proportion to applied stress. The piezoelectric sensor may generate an electrical signal that is proportional to the force caused by blood flow (pulse) in the area of the artery or other areas of the body where a pulse could be detected. The piezoelectric sensor may not be mechanically clamped at their periphery, and may be sensitive to longitudinal stress. Although the piezoelectric sensor material is somewhat sensitive to stress applied normal to its thickness and width, the piezoelectric sensor may be designed to be most sensitive to stresses applied normal to its length. In a next step1004, a server may store records of biometric pulse signatures characterizing pulse waveform data collected from known users in a database. The biometric pulse signature is unique for each known user, and may be used to uniquely identify and authenticate a known user. 
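The calculation of the cycle (frequency) of pulse waves from the reflected- or transmitted-light waveform mentioned above can be sketched as follows; the sampling rate, the assumed heart-rate band, and the FFT-based peak search are assumptions for illustration, and the sketch presumes at least a few seconds of samples so that the band contains at least one frequency bin.

    import numpy as np

    def estimate_heart_rate(samples: np.ndarray, sample_rate_hz: float) -> float:
        # Estimate the pulse cycle (frequency) from a sampled pulse waveform by
        # locating the dominant spectral peak inside a plausible heart-rate band.
        samples = samples - np.mean(samples)              # remove the DC component
        spectrum = np.abs(np.fft.rfft(samples))
        freqs = np.fft.rfftfreq(len(samples), d=1.0 / sample_rate_hz)
        band = (freqs >= 0.7) & (freqs <= 3.5)            # roughly 42 to 210 beats per minute
        dominant = freqs[band][np.argmax(spectrum[band])]
        return float(dominant * 60.0)                     # beats per minute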
In some embodiments, a server may generate biometric pulse signatures characterizing pulse waveform data identifying known users wearing eyeglasses. In some embodiments, the server may generate biometric pulse signatures characterizing pulse waveform data identifying known users wearing any suitable wearable device. The biometric pulse signature generated for each known user is unique for each known user, and may be used to uniquely identify and authenticate a known user. The biometric pulse signature associated with the known user wearing the eyeglasses or any wearable device may be stored in the database comprising a non-transitory machine readable storage medium configured to store a plurality of biometric pulse signatures associated with a plurality of known users. Each of the plurality of biometric pulse signatures associated with the plurality of known users may be refined over time, for example, by collecting known user pulse data repeatedly and thereby updating known user's biometric signature. In a next step1006, a server may receive via one or more pulse sensors, pulse waveform data collected from one or more measurement positions of a new and unknown user (a candidate user). A pulse sensor may be an electronic device configured for detecting the pulse wave of the candidate user from reflected light or transmitted light by irradiating the site of a blood vessel with light having an infrared or near infrared range. The pulse wave sensor may comprise a pair of a light emitting diode (LED) and a phototransistor (photo detector) is attached to a portion of a candidate body to measure the heart rate by calculating the cycle (frequency) of pulse waves from the waveform of reflected light or transmitted light detected by the above photo detector. A server may receive via the one or more pulse sensors, the pulse waveform data collected from the one or more measurement positions of the new and unknown user (a candidate user) who is wearing eyeglasses. In some embodiments, the server may receive via the one or more pulse sensors, the pulse waveform data collected from the one or more measurement positions of the new and unknown user (a candidate user) wearing any suitable wearable device. The one or more measurement positions may include a temple pulse position, a hand pulse position, an eye pulse position, a neck pulse position, or the like. The pulse waveform data is collected from the one or more measurement positions at one or more points on a body of the candidate user. The one or more points may include one or more blood vessel points of the candidate user. In a next step1008, a server may initiate a process to authenticate a new user (a candidate user), in response to the server determining the pulse waveform data associated with the new user matches at least one biometric pulse signature of the plurality of biometric pulse signatures stored in the system database. Initially, a server may generate a biometric pulse signature characterizing pulse waveform data identifying a new user wearing eyeglasses. In some embodiments, the server may generate a biometric pulse signature characterizing pulse waveform data identifying a new user wearing any suitable wearable device. The biometric pulse signature generated for new user is unique for the new user, and may be used to uniquely identify and authenticate the new user. 
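A minimal sketch of the signature generation and matching described in steps 1006 and 1008 is given below; the particular waveform features used to form the signature and the distance tolerance are assumptions (the embodiments do not prescribe a specific feature set), and the stored signatures are assumed to have been produced in the same way from the known users' pulse waveform data.

    from typing import Optional
    import numpy as np

    def pulse_signature(waveform: np.ndarray) -> np.ndarray:
        # One possible (assumed) signature: a small, normalized feature vector
        # summarizing the shape of the pulse waveform at a measurement position.
        w = (waveform - waveform.mean()) / (waveform.std() + 1e-12)
        return np.array([w.max(), w.min(),
                         np.percentile(w, 75) - np.percentile(w, 25),
                         float(np.mean(np.abs(np.diff(w))))])

    def authenticate_by_pulse(candidate_waveform: np.ndarray,
                              known_signatures: dict,
                              tolerance: float = 0.5) -> Optional[str]:
        # Compare the candidate user's signature with every stored signature and
        # return the matching known user, or None if no signature is close enough.
        sig = pulse_signature(candidate_waveform)
        best_user, best_dist = None, tolerance
        for user_id, stored in known_signatures.items():
            dist = float(np.linalg.norm(sig - stored))
            if dist <= best_dist:
                best_user, best_dist = user_id, dist
        return best_user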
For instance, the server may use the new user pulse data and/or biometric pulse signature to determine whether the new user pulse data and/or the biometric pulse signature matches any known user records stored in a database. For example, the server may compare the biometric pulse signature of the new user to the known users' signatures in order to identify the new user. In a next step 1010, if the new user is identified using the user pulse data records of the new user, that is, if the currently detected user pulse data (such as the biometric pulse signature) of the new user is similar to known user pulse data, the new user may be granted access to a user computing device. A server may execute software programs/algorithms for unscrambling scrambled data displayed on a screen of a user computing device. For instance, the execution of the software programs/algorithms by the server may result in reconfiguration of the jumbled letters of the image such that information within the image makes sense when the screen of the user computing device is viewed through one or more lenses of the wearable device. In some embodiments, the server may execute software programs/algorithms for unscrambling the scrambled data displayed on the screen such that a plurality of segments of the image are reconfigured to their original arrangement, and information within the image is readable when the screen of the user computing device is viewed through one or more lenses of the wearable device. In some embodiments, a server may transmit scrambled data from a user computing device to a wearable device. The server may also transmit configuration information of a plurality of segments of the scrambled data to the wearable device. In response to receipt of the configuration information of the plurality of segments of the scrambled data, a processor of the wearable device causes the configuration of the plurality of segments to be such that the plurality of segments of the image are reconfigured to their original arrangement, and the data in the image is readable by viewing through the one or more lenses of the eyeglasses. In some embodiments, a wearable device may include an imaging sensor, which may receive the scrambled data from a server via a user computing device. The imaging sensor or a processor of the wearable device may then generate instructions to execute software programs/algorithms to unscramble the scrambled data. Subsequently, the processor of the wearable device may transmit the unscrambled data to the user computing device for display on the screen of the user computing device. In some cases, the processor of the wearable device may transmit the unscrambled data to a system server, and the system server may then transmit the unscrambled data to the user computing device for display on the screen of the user computing device. FIG. 11 shows execution of a method showing operations of a distributed data processing and display system, according to an exemplary method 1100. The exemplary method 1100 shown in FIG. 11 comprises execution steps 1102, 1104, 1106, 1108, 1110, and 1112. However, it should be appreciated that other embodiments may comprise additional or alternative execution steps, or may omit one or more steps altogether. It should also be appreciated that other embodiments may perform certain execution steps in a different order; steps may also be performed simultaneously or near-simultaneously with one another.
In addition, the exemplary method1100ofFIG.11is described as being executed by a single server computer, referred to as a system server in this exemplary embodiment. However, one having skill in the art will appreciate that, in some embodiments, steps may be executed by any number of computing devices operating in a distributed computing environment. In some cases, a computer executing one or more steps may be programmed to execute various other, unrelated features, where such computer does not need to be operating strictly as user computing device or a server described herein. In a first step1102, a server may receive via one or more pulse sensors, pulse waveform data collected from one or more measurement positions of a new user (candidate user). The pulse waveform data may correspond to measurement of a pulse waveform transit time, blood pressure, respiratory rate, oxygen saturation, and stroke volume in the user. In some embodiments, the server may receive via the one or more pulse sensors, the pulse waveform data collected from the one or more measurement positions of the new user while wearing eyeglasses. In some embodiments, the server may receive via the one or more pulse sensors, the pulse waveform data collected from the one or more measurement positions of the new user while wearing any appropriate wearable device. The one or more measurement positions may include a temple pulse position, a hand pulse position, an eye pulse position, a neck pulse position, or the like. The pulse waveform data is collected from the one or more measurement positions at one or more points on a body of the new user. The one or more points may include one or more blood vessel points of the new user. The one or more pulse sensors may be electronic devices for detecting the pulse wave of a user from reflected light or transmitted light by irradiating the site of a blood vessel with light having an infrared or near infrared range. In some embodiments, the pulse wave sensor may comprise a pair of a light emitting diode (LED) and a phototransistor (photo detector) is attached to a portion of a body to measure the heart rate by calculating the cycle (frequency) of pulse waves from the waveform of reflected light or transmitted light detected by the above photo detector. In some embodiments, a pulse sensor may be a piezoelectric sensor. The piezoelectric sensor may be a capacitive electromechanical transducer that generate electrical charge in proportion to applied stress. The piezoelectric sensor may generate an electrical signal that is proportional to the force caused by blood flow (pulse) in the area of the artery or other areas of the body where a pulse could be detected. The piezoelectric sensor may not be mechanically clamped at their periphery, and may be sensitive to longitudinal stress. Although the piezoelectric sensor material is somewhat sensitive to stress applied normal to its thickness and width, the piezoelectric sensor may be designed to be most sensitive to stresses applied normal to its length. In a next step1104, a server may generate a biometric pulse signature characterizing pulse waveform data identifying a new user wearing eyeglasses. In some embodiments, the server may generate the biometric pulse signature characterizing pulse waveform data identifying the new user wearing any suitable wearable device. The biometric pulse signature generated for the new user is unique for the new user, and may be used to uniquely identify and authenticate the new user. 
In a next step 1106, a server may authenticate a new user (a candidate user), in response to the server determining that the pulse waveform data associated with the new user matches at least one biometric pulse signature of the plurality of biometric pulse signatures stored in the system database. For instance, a server may use the new user pulse data and/or biometric pulse signature to determine whether the new user pulse data and/or the biometric pulse signature matches any known user records stored in a database comprising a non-transitory machine-readable storage medium configured to store a plurality of biometric pulse signatures associated with a plurality of known users. Each of the plurality of biometric pulse signatures associated with the plurality of known users may be refined over time, for example, by collecting known user pulse data repeatedly and thereby updating the known user's biometric signature. In response to determining that the biometric pulse signature associated with the new user matches at least one biometric pulse signature of a known user stored in the system database, the server identifies and authenticates the new user. In a next step 1108, if the new user is identified using the user pulse data records of the new user, that is, if the currently detected user pulse data (such as the biometric pulse signature) of the new user is similar to known user pulse data, a server may grant the new user access to unscrambled content on a user computing device. In some embodiments, a server may establish a wireless connection (such as a Bluetooth connection) between the user computing device and the wearable device after the authentication of the new user. In some embodiments, a wireless connection between the user computing device and the wearable device may be present before the authentication of the new user. EXAMPLES A server may execute software programs/algorithms for unscrambling scrambled data displayed on a screen of a user computing device. For instance, the execution of the software programs/algorithms by the server may result in reconfiguration of the jumbled letters of the image such that information within the image makes sense when the screen of the user computing device is viewed through one or more lenses of the wearable device. In some embodiments, the server may execute software programs/algorithms for unscrambling the scrambled data displayed on the screen such that a plurality of segments of the image are reconfigured to their original arrangement, and information within the image is readable when the screen of the user computing device is viewed through one or more lenses of the wearable device. In some embodiments, a server may transmit scrambled data from a user computing device to a wearable device. The server may also transmit configuration information of a plurality of segments of the scrambled data to the wearable device. In response to receipt of the configuration information of the plurality of segments of the scrambled data, a processor of the wearable device causes the configuration of the plurality of segments to be such that the plurality of segments of the image are reconfigured to their original arrangement, and the data in the image is readable by viewing through the one or more lenses of the eyeglasses. In some embodiments, a wearable device may include an imaging sensor, which may receive the scrambled data from a server via a user computing device.
The imaging sensor or a processor of the wearable device may then generate instructions to execute software programs/algorithms to unscramble the scrambled data. Subsequently, the processor of the wearable device may transmit the unscrambled data to the user computing device for display on the screen of the user computing device. In some cases, the processor of the wearable device may transmit the unscrambled data to a system server, and the system server may then transmit the unscrambled data to the user computing device for display on the screen of the user computing device. In a next step 1110, a server may detect, via one or more motion sensors/detectors, a movement of the new user or the wearable device relative to the user computing device. The one or more motion sensors/detectors may be connected to the user computing device or the wearable device, or may be located at any place in a room where the user computing device is situated. In some embodiments, an electronic motion detector contains an optical, microwave, or acoustic sensor. The changes in the optical, microwave, or acoustic field in the device's proximity are interpreted by the electronics based on one of the sensor technologies. For example, an ultrasonic transducer emits an ultrasonic wave (sound at a frequency higher than a human ear can hear) and receives reflections from nearby new users. Similar to Doppler radar, detection of the received field indicates motion of the new user. The detected Doppler shift is also at low audio frequencies (for walking speeds of the new user) since the ultrasonic wavelength of around a centimeter is similar to the wavelengths used in microwave motion detectors. In another example, infrared sensors may be used, which are sensitive to a user's skin temperature through emitted black-body radiation at mid-infrared wavelengths, in contrast to background objects at room temperature. The emitted black-body radiation may be used to determine movement of the new user. In yet another example, a camera may be used to detect motion of a new user in its field of view using software. The camera may be configured to record video triggered by motion detection of the user. Since the observed field may be normally illuminated, use of a camera sensor may be considered a passive technology. However, it can also be used together with near-infrared illumination to detect motion of a user in the dark, that is, with the illumination at a wavelength undetectable by the human eye. In a next step 1112, a server, upon detecting a movement of the new user or the wearable device, may then compare a current location of the new user or the wearable device with respect to the user computing device. Upon determining that a distance between the current location of the new user or the wearable device and the user computing device is more than a pre-defined threshold range (such as a Bluetooth range), the server may switch off the connection between the user computing device and the wearable device. In some embodiments, an employer of a user operating a user computing device may determine a value of the pre-defined threshold range. The server may also generate and execute instructions to display a scrambled screen on the user computing device such that the data on the scrambled screen is not readable by any user. For example, a server may execute software programs/algorithms for scrambling unscrambled data displayed on a screen of a user computing device.
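The threshold comparison of step 1112 can be sketched as follows before the scrambling itself is described further; the threshold value (shown here as an assumed Bluetooth-like range) and the returned screen states are illustrative assumptions only.

    import math

    BLUETOOTH_RANGE_METERS = 10.0   # assumed value of the pre-defined threshold range

    def handle_movement(wearable_pos, device_pos, authenticated: bool):
        # Compare the wearable's current location with the user computing device
        # and decide whether to keep the connection or drop it and scramble the screen.
        distance = math.dist(wearable_pos, device_pos)
        if distance > BLUETOOTH_RANGE_METERS:
            return {"connected": False, "screen": "scrambled"}
        # Within range: per the method, the user is re-authenticated before the
        # screen is unscrambled again.
        return {"connected": True,
                "screen": "unscrambled" if authenticated else "scrambled"}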
For instance, the execution of the software programs/algorithms by the server may result in configuration of jumbled alphabets of the image such that information within the image does not make sense when the screen of the user computing device is viewed by any user. In some embodiments, the server may execute software programs/algorithms for scrambling the unscrambled data displayed on the screen such that a plurality of segments of the image are configured to an arrangement where information within the image is not readable when view by a human or when the screen of the user computing device is viewed through one or more lenses of the wearable device. A server may continuously monitor location and/or movement of the user, and upon identifying that the user has moved within the pre-defined threshold range, may generate and execute software programs to again authenticate the user, unlock the screen, unscramble the screen, unscramble the content on the screen such that content is readable from a human, and/or display sensitive data displayed on the screen. FIG.12shows execution of a method showing operations of a distributed data processing and display system, according to an exemplary method1200. The exemplary method1200shown inFIG.12comprises execution steps1202,1204,1206,1208, and1210. However, it should be appreciated that other embodiments may comprise additional or alternative execution steps, or may omit one or more steps altogether. It should also be appreciated that other embodiments may perform certain execution steps in a different order; steps may also be performed simultaneously or near-simultaneously with one another. In addition, the exemplary method1200ofFIG.12is described as being executed by a single server computer, referred to as a system server in this exemplary embodiment. However, one having skill in the art will appreciate that, in some embodiments, steps may be executed by any number of computing devices operating in a distributed computing environment. In some cases, a computer executing one or more steps may be programmed to execute various other, unrelated features, where such computer does not need to be operating strictly as a user computing device or a system server described herein. In a first step1202, a server and/or a user computing device may display on its graphical user interface (GUI) or a monitor screen, an image or a video content. The image and/or the video content may include textual or visual data/information. The screen may be an output device, which displays information such as the image or the video content in pictorial form. Initially, when a user computing device is not being operated by any user, a screen of the user computing device may be scrambled or scrambled data/content may be displayed on the screen that is unreadable by a human. In some embodiments, the scrambled data may correspond to a shadow around one or more fonts in text data of the image or video content such that the text data become unreadable by the human. In some embodiments, the scrambled data may correspond to jumbled letters, which may not make any sense to the user. For example, the user computing device or a system server may randomly arrange words and letters, and put words or letters in a wrong order so that they do not make sense (even though maintaining the styles and numbers of letters in each word). The scrambling performed is random and may be undone using one or more unscrambling techniques. 
In some embodiments, scrambled data may correspond to a plurality of segments of an image displayed on a screen such that information within segmented image is unreadable by a human. A user computing device and/or a system server associated with the user computing device may generate and execute software programs and/or algorithms to divide the display screen and/or the image on the screen into multiple segments. Upon dividing the display screen and/or the image on the screen into the multiple segments, in one embodiment, the user computing device and/or the system server may orient each of the segment such that the data in the segmented image is unreadable by the human. In another embodiment, upon dividing the display screen and/or the image on the screen into the multiple segments, the user computing device and/or the system server compress each segment such that the data in the segmented image is unreadable by the human. In yet another embodiment, upon dividing the display screen and/or the image on the screen into the multiple segments, the user computing device and/or the system server overturn each segment such that the data in the segmented image is unreadable by the human. In a next step1204, a server and/or a user computing device may receive a request for a wired or wireless connection from a wearable device. The wearable device may be a display device in form of eyeglasses, goggles, or any other structure comprising a frame that supports and incorporates various components of the wearable device, as well as serves as a conduit for electrical and other component connections. A user computing device may transmit a request for the wired or the wireless connection to the wearable device when the wearable device is within a range of the user computing device. Each of the user computing device and the wearable device may include communication components, one or more transmitters, and one or more receivers. In one example, a transmitter of a user computing device may first identify and then transmit a request for connection to a receiver of a wearable device. In another example, a transmitter of a wearable device may first identify and then transmit a request for connection to a transmitter of a user computing device. A transmitter and a receiver may communicate to each other with or without communication components. The communications component may include electromechanical components (e.g., processor, antenna) that allow the communications component to communicate various types of data with the receivers, transmitters, and/or other components of the transmitters. In some implementations, communications signals between the transmitter and the receiver may represent a distinct channel for hosting communications. The data may be communicated using the communications signals, based on predetermined wired or wireless protocols and associated hardware and software technology. The communications component may operate based on any number of communication protocols, such as Bluetooth®, Wireless Fidelity (Wi-Fi), Near-Field Communications (NFC), ZigBee, and others. However, it should be appreciated that the communications component is not limited to these technologies, but may include radar, infrared, and sound devices as well. In a next step1206, a server may connect a user computing device to a wearable device. 
The user computing device may connect to the wearable device, in response to the user computing device determining that a set of purported credentials associated with the wearable device received from the wearable device through communications signals matches a set of credentials authenticating the wearable device that are stored in a system database. For example, after the communication channel between the user computing device and the wearable device is established, then the user computing device may generate a graphical user interface (GUI) on the user computing device containing a credentials prompt requesting a user of the wearable device to input a set of user credentials. In some cases, after the communication channel between the user computing device and the wearable device is established, then the user computing device may transmit to the wearable device the GUI containing the credentials prompt. The wearable may then transmit to the user computing device, the set of user credentials, in response to the credentials prompt. The user computing device may then match the set of user credentials received from the wearable device with a set of credentials authenticating the wearable device that are stored in a system database. Once the match is confirmed, then the wearable device and the user computing device may be authenticated and connected. In some embodiments, upon the user computing device receiving the set of user credentials from the wearable device, in response to the credentials prompt, the user computing device may transmit the set of user credentials to a system server, which may be directly or indirectly connected to the user computing device. The system server may then match the set of user credentials received from the wearable device with a set of credentials authenticating the wearable device that are stored in a system database. Once the match is confirmed, the system server may authenticate the wearable device and the user computing device, and connect them to each other. In some embodiments, during operation, a user computing device may receive a request from a wearable device to become a trusted wearable device for allowing a user using the wearable device access to content on a screen of the user computing device. The request may be generated in any suitable manner. For example, the user of the wearable device logs into a secure display application service installed on the user computing device and/or the wearable device where the request is generated. The user may log into the secure display application service by entering username and/or user ID of a user. When the user enters the login details, a request for authorizing the wearable device to become the trusted device may be generated, and then transmitted to a user computing device and/or a system server. Upon the receipt of the request by the user computing device and/or the server, the user computing device and/or the server may implement a series of security protocols in order to verify the wearable device and the user. For instance, in a first layer of security protocol implemented by the user computing device and/or the server, the user computing device and/or the server may generate a security code that may be transmitted to a phone number of a mobile device of the user, and the user may be requested to read and/or enter the code on an user interface of the user computing device. 
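A sketch of the first security-protocol layer described above, in which a code is generated, sent to the user's mobile device, and later verified when the user enters it on the user computing device, might look like the following; the token format, the six-digit code, and the expiry period are assumptions for illustration.

    import secrets
    import time
    import uuid

    CODE_TTL_SECONDS = 300          # assumed expiry time for the security code

    def issue_security_code():
        # Generate a secret token (here a GUID plus a short numeric code) to be
        # transmitted to the user's mobile device, together with an expiry time.
        return {"guid": str(uuid.uuid4()),
                "code": f"{secrets.randbelow(10**6):06d}",
                "expires_at": time.time() + CODE_TTL_SECONDS}

    def verify_security_code(issued, entered_code: str) -> bool:
        # Accept the code entered on the user computing device only if it matches
        # the issued code and has not expired.
        return time.time() <= issued["expires_at"] and secrets.compare_digest(
            issued["code"], entered_code)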
The code may include a secret token, which may be, for example, a globally unique identifier (GUID), such as for example but not limited to a unique string of characters including, but not limited to letters or numbers or both. In another example, the code may also include one or more Uniform Resource Locators (URLs). In some embodiments, the code may be associated with an expiry time. The expiry time may be included in the code. The user may then read and enter the code into a user interface of the user computing device to establish secure connection and synchronization between the user computing device and the wearable device. In a next step1208, a server and/or a user computing device may generate and execute instructions to adjust a focus value of one or more lenses of a wearable device. A button may be placed on the wearable device, and a processor of the wearable device may receive instructions from the server and/or the user computing device to adjust a focus value of one or more lenses. In some embodiments, the server and/or the user computing device may directly activate the button of the wearable device to adjust a focus value of one or more lenses. The server and/or the user computing device may adjust the focus value of the one or more lenses to synchronize with respect to readability of the screen and/or a page displayed on the screen. The user computing device may adjust the focus value of the one or more lenses to synchronize with respect to readability of the screen and/or a page displayed on the screen based on the one or more attributes associated with the session. The one or more attributes associated with the session may include an identifier associated with the user computing device, an identifier associated with the wearable device, and an identifier of one or more users associated with the wearable device. The server and/or the user computing device may adjust the focus value of the one or more lenses to synchronize with respect to readability of the screen and/or a page displayed on the screen for each new session based on one or more attributes associated with each new session. In some embodiments, a server and/or a user computing device may adjust the focus value of the one or more lenses to synchronize with respect to the readability of the screen and/or a page displayed on the screen, based on a current eye position of a user wearing the wearable device. In some embodiments, a server and/or a user computing device may adjust the focus value of the one or more lenses to synchronize with respect to the readability of the screen and/or a page displayed on the screen, based on a current eye position of a user wearing the wearable device in addition to one or more session attributes. The server and/or the user computing device may monitor a current eye position of the user using one or more motion detector and sensor devices. The one or more motion detector and sensor devices may be directly or indirectly associated with the user computing device and/or the server. 
For example, using the information obtained from the motion detector and sensor devices, when it is determined by the server and/or the user computing device that the user is looking at the screen based on the current eye position of the user, then the server and/or the user computing device may adjust the focus value of the one or more lenses to synchronize with respect to readability of the screen and/or a page displayed on the screen based on the current eye position of the user, which is that the user is looking at the screen. The one or more motion detector and sensor devices may continuously monitor movement of the eyes of the user, and when using information obtained from the motion detector and sensor devices, it is determined by the server and/or the user computing device that the user is not looking at the screen based on the current eye position of the user, then the server and/or the user computing device may adjust the focus value of the one or more lenses to synchronize with respect to readability of the portion of the user computing device which the user is looking at based on the current eye position of the user. For example, when using information obtained from the motion detector and sensor devices, it is determined by the server and/or the user computing device that the user is looking at a keyboard of the user computing device based on the current eye position of the user, then the server and/or the user computing device may adjust the focus value of the one or more lenses to synchronize with respect to readability of text on the keyboard which the user is looking at based on the current eye position of the user. In a next step1210, a user computing device may execute software programs/algorithms for unscrambling the scrambled data displayed on the screen. For instance, the user computing device may execute software programs/algorithms for unscrambling the scrambled data displayed on the screen such that the shadow is removed and the one or more fonts in the image are readable when the screen of the user computing device is viewed through the one or more lenses with adjusted focus value. In some embodiments, the user computing device may execute software programs/algorithms for unscrambling the scrambled data displayed on the screen such that jumbled alphabets of the image are reconfigured and information within the image makes sense when the screen of the user computing device is viewed through one or more lenses of the wearable device with adjusted focus value. In some embodiments, the user computing device may execute software programs/algorithms for unscrambling the scrambled data displayed on the screen such that a plurality of segments of the image are reconfigured to original arrangement, and information within the image is readable when the screen of the user computing device is viewed through one or more lenses of the wearable device with adjusted focus value. In some embodiments, a user computing device may transmit the scrambled data to the wearable device. The user computing device may also transmit configuration information of the plurality of segments of the scrambled data to the wearable device. 
In response to receipt of the configuration information of the plurality of segments of the scrambled data, a processor of the wearable device causes the configuration of the plurality of segments to be such that the plurality of segments of the image are reconfigured to original arrangement, and the data in the image is readable by viewing at the one or more lenses. In some embodiments, a wearable device may include an imaging sensor, which may receive the scrambled data from the user computing device. The imaging sensor or a processor of the wearable device may then generate instructions to execute software programs/algorithms to unscramble the scrambled data. Subsequently, the processor of the wearable device may transmit the unscrambled data to the user computer device for display on the screen of the user computing device. In some cases, the processor of the wearable device may transmit the unscrambled data to a system server, and the system server may then transmit the unscrambled data to the user computer device for display on the screen of the user computing device. The various illustrative logical blocks, modules, circuits, and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or combinations of both. To clearly illustrate this interchangeability of hardware and software, various illustrative components, blocks, modules, circuits, and steps have been described above generally in terms of their functionality. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the overall system. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention. Embodiments implemented in computer software may be implemented in software, firmware, middleware, microcode, hardware description languages, or any combination thereof. A code segment or machine-executable instructions may represent a procedure, a function, a subprogram, a program, a routine, a subroutine, a module, a software package, a class, or any combination of instructions, data structures, or program statements. A code segment may be coupled to another code segment or a hardware circuit by passing and/or receiving information, data, arguments, parameters, or memory contents. Information, arguments, parameters, data, and the like, may be passed, forwarded, or transmitted via memory sharing, message passing, token passing, network transmission, or the like. The actual software code or specialized control hardware used to implement these systems and methods is not limiting of the invention. Thus, the operation and behavior of the systems and methods were described without reference to the specific software code being understood that software and control hardware can be designed to implement the systems and methods based on the description herein. When implemented in software, the functions may be stored as one or more instructions or code on a non-transitory computer-readable or processor-readable storage medium. The steps of a method or algorithm disclosed herein may be embodied in a processor-executable software module, which may reside on a computer-readable or processor-readable storage medium. 
Non-transitory computer-readable or processor-readable media include both computer storage media and tangible storage media that facilitate transfer of a computer program from one place to another. A non-transitory processor-readable storage medium may be any available medium that may be accessed by a computer. By way of example, and not limitation, such non-transitory processor-readable media may comprise RAM, ROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other tangible storage medium that may be used to store desired program code in the form of instructions or data structures and that may be accessed by a computer or processor. Disk and disc, as used herein, include compact disc (CD), laser disc, optical disc, digital versatile disc (DVD), floppy disk, and Blu-ray disc, where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Combinations of the above should also be included within the scope of computer-readable media. Additionally, the operations of a method or algorithm may reside as one or any combination or set of codes and/or instructions on a non-transitory processor-readable medium and/or computer-readable medium, which may be incorporated into a computer program product. The preceding description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the invention. Thus, the present invention is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the following claims and the principles and novel features disclosed herein. While various aspects and embodiments have been disclosed, other aspects and embodiments are contemplated. The various aspects and embodiments disclosed are for purposes of illustration and are not intended to be limiting, with the true scope and spirit being indicated by the following claims.
DETAILED DESCRIPTION OF THE DISCLOSURE Embodiments of the present application are further described in detail below with reference to the drawings and embodiments. The following embodiments are intended to illustrate the application, but are not intended to limit the scope of the application. In the description of this specification, descriptions with reference to the terms “one embodiment”, “some embodiments”, “examples”, “specific examples”, or “some examples”, etc., mean that specific features, structures, materials, or characteristics described in conjunction with the embodiment or example are included in at least one embodiment or example of the embodiments of the present application. In this specification, the schematic expressions of the above terms do not necessarily refer to the same embodiment or example. Also, the described specific features, structures, materials, or characteristics can be combined in any one or more embodiments or examples in a suitable manner. Moreover, the terms “first”, “second”, “third”, and the like are used for descriptive purposes only and are not to be construed as indicating or implying relative importance. FIG. 1 is a schematic diagram of the connection between a smart device and a cloud server. As shown in FIG. 1, the smart device 101 can establish a network connection with the cloud server 102. After the network connection is established, the cloud server 102 needs to identify an identity of the smart device 101 to prevent counterfeit smart devices from occupying resources of the cloud server 102 or to prevent a smart device from being used as a hacker attack tool to attack the cloud server. The smart devices include devices with computing and processing capabilities such as smart phones, computers, tablets, and smart wearable devices (smart watches, smart glasses, smart bracelets), as well as smart devices having networking functions such as refrigerators, air conditioners, washing machines, microwave ovens, dust collectors, smoke exhaust ventilators, and smart speakers. The identity of the smart device can be identified based on the method for identifying the smart device, a smart device identity recognition system, an electronic device, and a storage medium according to the embodiments of the present application. Further explanation is given below. FIG. 2 is a flowchart of a method for identifying a smart device according to an embodiment of the present application. As shown in FIG. 2, the method for identifying a smart device according to an embodiment of the present application, applied in a cloud server, includes: step 201, determining a first data randomness degree of inter-packet difference data in a network data packet sent by a smart device to be identified. After the smart device to be identified has established a connection with the cloud server, the smart device to be identified can send network data packets to the cloud server. After receiving these network data packets, the cloud server extracts data from the network data packets and calculates the first data randomness degree of the extracted data.
In the network data packets sent by different smart devices, there are some common data, such as the 4-bit version number configured to describe the IP protocol version in the IP protocol data packet, and the 32-bit source address and 32-bit destination address in the IP protocol data packet (the source address and destination address of network data packets sent by different smart devices may be the same); as well as some personalized data, such as the Window Size field, the preset value of the 8-bit time to live (TTL) field, which limits the maximum number of hops a packet may take to reach its destination, and the specific content of the IP protocol data packet. In an embodiment of the present application, the data extracted from the network data packet cannot contain only common data; otherwise the data in the network data packets sent by different smart devices would have the same first data randomness degree, and it would be impossible to identify smart devices according to the data randomness degree. Therefore, the data extracted from the network data packet should be personalized data that distinguishes different network data packets, or a set of such personalized data together with common data in the network data packets. In an embodiment of the present application, the personalized data that distinguishes different network data packets, or the set of personalized data and common data, is referred to as inter-packet difference data. When extracting inter-packet difference data from network data packets, the amount of extracted data should be moderate. An excessive amount increases the workload of calculating the data randomness degree and thus the computational load, while too small an amount weakens the basis on which the data randomness degree is calculated and thus affects the accuracy of the data randomness degree. In an embodiment of the present application, inter-packet difference data are extracted by the cloud server from network data packets received within 5 seconds after a connection between the smart device to be identified and the cloud server is established. In another embodiment of the present application, inter-packet difference data are extracted from network data packets received within another duration after a connection between the smart device to be identified and the cloud server is established. The data randomness degree is configured to indicate the randomness degree of generating data, and the randomness degree of generating data can be indicated by the types of characters that make up the data and the frequency of occurrence of each character. For example, if a piece of data is randomly generated, each character contained in the data should have a roughly equivalent frequency of occurrence according to the laws of mathematical statistics, as long as the statistical sample is large enough. Such data will have a relatively high randomness degree. Conversely, if the data is not randomly generated, for example as in an English book, the English letters contained in the book have different frequencies of occurrence due to the regularity of English words, and thus such data will have a relatively low randomness degree. Therefore, the data randomness degree can be calculated through statistics on the types and frequencies of the characters contained in the data. In an embodiment of the present application, the first data randomness degree is represented by information entropy.
The information entropy of the data in the network data packet can be calculated as follows: counting the number of occurrences of each character in the data in the network data packet and the sum of the numbers of occurrences of all characters; and calculating the information entropy based on the number of occurrences of each character and the sum of the numbers of occurrences of all characters by the following equations:

H = -\sum_{x} P(x)\,\log_{2} P(x), \qquad P(x) = \frac{num(x)}{TotalCount}

where H represents the information entropy; P(x) represents the ratio of the number of occurrences of character x to the sum of the numbers of occurrences of all characters; num(x) represents the number of occurrences of character x; and TotalCount represents the sum of the numbers of occurrences of all characters. In an embodiment of the present application, the characters may be letters, numbers, symbols, and so on. The information entropy calculated according to the above equation has a value within a range of 0 to 1, and the closer the value is to 1, the greater the data randomness degree. In another embodiment of the present application, the first data randomness degree may also be calculated by other methods. For example, the data randomness degree can be determined according to the occurrence of the vowels a, e, i, o, and u in the network data packet. Step202, determining an identity of the smart device to be identified based on a comparison result between the first data randomness degree and a second data randomness degree of inter-packet difference data in a network data packet sent by an identified smart device. An identified smart device refers to a smart device whose identity has already been identified. In an embodiment of the present application, the identity information includes type information. The type information includes information such as the brand and type of the smart device. Smart devices of different brands are provided by different manufacturers and generally use significantly different communication protocols, compilation environments, and communication contents; thus the data in the network data packets sent by smart devices of different brands will accordingly have significantly different data randomness degrees. Different brands of smart devices can be identified based on the difference in data randomness degrees. Different types of smart devices, such as a refrigerator and an air conditioner, belong to different categories. Even if they are of the same brand, the communication protocols used by them will inevitably have different implementation details due to their different operating principles. The communication contents between different types of smart devices and the cloud server are usually different. For example, a smart smoke exhaust ventilator exchanges recipe information with the cloud server, while a smart air conditioner exchanges temperature information with the cloud server. Either the difference in the implementation details of the communication protocol or the difference in the communication contents will cause some differences in the data randomness degree of the data in the network data packets sent by different types of smart devices. Different types of smart devices can be identified based on the difference in data randomness degrees. Smart devices of the same brand and type may come in different models, and the communication protocols used by different models of smart devices may have different implementation details.
For example, the OpenSSL protocol used by a model A of a smart refrigerator has a version number of OpenSSL-0.9.8m, while the OpenSSL protocol used by a model B thereof has a version number of OpenSSL-1.0.1c. Data packet headers generated by the two different versions of the OpenSSL protocol will have somewhat different contents. If the communication protocols used by different models of smart devices have different implementation details, the data in the network data packets sent by the different models of smart devices will have significantly different data randomness degrees. In this case, the type information configured to identify the identity further includes the model information of the smart device. For smart devices of the same brand, same type, and same model, if the communication protocols used in some production batches of smart devices and other production batches of smart devices have different implementation details, the data in the network data packets sent by these smart devices will have different data randomness degrees. In this case, the type information configured to identify the identity further includes the production batch information of the smart device. For example, the OpenSSL protocol used by one or some production batches of a model of a smart refrigerator has a version number of OpenSSL-0.9.8m, and the OpenSSL protocol used by other production batches of the same model of the refrigerator has a version number of OpenSSL-1.0.1c. Data packet headers generated based on the two different versions of the OpenSSL protocol will have somewhat different contents, and thus the data in the network data packets sent by the different batches of products based on the different protocol version numbers will have significantly different data randomness degrees. In this case, the type information configured to identify the identity further includes the production batch information of the smart device. The identified smart device is not limited to one type, but can be of multiple types, such as a brand or model of air conditioner, a brand or model of refrigerator, and a brand or model of washing machine. The second data randomness degree of the data in the network data packet sent by the identified smart device may be pre-calculated and stored, or may be calculated in real time before or during the execution of step202. In an embodiment of the present application, the second data randomness degree can be pre-calculated and stored. As mentioned above, an identified smart device is not limited to one type. Therefore, the second data randomness degree can be a set including the second data randomness degrees of different types of smart devices, such as the second data randomness degrees of all types of smart devices served by the cloud server. In an embodiment of the present application, the second data randomness degree of the data in the network data packet sent by the identified smart device can be represented by information entropy. Whether the information entropy is obtained by pre-calculation or by real-time calculation, the calculation method for the information entropy of the data in the network data packet sent by the identified smart device is the same as that for the information entropy of the data in the network data packet sent by the smart device to be identified. The description will not be repeated here. In other embodiments of the present application, the second data randomness degree may also be calculated by other methods.
For example, the data randomness degree is determined according to the occurrence of the vowels a, e, i, o, and u in the network data packet. It should be noted that the calculation method of the second data randomness degree should be the same as that of the first data randomness degree, so that the two can be compared. It should also be noted that when data for calculating information entropy are extracted from network data packets sent by an identified smart device before calculating the information entropy, the extracted data should have the same amount as the data extracted from network data packets sent by the smart device to be identified. For example, in step201, data are extracted by the cloud server from network data packets received within 5 seconds after a connection between the smart device to be identified and the cloud server is established, while in the present step, data are extracted by the cloud server from network data packets received, also within 5 seconds, after a connection between the identified smart device and the cloud server is established. The comparison between the first data randomness degree and the second data randomness degree is performed by taking the absolute value of the difference between the first data randomness degree and the second data randomness degree to obtain the comparison result. The obtained comparison result is compared with the first threshold and the second threshold. It is indicated that the smart device to be identified and the identified smart device have the same identity when the comparison result is less than or equal to the first threshold; it is indicated that the smart device to be identified and the identified smart device have different identities when the comparison result is equal to or greater than the second threshold; and the identity of the smart device to be identified cannot be determined yet when the comparison result is between the first threshold and the second threshold. The first threshold is configured to determine that the smart device to be identified has the same identity as the identified smart device, and the second threshold is configured to determine that the smart device to be identified has a different identity from the identified smart device. In an embodiment of the present application, a difference between the information entropy (which represents the first data randomness degree) of the data in the network data packet sent by the smart device to be identified and the information entropy (which represents the second data randomness degree) of the data in the network data packet sent by the identified smart device can be calculated, and the absolute value of the difference can be used as the comparison result. The first threshold has a size of 0.1, and the second threshold has a size of 0.2. It is indicated that the smart device to be identified and the identified smart device have the same identity when the comparison result is less than 0.1; it is indicated that the smart device to be identified and the identified smart device have different identities when the comparison result is greater than 0.2; and the identity of the smart device to be identified cannot be determined yet when the comparison result is between 0.1 and 0.2. In another embodiment of the present application, the first threshold and the second threshold may be adjusted according to actual conditions.
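The entropy equation given above and the two-threshold comparison just described can be sketched together as follows. This is a minimal Python illustration: the function names, the treatment of each byte as a "character", the normalization used to keep the value between 0 and 1, and the default thresholds of 0.1 and 0.2 are assumptions made for the example rather than requirements of the method.

```python
import math
from collections import Counter
from enum import Enum

def information_entropy(data: bytes) -> float:
    """H = -sum(P(x) * log2(P(x))), with P(x) = num(x) / TotalCount."""
    if not data:
        return 0.0
    counts = Counter(data)                          # num(x) for each character x
    total = len(data)                               # TotalCount
    return -sum((n / total) * math.log2(n / total) for n in counts.values())

def normalized_entropy(data: bytes) -> float:
    """Scale the entropy to the 0-to-1 range described in the text by dividing
    by log2 of the number of distinct characters (an assumption of this sketch)."""
    distinct = len(set(data))
    if distinct < 2:
        return 0.0
    return information_entropy(data) / math.log2(distinct)

class IdentityResult(Enum):
    SAME_IDENTITY = "same identity as the identified smart device"
    DIFFERENT_IDENTITY = "different identity from the identified smart device"
    UNDETERMINED = "cannot be determined from the randomness degree alone"

def compare_randomness(first_degree: float, second_degree: float,
                       first_threshold: float = 0.1,
                       second_threshold: float = 0.2) -> IdentityResult:
    """Three-way decision on the absolute difference of the two degrees."""
    comparison_result = abs(first_degree - second_degree)
    if comparison_result <= first_threshold:
        return IdentityResult.SAME_IDENTITY
    if comparison_result >= second_threshold:
        return IdentityResult.DIFFERENT_IDENTITY
    return IdentityResult.UNDETERMINED
```

For instance, if the data from the device to be identified yields a normalized entropy of 0.91 and the stored value for an identified device is 0.94, the comparison result of 0.03 falls below the first threshold and the two are treated as having the same identity.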
In some embodiments, the identity information also includes an identification result of whether the smart device is a device that the cloud server needs to serve, such as a true device or a counterfeit device. In some applications, users do not care about the brand, type, model, production batch, or other information of the smart device to be identified, but only need to know whether the device is a true device or a counterfeit device. For such applications, during the determination of the identity of the smart device to be identified, a conclusion that the smart device to be identified is a true device or a counterfeit device may be directly given according to the comparison result between the second data randomness degree of the data in the network data packet sent by the identified smart device and the first data randomness degree. For example, the comparison result is obtained by taking the absolute value of the difference between the first data randomness degree and the second data randomness degree. The obtained comparison result is compared with the first threshold and the second threshold. It is indicated that the smart device to be identified is a true device when the comparison result is less than or equal to the first threshold; it is indicated that the smart device to be identified is a counterfeit device when the comparison result is equal to or greater than the second threshold; and whether the smart device to be identified is a true device cannot be determined yet when the comparison result is between the first threshold and the second threshold. It should be noted that further processing, for example interrupting the established network connection with the smart device to be identified, may be performed when the smart device to be identified and the identified smart device have different identities and the identified smart device covers a wide range of types of smart devices, such as all types of smart devices served by the cloud server. Furthermore, the prerequisite for establishing a network connection between the smart device to be identified and the cloud server is that the smart device to be identified can provide legal network connection authentication information, such as a smart device ID, a MAC address, and other information; devices that provide legal network connection authentication information but are unable to pass identity authentication may be added to a blacklist as counterfeit devices or suspected hacker attack tools, and connections from devices on the blacklist can be refused by the cloud server. The method for identifying the smart device according to an embodiment of the present application is described above. The method for identifying the smart device according to an embodiment of the present application may be used alone, or may be used in combination with identity recognition methods in the prior art that use the smart device ID, MAC address, and other information. In the method for identifying the smart device according to the embodiments of the present application, the identity can be identified by calculating the data randomness degree of network layer information and making a determination based on the calculated data randomness degree.
If a counterfeiter wants to counterfeit a device by modifying the network layer information without being discovered by the method for identifying the smart device according to the embodiment of the present application, the counterfeiter can only replace characters in the original network layer information with characters having an approximate randomness degree. However, the characters with an approximate randomness degree that may be selected by the counterfeiter are very limited due to the restrictions of word formation, usage habits, and grammatical rules, and thus the replaced network layer information is generally meaningless. For example, if the original data is TTIS IISS a BBOOKK, the character T is to be replaced, and the data randomness degree of the replaced data is to be similar to that of the original data, then according to the number and frequency of each letter in English, T may only be replaced with G. That is, the original data TTIS IISS a BBOOKK is transformed into GGIS IISS a BBOOKK, and the replaced data is meaningless. According to the method for identifying the smart device of an embodiment of the present application, the identity of the smart device may be identified by discriminating network layer information that is not easy to counterfeit, to ensure that the objects served by the cloud server are legal and safe, preventing resources of the cloud server from being occupied by a counterfeit smart device and preventing hacker attack tools from threatening the cloud server's security. Based on any of the above embodiments,FIG.3is a flowchart of a method for identifying the smart device according to another embodiment of the present application. As shown inFIG.3, the method for identifying the smart device according to another embodiment of the present application, applied in a cloud server, includes: step301, determining a first data randomness degree of inter-packet difference data in a network data packet sent by a smart device to be identified; step302, determining an identity of the smart device to be identified based on a comparison result between the first data randomness degree and a second data randomness degree of inter-packet difference data in a network data packet sent by an identified smart device; and step303, determining the identity of the smart device to be identified based on a comparison result between a first characteristic value extracted from a transport layer message sent by the smart device to be identified and a second characteristic value extracted from a transport layer message sent by the identified smart device. As mentioned in the previous embodiments of the present application, when the comparison result between the first data randomness degree and the second data randomness degree of the data in the network data packet sent by the identified smart device is between the first threshold and the second threshold, it cannot yet be determined whether the smart device to be identified and the identified smart device have the same type, or whether the smart device to be identified is a true device. In an embodiment of the present application, the identity of a smart device to be identified that cannot be determined based on the data randomness degree can be further determined as follows. The TLS (Transport Layer Security) protocol is a security protocol that provides security and data integrity protection for Internet communications. TLS includes many types of messages, such as client hello, server hello, and so on.
Among the TLS messages sent by different smart devices, some messages may have the same content, such as Protocol Version, while other messages have different content, such as Cipher Suite (key algorithm suite). When extracting data from a message to form a characteristic value, rather than extracting data only from a message with the same content, data should be extracted from a message with different content to form the characteristic value, or extracted from both a message with the same content and a message with different content to form the characteristic value. In an embodiment of the present application, these transport layer messages with different content, or a set of transport layer messages with the same content and transport layer messages with different content, are referred to as transport layer messages with inter-packet difference. In an embodiment of the present application, the client hello is selected, and the values of the related fields are extracted therefrom to form the characteristic value. A characteristic value extracted from a handshake message of the transport layer security protocol sent by the smart device to be identified is referred to as a first characteristic value, and a characteristic value extracted from a handshake message of the transport layer security protocol sent by the identified smart device is referred to as a second characteristic value. The second characteristic value may be pre-calculated and stored, or may be calculated in real time before or during execution of this step. In an embodiment of the present application, the second characteristic value is pre-calculated and stored. In an embodiment of the present application, the fields extracted from the client hello message include Version, Cipher Suites, Extensions Length, elliptic_curves, and ec_point_formats. In another embodiment of the present application, one or more fields can be selected from the above five fields to form the characteristic value. However, it should be understood that the misjudgment rate in subsequent identity identification operations using the characteristic value is inversely proportional to the number of fields forming the characteristic value. In another embodiment of the present application, other types of transport layer messages may be used to generate the characteristic value. For example, the TTL (Time to Live) field and the WindowSize field in the TCP (Transmission Control Protocol) packet can be used to generate the characteristic value. The first characteristic value is compared with the second characteristic value, and it is indicated that the smart device to be identified and the identified smart device have the same type when the first and second characteristic values are consistent. On the contrary, it is indicated that the smart device to be identified and the identified smart device do not have the same type when the first and second characteristic values are inconsistent. A method for identifying the smart device according to an embodiment of the present application is described above.
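A minimal sketch of forming and comparing a characteristic value from the five client hello fields named above follows. How the fields are parsed out of the raw handshake bytes is outside the scope of the sketch, and the use of a SHA-256 hash to combine them is an assumption of this illustration; the text only requires that consistent characteristic values indicate the same type.

```python
import hashlib
from dataclasses import dataclass
from typing import Sequence

@dataclass(frozen=True)
class ClientHelloFields:
    """The five fields named in the text, as already-parsed values."""
    version: int
    cipher_suites: Sequence[int]
    extensions_length: int
    elliptic_curves: Sequence[int]
    ec_point_formats: Sequence[int]

def characteristic_value(fields: ClientHelloFields) -> str:
    """Join the selected field values into a stable string and hash it."""
    joined = "|".join([
        str(fields.version),
        ",".join(map(str, fields.cipher_suites)),
        str(fields.extensions_length),
        ",".join(map(str, fields.elliptic_curves)),
        ",".join(map(str, fields.ec_point_formats)),
    ])
    return hashlib.sha256(joined.encode()).hexdigest()

def same_type(first: ClientHelloFields, second: ClientHelloFields) -> bool:
    """Consistent characteristic values indicate the same device type."""
    return characteristic_value(first) == characteristic_value(second)
```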
According to the method for identifying the smart device of an embodiment of the present application, the identity of the smart device may be identified by discriminating network layer information that is not easy to counterfeit, and the identity of a smart device that cannot be identified through network layer information may be further identified by discriminating transport layer information that is not easy to counterfeit, ensuring that the objects served by the cloud server are legal and safe, preventing resources of the cloud server from being occupied by a counterfeit smart device and preventing hacker attack tools from threatening the cloud server's security. Based on any of the above embodiments,FIG.4is a flowchart of a method for identifying the smart device according to an embodiment of the present application. As shown inFIG.4, the method for identifying the smart device according to the embodiment of the present application, applied in a smart device to be identified, includes: step401, sending a network data packet to allow a cloud server to determine a first data randomness degree of the inter-packet difference data in the network data packet, and to determine an identity of the smart device to be identified based on a comparison result between the first data randomness degree and a second data randomness degree of inter-packet difference data in a network data packet sent by an identified smart device. The smart device to be identified sends a network data packet to the cloud server after establishing a connection with the cloud server. After receiving the network data packet, the cloud server extracts the inter-packet difference data from the network data packet and calculates the first data randomness degree for the extracted data. The first data randomness degree may be information entropy, or may be a data randomness degree calculated based on the occurrence of the vowels a, e, i, o, and u in the network data packet. After the first data randomness degree is obtained, the first data randomness degree is compared with the second data randomness degree obtained based on the inter-packet difference data in the network data packets sent by the identified smart device, and the identity of the smart device to be identified is determined based on the comparison result. If the identity cannot be determined, the characteristic value in a transport layer message sent by the smart device to be identified may be further extracted, the characteristic value may be compared with a characteristic value extracted from a transport layer message sent by the identified smart device, and the identity of the smart device to be identified is determined based on the comparison result. According to the method for identifying the smart device of an embodiment of the present application, the identity of the smart device may be recognized by discriminating network layer information that is not easy to counterfeit, ensuring that the objects served by the cloud server are legal and safe, preventing resources of the cloud server from being occupied by a counterfeit smart device and preventing hacker attack tools from threatening the cloud server's security. Based on any of the foregoing embodiments,FIG.5is a schematic diagram of a cloud server according to an embodiment of the present application.
As shown inFIG.5, the cloud server according to an embodiment of the present application includes: a data randomness degree calculator501configured to determine a first data randomness degree of inter-packet difference data in a network data packet sent by a smart device to be identified; and an identity recognizer502configured to determine an identity of the smart device to be identified based on a comparison result between the first data randomness degree and a second data randomness degree of inter-packet difference data in a network data packet sent by an identified smart device. According to the cloud server of an embodiment of the present application, the identity of the smart device may be recognized by discriminating network layer information that is not easy to counterfeit, ensuring that the objects served by the cloud server are legal and safe, preventing resources of the cloud server from being occupied by a counterfeit smart device and preventing hacker attack tools from threatening the cloud server's security. Based on any of the foregoing embodiments,FIG.6is a schematic diagram of a smart device according to an embodiment of the present application. As shown inFIG.6, the smart device according to an embodiment of the present application includes: a network data packet sender601configured to send a network data packet to a cloud server, which receives the network data packet, calculates a first data randomness degree of inter-packet difference data in the network data packet, and determines an identity of the smart device to be identified based on a comparison result between the first data randomness degree and a second data randomness degree of inter-packet difference data in a network data packet sent by an identified smart device. The smart device according to an embodiment of the present application sends the network data packet to the cloud server, so that the identity of the smart device may be identified by the cloud server by discriminating network layer information that is not easy to counterfeit, ensuring that the objects served by the cloud server are legal and safe, preventing resources of the cloud server from being occupied by a counterfeit smart device and preventing hacker attack tools from threatening the cloud server's security. FIG.7is a schematic diagram of the physical structure of an electronic device. As shown inFIG.7, the electronic device may include a processor710, a communication interface720, a memory730, and a communication bus740. The processor710, the communication interface720, and the memory730communicate with each other through the communication bus740.
The processor710may call logic instructions in the memory730to execute the following method: receiving a network data packet sent by a smart device to be identified, calculating a first data randomness degree of inter-packet difference data in the network data packet, and determining an identity of the smart device to be identified based on a comparison result between the first data randomness degree and a second data randomness degree of inter-packet difference data in a network data packet sent by an identified smart device; or to execute the following method: sending a network data packet to be received by a cloud server, which calculates a first data randomness degree of inter-packet difference data in the network data packet and determines an identity of the smart device to be identified based on a comparison result between the first data randomness degree and a second data randomness degree of inter-packet difference data in a network data packet sent by an identified smart device. In addition, the logic instructions in the memory730described above may be implemented in the form of a software functional unit and may be stored in a computer-readable storage medium when sold or used as a separate product. Based on such understanding, the technical solution of the present disclosure, or the part thereof that contributes to the prior art, may be embodied in the form of a software product, which is stored in a storage medium and includes several instructions to cause a computer device (which may be a personal computer, a server, or a network device, etc.) to perform all or part of the methods described in various embodiments of the present application. The storage medium described above includes: a U disk, a mobile hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk, and the like. Further, an embodiment of the present application discloses a computer program product including computer programs stored on a non-transitory computer-readable storage medium. The computer programs include program instructions, and a computer may perform the method according to embodiments of the present application when executing the program instructions, for example, including: receiving a network data packet sent by a smart device to be identified, calculating a first data randomness degree of inter-packet difference data in the network data packet, and determining an identity of the smart device to be identified based on a comparison result between the first data randomness degree and a second data randomness degree of inter-packet difference data in a network data packet sent by an identified smart device; or including: sending a network data packet to be received by a cloud server, which calculates a first data randomness degree of inter-packet difference data in the network data packet and determines an identity of the smart device to be identified based on a comparison result between the first data randomness degree and a second data randomness degree of inter-packet difference data in a network data packet sent by an identified smart device.
On the other hand, an embodiment of the present application discloses a non-transitory computer-readable storage medium on which computer programs are stored, and the computer programs are executed by a processor to perform the method according to embodiments of the present application, for example, including: receiving a network data packet sent by a smart device to be identified, calculating a first data randomness degree of inter-packet difference data in the network data packet, and determining an identity of the smart device to be identified based on a comparison result between the first data randomness degree and a second data randomness degree of inter-packet difference data in a network data packet sent by an identified smart device; or including: sending a network data packet to be received by a cloud server, which calculates a first data randomness degree of inter-packet difference data in the network data packet and determines an identity of the smart device to be identified based on a comparison result between the first data randomness degree and a second data randomness degree of inter-packet difference data in a network data packet sent by an identified smart device. The device embodiments described above are merely illustrative; the units described as separate components may or may not be physically separate, and the components displayed as units may or may not be physical units, that is, they may be located in one place or distributed across multiple network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the embodiment. Through the description of the embodiments above, those skilled in the art can clearly understand that the embodiments can be implemented by means of software plus a necessary general hardware platform, or of course by hardware. Based on such understanding, the technical solution of the present application, or the part thereof that contributes to the prior art, may be embodied in the form of a software product, which is stored in a storage medium such as a ROM/RAM, a magnetic disc, or an optical disc, and includes several instructions to cause a computer device (which may be a personal computer, a server, or a network device, etc.) to perform the various embodiments or a part of the methods described in various embodiments. | 38,239
11943221 | DETAILED DESCRIPTION Security and authentication form a critical part of cloud servicing due to the remote, distributed nature of cloud resources. For example, it is not uncommon for a variety of unrelated clients to access, transact with, and otherwise use the same remote cloud service. Ensuring that each client's own use and data are protected from third party actors, including other clients, is very important, especially when the data being accessed or stored remotely is personal, sensitive, or proprietary. Nowadays, more and more cloud services run as separate components in a shared cloud environment, complicating matters further, as access to and between each component may need to be separately authenticated on a per client basis. Component authentication can be handled in a variety of ways. In some implementations, credentials like username and password pairs, application programming interface (API) keys, certificates, etc., are used for authentication between components. These credentials are usually managed by an Identity and Access Management (IAM) server within or communicatively coupled to the cloud environment. However, these credentials are usually known and configured by a component's administrator, such as, for example, a site reliability engineer (SRE). Unfortunately, these types of approaches are inherently susceptible to a risk that an actor (e.g., an SRE) could exploit a known credential for a component to masquerade as that component in the cloud environment. For example, the actor can provide the credential for a first component to the IAM server to receive an authentication token for the first component. This token can then be provided as part of a request to other components in the cloud environment, which would see the request as a valid request coming from another component (i.e., the first component) in the cloud environment. This type of attack can allow the actor to effectively bypass the authentication scheme for the other components in the cloud environment. For clarity, an example of this type of masquerading attack is illustrated inFIG.1. As shown inFIG.1, a cloud environment100can include an IAM server102communicatively coupled to an SRE104, a first server (e.g., Server A106), and a second server (e.g., Server B108). In some embodiments of the invention, Server A106can include one or more components (e.g., Component A110) and an API key112. In some embodiments of the invention, server B108can include one or more other components (e.g., Component B114). To ensure privacy and security, the various servers and components of the cloud environment100are protected via a security scheme whereby a request for access to a component in the cloud environment must be validated by the IAM102. The expected authentication path116illustrates an example validation path for providing Component A110in Server A106access to the Component B114in Server B108. It should be understood that while the expected authentication path116illustrates one example validation path (i.e., Component A110requests access to Component B114), others are possible (e.g., Component B114requests access to Component A110, either of Components110,114requests access to one or more additional components, etc.). To initiate the request, Component A110provides a credential (e.g., the stored API key112) for Server A106to the IAM102(step 1 in the expected authentication path116).
The IAM102authenticates the API key112and, if valid, provides a token to the Component A110(step 2 in the expected authentication path116). Component A110can then provide this token to Component B114along with a request for access (step 3 in the expected authentication path116). Component B114forwards the request for access, including the token, to the IAM102(step 4 in the expected authentication path116). Finally, the IAM102authenticates the token by ensuring that the token matches the original token in step 2. Once authenticated, the IAM102provides an acknowledgment to Component B114indicating that the request for access from Component A110is valid (step 5 in the expected authentication path116). Access to Component B114is then provided to Component A110(not separately shown). Unfortunately, the SRE104can leverage its administrative access to the API key112in Server A106to masquerade as the Component A110within the cloud environment100. An example of this type of attack vector is shown as the masquerade authentication path118. During this attack vector the SRE104will directly access the API key112(step 0 in the masquerade authentication path118). The SRE104will then provide the API key112to the IAM102(step 1 in the masquerade authentication path118). The IAM102authenticates the API key112and, if valid, provides a token to the SRE104(step 2 in the masquerade authentication path118). The SRE104can then provide this token to Component B114along with a request for access (step 3 in the masquerade authentication path118). Component B114forwards the request for access, including the token, to the IAM102(step 4 in the masquerade authentication path118). Finally, the IAM102authenticates the token by ensuring that the token matches the original token in step 2. Once authenticated, the IAM102provides an acknowledgment to Component B114indicating that the request for access is valid (step 5 in the masquerade authentication path118). Access to Component B114is then provided to SRE104(not separately shown). One or more embodiments of the present invention address one or more of the above-described shortcomings by providing computer-implemented methods, computing systems, and computer program products that prevent masquerading service attacks. Embodiments of the present invention provide a mechanism to issue credentials directly to components within servers in the data center or cloud environment instead of issuing those credentials to the data center or cloud administrators (e.g., system admins, SREs, etc., who have physical access to the cloud data centers and/or are responsible for cloud service deployment). Under this mechanism servers are registered with the IAM and components are deployed on the registered servers prior to allowing access (communication) between components in the cloud environment. In some embodiments of the invention, the component credential is generated internally by an IAM server and issued directly to the component within the cloud environment. In some embodiments of the invention, the component is run within a lock-down operating system of its respective server. As used herein, a “lock-down” system refers to a system which can only be accessed using a limited API, ensuring that nobody (e.g., third parties, admins, SREs, etc.) can know or access the credential (excepting, of course, the IAM server and the respective component server itself). 
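The lock-down idea described above can be pictured with a small sketch: the host object keeps the credential internal, and its limited interface can only use the credential to talk to the IAM server, never return it. The class and method names, and the iam_client interface, are hypothetical illustrations rather than parts of the described system.

```python
class LockDownComponentHost:
    """Holds a component credential that the limited API can use but not reveal."""

    def __init__(self, component_api_key: str):
        # Stored internally; no method below ever returns this value.
        self.__component_api_key = component_api_key

    def request_token(self, iam_client) -> str:
        # The only path the credential takes out of the host is a request
        # to the IAM server (iam_client is a hypothetical client interface).
        return iam_client.issue_token(self.__component_api_key)
```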
In some embodiments of the invention, all credential requests to the IAM server must be signed by a private key stored on the component's server. This approach ensures that all requests are made from “trusted” servers. As used herein, a “trusted” server from the point of view of the IAM server refers to a server whose requests are signed using a private key known only to that server. Advantageously, a credentialing deployment system configured according to one or more embodiments offers several technical solutions over conventional cloud-based credentialing approaches. As an initial matter, separating the data center or cloud administrators from the component credentialing process greatly reduces the risk of being masqueraded by eliminating the previously mentioned attack vector. Requests to IAM server can be signed using a server's private key to ensure that all the requests originate from trusted servers. Notably, the IAM server will only issue a credential to components running on a trusted server. The credential (e.g., an API key) can then be stored within a lock-down operating system of the component's respective server. Consequently, nobody, including even the data center's own administrators (e.g., data center admins, system admins, SREs, etc.), can access the credential. Turning now toFIG.2, a computer system200is generally shown in accordance with one or more embodiments of the invention. The computer system200can be an electronic, computer framework comprising and/or employing any number and combination of computing devices and networks utilizing various communication technologies, as described herein. The computer system200can be scalable, extensible, and modular, with the ability to change to different services or reconfigure some features independently of others. The computer system200may be, for example, a server, desktop computer, laptop computer, tablet computer, or smartphone. In some examples, computer system200may be a cloud computing node (e.g., a node10ofFIG.10below). Computer system200may be described in the general context of computer system executable instructions, such as program modules, being executed by a computer system. Generally, program modules may include routines, programs, objects, components, logic, data structures, and so on that perform particular tasks or implement particular abstract data types. Computer system200may be practiced in distributed cloud computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed cloud computing environment, program modules may be located in both local and remote computer system storage media including memory storage devices. As shown inFIG.2, the computer system200has one or more central processing units (CPU(s))201a,201b,201c, etc., (collectively or generically referred to as processor(s)201). The processors201can be a single-core processor, multi-core processor, computing cluster, or any number of other configurations. The processors201, also referred to as processing circuits, are coupled via a system bus202to a system memory201and various other components. The system memory201can include a read only memory (ROM)204and a random access memory (RAM)205. The ROM204is coupled to the system bus202and may include a basic input/output system (BIOS) or its successors like Unified Extensible Firmware Interface (UEFI), which controls certain basic functions of the computer system200. 
The RAM is read-write memory coupled to the system bus202for use by the processors201. The system memory201provides temporary memory space for operations of said instructions during operation. The system memory201can include random access memory (RAM), read only memory, flash memory, or any other suitable memory systems. The computer system200comprises an input/output (I/O) adapter206and a communications adapter207coupled to the system bus202. The I/O adapter206may be a small computer system interface (SCSI) adapter that communicates with a hard disk208and/or any other similar component. The I/O adapter206and the hard disk208are collectively referred to herein as a mass storage210. Software211for execution on the computer system200may be stored in the mass storage210. The mass storage210is an example of a tangible storage medium readable by the processors201, where the software211is stored as instructions for execution by the processors201to cause the computer system200to operate, such as is described herein below with respect to the various Figures. Examples of computer program product and the execution of such instruction is discussed herein in more detail. The communications adapter207interconnects the system bus202with a network212, which may be an outside network, enabling the computer system200to communicate with other such systems. In one embodiment, a portion of the system memory201and the mass storage210collectively store an operating system, which may be any appropriate operating system to coordinate the functions of the various components shown inFIG.2. Additional input/output devices are shown as connected to the system bus202via a display adapter215and an interface adapter216. In one embodiment, the adapters206,207,215, and216may be connected to one or more I/O buses that are connected to the system bus202via an intermediate bus bridge (not shown). A display219(e.g., a screen or a display monitor) is connected to the system bus202by the display adapter215, which may include a graphics controller to improve the performance of graphics intensive applications and a video controller. A keyboard221, a mouse222, a speaker221, etc., can be interconnected to the system bus202via the interface adapter216, which may include, for example, a Super I/O chip integrating multiple device adapters into a single integrated circuit. Suitable I/O buses for connecting peripheral devices such as hard disk controllers, network adapters, and graphics adapters typically include common protocols, such as the Peripheral Component Interconnect (PCI) and the Peripheral Component Interconnect Express (PCIe). Thus, as configured inFIG.2, the computer system200includes processing capability in the form of the processors201, and, storage capability including the system memory201and the mass storage210, input means such as the keyboard221and the mouse222, and output capability including the speaker221and the display219. In some embodiments, the communications adapter207can transmit data using any suitable interface or protocol, such as the internet small computer system interface, among others. The network212may be a cellular network, a radio network, a wide area network (WAN), a local area network (LAN), or the Internet, among others. An external computing device may connect to the computer system200through the network212. In some examples, an external computing device may be an external webserver or a cloud computing node. 
It is to be understood that the block diagram ofFIG.2is not intended to indicate that the computer system200is to include all of the components shown inFIG.2. Rather, the computer system200can include any appropriate fewer or additional components not illustrated inFIG.2(e.g., additional memory components, embedded controllers, modules, additional network interfaces, etc.). Further, the embodiments described herein with respect to computer system200may be implemented with any appropriate logic, wherein the logic, as referred to herein, can include any suitable hardware (e.g., a processor, an embedded controller, or an application specific integrated circuit, among others), software (e.g., an application, among others), firmware, or any suitable combination of hardware, software, and firmware, in various embodiments. FIG.3is a block diagram of a system300that prevents masquerading service attacks in accordance with one or more embodiments of the present invention.FIG.3depicts one or more computer systems302coupled to one or more computer systems304via a wired and/or wireless network. For example, computer system302can be representative of one or more cloud-based resources (e.g., remote computers), and computer systems304can be representative of numerous client (local) computers. One or more of the computer systems302can be configured to deploy a resource (software, hardware, etc.) for use by one or more computer systems304. Elements of the computer system200ofFIG.2may be used in and/or integrated into computer systems302and computer systems304. In some embodiments of the invention, computation is done directly at the local level. In other words, elements of the computer system302can instead (or in addition) be elements of the computer systems304. The software applications306can include a registration module308, a deployment module310, and a communication module312. The software applications306may utilize and/or be implemented as software211executed on one or more processors201, as discussed inFIG.2. Memory350of the computer systems302can store, for example, cloud admin data352, server data354, service data356, component API keys358, token data360, and acknowledgment(s)362. Block diagrams400,500, and600ofFIGS.4,5, and6, respectively, illustrate interactions between various components of the software applications306and memory350ofFIG.3for preventing masquerading service attacks. FIG.4illustrates a data flow for server registration400in accordance with one or more embodiments of the present invention. As shown inFIG.4, server registration400can include a data flow between a cloud admin402, a server404, and an IAM server406(simply, "IAM406"). In some embodiments of the invention, cloud admin402triggers the server registration400data flow by providing the cloud admin data352to the server404. In some embodiments of the invention, the cloud admin data352includes a cloud admin API key and a service policy (also referred to as an authorization policy) for the respective server. The service policy defines the rules regarding which services can be accessed by other services. For example, the service policy can stipulate that a service A can only be accessed by service A and service B. In some embodiments of the invention, each component is assigned to a service. In this scenario the service policy actually defines which components can be accessed by other components. The server404sends, responsive to receiving the cloud admin data352, server data354to the IAM406.
In some embodiments of the invention, the server data354includes registration information for the server, such as, for example, the cloud admin data352, a server ID, and the server's own public key. In some embodiments of the invention, the server404attaches the server data354to the cloud admin data352. In some embodiments of the invention, the IAM406attempts to verify the server data354, and if successful, sends acknowledgement data (e.g., acknowledgement362) as a response to the server404. In some embodiments of the invention, proper verification of server registration400requires matching one or more components of the server data354(e.g., the cloud admin API key, the service policy, the server ID, the server public key, etc.) against preconfigured data stored in a database (not separately shown) within or accessible to the IAM406. For example, the IAM406can maintain a database of known cloud admin API keys as well as a list of known server IDs for authenticating the cloud admin402and the server404, respectively. In some embodiments of the invention, the IAM406includes one or more private keys which can be used to authenticate the cloud admin's API key and the server's public key. In some embodiments of the invention, failure to authenticate any portion of the server data354results in a denial of the registration process. Once verified (i.e., once the server404receives the acknowledgement362), the server404forwards or provides acknowledgement data (e.g., acknowledgement362) to the cloud admin402, completing the registration process. In some embodiments of the invention, the IAM406adds the verified server404to an internally maintained list of trusted servers. FIG.5illustrates a data flow for component deployment500in accordance with one or more embodiments of the present invention. As shown inFIG.5, component deployment500can include a data flow between an SRE502, a server504, and an IAM506. To register a new component508with the IAM506, the SRE502triggers the component deployment500data flow by providing the service data356to the server504(in this context, the service data356can also be referred to as component deployment request data). In some embodiments of the invention, the service data356includes a component ID and a service ID for the respective component. Here, the component ID identifies a component, and the service ID identifies the respective service. The server504sends, responsive to receiving the service data356, server data354to the IAM506. In some embodiments of the invention, the server data354includes registration information for the component508, such as, for example, the service data356and a server ID. In some embodiments of the invention, the server504attaches the server data354to the service data356. In some embodiments of the invention, the IAM506attempts to verify the server data354, and if successful, sends acknowledgement data362that includes a component API key358. In some embodiments of the invention, the IAM506internally generates the component API key358. In some embodiments of the invention, the IAM506generates the component API key358in response to receiving the server data354(i.e., on-demand key generation). In some embodiments of the invention, the IAM506pre-generates a list of API keys and selects an unused key for use as the component API key358.
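A condensed sketch of the server registration flow of FIG.4 and the component credential issuance of FIG.5 described above is given below. The class layout, the dictionary-based storage, and the use of secrets.token_hex for key generation are assumptions of this sketch, and request signing with the server's private key is omitted for brevity; the final helper anticipates the acknowledgement handling described in the following paragraphs.

```python
import secrets

class IAMSketch:
    """Registers trusted servers (FIG. 4) and issues component API keys (FIG. 5)."""

    def __init__(self, known_admin_api_keys: set, known_server_ids: set):
        self.known_admin_api_keys = known_admin_api_keys   # preconfigured data
        self.known_server_ids = known_server_ids           # preconfigured data
        self.trusted_servers = {}        # server ID -> registration record
        self.component_api_keys = {}     # (server ID, component ID) -> API key

    def register_server(self, cloud_admin_api_key, service_policy,
                        server_id, server_public_key):
        # Failure to authenticate any portion of the server data denies registration.
        if cloud_admin_api_key not in self.known_admin_api_keys:
            return None
        if server_id not in self.known_server_ids:
            return None
        self.trusted_servers[server_id] = {"public_key": server_public_key,
                                           "service_policy": service_policy}
        return {"registered": server_id}                    # acknowledgement

    def deploy_component(self, server_id, component_id, service_id):
        # Credentials are only issued to components on trusted (registered) servers.
        if server_id not in self.trusted_servers:
            return None
        api_key = secrets.token_hex(32)                     # internally generated
        self.component_api_keys[(server_id, component_id)] = api_key
        return {"deployed": component_id, "service": service_id,
                "component_api_key": api_key}

def acknowledgement_for_sre(deploy_ack: dict) -> dict:
    """Strip the API key before acknowledging the SRE, so the administrator
    never sees the component credential."""
    return {k: v for k, v in deploy_ack.items() if k != "component_api_key"}
```

The component-to-component communication flow of FIG.6, described below, would then check a presented API key against the component_api_keys mapping before issuing a token.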
In some embodiments of the invention, proper verification of the component deployment500data flow requires matching one or more components of the server data354(e.g., the component ID, the service ID, the server ID, etc.) against preconfigured data stored in a database (not separately shown) within or accessible to the IAM506. For example, the IAM506can maintain a database of trusted servers (e.g., trusted server IDs) collected during the server registration400data flow discussed previously with respect toFIG.4. In this manner, the IAM506can ensure that the component deployment500data flow only involves components within trusted servers. In some embodiments of the invention, failure to authenticate any portion of the server data354results in a denial of the component deployment process. Once verified (i.e., once the server504receives the acknowledgement362and the component API key358), the server504forwards or provides acknowledgement data (e.g., acknowledgement362) to the SRE502, completing the deployment process. Notably, the component API key358is removed from the acknowledgement362prior to transmitting the acknowledgement data to the SRE502. In this manner, the SRE502is never provided direct access to the component API key358. In some embodiments of the invention, the server504and/or the component508is configured as a lock-down system (i.e., a lock-down server and/or a lock-down component, not separately shown). In some embodiments of the invention, the component API key358is stored within the lock-down system. In this manner, the component API key358cannot be retrieved by the SRE502(or anyone else, including the cloud admin402). For example, the server504can be running a secured, lock-down operating system having only a limited API. In some embodiments of the invention, the limited API does not include functionality for transmitting the component API key358in response to a request. Instead, the limited API includes functionality which only allows the respective server to provide the component API key358to the IAM506. In some embodiments of the invention, the component API key358can only be provided when necessary for internal requirements (e.g., when a component on the trusted server needs access to another component in the cloud environment). Notably, the decision to supply the component API key358lies within the component and the respective server and no functionality is provided which would allow either to provide the component API key358to the SRE502. FIG.6illustrates a data flow for component communication600in accordance with one or more embodiments of the present invention. As shown inFIG.6, component communication600can include a data flow between an IAM602, a first server (e.g., Server A604), and a second server (e.g., Server B606). In some embodiments of the invention, the IAM602, the IAM506, and the IAM406are the same IAM server. Similarly, in some embodiments of the invention, the server A604and/or the server B606undergoes the same server registration and component deployment processes400,500discussed previously with respect to either or both of server504(FIG.5) and server404(FIG.4). Notably, the component communication600data flow does not involve an SRE608. In some embodiments of the invention, a component in Server A604(here, "Component A") needs access to another component (here, "Component B") stored on Server B606.
To initiate a request to access Component B, the Server A604sends server data354to the IAM602(in this context, the server data354can also be referred to as a communication request). The server data354can include a component API key (here, “Component A API key”) previously provided to the IAM506during the component deployment500(FIG.5). In some embodiments of the invention, the server data354is signed by the server A604using, e.g., the same server ID provided to the IAM506during the component deployment500(FIG.5). In some embodiments of the invention, the IAM602attempts to verify the server data354, and if successful, sends token data360(also referred to as a credential, or, in the illustrated example, “Component A token”) to Server A604. In some embodiments of the invention, the IAM602matches the Component A API key against the component API key previously provided during the component deployment500(FIG.5). In this manner, the IAM602can ensure that the request originates from a properly deployed component. In some embodiments of the invention, the IAM602matches the Server ID against the Server ID previously provided during the server registration400(FIG.4). In this manner, the IAM602can ensure that the request originates from a trusted server. In other words, the IAM602verifies both the component and the server during verification. In some embodiments of the invention, failure to authenticate any portion of the server data354results in a denial of the request. Once verified (i.e., once the Server A604receives the component A token), server A604sends the token data360to Component B, either directly or through the Server B606. Component B packages the component A token with authenticating information, such as, for example, Component B ID, Service ID, and Server B ID, each generated according to one or more embodiments. The packaged information is then provided as server data354to the IAM602. In some embodiments of the invention, the server data354is signed by the private key of the Server B606. In some embodiments of the invention, the IAM602attempts to verify the server data354, and if successful, sends acknowledgement data (e.g., acknowledgement362) as a response to server B606. In some embodiments of the invention, proper verification of the server data354requires matching one or more components of the server data354(e.g., component B ID, service ID, component A token, server B ID, etc.) against data stored in a database (not separately shown) within or accessible to the IAM602. As discussed previously, this data can be provided or generated during the prior server registration404and component deployment500data flows. In some embodiments of the invention, failure to authenticate any portion of the server data354results in a denial of the access request (i.e., the component-to-component communication request). Once verified (i.e., once server B606receives the acknowledgement362), server B606forwards or provides acknowledgement data (e.g., acknowledgement362) to Server A604, completing the communication process. In some embodiments of the invention, Component A and Component B begin communication (sharing data, etc.) following the receipt of the acknowledgement362by the Server A604. Referring now toFIG.7, a flowchart700for preventing masquerading service attacks is generally shown according to an embodiment. The flowchart700is described in reference toFIGS.1-6and may include additional blocks not depicted inFIG.7. 
Although depicted in a particular order, the blocks depicted inFIG.7can be rearranged, subdivided, and/or combined. At block702, a server in a cloud environment receives a cloud admin API key and a service policy from a cloud administrator of the cloud environment. At block704, the server sends server data that includes the cloud admin API key, the service policy, and a server identifier to an IAM server of the cloud environment. In some embodiments of the invention, the server identifier includes identification data that is unique to the server in the cloud environment. At block706, the server receives a registration acknowledgment from the IAM server. In some embodiments of the invention, the registration acknowledgment indicates that the IAM server has verified the public key against a private key internal to the IAM server. In some embodiments of the invention, the registration acknowledgment indicates that the IAM server has added the server to an internally maintained list of trusted servers. At block708, the server sends the registration acknowledgment to the cloud administrator. The method can further include signing, by the server, the server data using a public key of the server. In some embodiments of the invention, the server signs the server data prior to sending the server data to the IAM server. Referring now toFIG.8, a flowchart800for preventing masquerading service attacks is generally shown according to an embodiment. The flowchart800is described in reference toFIGS.1-6and may include additional blocks not depicted inFIG.8. Although depicted in a particular order, the blocks depicted inFIG.8can be rearranged, subdivided, and/or combined. At block802, a server in a cloud environment receives a request for component deployment from an administrator of the cloud environment. In some embodiments of the invention, the request includes an identifier for a specific component within the server. In some embodiments of the invention, the administrator is a site reliability engineer (SRE) of the cloud environment. At block804, the server sends server data that includes the request for component deployment and a server identifier to an IAM server of the cloud environment. In some embodiments of the invention, the server identifier includes identification data that is unique to the server in the cloud environment. At block806, the server receives an acknowledgment from the IAM server that includes a component API key. In some embodiments of the invention, the acknowledgement from the IAM server indicates that the component has been deployed by the IAM server. At block808, the server sends an acknowledgment to the administrator that does not include the API key. In some embodiments of the invention, the server removes the API key from the acknowledgment from the IAM server. In some embodiments of the invention, the server generates a new acknowledgement that does not include the component API key. The method can further include signing, by the server, the server data using a public key of the server. In some embodiments of the invention, the server signs the server data prior to sending the server data to the IAM server. Referring now toFIG.9, a flowchart900for preventing masquerading service attacks is generally shown according to an embodiment. The flowchart900is described in reference toFIGS.1-6and may include additional blocks not depicted inFIG.9. Although depicted in a particular order, the blocks depicted inFIG.9can be rearranged, subdivided, and/or combined. 
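A condensed, server-side view of the FIG.7 and FIG.8 sequences might look like the following sketch. The `iam` object and `sign` callable are placeholders of my own (the patent does not define such an API); they stand in for the IAM server interface and the server's signing step, and the dictionaries mirror the data items named in the blocks.

```python
def register_server(iam, server_id, cloud_admin_api_key, service_policy, sign):
    """Blocks 702-708: registration performed on behalf of the cloud administrator."""
    server_data = {"server_id": server_id,
                   "cloud_admin_api_key": cloud_admin_api_key,
                   "service_policy": service_policy}
    ack = iam.register(sign(server_data))       # blocks 704 and 706
    return ack                                  # block 708: forwarded to the administrator

def deploy_component(iam, server_id, component_id, sign):
    """Blocks 802-808: deployment requested by the SRE."""
    server_data = {"server_id": server_id, "component_id": component_id}
    ack = iam.deploy(sign(server_data))         # blocks 804 and 806
    # Block 808: the acknowledgement sent onward never carries the component API key.
    return {k: v for k, v in ack.items() if k != "component_api_key"}
```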
At block902, a first server in a cloud environment sends a communication request that includes an API key and a first server identifier to an IAM server of the cloud environment. In some embodiments of the invention, the API key is uniquely assigned by the IAM server to a first component of the first server. In some embodiments of the invention, the first server identifier includes identification data that is unique to the first server in the cloud environment. At block904, the first server receives a credential that includes a token for the first component. In some embodiments of the invention, the first component is stored within a lock-down system of the first server. In some embodiments of the invention, the lock-down system includes a limited API that does not include functionality for providing the API key to an SRE of the cloud environment. At block906, the first server sends the credential to a second server in the cloud environment. In some embodiments of the invention, the second server identifier includes identification data that is unique to the second server in the cloud environment. At block908, the second server sends server data that includes the credential, a second server identifier, and an identifier for a second component of the second server to the IAM server. In some embodiments of the invention, the server data further includes a service identifier associated with a service policy of the cloud environment. At block910, the second server receives an acknowledgment from the IAM server. At block912, the second server sends the acknowledgment to the first server. Notably, the acknowledgments do not include the API key. The method can further include initializing a communication channel between the first component and the second component after the first server receives the acknowledgement from the second server. In this manner the first component can access data from the second component that is otherwise restricted. It is to be understood that although this disclosure includes a detailed description on cloud computing, implementation of the teachings recited herein are not limited to a cloud computing environment. Rather, embodiments of the present invention are capable of being implemented in conjunction with any other type of computing environment now known or later developed. Cloud computing is a model of service delivery for enabling convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, network bandwidth, servers, processing, memory, storage, applications, virtual machines, and services) that can be rapidly provisioned and released with minimal management effort or interaction with a provider of the service. This cloud model may include at least five characteristics, at least three service models, and at least four deployment models. Characteristics are as follows: On-demand self-service: a cloud consumer can unilaterally provision computing capabilities, such as server time and network storage, as needed automatically without requiring human interaction with the service's provider. Broad network access: capabilities are available over a network and accessed through standard mechanisms that promote use by heterogeneous thin or thick client platforms (e.g., mobile phones, laptops, and PDAs). Resource pooling: the provider's computing resources are pooled to serve multiple consumers using a multi-tenant model, with different physical and virtual resources dynamically assigned and reassigned according to demand. 
There is a sense of location independence in that the consumer generally has no control or knowledge over the exact location of the provided resources but may be able to specify location at a higher level of abstraction (e.g., country, state, or datacenter). Rapid elasticity: capabilities can be rapidly and elastically provisioned, in some cases automatically, to quickly scale out and rapidly released to quickly scale in. To the consumer, the capabilities available for provisioning often appear to be unlimited and can be purchased in any quantity at any time. Measured service: cloud systems automatically control and optimize resource use by leveraging a metering capability at some level of abstraction appropriate to the type of service (e.g., storage, processing, bandwidth, and active user accounts). Resource usage can be monitored, controlled, and reported, providing transparency for both the provider and consumer of the utilized service. Service Models are as follows: Software as a Service (SaaS): the capability provided to the consumer is to use the provider's applications running on a cloud infrastructure. The applications are accessible from various client devices through a thin client interface such as a web browser (e.g., web-based e-mail). The consumer does not manage or control the underlying cloud infrastructure including network, servers, operating systems, storage, or even individual application capabilities, with the possible exception of limited user-specific application configuration settings. Platform as a Service (PaaS): the capability provided to the consumer is to deploy onto the cloud infrastructure consumer-created or acquired applications created using programming languages and tools supported by the provider. The consumer does not manage or control the underlying cloud infrastructure including networks, servers, operating systems, or storage, but has control over the deployed applications and possibly application hosting environment configurations. Infrastructure as a Service (IaaS): the capability provided to the consumer is to provision processing, storage, networks, and other fundamental computing resources where the consumer is able to deploy and run arbitrary software, which can include operating systems and applications. The consumer does not manage or control the underlying cloud infrastructure but has control over operating systems, storage, deployed applications, and possibly limited control of select networking components (e.g., host firewalls). Deployment Models are as follows: Private cloud: the cloud infrastructure is operated solely for an organization. It may be managed by the organization or a third party and may exist on-premises or off-premises. Community cloud: the cloud infrastructure is shared by several organizations and supports a specific community that has shared concerns (e.g., mission, security requirements, policy, and compliance considerations). It may be managed by the organizations or a third party and may exist on-premises or off-premises. Public cloud: the cloud infrastructure is made available to the general public or a large industry group and is owned by an organization selling cloud services. Hybrid cloud: the cloud infrastructure is a composition of two or more clouds (private, community, or public) that remain unique entities but are bound together by standardized or proprietary technology that enables data and application portability (e.g., cloud bursting for load-balancing between clouds). 
A cloud computing environment is service oriented with a focus on statelessness, low coupling, modularity, and semantic interoperability. At the heart of cloud computing is an infrastructure that includes a network of interconnected nodes. Referring now toFIG.10, illustrative cloud computing environment50is depicted. As shown, cloud computing environment50includes one or more cloud computing nodes10with which local computing devices used by cloud consumers, such as, for example, personal digital assistant (PDA) or cellular telephone54A, desktop computer54B, laptop computer54C, and/or automobile computer system54N may communicate. Nodes10may communicate with one another. They may be grouped (not shown) physically or virtually, in one or more networks, such as Private, Community, Public, or Hybrid clouds as described herein above, or a combination thereof. This allows cloud computing environment50to offer infrastructure, platforms and/or software as services for which a cloud consumer does not need to maintain resources on a local computing device. It is understood that the types of computing devices54A-N shown inFIG.10are intended to be illustrative only and that computing nodes10and cloud computing environment50can communicate with any type of computerized device over any type of network and/or network addressable connection (e.g., using a web browser). Referring now toFIG.11, a set of functional abstraction layers provided by cloud computing environment50(FIG.10) is shown. It should be understood in advance that the components, layers, and functions shown inFIG.11are intended to be illustrative only and embodiments of the invention are not limited thereto. As depicted, the following layers and corresponding functions are provided: Hardware and software layer60includes hardware and software components. Examples of hardware components include: mainframes61; RISC (Reduced Instruction Set Computer) architecture based servers62; servers63; blade servers64; storage devices65; and networks and networking components66. In some embodiments, software components include network application server software67and database software68. Virtualization layer70provides an abstraction layer from which the following examples of virtual entities may be provided: virtual servers71; virtual storage72; virtual networks73, including virtual private networks; virtual applications and operating systems74; and virtual clients75. In one example, management layer80may provide the functions described below. Resource provisioning81provides dynamic procurement of computing resources and other resources that are utilized to perform tasks within the cloud computing environment. Metering and Pricing82provide cost tracking as resources are utilized within the cloud computing environment, and billing or invoicing for consumption of these resources. In one example, these resources may include application software licenses. Security provides identity verification for cloud consumers and tasks, as well as protection for data and other resources. User portal83provides access to the cloud computing environment for consumers and system administrators. Service level management84provides cloud computing resource allocation and management such that required service levels are met. Service Level Agreement (SLA) planning and fulfillment85provide pre-arrangement for, and procurement of, cloud computing resources for which a future requirement is anticipated in accordance with an SLA.
Workloads layer90provides examples of functionality for which the cloud computing environment may be utilized. Examples of workloads and functions which may be provided from this layer include: mapping and navigation91; software development and lifecycle management92; virtual classroom education delivery91; data analytics processing94; transaction processing95; and software applications96(e.g., software applications206ofFIG.2), etc. Also, software applications can function with and/or be integrated with Resource provisioning81. Various embodiments of the invention are described herein with reference to the related drawings. Alternative embodiments of the invention can be devised without departing from the scope of this invention. Various connections and positional relationships (e.g., over, below, adjacent, etc.) are set forth between elements in the following description and in the drawings. These connections and/or positional relationships, unless specified otherwise, can be direct or indirect, and the present invention is not intended to be limiting in this respect. Accordingly, a coupling of entities can refer to either a direct or an indirect coupling, and a positional relationship between entities can be a direct or indirect positional relationship. Moreover, the various tasks and process steps described herein can be incorporated into a more comprehensive procedure or process having additional steps or functionality not described in detail herein. One or more of the methods described herein can be implemented with any or a combination of the following technologies, which are each well known in the art: a discrete logic circuit(s) having logic gates for implementing logic functions upon data signals, an application specific integrated circuit (ASIC) having appropriate combinational logic gates, a programmable gate array(s) (PGA), a field programmable gate array (FPGA), etc. For the sake of brevity, conventional techniques related to making and using aspects of the invention may or may not be described in detail herein. In particular, various aspects of computing systems and specific computer programs to implement the various technical features described herein are well known. Accordingly, in the interest of brevity, many conventional implementation details are only mentioned briefly herein or are omitted entirely without providing the well-known system and/or process details. In some embodiments, various functions or acts can take place at a given location and/or in connection with the operation of one or more apparatuses or systems. In some embodiments, a portion of a given function or act can be performed at a first device or location, and the remainder of the function or act can be performed at one or more additional devices or locations. The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting. As used herein, the singular forms “a”, “an” and “the” are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms “comprises” and/or “comprising,” when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, element components, and/or groups thereof. 
The corresponding structures, materials, acts, and equivalents of all means or step plus function elements in the claims below are intended to include any structure, material, or act for performing the function in combination with other claimed elements as specifically claimed. The present disclosure has been presented for purposes of illustration and description, but is not intended to be exhaustive or limited to the form disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the disclosure. The embodiments were chosen and described in order to best explain the principles of the disclosure and the practical application, and to enable others of ordinary skill in the art to understand the disclosure for various embodiments with various modifications as are suited to the particular use contemplated. The diagrams depicted herein are illustrative. There can be many variations to the diagram or the steps (or operations) described therein without departing from the spirit of the disclosure. For instance, the actions can be performed in a differing order or actions can be added, deleted or modified. Also, the term “coupled” describes having a signal path between two elements and does not imply a direct connection between the elements with no intervening elements/connections therebetween. All of these variations are considered a part of the present disclosure. The following definitions and abbreviations are to be used for the interpretation of the claims and the specification. As used herein, the terms “comprises,” “comprising,” “includes,” “including,” “has,” “having,” “contains” or “containing,” or any other variation thereof, are intended to cover a non-exclusive inclusion. For example, a composition, a mixture, process, method, article, or apparatus that comprises a list of elements is not necessarily limited to only those elements but can include other elements not expressly listed or inherent to such composition, mixture, process, method, article, or apparatus. Additionally, the term “exemplary” is used herein to mean “serving as an example, instance or illustration.” Any embodiment or design described herein as “exemplary” is not necessarily to be construed as preferred or advantageous over other embodiments or designs. The terms “at least one” and “one or more” are understood to include any integer number greater than or equal to one, i.e. one, two, three, four, etc. The terms “a plurality” are understood to include any integer number greater than or equal to two, i.e. two, three, four, five, etc. The term “connection” can include both an indirect “connection” and a direct “connection.” The terms “about,” “substantially,” “approximately,” and variations thereof, are intended to include the degree of error associated with measurement of the particular quantity based upon the equipment available at the time of filing the application. For example, “about” can include a range of ±8% or 5%, or 2% of a given value. The present invention may be a system, a method, and/or a computer program product at any possible technical detail level of integration. The computer program product may include a computer readable storage medium (or media) having computer readable program instructions thereon for causing a processor to carry out aspects of the present invention. 
For example, any or all of the blocks depicted with respect toFIGS.7,8, and9, can be implemented as part of a computer-implemented method, a system, or as a computer program product. The system can include a memory having computer readable instructions and one or more processors for executing the computer readable instructions, the computer readable instructions controlling the one or more processors to perform operations including those depicted with respect toFIGS.7,8, and9. The computer program product can include a computer readable storage medium having program instructions embodied therewith, the program instructions executable by one or more processors to cause the one or more processors to perform operations including those depicted with respect toFIGS.7,8, and9. The computer readable storage medium can be a tangible device that can retain and store instructions for use by an instruction execution device. The computer readable storage medium may be, for example, but is not limited to, an electronic storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the foregoing. A non-exhaustive list of more specific examples of the computer readable storage medium includes the following: a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or Flash memory), a static random access memory (SRAM), a portable compact disc read-only memory (CD-ROM), a digital versatile disk (DVD), a memory stick, a floppy disk, a mechanically encoded device such as punch-cards or raised structures in a groove having instructions recorded thereon, and any suitable combination of the foregoing. A computer readable storage medium, as used herein, is not to be construed as being transitory signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through a waveguide or other transmission media (e.g., light pulses passing through a fiber-optic cable), or electrical signals transmitted through a wire. Computer readable program instructions described herein can be downloaded to respective computing/processing devices from a computer readable storage medium or to an external computer or external storage device via a network, for example, the Internet, a local area network, a wide area network and/or a wireless network. The network may comprise copper transmission cables, optical transmission fibers, wireless transmission, routers, firewalls, switches, gateway computers and/or edge servers. A network adapter card or network interface in each computing/processing device receives computer readable program instructions from the network and forwards the computer readable program instructions for storage in a computer readable storage medium within the respective computing/processing device. 
Computer readable program instructions for carrying out operations of the present invention may be assembler instructions, instruction-set-architecture (ISA) instructions, machine instructions, machine dependent instructions, microcode, firmware instructions, state-setting data, configuration data for integrated circuitry, or either source code or object code written in any combination of one or more programming languages, including an object oriented programming language such as Smalltalk, C++, or the like, and procedural programming languages, such as the “C” programming language or similar programming languages. The computer readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server. In the latter scenario, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider). In some embodiments, electronic circuitry including, for example, programmable logic circuitry, field-programmable gate arrays (FPGA), or programmable logic arrays (PLA) may execute the computer readable program instruction by utilizing state information of the computer readable program instructions to personalize the electronic circuitry, in order to perform aspects of the present invention. Aspects of the present invention are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer readable program instructions. These computer readable program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. These computer readable program instructions may also be stored in a computer readable storage medium that can direct a computer, a programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer readable storage medium having instructions stored therein comprises an article of manufacture including instructions which implement aspects of the function/act specified in the flowchart and/or block diagram block or blocks. The computer readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other device to cause a series of operational steps to be performed on the computer, other programmable apparatus or other device to produce a computer implemented process, such that the instructions which execute on the computer, other programmable apparatus, or other device implement the functions/acts specified in the flowchart and/or block diagram block or blocks. 
The flowchart and block diagrams in the Figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods, and computer program products according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the blocks may occur out of the order noted in the Figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems that perform the specified functions or acts or carry out combinations of special purpose hardware and computer instructions. The descriptions of the various embodiments of the present invention have been presented for purposes of illustration, but are not intended to be exhaustive or limited to the embodiments disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described embodiments. The terminology used herein was chosen to best explain the principles of the embodiments, the practical application or technical improvement over technologies found in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments described herein.
11943222 | DETAILED DESCRIPTION The present disclosure is directed towards improved systems and methods for security authentication. In some embodiments, the disclosed systems and methods provide multi-device multi-factor authentication. The present disclosure may provide an improved method for security authentication that may be used to determine whether a user is authentic and should be provided with access to a secured environment or secured information. Alternatively, the improved method for security authentication may be used to determine whether a secured action should be completed. Secured environments, may include online portals, registries, email accounts, and the like. Secured environments may be used in connection with financial applications (banking, credit cards), e-commerce applications, healthcare applications, utilities, social media accounts, educational logins, workplace logins, and the like. Secured actions may include password resets, completing large banking transactions, and the like. The improved method for security authentication disclosed herein may be used in place of existing multi-factor authentication schemes. FIG.1illustrates a system for improved security authentication in accordance with some embodiments of the present disclosure. As illustrated inFIG.1, a system100may include a database101, server system103, network105, initiating computing device107and a plurality of authorization providing computing devices109-A to109-N (collectively,109). As illustrated inFIG.1, each of the initiating computing device107and/or authorization providing computing devices109may be communicatively coupled to the server system103via a network105. Further, the initiating computing device107may have a separate communication link with the authorization providing computing devices109. In some embodiments, the initiating computing device107may be separate and distinct from the authorization providing computing device109. The network105may include, or operate in conjunction with, an ad hoc network, an intranet, an extranet, a virtual private network (VPN), a local area network (LAN), a wireless LAN (WLAN), a wide area network (WAN), a wireless WAN (WWAN), a metropolitan area network (MAN), the Internet, a portion of the Internet, a portion of the Public Switched Telephone Network (PSTN), a plain old telephone service (POTS) network, a cellular telephone network, a wireless network, a Wi-Fi® network, another type of network, or a combination of two or more such networks. The initiating computing device107may be a computing device such as a mobile device, smartphone, tablet, laptop, desktop, computing system and the like. In some embodiments, the initiating computing device107may include a user interface such as an application, website, email or the like. A user of the initiating computing device107may use the user interface to login to a secured environment, to access secured information, or complete secured actions. 
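For illustration only, the parties in FIG.1 can be modeled with a few simple records. The class and field names below are not taken from the disclosure; they merely label the roles described above (initiating device, authorization providing devices, and the server system with its preference database).

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class InitiatingDevice:
    device_id: str                       # unique device identifier

@dataclass
class AuthorizationProvidingDevice:
    device_id: str
    contact_mode: str                    # e.g. "push", "text", "voice", "email"

@dataclass
class ServerSystem:
    preferences_db: Dict[str, dict] = field(default_factory=dict)   # user -> authentication preferences
    providers: List[AuthorizationProvidingDevice] = field(default_factory=list)
```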
When the user of the initiating computing device107attempts to login to the secured environment, to access secured information, or complete secured actions, the server system103may receive the request for security authentication, determine an authorization providing computing device109based on authentication preferences stored in the database101communicatively coupled to the server system103, generate and transmit authentication information to the determined authorization providing computing device109, receive from the initiating computing device107or the determined authorization providing computing device109an authentication input, determine whether the received authentication input matches the transmitted authentication information, and complete the request for security authentication when the received authentication input matches the generated and transmitted authentication information. In some embodiments, the initiating computing device107may be separate and distinct from the authorization providing computing device109. In some embodiments, the systems and methods for improved security authentication described herein may utilize a server system103that receives a request for security authentication from a first party by way of the initiating computing device107, and provides authenticating information to a second party by way of the authorization providing computing device109. The second party may be distinct from the first party. The second party may be pre-identified by the first party in accordance with authentication preferences stored in the database101. In some embodiments, the second party may receive that authentication information using a push notification on a software application and the like. In some embodiments, once the second party has received the authenticating information, the second party may initiate a communication with the first party to verify that the first party has indeed requested access to the secured environment or information and if so, provide the first party with the authenticating information. In some embodiments, this may be effectuated by a communication link separate from the network105. The first party may then provide the information to the server system103. In some embodiments, the first party may be provided with an updated user interface that is configured to receive the authentication input from the first party. In some embodiments, the first party may have to request the updated user interface from the server system by a clicking a button, or the like. In some embodiments, once the first party has requested access to the secured environment, or secured information, the updated user interface may be automatically provided. Alternatively, in some embodiments, once the second party has communicated with the first party and verified (i) the identity of the first party and (ii) that the first party has submitted the request for security authentication, the second party may provide the server system103with the authenticating information. In some embodiments, the second party may provide the server system103with an approval indicating that the first party should be granted access to the secured environment and/or secured information, or that the secured action should be permitted. In some embodiments, the second party may have to verify their own identity by using a biometric tool (e.g., fingerprint, facial scan), password, pin and the like in order to be able to provide approval of the requested access. 
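A minimal server-side sketch of this flow, under the assumption that preferences are keyed by user and that the authentication information is a short random one-time value, is shown below. The store names, field names, and the `notify` callable are illustrative assumptions rather than elements of the disclosure.

```python
import secrets
import time

PREFERENCES = {"alice": {"providers": ["bob-phone"], "lifespan_s": 300}}   # illustrative
PENDING = {}                     # request_id -> {"info": ..., "expires": ...}

def handle_authentication_request(request_id, user, notify):
    """Pick an authorization providing device and push authentication information to it."""
    prefs = PREFERENCES[user]
    provider = prefs["providers"][0]               # selection driven by stored preferences
    info = secrets.token_hex(3)                    # short random one-time value
    PENDING[request_id] = {"info": info,
                           "expires": time.time() + prefs["lifespan_s"]}
    notify(provider, info)                         # push / text / call per the preferences
    return provider
```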
In some embodiments, where the second party may be configured to grant access to the first party without providing the first party with the authenticating information, the server system103may be further configured to verify that the second party and the first party are using unique devices, when the server system103receives authorization from the second party (and not the first party). In particular, in some embodiments, the devices associated with each of the first party and the second party, namely the initiating device107and the authorizing device109may have unique device identifiers. Accordingly, this may prevent a user from simply using the same device to log out and back in, thus subverting the security provided by having a second authorizing device separate from the first initiating device. Further, requiring that the initiating device107and the authorizing device109have unique device identifiers may also prevent a user having a phone capable of interfacing with multiple SIM cards from subverting the system. Communication between the sever system, initiating computing device107and/or authorization providing computing device109may be effectuated by one or more software applications, push notifications, text messages, webpages, emails, and the like. In some embodiments, the server system103may be communicatively coupled to a database101that is configured to store authentication preferences. Authentication preferences may include a list of authorization providers (i.e., users of the authorization providing device), a list of individuals the user of the initiating computing device107indicates as being acceptable to verify that the user of the initiating computing device107is who they say they are. Additionally, in some embodiments, the authentication preferences may include an authorization provider mode of communication, which may be indicative of the mode of communication the authorization provider prefers to communicate with the server system103using. For example, the authorization provider mode of communication may indicate that the authorization provider prefers to receive the authentication information from the server system103via a text, voice call, email message, software application, and the like. The authentication preferences may also include response timing preferences. Response timing preferences may include a lifespan (i.e., how long the authentication information generated by the server system103is valid), and/or a timeout amount (i.e., how long the server system103will wait to receive an authentication input after transmitting the authentication information to the authorization providing computing device109). The response timing preferences may also include an indication as to whether the server system103should try to generate and transmit authentication information to a different authorization providing computing device109if the authentication input has not been completed within the timeout amount. The authentication preferences may also include a prioritization scheme for authorization providers. The prioritization scheme may specify the order in which a set of authorization providers be contacted. For example, a user may specify that the server system103first attempts to contact their spouse, followed by siblings, parents, friends, and the like. The authentication preferences may also include a concurrent/consecutive setting, in which the user specifies whether authorization providers are contacted concurrently or consecutively. 
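The preference items and the device-uniqueness check described above could be represented along the following lines. The field names paraphrase the text and are not a schema defined by the patent.

```python
# One plausible shape for a user's stored authentication preferences.
AUTH_PREFERENCES = {
    "authorization_providers": ["spouse", "sibling", "parent"],   # prioritization order
    "provider_contact_mode": {"spouse": "push", "sibling": "text"},
    "response_timing": {
        "lifespan_s": 300,            # how long generated authentication information stays valid
        "timeout_s": 120,             # how long to wait for an authentication input
        "retry_next_provider": True,  # move on to another provider after a timeout
    },
    "dispatch": "consecutive",        # or "concurrent"
}

def devices_are_distinct(initiating_device_id, authorizing_device_id):
    """Reject approvals where the initiator and the authorizer are the same device."""
    return initiating_device_id != authorizing_device_id
```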
In accordance with the concurrent setting, multiple authorization providers from the list of authorization providers may be contacted at once and provided with the same authentication information. In accordance with the consecutive setting, the authorization providers may be contacted sequentially, one after the other, if the authentication information has timed out. In the consecutive setting, authorization providers may be contacted consecutively in accordance with a prioritization scheme. Alternatively, the authorization providers may be contacted in any suitable order. In some embodiments, each authorization provider may be provided with a different authentication information. This way, the authorization provider that actually provides approval of the secured action, or access to the secured environment and/or information may be identified (in the event that multiple authorization providers are contacted at once). In some embodiments, the authentication preferences may be configured such that a newly added authorization provider may not be allowed to provide authorization for an initiating time period. In this way, a malicious user cannot merely add a “friend” that is capable of approving a secured action, or access to a secured environment, or information. The authentication preferences may be entered by the user of the initiating computing device107using a user interface of the initiating computing device107. The initiating computing device107may be further configured to transmit the received authentication preferences to the server system103. The server system103may then store the received authentication preferences in the database101coupled to the server system103. In some embodiments, the authentication information is a one-time pin (OTP). In some embodiments, the request for authentication is received at the server system103after the user of the initiating computing device107has successfully entered information they know (e.g., password or Personal Identification Number (PIN)). In some embodiments, the authentication information is randomly generated by the server system103and/or stored in the database101. In some embodiments, each authorization providing computing device109may be associated with distinct authentication information. In some embodiments, the server system103is configured to receive an authentication input. The authentication input may be provided by the initiating computing device107or the authorization providing computing device109. The server system103may then determine whether the received authentication input matches the authentication information that was transmitted to the authorization providing computing device109. Determining whether the received authentication input matches the authentication information that was transmitted to the authorization providing computing device109may include comparing the authentication information that was generated and transmitted to the authorization providing computing device109to the authentication input received at the server system103. In some embodiments, the authentication information that was generated and transmitted to the authorization providing computer device may be stored in the database101. In some embodiments, the authentication information may be retrieved from the database101in order to complete the comparison. 
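The dispatch behavior described above, including per-provider one-time values, the concurrent versus consecutive setting, and the exclusion of recently added providers, is sketched below. The cool-down constant and helper names are assumptions for illustration, not values stated in the disclosure.

```python
import secrets
import time

NEW_PROVIDER_COOLDOWN_S = 24 * 3600       # assumed initiating time period for new providers

def eligible_providers(providers, now=None):
    """Drop providers that were added too recently to grant authorization."""
    now = time.time() if now is None else now
    return [p for p in providers if now - p["added_at"] >= NEW_PROVIDER_COOLDOWN_S]

def dispatch(providers, mode, notify):
    """Contact providers and return {provider name: one-time value} for those contacted."""
    targets = providers if mode == "concurrent" else providers[:1]   # consecutive: one at a time
    issued = {}
    for provider in targets:
        issued[provider["name"]] = secrets.token_hex(3)   # distinct value per provider
        notify(provider["name"], issued[provider["name"]])
    return issued
```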
In some embodiments, when the authentication input received by the server system103matches the generated authentication information that was transmitted to the authorization providing computing device109, the request for security authentication may then be completed. This may entail providing the user of the initiating computing device107access to the secured environment or secured information. For example, the user may then be able to reset their password, login to an online banking account, access a social media page, and the like. FIG.2illustrates a process for improved security authentication in accordance with some embodiments of the present disclosure. In a first step201, a server system103may receive a request for security authentication from an initiating computing device107. In a second step203, the server system103may determine at least one authorization providing computing device109based on authentication preferences. In a third step205, the server system103may generate and transmit authentication information to the determined authorization providing computing devices109. In a fourth step207, the server system103may receive authentication input. In a fifth step209, the server system103may determine whether the received authentication input matches the transmitted authentication information. In a sixth step211, the server system103may complete the request for security authentication when the received authentication input matches the generated and transmitted authentication information. In some embodiments, an improved method for security authentication may also include a server system that performs the steps of receiving a request for security authentication from an initiating computing device, determining an authorization providing computing device distinct from the initiating computing device based on authentication preferences stored in a database communicatively coupled to the server system, generating and transmitting authentication information to the determined authorization providing computing device, receiving authentication input from the initiating computing device, determining whether the received authentication input matches the transmitted authentication information and completing the request for security authentication when the received authentication input matches the generated and transmitted authentication information. FIG.3illustrates a computer system in accordance with some embodiments of the present disclosure.FIG.3illustrates a functional block diagram of a machine in the example form of computer system300, within which a set of instructions for causing the machine to perform any one or more of the methodologies, processes or functions discussed herein may be executed. In some examples, the machine may be connected (e.g., networked) to other machines as described above. The machine may operate in the capacity of a server or a client machine in a client-server network environment, or as a peer machine in a peer-to-peer (or distributed) network environment. The machine may be any special-purpose machine capable of executing a set of instructions (sequential or otherwise) that specify actions to be taken by that machine for performing the functions describe herein. Further, while only a single machine is illustrated, the term “machine” shall also be taken to include any collection of machines that individually or jointly execute a set (or multiple sets) of instructions to perform any one or more of the methodologies discussed herein. 
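Before continuing with the FIG.3 hardware description, the matching step discussed above can be sketched as follows. The expiry field mirrors the lifespan preference, and the constant-time comparison is an implementation choice of this sketch rather than a requirement stated in the disclosure.

```python
import hmac
import time

def verify_authentication_input(pending, request_id, received_input, now=None):
    """Match the received input against the stored authentication information."""
    record = pending.get(request_id)
    if record is None:
        return False
    if (time.time() if now is None else now) > record["expires"]:   # lifespan exceeded
        return False
    return hmac.compare_digest(record["info"], received_input)
```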
In some examples, each of the user computing device101and the server system103ofFIG.1may be implemented by the example machine shown inFIG.3(or a combination of two or more of such machines). Example computer system300may include processing device303, memory307, data storage device309and communication interface315, which may communicate with each other via data and control bus301. In some examples, computer system300may also include display device313and/or user interface311. Processing device303may include, without being limited to, a microprocessor, a central processing unit, an application specific integrated circuit (ASIC), a field programmable gate array (FPGA), a digital signal processor (DSP) and/or a network processor. Processing device303may be configured to execute processing logic305for performing the operations described herein. In general, processing device303may include any suitable special-purpose processing device specially programmed with processing logic305to perform the operations described herein. Memory307may include, for example, without being limited to, at least one of a read-only memory (ROM), a random access memory (RAM), a flash memory, a dynamic RAM (DRAM) and a static RAM (SRAM), storing computer-readable instructions317executable by processing device303. In general, memory307may include any suitable non-transitory computer readable storage medium storing computer-readable instructions317executable by processing device303for performing the operations described herein. Although one memory device307is illustrated inFIG.3, in some examples, computer system300may include two or more memory devices (e.g., dynamic memory and static memory). Computer system300may include communication interface device311, for direct communication with other computers (including wired and/or wireless communication), and/or for communication with network105(seeFIG.1). In some examples, computer system300may include display device313(e.g., a liquid crystal display (LCD), a touch sensitive display, etc.). In some examples, computer system300may include user interface311(e.g., an alphanumeric input device, a cursor control device, etc.). In some examples, computer system300may include data storage device309storing instructions (e.g., software) for performing any one or more of the functions described herein. Data storage device309may include any suitable non-transitory computer-readable storage medium, including, without being limited to, solid-state memories, optical media and magnetic media. Various implementations of the systems and techniques described here may be realized in digital electronic circuitry, integrated circuitry, specially designed ASICs (application specific integrated circuits), computer hardware, firmware, software, and/or combinations thereof. These various implementations may include implementation in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be special or general purpose, coupled to receive data and instructions from, and to transmit data and instructions to, a storage system, at least one input device, and at least one output device. These computer programs (also known as programs, soft ware, Software applications or code) include machine instructions for a programmable processor, and may be implemented in a high-level procedural and/or object-oriented programming language, and/or in assembly/machine language. 
As used herein, the terms “machine-readable medium’ “computer-readable medium” refers to any computer program product, apparatus and/or device (e.g., magnetic discs, optical disks, memory, Programmable Logic Devices (PLDs)) used to provide machine instructions and/or data to a programmable processor, including a machine-readable medium that receives machine instructions as a machine-readable signal. The term “machine-readable signal” refers to any signal used to provide machine instructions and/or data to a programmable processor. To provide for interaction with a user, the systems and techniques described here may be implemented on a computer having a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to the user and a keyboard and a pointing device (e.g., a mouse or a trackball) by which the user may provide input to the computer. Other kinds of devices may be used to provide for interaction with a user as well; for example, feedback provided to the user may be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic, speech, or tactile input. The systems and techniques described here may be implemented in a computing system that includes a back end component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front end component (e.g., a client computer having a graphical user interface or a Web browser through which a user may interact with an implementation of the systems and techniques described here), or any combination of such back end, middleware, or frontend components. The components of the system may be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include a local area network (“LAN”), a wide area network (“WAN”), and the Internet. The computing system may include clients and servers. A client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. A number of embodiments have been described. Nevertheless, it will be understood that various modifications may be made without departing from the spirit and scope of the invention. For example, much of this document has been described with respect to television advertisements, but other forms of future, viewership-based advertisements may also be addressed. Such as radio advertisements and on-line video advertisements. In addition, the logic flows depicted in the figures do not require the particular order shown, or sequential order, to achieve desirable results. In addition, other steps may be provided, or steps may be eliminated, from the described flows, and other components may be added to, or removed from, the described systems. Accordingly, other embodiments are within the scope of the following claims. Although the present disclosure may provide a sequence of steps, it is understood that in some embodiments, additional steps may be added, described steps may be omitted, and the like. Additionally, the described sequence of steps may be performed in any suitable order. 
While illustrative embodiments have been described herein, the scope thereof includes any and all embodiments having equivalent elements, modifications, omissions, combinations (e.g., of aspects across various embodiments), adaptations and/or alterations as would be appreciated by those in the art based on the present disclosure. For example, the number and orientation of components shown in the exemplary systems may be modified. Thus, the foregoing description has been presented for purposes of illustration. It is not exhaustive and is not limiting to the precise forms or embodiments disclosed. Modifications and adaptations will be apparent to those skilled in the art from consideration of the specification and practice of the disclosed embodiments.
11943223 | DETAILED DESCRIPTION Embodiments of a system and method for establishing a multi-cloud computing platform implemented over different types of public cloud network infrastructures that restricts data traffic between virtual private cloud networks through different security regions. Herein, the multi-cloud computing platform achieves network isolation through use of security domains and connection policies. Each security domain identifies a group of gateways that can communicate with each other. These gateways may be deployed within the same virtual network component or within different virtual network components, where these virtual network components may reside within the same region of a public cloud network, within different regions of the same public cloud network, or within different public cloud networks. As an illustrative example a first set (e.g. one or more) of gateways may operate within a first virtual network component supported by underlying network infrastructure provided by Amazon® Web Services (AWS) public cloud network while a second set of gateways may operate within a second virtual network component supported by Microsoft® Azure® public cloud network or Oracle® Cloud. Each connection policy identifies permitted communications across different security domains. Herein, for clarity, each of the virtual network components, sometimes referred to as a virtual private cloud network for AWS deployment, a virtual network (VNet) for Azure® deployment or virtual cloud network (VCN) for an Oracle® Cloud deployment for example, shall be universally referred to herein as a “virtual private cloud network” or “VPC” for clarity purposes. Hence, the multi-cloud computing platform may feature (i) a first plurality of virtual network components each operating as a virtual private cloud network that maintains cloud resources accessible by a set of gateways (hereinafter, “spoke VPCs”); (ii) a second plurality of virtual network components each operating as a virtual private cloud network that supports the routing of messages from/to the spoke VPCs (hereinafter, “transit VPCs”); and/or (iii) a shared services networking infrastructure including a controller for updating and maintaining data stores for spoke VPCs and/or transit VPCs to maintain routing information in accordance with deployed security domains and connection policies. More specifically, according to one embodiment of the disclosure, each spoke VPC includes a set (e.g., two or more) of gateways (hereinafter, “spoke gateways”), which are communicatively coupled to one or more cloud software instances with a particular subnet or particular subnets. Some or all of these cloud software instances may include executable applications. Similarly, each transit VPC may feature a gateway cluster including a set of gateways. The set of gateways deployed within the transit VPC (hereinafter, “transit gateways”) control the routing of messages between spoke VPCs, and in effect, control the routing of messages between cloud software instances deployed in different spoke VPCs. As shown, each of the spoke gateways and transit gateways may be accessed in accordance with a unique Classless Inter-Domain Routing (CIDR) routing address to propagate messages over the multi-cloud computing platform. Besides communicatively coupled to spoke gateways, the transit gateways may be communicatively coupled to one or more network edge devices (e.g., virtual or physical routers, etc.) 
deployed within an on-prem network (hereinafter, “on-prem network devices”). Herein, the transit VPC is configured to control the propagation of data traffic between a source spoke VPC that receives the data traffic from a cloud software instance and either a destination spoke VPC or the on-prem network. Additionally, according to embodiments of this disclosure, the transit VPC is further configured to restrict data traffic flow by controlling the propagation of data traffic to gateways residing within the same security domain or residing in different security domains provided a connection policy is established to permit gateway communications between these different security domains. Further details of the logic associated with one embodiment of the scalable multi-cloud computing platform are described below. I. Terminology In the following description, certain terminology is used to describe features of the invention. In certain situations, the terms “logic” and “component” are representative of hardware, software or a combination thereof, which is configured to perform one or more functions. As hardware, the logic (or component) may include circuitry having data processing or storage functionality. Examples of such circuitry may include, but are not limited or restricted to a processor (e.g., microprocessor, one or more processor cores, a programmable gate array, a microcontroller, an application specific integrated circuit, etc.), semiconductor memory, or combinatorial logic. Alternatively, or in combination with the hardware circuitry described above, the logic (or component) may be software in the form of one or more software modules. The software module(s) may be configured to operate as a virtual hardware component (e.g., virtual processor) or a virtual network device including one or more virtual hardware components. The software module(s) may include, but are not limited or restricted to an executable application, an application programming interface (API), a subroutine, a function, a procedure, an applet, a servlet, a routine, source code, a shared library/dynamic load library, or one or more instructions. The software module(s) may be stored in any type of a suitable non-transitory storage medium, or transitory storage medium (e.g., electrical, optical, acoustical or other form of propagated signals such as carrier waves, infrared signals, or digital signals). Examples of non-transitory storage medium may include, but are not limited or restricted to a programmable circuit; a semiconductor memory; non-persistent storage such as volatile memory (e.g., any type of random access memory “RAM”); persistent storage such as non-volatile memory (e.g., read-only memory “ROM”, power-backed RAM, flash memory, phase-change memory, etc.), a solid-state drive, hard disk drive, an optical disc drive, or a portable memory device. VPC routing table: A VPC routing table is a logical construct for instructing a software instance as to how to reach a selected destination. The destination may constitute another software instance, which may be identified by a routable network address. The VPC routing table includes a plurality of entries, where each entry includes a routable network address of the destination and an identifier for a software component with routing functionality that constitutes a next hop towards the destination (target). The identifier may include a gateway identifier for example. Gateway: This term may be construed as a virtual or physical logic with data routing functionality.
As an illustrative example, the gateway may correspond to virtual logic, such as a data routing software component that is assigned an Internet Protocol (IP) address within an IP address range associated with a VPC including the gateway. Multiple gateways are deployed in a multi-cloud computing platform, which may control the ingress/egress of data traffic into/from a VPC. While having similar architectures, the gateways may be identified differently based on their location/operability within a public cloud network. For example, a “spoke” gateway is a gateway that supports routing between cloud software instances and “transit” gateways, where each transit gateway is configured to further assist in the propagation of data traffic (e.g., one or more messages) from one spoke gateway to another spoke gateway within the same spoke VPC or within different VPCs. Alternatively, in some embodiments, the gateway may correspond to physical logic, such as an electronic device that is communicatively coupled to the network and assigned the hardware (MAC) address and IP address. IPSec tunnels: Secure peer-to-peer communication links established between gateways of neighboring virtual network components such as neighboring VPCs. The peer-to-peer communication links may be secured through a secure network protocol suite referred to as “Internet Protocol Security” (IPSec). These IPSec tunnels are represented in gateways by virtual tunnel interface (VTI) and the tunnel states are represented by VTI states. Security domain: A controller-enforced network of software components associated with one or more VPCs that are permitted to exchange and transfer data. Hence, VPCs within the same security domain are permitted to communicate with each other while VPCs within different security domains are unable to communicate without a connection policy. A “connection policy” is one or more rules enforced to allow for cross security domain connectivity. Computerized: This term generally represents that any corresponding operations are conducted by hardware in combination with software. Message: Information in a prescribed format and transmitted in accordance with a suitable delivery protocol. Hence, each message may be in the form of one or more packets, frames, or any other series of bits having the prescribed format. Finally, the terms “or” and “and/or” as used herein are to be interpreted as inclusive or meaning any one or any combination. As an example, “A, B or C” or “A, B and/or C” mean “any of the following: A; B; C; A and B; A and C; B and C; A, B and C.” An exception to this definition will occur only when a combination of elements, functions, steps or acts are in some way inherently mutually exclusive. As this invention is susceptible to embodiments of many different forms, it is intended that the present disclosure is to be considered as an example of the principles of the invention and not intended to limit the invention to the specific embodiments shown and described. II. General System Architecture Referring now toFIG.1, an exemplary embodiment of a multi-cloud computing platform100implemented with a plurality of security domains1101-110N(N>1) is shown, where each security domain1101-110Nrestricts communications among gateways to those residing within the same security domain. Herein, the multi-cloud computing platform100is configured to provide connectivity between resources120of a first public cloud network125and resources160of a second public cloud network165. 
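The VPC routing table construct defined in the terminology above can be illustrated with a minimal sketch. The following Python fragment is illustrative only; the class and field names are assumptions of this sketch rather than an interface of the described platform.

import ipaddress
from dataclasses import dataclass

@dataclass
class RouteEntry:
    destination: str  # routable network address of the destination (CIDR notation)
    target: str       # identifier of the next-hop component, e.g., a gateway identifier

class VpcRoutingTable:
    """Maps destination addresses to the next hop toward that destination."""
    def __init__(self):
        self.entries = []

    def add_route(self, destination, target):
        self.entries.append(RouteEntry(destination, target))

    def next_hop(self, address):
        # Return the target of the most specific matching entry, if any.
        addr = ipaddress.ip_address(address)
        matches = [e for e in self.entries
                   if addr in ipaddress.ip_network(e.destination)]
        if not matches:
            return None
        best = max(matches, key=lambda e: ipaddress.ip_network(e.destination).prefixlen)
        return best.target

# Example: traffic destined for 10.1.0.0/16 is directed to a spoke gateway "gw-135".
table = VpcRoutingTable()
table.add_route("10.1.0.0/16", "gw-135")
assert table.next_hop("10.1.2.3") == "gw-135"

In such a sketch, a controller would hold one table per spoke VPC and rewrite its entries as security domains and connection policies change.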
Herein, according to one embodiment of the disclosure, the first public cloud network125corresponds to an Amazon® Web Services (AWS) cloud network and the second public cloud network165corresponds to a Microsoft® Azure® cloud network, which is different from the first public cloud network125. According to one embodiment of the disclosure, the resources120of the first public cloud network125feature one or more spoke VPCs130, which is (are) communicatively coupled to a first transit VPC140including one or more transit gateways142. Herein, as shown, a first spoke VPC131of the one or more spoke VPCs130may be configured with connectivity to some of the cloud software (application) instances145and the transit gateway(s)142of the first transit VPC140. A network edge device147(e.g. virtual or physical router) for an on-premises network149is communicatively coupled to the transit gateway(s)142for the transmission of data traffic to and receipt of data traffic from the on-premises network149. Herein, as for each of the spoke VPCs130, the first spoke VPC131is configured, in accordance with a first VPC routing table155managed by a controller150within a shared VPC152, to exchange data traffic from certain cloud appliance instance(s)145with a selected gateway of a set of (e.g., two or more) gateways135maintained in the first spoke VPC131. Similarly, a second spoke VPC132of the one or more spoke VPCs130may be configured with connectivity to some of the cloud software instances145. In particular, for this embodiment, the second spoke VPC132is configured, in accordance with a second VPC routing table156also managed by the controller150, to exchange data traffic between some cloud appliance instance(s)145and a selected gateway of a set of gateways136maintained in the second spoke VPC132. These multiple spoke VPCs may formulate the construct of a first portion of the multi-cloud computing platform100. Additionally, according to one embodiment of the disclosure, the resources160of the second public cloud network165may feature one or more spoke VPCs170, such as a third, fourth and fifth spoke VPCs171-173, and a second transit VPC180including transit gateways182. Herein, each of these spoke VPCs171may be configured to communicate with different cloud software (application) instance(s)185. For example, in accordance with the third VPC routing table157managed by the controller150, certain cloud application instances185may be configured to exchange data traffic with a selected gateway of a set of (e.g., two or more) gateways175maintained in the third spoke VPC171. These multiple spoke VPCs may formulate the construct of this second portion of the multi-cloud computing platform100. As shown, a plurality of security domains1101-110Nmay be configured to restrict communications among spoke VPCs across the same public cloud network125or165or spoke VPCs across different public cloud networks125and165, as shown. As an illustrative embodiment, a first security domain1101may restrict communications to remain among the gateways135associated with the first spoke VPC131of the first public cloud network125and the gateways175associated with the third spoke VPC171. Likewise, a second security domain1102may restrict communications to remain among gateways136associated with the second spoke VPC132as well as the gateways176and177residing within the fourth and fifth spoke VPCs172and173of the second public cloud network165. 
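The isolation just described can be summarized with a small, hypothetical sketch: the gateway and domain names below mirror the example of FIGS. 1-2 but are assumptions of this illustration, not the platform's implementation.

security_domains = {
    "domain_1": {"gw-135", "gw-175"},           # gateways of the first and third spoke VPCs
    "domain_2": {"gw-136", "gw-176", "gw-177"}, # gateways of the second, fourth and fifth spoke VPCs
}

# Each connection policy names a pair of security domains permitted to interconnect.
connection_policies = set()

def domain_of(gateway):
    for name, members in security_domains.items():
        if gateway in members:
            return name
    return None

def may_communicate(src_gw, dst_gw):
    src, dst = domain_of(src_gw), domain_of(dst_gw)
    if src is None or dst is None:
        return False
    if src == dst:
        return True  # same security domain: communication is permitted
    return frozenset({src, dst}) in connection_policies  # otherwise a connection policy is required

assert may_communicate("gw-135", "gw-175")      # same security domain
assert not may_communicate("gw-135", "gw-136")  # different domains, no connection policy
connection_policies.add(frozenset({"domain_1", "domain_2"}))
assert may_communicate("gw-135", "gw-136")      # permitted once a connection policy exists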
Transit gateway(s)142and182serve to interconnect gateways within the spoke VPCs within the same security domain1101or1102or within different security domains1101and1102based on connection policies (described below). These security domains1101and1102may be configured by altering the routing information maintained within at least a first transit routing data store1901associated with the first transit gateway142and a second transit routing data store1902associated with the second transit gateway182. The transit routing data stores1901and1902may constitute routing tables. Hence, prior to routing a message from a source spoke VPC (e.g., the first spoke VPC131) to a destination spoke VPC (e.g., the third spoke VPC171), the first transit routing data store1901is consulted to determine whether such communications are permitted; such communications are unavailable when restricted by the security domains1101and1102. Referring now toFIG.2, another illustrative embodiment of the multi-cloud computing platform100ofFIG.1is shown, where the multi-cloud computing platform100is implemented with a plurality of security domains200including a first security domain210, a second security domain220, and a third security domain230. Herein, the first security domain210includes gateways135, the second security domain220includes gateways136and175, and the third security domain230includes gateways176and177. Gateways135,136and175are configured to communicate with each other through a first connection policy240. As a result, the multi-cloud computing platform100provides network isolation among gateways spanning multiple regions and/or multiple cloud networks through security domains210/220and230as well as the connection policies such as the first connection policy240. More specifically, as shown inFIG.2, the gateways136associated with the second spoke VPC132(e.g., AWS VPC) and the gateways175associated with the third spoke VPC171(e.g., Azure® VNet) can communicate with each other while gateways176and177associated with the fourth VPC172and fifth VPC173are isolated and precluded from communications with gateways135,136and175. Normally, there is no cross communication between the gateways135associated with the first security domain210and the gateways136and175associated with the second security domain220without one of the connection policies (e.g., connection policy240) providing for such connectivity. Herein, after setting of the connection policy240by a user (e.g., administrator, device user, etc.), the controller150accesses content within the connection policy240to determine that the resources maintained in the first security domain210are permitted to communicate with the resources maintained in the second security domain220. In particular, gateways135/136associated with the spoke VPCs131/132of the first public cloud network125and the gateways175within the spoke VPC171of the second public cloud network165can communicate with each other via a transit gateway. To accomplish this, the controller150dynamically programs and updates both transit routing data stores1901and1902so that instances in spoke VPCs can communicate with each other via transit gateways142and/or182. Referring toFIG.3, an exemplary embodiment of a transit gateway300, being one of the transit gateways142deployed within the transit VPC140, is shown. This transit gateway300constitutes a software component overlaying resources of a public cloud network infrastructure, such as an AWS-based transit VPC or an Azure®-based transit VNet ofFIG.1for example.
The software component includes logic that is executed by a (virtual) processor, which is a compute service provided by the cloud provider and illustrated for completeness. Herein, according to one embodiment of the disclosure, the transit gateway300includes data traffic processing logic310, data traffic routing logic320and one or more transit routing data stores330, where the transit routing data stores330may maintain routing information for data traffic sources (e.g., router for on-premises network, spoke VPCs, other transit VPCs, etc.) relying on the transit gateway300for routing an incoming message340to a destination. The data traffic processing logic310is configured to receive the incoming message340from a data traffic source, such as the spoke gateway135deployed as part of the spoke VPC131ofFIG.1that is provisioned to access the transit gateway300. Upon receipt of the incoming message340, the data traffic processing logic310is configured to parse and conduct analytics on the incoming message340to determine a destination for the incoming message340. Where the destination is reachable from the spoke gateway135, namely where an established security domain and/or a connection policy covers the VPC or network in which the destination resides, contents of the incoming message340are provided to the data traffic routing logic320. Where the destination is not reachable from the spoke gateway135, the incoming message340is discarded and/or an error message may be provided to the data traffic source and/or controller for reporting to the network administrator. The data traffic routing logic320may be configured to access a transit routing data store332of the transit routing data stores330to determine a next hop (e.g., gateway associated with the destination, transit gateway, etc.) for transmission of content associated with the incoming message340thereto. As an illustrative example, as shown inFIG.1, for transmission from an instance within the first spoke VPC131to an instance within the third spoke VPC171, the transit routing data store332may identify the transit gateway182overlaying cloud infrastructure associated with a different cloud provider. Thereafter, the data traffic routing logic320may organize the content associated with the incoming message340(e.g., encapsulate, reinsert into a payload and/or header of a new message, etc.) to produce an output message350for transmission to the next hop identified by the transit routing data stores330and considered to be the optimal path based on selected policies and/or parameters (e.g., hop count to the destination, security parameters, etc.). III. Security Domain Formation Referring now toFIGS.4A-4C, exemplary embodiments of a series of graphical user interfaces (GUI) for establishing the security domain(s) within the multi-cloud computing platform100and optional connection policies associated with these security domains are shown. The GUIs may be generated by software that is executed by one or more processors, namely physical processors or logical processors that are based on one or more physical processors. Herein, as shown inFIG.4A, a first GUI400constitutes a web page accessible by a user to create a security domain by entry of a unique domain name420associated with that security domain within an assigned field410(e.g., Dev_Domain). At selection of the domain name420and completion of this process stage (e.g., activating the “Enter” button425), the security domain is created, but no spoke gateway or network edge devices are identified.
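The same creation step can be sketched against a hypothetical controller interface; the Controller class and its method names below are illustrative assumptions and not part of the disclosed embodiments.

class Controller:
    def __init__(self):
        # domain name -> set of member spoke gateways / network edge devices
        self.security_domains = {}

    def create_security_domain(self, name):
        if name in self.security_domains:
            raise ValueError(f"security domain {name!r} already exists")
        # Created empty, mirroring the state after the "Enter" button of FIG. 4A is activated.
        self.security_domains[name] = set()

controller = Controller()
controller.create_security_domain("Dev_Domain")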
This process may be repeated to create multiple security domains. Upon creation of the security domain(s), a connection relationship may be established between multiple security domains through a second GUI430as shown inFIG.4B. In particular, the second GUI430may include a graphical element440, such as a drop-down menu listing the security domains available on the multi-cloud computing platform100. By selecting multiple security domains and activating the “Add” graphical element450, a connection policy is generated between the selected security domains. The connection policy permits spoke gateways in the selected security domains to communicate with each other despite the fact that these spoke gateways are in different security domains. Thereafter, as shown inFIG.4C, the third GUI460may include input fields465to associate each of the security domains with a transit gateway and populate each of the security domains with a spoke gateway and/or a network edge device (e.g., a router for connection with an on-premises network) communicatively coupled to the transit gateway. The third GUI460establishes the network segmentation based on identifying the security domain470, the spoke gateway (or network edge device)480to be included as a member of the security domain470, and a transit gateway490with which the spoke gateway (or network edge device)480is in communication. These fields may be auto-populated from different entries so that entry of the transit gateway490automatically identifies the spoke gateway (or network edge device)480or provides a pull-down menu with the spoke gateway/edge device options. Although not shown, it is contemplated that another GUI, similar in features and graphical representation to the third GUI460, may be used to disassociate a spoke gateway or network edge device from an identified security domain. Referring toFIGS.5A-5B, an exemplary embodiment of operations by the controller to create constructs directed to security domains and connection policies associated with these security domains is shown. Herein, the controller maintains the available routing for each of the transit gateways set forth in the multi-cloud computing platform. In a default state, without any security domains, the transit routing data store for each transit gateway may be configured to have active connections directed to most, if not all, spoke VPCs and other transit gateways forming the multi-cloud computing platform (operation500). However, in response to creation of a security domain assigned a security domain name, the controller generates a transit routing data store associated with that security domain (operations510and515). In the event that a plurality of security domains are created, the controller receives information associated with any connection policies established between two or more of the plurality of security domains (operation520). The identification of the connection policies is needed to ensure that routing connections associated with a spoke gateway placed into a security domain are also populated into the transit routing data store associated with other connected security domain(s) (operation525). Thereafter, as each security domain is configured to define member components for that security domain, namely one or more spoke gateways and/or network edge devices (e.g., router of on-premises network, etc.)
and the transit gateway to which these member components are communicatively coupled, the controller receives the security domain membership information (operation530). Based on the security domain membership information, the controller is configured to include the routing connections directed to the spoke gateway(s) and/or network edge device(s) within transit routing data store(s) associated with the security domain maintained in each transit gateway to which at least one of these spoke gateway(s) and/or network edge device(s) are in communication (operation540). Also, the controller determines whether there exists a connection policy associated with the security domain, and if so, the controller alters transit routing data store(s) associated with the “connected” security domain to include the routing connections directed to the spoke gateway(s) and/or network edge device(s) (operations550and560). These transit routing data stores pertain to the transit gateways in communication with the spoke gateway(s) and/or network edge device(s) added to the security domain. Furthermore, based on the routing connections, the controller dynamically programs and updates the VPC route tables so that instances in different spoke VPCs in the same domain can communicate and instances in different spoke VPCs in different security domains, without connection policies, are precluded from communication by removal of any such routing connections (operation570). The controller continues to monitor for the removal/addition of security domains, connection policies and/or member components (operations580,585and590). IV. General Security Domain Operability Referring toFIGS.6A-6B, exemplary embodiments of permitted and non-permitted interoperability between two spoke gateways residing in different security domains of the multi-cloud computing platform ofFIGS.1-2are shown. For this illustrated example, the first spoke gateway600may operate as the first spoke gateway135residing within the first spoke VPC131ofFIG.1and the second spoke gateway610may operate as the second spoke gateway136residing within the second spoke VPC132. Herein, the first spoke gateway600resides in a different security domain than the second spoke gateway610, where there is no connection policy established between the first spoke gateway600and the second spoke gateway610. As a result, via the transit gateway300, the first spoke gateway600associated with the first security domain (e.g., security domain1101ofFIG.1) is unable to communicate with the second spoke gateway610associated with the second security domain (e.g., security domain1102ofFIG.1). In contrast, as shown inFIG.1andFIG.6B, the first spoke gateway600may reside in either the same security domain as the second spoke gateway610or within different security domains provided there exists a connection policy encompassing these spoke gateways600and610. As a result, via the transit gateway300, the first spoke gateway600associated with the first security domain is able and permitted to communicate with the second spoke gateway610associated with the second security domain. V. Operational Flow Referring toFIG.7, an exemplary embodiment of the operations for conducting transit segmentation of a multi-cloud computing platform through security domains and connection policies is shown. First, the multi-cloud computing platform is generated (operation700).
Herein, the multi-cloud computing platform features a first set of spoke VPCs and a first set of transit VPCs associated with an underlying first public cloud network infrastructure along with a second set of spoke VPCs and a second set of transit VPCs associated with an underlying second public cloud network infrastructure. Thereafter, some or all of the transit gateways within the transit VPCs are enabled for segmentation (operation710). Where a transit gateway is not enabled for segmentation, the transit gateway will not be configured to be part of any security domain. Thereafter, a security domain is created and assigned a unique security domain name (operation720). This operation may be conducted in succession to create two or more security domains (operations725). Alternatively, this operation may be conducted asynchronously, where one or more security domains may be created to dynamically alter the segmentation and cause the controller to update the transit routing data stores accordingly. For clarity, the operations will focus on the creation of two security domains. After the security domains are created, a connection policy can be established to formulate a connection relationship between different security domains (operation730). The two connected security domains imply that spoke gateways in each of these security domains can communicate with each other despite the fact that they reside in different domains. Also, the presence of the connection policy is monitored by the controller to ensure that routing connections associated with the connected spoke gateway are replicated for different transit routing data store(s) maintained in different transit gateway(s). Thereafter, each security domain is configured to define member components for that security domain by associating the security domain to one or more spoke gateways and the transit gateway to which these spoke gateways are communicatively coupled (operation740). The security domain membership information (e.g., routable network address associated with the spoke gateway and its corresponding transit gateway, etc.) is provided to the controller, which is utilized, along with the connection policy, to set the routing connections directed to the spoke gateway(s) within transit routing data store(s) associated with the security domain maintained in each transit gateway to which at least these spoke gateway(s) are in communication (operations750and760). The segmentation of the multi-cloud computing platform is dynamic, as the segmentation may be altered by removal/addition of or changes to the security domains, connection policies and/or member components (operations770). Embodiments of the invention may be embodied in other specific forms without departing from the spirit of the present disclosure. The described embodiments are to be considered in all respects only as illustrative, not restrictive. The scope of the embodiments is, therefore, indicated by the appended claims rather than by the foregoing description. All changes that come within the meaning and range of equivalency of the claims are to be embraced within their scope. | 31,052 |
11943224 | DETAILED DESCRIPTION The embodiments disclosed herein are only examples of the many possible advantageous uses and implementations of the innovative teachings presented herein. In general, statements made in the specification of the present application do not necessarily limit any of the various claimed embodiments. Moreover, some statements may apply to some inventive features but not to others. In general, unless otherwise indicated, singular elements may be in plural and vice versa with no loss of generality. In the drawings, like numerals refer to like parts throughout the several views. According to the disclosed embodiments, techniques for allowing blockchain-based admission to protected entities are provided. An admission (or access) to a protected entity is given to a client after granting one or more access tokens in the blockchain. The “spending” of such tokens is also performed via the blockchain network. In an embodiment, the access tokens and, hence, the admission to a protected entity, are based on a non-linear model where a weight or “cost” is associated with each admission request. This is different from a traditional admission model implemented by existing cyber security systems and admission services, such as role-based access control (RBAC), where the admission is binary upon satisfying an access rule. In contrast, according to the disclosed embodiments, the admission is weighted based on one or more access parameters set to weigh the risk of false positive detection of an attacker admission against the risk of false negative detection of a legitimate user denial of admission. Furthermore, according to the disclosed embodiments, the admission may optionally comprise a plurality of weighted parameters indicating the associated risk in multiple dimensions such as, but not restricted to, risk in reading, risk in modifying, risk in sharing, and so on. Accordingly, the disclosed embodiments provide for access token verifications with increased temporal accuracy and, thus, more secure admission than existing admission solutions. The disclosed embodiments also allow for detecting and mitigating the risk associated with bots. The various disclosed embodiments will now be described in greater detail. FIG.1shows an example network diagram100utilized to describe the various disclosed embodiments. The network diagram100illustrated inFIG.1includes an admission system110, a client120, a blockchain peer-to-peer network130(hereinafter referred to as the blockchain network130, merely for simplicity purposes), and a protected entity140. The entities in the network diagram100communicate via a network150, reside in the network150, or both. The admission system110controls and regulates admission of the client120by issuing to the client120access tokens to be consumed by the protected entity140as will be discussed in detail below. In certain implementations, a trust broker160is also connected to the network150and communicatively connected to the blockchain network130. As discussed below, the trust broker160is configured to replace one type of access token with another type of access token based, in part, on a non-linear conversion model, for example, using e^(-ct) as a multiplication factor, where the exponent ct is derived from the time of the last admission.
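As a minimal sketch of the non-linear conversion model mentioned above, the factor e^(-ct) can be computed as follows; the decay constant c, the use of elapsed seconds for t, and how the trust broker actually applies the factor are assumptions of this illustration rather than the disclosed algorithm.

import math

def conversion_factor(t, c=0.01):
    """Return e^(-c*t), where t is derived from the time of the client's last admission."""
    return math.exp(-c * t)

# The factor decays from 1 toward 0 as c*t grows.
for t in (0, 10, 100, 1000):
    print(t, round(conversion_factor(t), 4))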
In an embodiment, the various entities discussed herein, namely the admission system110, the client120, the protected entity140, the trust broker160, and the audit system170, act as full peer members in the blockchain network130, accessing it without intermediaries. In an embodiment, the audit system170is integrated in the admission system. In another embodiment, these entities can all be non-blockchain members. In such a configuration, some or all of these entities can enlist the services of a third-party cryptocurrency wallet service or application to provide access to the blockchain network130as well as to hold all relevant crypto elements, such as keys. In yet another embodiment, the client120and the protected entity140may optionally include means for maintaining tokens issued by the admission system110or may utilize the cryptocurrency wallet described herein. The network150may be, but is not limited to, a local area network, a wide area network, the Internet, one or more data centers, a cloud computing infrastructure, a cellular network, a metropolitan area network (MAN), or any combination thereof. The admission system110, the protected entity140, or both, can be hosted in a cloud computing platform, such as, but not limited to, a private cloud, a public cloud, a hybrid cloud, or any combination thereof. The client120may be a PC, a mobile phone, a smart phone, a tablet computer, a server, and the like. The client120may be operated by a legitimate user or a program, or may be an attack tool. The protected entity140is the entity to be protected from malicious threats. The protected entity140may be any network or computing element including resources that can be accessed by the client120. For example, the protected entity140may be a function in a serverless architecture, an application server, a web server, a cloud application, a datacenter, a network device (e.g., a router, an ADC, etc.), a firewall (e.g., a web-application firewall), and so on. In certain implementations, the network diagram100further includes an audit system170utilized to record or log any transactions performed between two or more of the admission system110, the client120, the trust broker160, and the protected entity140. For example, the audit system170is configured to record whether an admission request is accepted or not, the conversion value set for different types of access tokens, a number of admission requests made by the client120, and so on. In another embodiment, the function of logging transactions by the audit system170is performed on the blockchain network130, i.e., by its distributed ledger. According to an embodiment, the admission system110may be communicatively connected with the protected entity140, where both the admission system110and the protected entity140are peers in a blockchain peer-to-peer (P2P) network and may act as proxies to the P2P network, for example, translating HTTPS messages into messages that can be transported on the P2P network. The admission system110and the protected entity140may each or both be configured to receive or otherwise intercept requests (such as, but not limited to, HTTP/HTTPS requests) generated by the client120. The requests are directed to the admission system110using the device proxy function or using the native protocol redirect functions. The secured datacenter can be operable in a cloud-computing infrastructure, a hosting server datacenter, service provider networks, or a cooperative network.
It should be noted that, although one client120and one protected entity140are illustrated inFIG.1merely for the sake of simplicity, the embodiments disclosed herein can be applied to a plurality of clients, a plurality of protected entities, or both. Further, a plurality of admission systems can be utilized, or the admission system110may be configured in a distributed implementation. The clients may be located in different geographical locations. Furthermore, a single client120may be served by a plurality of admission systems110concurrently. It should be further noted that in certain embodiments, the admission system110and the trust broker160may be implemented in a single system or as a distributed system. Further, each element discussed above can also be integrated in a single system, for example, the admission system110and the protected entity140implemented as a single unified gateway. In another embodiment, the admission system110, the trust broker160, and/or the protected entity140can be implemented in a network device already deployed in the network150. Examples of such a network device include an application delivery controller (ADC), a load balancer, a web application firewall (WAF), a router, a firewall, and the like. The blockchain network130is a collection of nodes (not labeled) utilized to maintain a distributed ledger and verified blocks, as discussed above. A node may be a physical machine, or a virtual machine executed by a machine having sufficient computing power to verify a block. In an embodiment, the nodes of the blockchain network130may be configured to handle proprietary tokens (e.g., verified blocks of transactions for converting proprietary tokens). The proprietary tokens may be of different types. In another embodiment, the nodes of the blockchain network130may be configured to handle tokens of standard virtual currencies such as, but not limited to, Bitcoin, Ethereum, Litecoin, and the like. For example, the infrastructure of the Ethereum blockchain network is based on a collection of nodes realized as Ethereum Virtual Machines (EVM) connected in a mesh connection. Each such node runs a copy of the entire blockchain and competes to mine the next block or validate a transaction. Further, each node in the blockchain network130, implemented as Ethereum, maintains an updated copy of the entire Ethereum blockchain. In an embodiment, the blockchain network130can be configured as a private network. In this embodiment, tokens (or contracts) accessible to nodes that are connected to a private network, e.g., the blockchain network130, operate in a private mode. In yet another embodiment, a private token is utilized as part of a public or private Ethereum blockchain. In another embodiment, the admission system110and the protected entity140participate in a smart contract using the crypto EVM capabilities to grant and track admission of the client120. A smart contract is a computer protocol intended to digitally facilitate, verify, or enforce the negotiation or performance of a contract, defined as a collection of executable computer code in one of many possible computer languages, such as, but not limited to, “Python”, “Java”, “R”, or “Solidity”, or any other computer language used now or in the future to implement blockchain-based smart contracts. Smart contracts allow the performance of credible transactions without the existence of trusted third parties. These transactions are trackable and irreversible.
In an embodiment, the smart contract is generated (“written”) by the admission system110. The admission system110may determine the pre-conditions or conditions for the smart contract. Such conditions may be determined based on, for example, the client120, protected entity140, logged history, and so on. Although the smart contract is fulfilled by, for example, the EVM (network130), the contract generated by the system110is independent of the network. It should be noted that other embodiments utilizing other forms of consensus-based technologies can be used to implement the admission system110. For example, a Byzantine Fault Tolerance algorithm can be used to control and reach consensus over the Linux Foundation Hyperledger implementation, over Corda, and the like. It should be appreciated that, in some embodiments, utilizing the blockchain network130for the admission process allows maintaining privacy of the user accessing the protected entity140while providing transparency of admission transactions. Further, the blockchain network130can be easily scaled, therefore providing a scalable solution. In the example implementations described with respect toFIG.1, the admission to the protected entity140is delegated to and regulated by the admission system110and is based, in part, on access tokens provided by the admission system110. The access tokens may be of the same type or of different types. Further, the tokens may be proprietary tokens or may be based on standard virtual currencies or contracts, such as those described above. According to the disclosed embodiments, various blockchain-based admission processes can be implemented by the admission system110. In an embodiment, the admission system110provides access tokens to the client120, upon receiving a request to grant such tokens. The access tokens are granted through the blockchain network130. In other words, the granting of the access tokens is recorded as a transaction included in a block of a blockchain maintained by the blockchain network130. Upon requesting access to the protected entity140, the client120“pays” the protected entity140the access tokens. This may be performed by revoking (spending) the access tokens granted to the client120in order to gain access. In other words, the use of the access tokens is recorded as a transaction and included in a block of the blockchain maintained by the blockchain network130. Thus, the validation of the transaction (access request) is performed through the blockchain network130. In an embodiment, the validation can be performed prior to admitting an access to the protected entity140. In yet another embodiment, the validation is performed after an admission is conditionally granted to the client. If the validation fails, the client120is disconnected from the protected entity140. In an embodiment, the access tokens can be granted by the admission system110after the client120performs computing challenges. An example of such a challenge is a requirement to solve a mathematical function. As such, the client120is required to invest computing resources to solve the function. For legitimate users it would be a one-time challenge, but for attackers (bots) generating thousands of requests it would drain their computing resources. In an exemplary embodiment, the complexity of the challenges issued by the admission system110to the client120can be controlled dynamically.
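One way such a dynamically controlled computing challenge could look is sketched below, assuming a hashcash-style proof of work whose difficulty (the number of leading zero bits required of a hash) the admission system raises or lowers; the scheme and all names here are assumptions of this sketch, not the claimed mechanism.

import hashlib
import os

def issue_challenge(difficulty_bits):
    # The admission system would pick difficulty_bits dynamically (see the access parameters below).
    return {"seed": os.urandom(16).hex(), "difficulty_bits": difficulty_bits}

def solve_challenge(challenge):
    target = 1 << (256 - challenge["difficulty_bits"])
    counter = 0
    while True:
        digest = hashlib.sha256(f"{challenge['seed']}:{counter}".encode()).digest()
        if int.from_bytes(digest, "big") < target:
            return counter  # proof that computing work was invested
        counter += 1

def verify_solution(challenge, counter):
    target = 1 << (256 - challenge["difficulty_bits"])
    digest = hashlib.sha256(f"{challenge['seed']}:{counter}".encode()).digest()
    return int.from_bytes(digest, "big") < target

challenge = issue_challenge(difficulty_bits=12)  # cheap for a one-time legitimate request
assert verify_solution(challenge, solve_challenge(challenge))

A client issuing thousands of requests would have to repeat this work for each one, which is the resource drain on attackers described above.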
The admission system110may increase or decrease the challenge complexity according to one or more access parameters, discussed herein. In an embodiment, where the admission system110is configured to protect a plurality of protected entities140, the dynamic challenge cost can be controlled by referencing the combined activity of the client120across the multiple protected entities. This allows the admission system110to act as an anti-bot service. As the challenge complexity increases, an embodiment can set thresholds on one or more access parameters and use the designated threshold to identify a bot threat. Once a bot threat has been identified, an embodiment can change the dynamic challenge strategy and set a new challenge strategy, for example, setting an exponentially growing challenge complexity so the threat can be slowed down quickly. According to another embodiment, a specific cost is associated with accessing the protected entity140. For example, various services provided by the entity140would require different numbers of access tokens. That is, the cost, in this embodiment, is defined by the number and type of access tokens. The cost for accessing may be maintained in a centralized location, by the admission system110, or by the protected entity140, or in any combination thereof. The client120is configured to check the admission cost for accessing the protected entity140. If the client120does not possess enough access tokens (as recorded in the blockchain), the client120requests additional access tokens from the admission system110. The request may designate any combination of a number of access tokens and their types. To access the protected entity140, the access tokens are provided through the blockchain network130. In another embodiment, the admission cost is dynamically weighted and two or more different types of access tokens are utilized. In this embodiment, to access the protected entity140, the client120first acquires a number of first-type access tokens from the admission system110. The protected entity140also requires a number of second-type access tokens. Thus, the client120is configured to convert the first-type access tokens to the second-type access tokens using the trust broker service160. The conversion value can be dynamically determined, e.g., per access request based on one or more parameters. Such parameters may be related to the client120, the protected entity140, global indications, or a combination thereof. For example, the reputation, geographical location, or historical record, as supplied by the blockchain network130or the audit system170, and the rate of access token spending may be parameters related to the client120. A service being accessed and a current load may be parameters related to the protected entity140. An indication from an external source (not shown) of an on-going attack campaign, or the state of the internal anti-bot function, may be an indication of an ongoing attack. The admission system110may issue more than one type of access tokens in a single transaction to the client120. In an embodiment, a transaction issued may include a number of cryptocurrency tokens as well as an SSL certificate issued to the client120; the SSL certificate is generated by a certificate issuer (not shown) and may include the client's120identity information, if such information was supplied, as well as the client's120public key. The certificate may have a revocation date that was influenced by one or more access parameters of the admission system110.
The issued certificate can be used to establish a legacy secure service that does not have blockchain access, for example, an existing VPN, web, or email solution. The various embodiments for admission controls will be discussed in further detail with reference toFIGS.2-4. FIG.2is an example flow diagram200illustrating a non-weighted blockchain-based admission process according to an embodiment. The elements illustrated inFIG.2including the admission system110, the client120, the blockchain network130, and the protected entity140, are described herein above with reference toFIG.1. At S210, the client120sends a request (201) to grant (“buy”) access tokens from the admission system110. The access tokens may be used to access the protected entity140. In some embodiments, the request (201) is sent only after the client120successfully completes a challenge selected and set by the admission system110. The challenge process is discussed in greater detail with reference toFIG.7. The request (201) may include a public key of the client120and may further designate the entity to be accessed. The request (201) sent from the client120does not need to include any identifying information of a user of the client120. The request (201) may be sent using a standard layer-7 protocol, such as HTTP or HTTPS. In yet another embodiment, the request (201) may be sent via the blockchain network130as a non-interactive payment of the admission service. The admission system110may implement various known and unknown procedures such as, but not limited to, client certificates, one-time pads (OTPs), and so on, to identify, qualify, validate, check balances or historical record locations, or a combination thereof, of the client120. When keeping the privacy of the client120is mandatory, arguments of knowledge such as ZK-SNARKs and Zcash can be used instead of exposing the client identity directly. In some embodiments, the request (201) may be triggered after the client120failed to directly access the protected object140. In such a case, the protected object140may redirect a request from the client120to the admission system110. At S220, upon identifying, validating, and approving the client's120request for admission, the admission system110is configured to grant (202) a number of access tokens to the client120. The access tokens are granted (e.g., paid) to the client120through the blockchain network130using the public key of the client120. In yet another embodiment, when the access tokens are implemented using zero-knowledge arguments of proof cryptography, such as the ZK-SNARKs used in Zcash, the public key of the client120may not be required. It should be noted that in some configurations, when the client120has successfully passed a challenge set by the admission system110, S210and S220can be skipped, as the access tokens are granted as the client120passes the challenge. At S230, the client120is configured to identify, on the blockchain network130, the transaction ID holding the access tokens granted to the client120. At S240, the client120is configured to add a transaction (203) to the blockchain network130. The transaction (203) includes transaction data, a transaction hash value, and a hash value of a previous transaction(s). In an embodiment, the transaction may also include an arbitrary number of metadata elements as specified and requested by the admission system110, the client120or the protected object140.
In an example embodiment, the transaction data may include a unique transaction ID, a number of access tokens to pay for access, an identification of the admission system110as the source granting the access tokens, and a target service, application, or resource at the protected entity140that the client120requests access to. It is not mandatory for the transaction (203) to include any information identifying the client120or a user using the client120. A hash of a transaction is a cryptographic hash of the transaction's content. The owner of the transaction, e.g., the admission system110, signs the transaction with its private key or other cryptographic identity. The transaction can be verified using a public key of the transaction owner. At S250, the client120sends an access request (204) to the protected entity140to access a service or resource. For example, the access request (204) may be to login to a user account, perform an action (e.g., download a file), access a secured resource, and so on. The request (204) may be sent using a standard layer-7 protocol, such as HTTP or HTTPS. The access request (204) can include the transaction ID of the transaction (203) to ease the protected entity140's lookup of the transaction; the protected entity140may search for a relevant transaction in case such a transaction ID is not provided. It should be noted that any access request sent from the client120to the protected entity140remains pending until admission by the protected entity140. The access requests may be iteratively sent, until access is granted. It should be further noted that the validation of the transaction (access request) is performed through the blockchain network130. As noted above, the validation can be performed prior to admitting an access to the protected entity140. Alternatively, the validation is performed after an admission is conditionally granted to the client. If the validation fails, the client120is disconnected from the protected entity140. In addition, the audit system170may log any failure to successfully complete a challenge to at least determine the trust of access value for the client, as discussed in detail below. At S260, the protected entity140is configured to validate the transaction (203) through the blockchain network130. In an embodiment, such validation is performed by immediately spending the access tokens designated in the transaction (203) as payment to the admission system110. As a result, the transaction (203) is marked as “spent” in a respective block maintained in the blockchain network130and cannot be referenced again by the client. It should be noted that transactions are never deleted from blocks maintained in the network130. At S270, once the transaction is validated and access tokens are paid, access is granted to the client120to access the target resource or service at the protected entity140. In an embodiment, the protected entity140may use the number and type of access token paid to set a temporal ACL on the accessing client120. That is, the access may be limited in time based on the number and type of access tokens being paid. It should be noted that the access tokens are typically paid to the admission system110upon fulfillment of the access request (204). A fulfillment of the access request (204) may include allowing the client120to access the protected object140, conditionally allowing the client120to access the protected object140, or denying an access to the protected object140.
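A minimal sketch of such a spend transaction, under stated assumptions, is given below; the field names, the JSON encoding, and the SHA-256 chaining are illustrative only, and the owner's signature is omitted for brevity.

import hashlib
import json
import uuid

def make_spend_transaction(prev_tx_hash, tokens, granting_system, target_service):
    data = {
        "tx_id": str(uuid.uuid4()),      # unique transaction ID
        "tokens": tokens,                # number of access tokens paid for admission
        "granted_by": granting_system,   # admission system identified as the source of the tokens
        "target": target_service,        # target service, application, or resource
        # No information identifying the client or its user is required.
    }
    payload = json.dumps(data, sort_keys=True) + prev_tx_hash
    return {
        "data": data,
        "prev_hash": prev_tx_hash,                             # hash value of the previous transaction
        "hash": hashlib.sha256(payload.encode()).hexdigest(),  # this transaction's hash value
    }

tx = make_spend_transaction("0" * 64, tokens=3,
                            granting_system="admission-system-110",
                            target_service="protected-entity-140/login")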
It should be further noted that each transaction (203) is a transfer of a certain value of access tokens between the client120, the system110, and the entity140. The blockchain block signature also prevents the transaction from being altered once it has been issued. Although not illustrated inFIG.2, all transactions are broadcast between nodes of the blockchain network130and validated through a mining process. It should be further noted that, to grant the access tokens, the admission system110and the client120may exchange their public keys. In order to revoke the access tokens, the system110and protected entity140may exchange their public keys. That is, there is no direct transaction with respect to the utilization of the access tokens between the client120and the protected entity140. A key (private or public) may be a cryptography key. Alternatively or collectively, the protected entity140and admission system110may employ other means of secure communication, such as pre-shared keys. Furthermore, the admission system110, the client120, and the protected entity140may be acting under a public key infrastructure (PKI) or certificate authority (CA) mechanism. As an example, an access token may be a chain of digital signatures. The system110may transfer the access token to the client by digitally signing a hash of the previous transaction and the public key of the protected entity140. The signatures (hash values) may be appended to each access token. To grant an access to the client120, the entity can validate the signatures to verify the chain of ownership. FIG.3shows an example flow diagram300illustrating a cost-based admission process according to another embodiment. The elements illustrated inFIG.3including the admission system110, the client120, the blockchain network130, and the protected entity140, are described herein above with reference toFIG.1. In this embodiment, there is a dynamic admission cost to access the protected entity140. The cost is defined by a number of access tokens that may be set based on a target service, application, or resource at the protected entity140from which an access is required; the behavior of the client120; or both. The cost is maintained in a cost table301. In an embodiment, the cost table301may be managed and reside in the admission system110. Alternatively, the cost table301may be managed by an admission system110, but saved in the centralized repository (not shown). This would allow the admission system110, the protected entity140, or another arbitrator (not shown) to control the admission costs across different protected entities, across different clients, or both. In yet another embodiment, the cost table301is saved in the protected entity140including the admission cost to access its services or resources. The cost table301may be managed by the protected entity140or the admission system110. In yet another embodiment, the cost table301may be maintained in the blockchain network130as distributed records in the distributed ledger. This would allow for consistent maintenance of cost values. In such an embodiment, the cost table301may be managed by the admission system110or the protected entity140. It should be noted that the cost table301discussed herein below refers to all of the different embodiments utilizing cost tables as described herein. At S310, the client120is configured to inquire about the admission cost (i.e., the number of access tokens) required for accessing a specific service or resource at the protected entity140. 
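Before continuing with the cost-based flow, the chain-of-signatures form of an access token described above can be sketched as follows; for brevity a keyed hash stands in for a real digital signature, so the verification step below assumes knowledge of the signers' secrets, whereas a deployment would use asymmetric signatures verifiable with public keys alone.

import hashlib

def sign(signer_secret, message):
    # Stand-in for a digital signature; a deployment would use an asymmetric scheme.
    return hashlib.sha256(signer_secret + message).hexdigest()

def append_transfer(token_chain, signer_secret, next_owner_pubkey):
    prev_hash = token_chain[-1]["hash"] if token_chain else "0" * 64
    signature = sign(signer_secret, (prev_hash + next_owner_pubkey).encode())
    token_chain.append({
        "prev_hash": prev_hash,           # hash of the previous transfer
        "next_owner": next_owner_pubkey,  # public key of the new owner
        "signature": signature,
        "hash": hashlib.sha256(signature.encode()).hexdigest(),
    })

def verify_chain(token_chain, signer_secrets):
    prev_hash = "0" * 64
    for entry, secret in zip(token_chain, signer_secrets):
        if entry["signature"] != sign(secret, (prev_hash + entry["next_owner"]).encode()):
            return False
        prev_hash = entry["hash"]
    return True

chain = []
append_transfer(chain, b"admission-system-secret", "client-120-pubkey")
append_transfer(chain, b"client-120-secret", "protected-entity-140-pubkey")
assert verify_chain(chain, [b"admission-system-secret", b"client-120-secret"])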
The inquiry at S310 is directed to the cost table301. At S320, if the client120does not have sufficient access token value in its wallet, the client120sends a request (302) to grant access tokens from the admission system110. The access tokens may be used to access the protected entity140. As noted below, the admission system110may implement different business flows to identify, qualify, or validate the client120, or a combination thereof. In some embodiments, the request (302) is sent only after the client120successfully completes a challenge selected and set by the admission system110. The challenge process is discussed in greater detail with reference toFIG.7. At S330, upon identifying, qualifying, or validating, and approving the client's120request for admission, the admission system110is configured to grant (202) a number of access tokens to the client120. The access tokens are granted (e.g., paid) to the client120through the blockchain network130using the public key of the admission system110. The number of granted access tokens should be enough to satisfy the admission cost designated in the cost table301. It should be noted that if the client120already has enough tokens (e.g., as verified via the blockchain network130), or the client120successfully passes the challenge, S320and S330are not performed. At S340, the client120identifies, on the blockchain network130, the transaction ID holding the access tokens granted to the client120. The identified tokens may be stored in a cryptocurrency wallet included in the client120. At S350, the client120is configured to add a transaction (303) to the blockchain maintained by the blockchain network130. The transaction (303) includes transaction data, a transaction hash value, and a hash value of a previous transaction(s). In an example embodiment, the transaction data may include a number of access tokens to pay for access, an identification of the admission system110as the source granting the access tokens, and a target service, application or resource at the protected entity140that the client120requests to access. The transaction (303) does not include any information identifying the client120or its user. In an embodiment, the transaction may also include an arbitrary number of metadata elements as specified and requested by the admission system110, the client120or the protected object140. At S360, the client120is configured to send an access request (304) to the protected entity140. The access request (304) includes the transaction ID of the transaction added to the blockchain. As mentioned above, the access request (304) can include the transaction ID of the transaction (303) to ease the protected entity140's lookup of the transaction; the protected entity140may search for a relevant transaction in case such a transaction ID is not provided. As further noted above, any access request sent from the client120to the protected entity140is pending admission by the protected entity140. At S370, the protected entity140is configured to validate the transaction (303) through the blockchain network130. As noted above, such validation is performed by immediately spending the access tokens designated in the transaction (303) as payment to the admission system110. As a result, the transaction (303) is marked as “spent” in a respective block maintained in the blockchain network130and cannot be referenced again by the client120.
At S380, once the transaction is validated and tokens are paid, access is granted to the client120to access the target resource or service at the protected entity140. As noted above, the protected entity140may use the number and type of access tokens paid to set a temporary ACL for the accessing client120. That is, the access may be limited in time based on the number and type of access tokens being paid. In some embodiments, S380may further include updating the cost table301, e.g., increasing or decreasing the cost associated with accessing the protected entity140. The cost may be updated based on the parameters discussed herein above by the entity managing the cost table301(e.g., the admission system110, the protected entity140, etc.). As mentioned above, each transaction (e.g., a transaction304) is a transfer of a certain value of access tokens between the client120, the admission system110, and the protected entity140, each of which may maintain a cryptocurrency wallet. Further, to grant the access tokens, the admission system110and the client120may exchange their public keys. In order to revoke the access tokens, the admission system110and the protected entity140may exchange their public keys. That is, there is no direct transaction with respect to the utilization of the access tokens between the client120and the protected entity140. A key (private or public) is a cryptographic key. FIG.4shows an example flow diagram400illustrating a weighted blockchain-based admission process according to an embodiment. The elements illustrated inFIG.4including the admission system110, the client120, the blockchain network130, the protected entity140, and the trust broker160, are described herein above with reference toFIG.1. In this embodiment, two types of access tokens are used, where a conversion value is dynamically determined for converting the first-type to the second-type of the access tokens. For example, the conversion value may be determined per access request, per conversion-token transaction, per session, or a combination thereof. In essence, the conversion value determines the admission cost, but in this embodiment, such cost is dynamically updated based on a plurality of access parameters. Further, there is no need to maintain any data structure (e.g., the cost table301,FIG.3). In an embodiment, the conversion value is determined by the trust broker160. The weighted admission process weights the risk of false positive detection of attacker admission against the risk of false negative detection of legitimate user denial of admission. The weighted admission process allows a defender to scale a cost to access a protected entity linearly with the attack, while exponentially increasing the admission cost as, for example, the volume of the attack, the type of attack, and other risk factors associated with the attack increase. To this end, the conversion value is dynamically determined based on one or more access parameters, the interaction between the access parameters, or both. A first group of access parameters are related to the client120. A reputation of the client120, determined by the system110or received from an external service, can be utilized to determine the conversion value. That is, a bad reputation would lead to a higher conversion value, and vice versa. The geographical location of the client120can determine the conversion value. For example, a client120from a “reputable” country (e.g., USA) would positively affect the conversion value.
A behavior of the client120can also determine the conversion value. The admission system110, the trust broker160, or both, may monitor the activity of the client120across multiple protected entities, non-protected entities, or both, to detect any suspicious activity and determine the conversion value respective thereof. For example, a client that frequently requests additional access tokens may be classified as suspicious (e.g., a bot), which would lead to a higher conversion value. As another example, a client120that does not perform any malicious activity may be classified as legitimate, which would lead to a lower conversion value. The conversion value can be determined over multiple access tokens representing different granular admission rights. For example, read-access-tokens indicating the client120admission may incur a lower cost than write-access-tokens indicating the client's permission to edit data, which incur a higher cost, thereby controlling the client's temporal authorization at fine granularity over multiple dimensions as dictated by risk analysis and the protected entity admission granularity. A second group of access parameters are related to the protected entity140. For example, a sensitive resource or service of the protected entity140would require a higher conversion value, regardless of the trustworthiness of the client120. As another example, the current load on the protected entity140may affect the conversion value, i.e., the higher the load, the higher the conversion value. In an embodiment, a network load balancer or an ADC (not shown) deployed in the network150anywhere in the path of the client120or the protected entity140can be used to provide load and utilization data as such external data. A third group of access parameters are global indications. An indication of an on-going cyber-attack against the protected entity or other entities in the network is considered as a global parameter. A volume of an on-going attack is also considered as a global parameter. Such indications may be received from external systems (not shown) connected to the admission system110, the trust broker160, or both. For example, an indication of an on-going cyber-attack would lead to a higher conversion value. Other examples of global parameters may include the time of day, a certain day (weekends, weekdays, or holidays), and the like. For all of the above example access parameters, the weight (i.e., conversion value) can be adapted as a non-linear function. The non-linear function does not impact legitimate users that occasionally access the protected entity140, but such a function significantly impacts attackers frequently accessing the protected entity140. Thus, using the access parameters to determine the conversion value allows the system to “discriminate” among clients, while the clients can maintain their privacy. The example access parameters mentioned above may be determined based on historical data logged in the audit system170. For example, such data may include the identified conversion transaction, determined conversion values, and more. At S410, the client120is configured to send a request (401) for a grant (buy) of access tokens from the admission system110. The access tokens may be used to access the protected entity140only after being converted. That is, a first-type of access token is required. In an embodiment, the request (401) includes a public key of the client120and may further designate the entity to be accessed.
In another embodiment, the first-type of access token is based on zero-knowledge cryptography and does not require a public key. As an example, the access token can be implemented using Zcash technologies. The request (401) sent from the client120does not need to include any identifying information on a user of the client120. The request (401) may be sent using a standard layer-7 protocol, such as HTTP or HTTPS. The admission system110may implement different business flows to identify, qualify, or validate the client120, or a combination thereof. As noted above, in some embodiments, the request (401) may be triggered after the client120failed to directly access the protected object140. In such a case, the protected object140may redirect a request from the client120to the admission system110. In some embodiments, the request (401) is sent only after the client120successfully completes a challenge selected and set by the admission system110. The challenge process is discussed in greater detail with reference toFIG.7. At S420, upon identifying, validating, and approving the client's120request for admission, the admission system110is configured to grant a number of first-type access tokens to the client120. The access tokens are granted (e.g., paid) to the client120through the blockchain network130using the public key of the client120. In yet another embodiment, when the access tokens are implemented using zero-knowledge argument of knowledge cryptography, such as the ZK-SNARKs used in Zcash, the public key of the client120may not be required. When the admission system110grants the first-type access token, the system110adds metadata to the blockchain network's130transaction record. At S430, the client120is configured to identify, on the blockchain network130, the transaction ID holding the first-type access tokens granted to the client120. The tokens may be stored in a cryptocurrency wallet included in the client120. It should be noted that S410-S430may not be performed if the client120holds enough first-type access tokens, for example, from previous transaction(s). At S440, the client120is configured to add a transaction (402) to the blockchain network130to convert the first-type of access tokens into a second-type of access tokens. The transaction (402) includes transaction data, a transaction hash value, and a hash value of a previous transaction(s). In an example embodiment, the transaction data may include a unique transaction ID, a number of available first-type access tokens, an identification of the admission system110as the source granting the access tokens, and a target service or resource at the protected entity140that the client120requests to access. The transaction (402) does not need to include any information identifying the client120or a user using the client120. At S450, the trust broker160is configured to identify the transaction ID of the transaction (402) and determine the conversion value for the transaction (402) based on one or more access parameters. As noted above, such parameters may be related to the client120, the protected entity140, or any global indication. It should be noted that in some embodiments, the trust broker160may be integrated in the admission system110. Thus, the admission system110may perform the conversion operation. Further, the admission system110, the trust broker160, or both can be implemented as a distributed system. In an embodiment, the conversion transaction can be realized as a smart contract written on the Ethereum network EVM or other blockchain network.
In another embodiment, the conversion transaction can be realized using an off-chain oracle. As a result, the admission system110is configured to grant a number of second-type access tokens to the client120based on the conversion value. Such second-type access tokens are paid to the client120through the blockchain network130using, for example, a public key of the client120. In an example implementation, the first and second access-tokens may be different types of cryptocurrencies. For example, the first-type token may be a Zcash coin and the second-type token may be Ethereum. It should be noted that the first-type and second-type of access tokens can be granted during different sessions. That is, the conversion of the first-type and second-type access tokens may not occur immediately after the first-type of access tokens are granted. That is, the client120may hold the first-type access tokens in its wallet or as unspent transactions in the ledger and request the conversion only when it is required to access the protected entity. Further, the first-type of access tokens may be a global currency, while the second-type of access token may be specific to certain types of protected entities. That is, various types of “second-type” tokens can be used. It should be further noted that, once the second-type of access tokens are granted to the client, the previous transaction, i.e., the transaction (402), is spent from a block maintained in the blockchain network130and cannot be referenced again by the client120. At S460, the client120is configured to identify, on the blockchain network130, the transaction ID holding the second-type of access tokens. The tokens may be stored in a cryptocurrency wallet included in the client120. At S470, the client120is configured to add a new transaction (403) to the blockchain network130to convert the first-type access tokens into second-type access tokens. The transaction (403) includes transaction data, a transaction hash value, and a hash value of a previous transaction(s). In an example embodiment, the transaction data may include a unique transaction ID, a number of available second-type access tokens, an identification of the admission system110as the source granting the access tokens, and a target service or resource at the protected entity140that the client120requests to access. In an embodiment, the transaction may also include an arbitrary number of metadata elements as specified and requested by the admission system110, the client120, or the protected object140. At S480, the client120is configured to send an access request (404) to the protected entity140. As mentioned above, the access request (404) can include the transaction ID of the transaction (403) to ease the lookup of the transaction by the protected entity140; the protected entity140may search for a relevant transaction in case such a transaction ID is not provided. As further noted above, any access request sent from the client120to the protected entity140is pending admission by the protected entity140. At S490, the protected entity140is configured to validate the transaction (403) through the blockchain network130. In an embodiment, such validation is performed by immediately spending the second-type access tokens designated in the transaction (403) as payment to the admission system110. As a result, the transaction (403) is marked as spent in a respective block maintained in the blockchain network130and cannot be referenced again by the client120.
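By way of illustration only, the following Python sketch outlines the validate-by-spending behavior described for S370 and S490: the protected entity looks up the referenced transaction, spends the designated tokens as payment to the admission system, and grants time-limited access based on the tokens paid. The ledger interface (get, find_for, spend), the transaction fields, and the 60-seconds-per-token policy are assumptions made for this sketch, not elements of the disclosed system.

```python
import time

class ProtectedEntity:
    """Illustrative validate-by-spending flow for an access request."""

    def __init__(self, ledger, admission_system_id: str):
        self.ledger = ledger                      # assumed blockchain/ledger client
        self.admission_system_id = admission_system_id
        self.acl = {}                             # client_id -> access expiry time (temporary ACL)

    def handle_access_request(self, client_id: str, tx_id: str | None) -> bool:
        # The request may carry the transaction ID; otherwise search the ledger.
        tx = self.ledger.get(tx_id) if tx_id else self.ledger.find_for(client_id)
        if tx is None or tx.spent:
            return False
        # Validation is performed by immediately spending the designated tokens
        # as payment to the admission system; the transaction cannot be reused.
        self.ledger.spend(tx, payee=self.admission_system_id)
        # Temporary ACL: access duration scales with the tokens paid (assumed policy).
        self.acl[client_id] = time.time() + 60 * tx.tokens
        return True
```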
At S495, once the transaction is validated and access tokens are paid, access is granted to the client120to access the target resource or service at the protected entity140. As mentioned above, each transaction (e.g., the transactions402and403) is a transfer of a certain value of access tokens between the client120, the admission system110, and the protected entity140, each of which may maintain a cryptocurrency wallet. Such information may be logged in an audit system (e.g., the audit system170,FIG.1). This allows the admission system110and the trust broker160to derive the full history of transactions between the client120, the admission system110, and the trust broker160. Further, to provide access tokens, the admission system110and the client120may exchange their public keys. In order to “spend” the access tokens, the admission system110and the protected entity140may exchange their public keys. That is, there is no direct transaction with respect to the utilization of the access tokens between the client120and the protected entity140. A key (private or public) is a cryptographic key. It should be appreciated that the disclosed embodiments provide an improved security solution, as bots would not be able to access and load protected entities with access requests. That is, by shifting the processing to malicious clients (through access token processing), the protected entities would remain free to handle legitimate requests with more available computing resources. Further, the protected entities would not need to execute authentication processes, thereby further reducing the utilization of computing resources by such entities. FIG.5is an example flowchart illustrating a method for blockchain-based admission to a protected entity according to an embodiment. At S510, a request to grant access tokens of a first-type is received from a client. The client utilizes the first-type access tokens as a means to later access a protected entity. At S520, upon validating and approving the request, a first-type of access tokens are granted to the client. As noted above, such tokens can be paid (sent) to the client directly or through the blockchain network. At S530, a transaction to convert the first-type of access tokens into access tokens of a second-type is identified on the blockchain network. The transaction data of the conversion transaction may designate, for example, the protected entity to access and a number of available first-type access tokens. The conversion transaction may also be directed to exchange cryptographic identities (e.g., keys or arguments of knowledge) between the owners of the first and second access tokens. To this end, the conversion transaction may designate the public keys of, for example, the trust broker and the client. It should be noted that the request to grant the first-type of tokens and the conversion transaction are separate, and the cryptographic identities are not shared between such requests. Further, no entity (user, owner of the trust broker, etc.) needs to reveal its (real) identity to receive and/or convert tokens. It should be further noted that the entity receiving the first-type of access token and the entity requesting the conversion to the second-type may be different entities. At S540, a conversion value for converting the first-type of access tokens into the second-type of access tokens is determined. As discussed in detail above, the conversion value is determined based on one or more access parameters, examples of which are provided above.
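The determination at S540 can be pictured with a short sketch. The following Python example assumes a handful of illustrative access parameters (reputation, request rate, entity sensitivity, attack volume) and an exponential weighting, so that occasional legitimate clients are barely affected while risky clients during an attack face a sharply higher number of first-type tokens per second-type token; the parameter names, constants, and formula are assumptions made for illustration and correspond to no specific embodiment. The minimum number of second-type tokens relates to the worked client-A/client-B example in the following paragraph.

```python
import math

# Assumed baseline: minimum number of second-type tokens needed for admission,
# set regardless of the access parameters.
MIN_SECOND_TYPE_TOKENS = 10

def conversion_value(reputation: float, request_rate: float,
                     entity_sensitivity: float, attack_volume: float) -> float:
    """Illustrative non-linear weighting: the number of first-type tokens per
    second-type token grows steeply with risk (all inputs are assumptions)."""
    risk = (1.0 - reputation) + entity_sensitivity
    # Exponential terms so the cost rises sharply with request rate and attack volume.
    return (1.0 + risk) * math.exp(0.05 * request_rate) * math.exp(0.1 * attack_volume)

def first_type_tokens_required(**params) -> int:
    # Number of first-type tokens the client must convert to reach the minimum.
    return math.ceil(MIN_SECOND_TYPE_TOKENS * conversion_value(**params))

# A trusted, quiet client needs far fewer first-type tokens than a suspicious
# client during an on-going attack.
quiet = first_type_tokens_required(reputation=0.9, request_rate=1,
                                   entity_sensitivity=0.1, attack_volume=0)
noisy = first_type_tokens_required(reputation=0.2, request_rate=20,
                                   entity_sensitivity=0.1, attack_volume=5)
```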
At S550, based on the determined conversion value, a first sum of the first-type of access tokens is converted into a second sum of the second-type of access-tokens. The first-type and second-type of access tokens may be different cryptocurrencies. The second-type of access tokens are paid or sent to a client or any other entity requesting the conversion. As noted above, the client uses the second-type of access token to access the protected entity. To this end, an access request is sent from the client to the protected entity. In order to allow access or admission, the protected entity identifies, in the blockchain network130, the second-type of access tokens of the client, and further spends such tokens to allow payment to the admission system. It should be noted that revoking of tokens does not delete any transaction records. In response, at S560, the second-type of access tokens are received as a payment from the protected entity. At S570, upon receipt of the second-type of access tokens, an admission or access to the protected entity is granted to the client. It should be noted that the number of second-type of access tokens to be converted is determined by the conversion value. There is a minimum number of access-tokens required to access the protected entity. Such a number is determined regardless of the access parameters. The access parameters define the number of first-type of access tokens required to be converted to reach the minimum number of access-tokens. For example, if the minimum number of second-type access tokens is 10, then a client-A may need to convert 40 first-type access tokens, while a client-B may be required to convert only 5 first-type access tokens. In an embodiment, the method discussed herein can be performed by the admission system (110). In such an embodiment, the admission system implements or includes the trust broker (160). FIG.6is an example block diagram of the admission system110according to an embodiment. The admission system110includes a processing circuitry610coupled to a memory620, a storage630, and a network interface640. In an embodiment, the components of the admission system110may be communicatively connected via a bus650. The processing circuitry610may be realized as one or more hardware logic components and circuits. For example, and without limitation, illustrative types of hardware logic components that can be used include field programmable gate arrays (FPGAs), application-specific integrated circuits (ASICs), application-specific standard products (ASSPs), GPUs, system-on-a-chip systems (SOCs), general-purpose microprocessors, microcontrollers, digital signal processors (DSPs), and the like, or any other hardware logic components that can perform calculations or other manipulations of information. In an embodiment, the processing circuitry610(or the entire system110) may be implemented as a Turing machine over the blockchain network. The memory620may be volatile (e.g., RAM, etc.), non-volatile (e.g., ROM, flash memory, etc.), or a combination thereof. In one configuration, computer readable instructions to implement one or more embodiments disclosed herein may be stored in the storage630. In another embodiment, the memory620is configured to store software. Software shall be construed broadly to mean any type of instructions, whether referred to as software, firmware, middleware, microcode, hardware description language, or otherwise.
Instructions may include code (e.g., in source code format, binary code format, executable code format, or any other suitable format of code). The instructions, when executed by the one or more processors, cause the processing circuitry610to perform the various processes described herein. Specifically, the instructions, when executed, cause the processing circuitry610to perform blockchain-based admission, as discussed hereinabove. In a further embodiment, the memory620may further include a memory portion625including the instructions. The storage630may be magnetic storage, optical storage, and the like, and may be realized, for example, as flash memory or other memory technology, CD-ROM, Digital Versatile Disks (DVDs), hard-drives, SSD, or any other medium which can be used to store the desired information, such as a log of transactions, public keys, and so on. The network interface640allows the admission system110to communicate with the blockchain network, clients, the trust broker, and protected entities. The network interface640further allows peer-to-peer communication with these elements. It should be understood that the embodiments described herein are not limited to the specific architecture illustrated inFIG.6, and that other architectures may be equally used without departing from the scope of the disclosed embodiments. It should be further noted that the trust broker160may be realized using a computing architecture similar to the architecture illustrated inFIG.6, and that other architectures may be equally used without departing from the scope of the disclosed embodiments. Further, the memory620may include instructions for executing the function of the trust broker160. FIG.7is an example flow diagram700illustrating a challenge process utilized in any of the admission processes described above. The elements illustrated inFIG.7including the admission system110, the client120, the blockchain network130, and the protected entity140, are described herein above with reference toFIG.1. At S710, the client120sends a request (701) to create an anti-bot client identity and request a challenge. The requested identity will be used by the client120to access one or more protected entities140. In an embodiment, the request (701) may already contain a public key of the client120. In another embodiment, the request (701) does not include such a key, and the admission system110may assign a key to the client120and generate a token ID. The request (701) sent from the client120does not need to include any identifying information of a user of the client120. The request (701) may be sent using a standard layer-7 protocol, such as HTTP or HTTPS. In yet another embodiment, the request (701) can be delivered using the blockchain network130. The admission system110may implement various known and unknown procedures such as, but not limited to, client fingerprinting, client certificates, and human identification techniques such as CAPTCHAs and other proof-of-knowledge techniques to further strengthen the understanding of the nature of the client120. In some embodiments, the request (701) may be triggered after the client120failed to directly access the protected entity140. In such a case, the protected entity140may redirect a request from the client120to the admission system110. At S720, upon identifying, validating, and approving the request (701), the admission system110selects a challenge and sends the challenge to the client120together with other needed information such as session IDs or token IDs.
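One challenge family elaborated in the following paragraph is a SHA256 puzzle over a random seed. As a minimal sketch only, assuming the complexity is expressed as a number of leading zero bits of the digest (an assumption, since the disclosure leaves the exact criterion open), the issue, solve, and validate steps might look as follows:

```python
import hashlib
import os
from itertools import count

def issue_challenge(difficulty_bits: int = 20) -> dict:
    """Admission-system side: pick a random seed and a difficulty."""
    return {"seed": os.urandom(16).hex(), "difficulty_bits": difficulty_bits}

def solve_challenge(challenge: dict) -> str:
    """Client side: find a string that, hashed together with the seed,
    yields a SHA256 value below the difficulty target."""
    target = 1 << (256 - challenge["difficulty_bits"])
    for nonce in count():
        digest = hashlib.sha256(f"{challenge['seed']}{nonce}".encode()).digest()
        if int.from_bytes(digest, "big") < target:
            return str(nonce)

def validate_challenge(challenge: dict, answer: str) -> bool:
    """Admission-system side: cheap verification of the deposited result."""
    digest = hashlib.sha256(f"{challenge['seed']}{answer}".encode()).digest()
    return int.from_bytes(digest, "big") < (1 << (256 - challenge["difficulty_bits"]))
```

The asymmetry of such a puzzle (expensive to solve, cheap to verify) is what shifts processing cost onto requesting clients before any access tokens are deposited.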
In an embodiment, the admission system110can interrogate other sources for information before selecting a challenge. Examples of such sources include external databases, reputation services, the anti-bot system historical records, the protected entity historical records, and the like. The selected challenge is characterized by its type and, optionally, its complexity, and may contain a randomly selectable seed value. As an example, the admission system110generates a random seed number, then selects a SHA256 challenge and sets the complexity, requesting the client120to find a string that, together with the seed, would result in a SHA256 number with a certain probability. In another embodiment, the request (701) and the response (702) can be skipped altogether if the client120chooses to follow a non-interactive challenge path. In such a non-interactive embodiment, the client120can choose a challenge from a pool of challenges available in an external registry (not shown). The registry may be part of the blockchain network130. In yet another embodiment, the challenge can be based on the time of day and the time delta between the selected time of day and the actual time stamp in the blockchain message deposits (703). At S730, the client120completes the challenge and deposits the result of the challenge in the blockchain network130. At S740, the admission system110validates the deposited challenge. This can be performed by monitoring the blockchain network130or by receiving a notification from the client120. At S750, upon validating the challenge's results, the admission system110is configured to deposit access tokens directly in the blockchain network130via a transaction (704) without waiting for an explicit request from the client120. Alternatively, the admission system110can notify the client120of such a deposit via a message (705). It should be noted that, if the challenge fails, no access tokens are deposited to the client120. The audit system170may log any failure to successfully complete a challenge to at least determine the trust or access value for the client, as discussed herein above. The various embodiments disclosed herein can be implemented as any combination of hardware, firmware, and software. Moreover, the software is preferably implemented as an application program tangibly embodied on a program storage unit or computer readable medium. The application program may be uploaded to, and executed by, a machine comprising any suitable architecture. Preferably, the machine is implemented on a computer platform having hardware such as one or more central processing units (“CPUs”), a memory, and input/output interfaces. The computer platform may also include an operating system and microinstruction code. The various processes and functions described herein may be either part of the microinstruction code or part of the application program, or any combination thereof, which may be executed by a CPU, whether or not such computer or processor is explicitly shown. In addition, various other peripheral units may be connected to the computer platform such as an additional data storage unit and a printing unit. Furthermore, a non-transitory computer readable medium is any computer readable medium except for a transitory propagating signal. It should be understood that any reference to an element herein using a designation such as “first,” “second,” and so forth does not generally limit the quantity or order of those elements.
Rather, these designations are generally used herein as a convenient method of distinguishing between two or more elements or instances of an element. Thus, a reference to first and second elements does not mean that only two elements may be employed there or that the first element must precede the second element in some manner. Also, unless stated otherwise a set of elements comprises one or more elements. In addition, terminology of the form “at least one of A, B, or C” or “one or more of A, B, or C” or “at least one of the group consisting of A, B, and C” or “at least one of A, B, and C” used in the description or the claims means “A or B or C or any combination of these elements.” For example, this terminology may include A, or B, or C, or A and B, or A and C, or A and B and C, or 2A, or 2B, or 2C, and so on. All examples and conditional language recited herein are intended for pedagogical purposes to aid the reader in understanding the disclosed embodiments and the concepts contributed by the inventor to furthering the art, and are to be construed as being without limitation to such specifically recited examples and conditions. Moreover, all statements herein reciting principles, aspects, and embodiments of the invention, as well as specific examples thereof, are intended to encompass both structural and functional equivalents thereof. Additionally, it is intended that such equivalents include both currently known equivalents as well as equivalents developed in the future, i.e., any elements developed that perform the same function, regardless of structure.
11943225 | DETAILED DESCRIPTION The technology can be implemented in numerous ways, including as a process; a system; a computer program product embodied on a computer readable storage medium; and/or a processor, such as a processor configured to execute instructions stored on and/or provided by a memory coupled to the processor. In general, the order of the steps of disclosed processes may be altered within the scope of the technology. Unless stated otherwise, a component such as a processor or a memory described as being configured to perform a task may be implemented as a general component that is temporarily configured to perform the task at a given time or a specific component that is manufactured to perform the task. As used herein, the term ‘processor’ refers to one or more devices, circuits, and/or processing cores configured to process data, such as computer program instructions. A detailed description of one or more embodiments of the technology is provided below along with accompanying figures that illustrate the technology. The technology is described in connection with such embodiments, but the technology is not limited to any embodiment. The scope of the technology is limited only by the claims and the technology encompasses numerous alternatives, modifications and equivalents. Numerous specific details are set forth in the following description in order to provide a thorough understanding of the technology. These details are provided for the purpose of example and the technology may be practiced according to the claims without some or all of these specific details. A hierarchical permissions model for case models is disclosed. In various embodiments, case roles may be defined and at case nodes in the hierarchy one or more case roles may be associated with that case node and for each role the associated permissions of that role at that case node may be specified. In some embodiments, the permissions associated with a case role with respect to a case node may be conditional, depending for example on a current state of a state machine defined for the case node, etc. FIG.1is a flow chart illustrating an example embodiment of a process to perform case management. In the example shown, a case model definition is received and stored (102). The case model definition is used to create new instances based on the case model, sometimes referred to herein as “case instances” or “case management instances”, and/or to provide access to previously-created instances (104). For example, a case model may be defined and stored for a loan application and associated processes. Case instances may be created based on the case model and each respective case instance used to manage a corresponding loan application, for example by different respective loan applicants. A case model typically describes a case management system. Using a case model, one can model ad hoc actions with mini workflows, for example, as opposed to a very structured process that defines an end-to-end business workflow. In various embodiments, a case model comprises a hierarchical/nested container model (sometimes referred to herein as a “hierarchical data model”), and may in addition define case roles, case phases (states), and/or permissions. In some embodiments, permissions may be defined for each case node and/or level in the hierarchy, and may vary in some embodiments based at least in part on the respective phases (states) of a state machine defined for a case node. 
In various embodiments, a case model may include a hierarchical/nested container model. This model represents how the data within a case is organized and what data is captured during runtime. Each node in the hierarchy is sometimes referred to herein as a “case node”. Case nodes at the lowest level of a case model hierarchy may be referred to as “case leaf nodes” or simply “leaf nodes”. “Case leaf nodes” in various embodiments may point to a specific business object or document type. The term “case role” is used herein to refer to user roles that have been defined in a case model. In various embodiments, users may be assigned to case roles with respect to instances of a case model, and at each case node in the case model permissions may be designated by reference to one or more case roles. During runtime in some embodiments, members may be added or removed from these roles at case node instances corresponding to respective instances of a type of case as defined in a case model. In various embodiments, at each case node, a metadata model that defines one or more traits and/or associated behavior may be defined. In various embodiments, a case model as described herein may be created using a domain-specific or other development module or tool. For example, reusable elements, such sample case nodes typical of those used in the domain (e.g., documents, case roles, behaviors, etc. typically associated with a loan application process, a new drug approval application, etc.), primitives usable to define a state machine and/or associated processing for respective case nodes, etc., may be provided. For example, an application programming interface (API) may be defined, and/or a visual or other case model development tool may be provided. In various embodiments, a case model definition is embodied in an XML or other structured data file. A case management system and/or platform is provided, which is configured (e.g., by software) to load a case model definition, parse the definition, and create an instance of the case model based on the definition. Instance-specific attributes and/or state information or other metadata may be stored in a case model instance data store, e.g., a database. At runtime, the case model definition file and the case model instance data for a given instance are used by the disclosed case management system to implement the case model instance, including by performing processing and managing case model instance associated content per the case model definition, in light of the current values of the case model instance data for that instance. FIG.2is a block diagram illustrating an example embodiment of a case management system and environment. In the example shown, client systems202are connected via a network204, e.g., the Internet, to a case management system206. In various embodiments, the case management system206may be configured to implement the process ofFIG.1. Case management system206uses case models stored in data storage208to provide case management services with respect to case management instances, the instance variable data values of which also are stored, in this example, in data storage208. For example, one or more of clients202may connect via network204to case management system206to obtain access to case management services. 
For example, case management system206may expose a “case management system as a service”, e.g., as a web service, enable clients202to connect to case management system206, create case management instances based on case models stored in data storage208. The users of client system202may be prompted to provide data values and/or other user input to populate case management instances with metadata, user data, documents, etc., and/or such other user input as may be required to advance case instances through case management processing as defined in the case model. In the example shown inFIG.2, a case model developer system210, e.g., a client computer system, also can connect to case management system206via network204. In some embodiments, a case model development user interface and/or service may be accessed and used to define a case model. For example, a visual or other developer tool may be presented to enable a developer using client system210to define a case model and cause the case model to be stored in data storage208and deployed by case management system206. In some embodiments, deployment of a case model includes making the case model available to be used to create case management instances based on the model, and to use the case model to perform with respect to each such instance the case management processing as defined in the case model. In various embodiments, a case model may indicate one or more content objects to be associated with respective instances of a case model. The case model may include metadata and associated behaviors to enable instance-specific content objects (e.g., documents) to be associated with case leaf nodes of a case instance. In the example shown inFIG.2, content objects may be accessed via a content management system212configured to manage content objects stored in an associated content repository214. In various embodiments, case management system206may be configured to use instance variables associated with a given case instance and metadata and/or behaviors defined in an associated case model to interact programmatically with content management system212to obtain and/or manage documents or other content objects associated with a case instance. In some embodiments, case management system206may be configured, e.g., via the case model, to invoke services and/or other functionality of content management system212with respect to such documents or other content objects. FIG.3is a block diagram illustrating an example embodiment of a case management system. In some embodiments, the case management system ofFIG.3corresponds to case management system206ofFIG.2. In the example shown, case management system206includes a network communication interface302, such as a wireless or other network interface card, to provide network connectivity, e.g., to network204ofFIG.2. A case model development module304is accessible to developers via network communication interface302and may be used to create and/or modify case model definitions. In some embodiments, a visual or other user interface is provided, via network communication interface302, to enable case models to be created and/or modified. For example, a developer may use a browser to access the developer user interface in some embodiments. Case model definitions are stored by case model development module304by using a backend database (or other data storage) interface306to store the case model(s) in case model store308. Referring further toFIG.3, the case management system206includes a case management module310. 
In various embodiments, case management module310includes functionality to enable users, e.g., users of client systems202ofFIG.2, to create and/or use case management instances based on case models stored in case model store308. Case management module310, for example, may expose a web or other interface to remote users and may receive via said interface a request to create and/or access a case instance. Case management module310uses database interface306to obtain an associated case model definition from case model store308, to use the case model to instantiate case instances. Instance variables are stored by case management module310in case instance data store312. FIG.4is a diagram illustrating an example embodiment of a process and system to create and/or provide access to case management instances. In some embodiments, the process ofFIG.4may be implemented by a case management system and/or a component thereof, such as case management module310ofFIG.3. In the example shown, case management system400receives a request402to create or access a case management instance and invokes instantiation process404. Instantiation process404uses a case model definition406associated with the request, e.g., a case model indicated explicitly and/or otherwise associated with data comprising the request402, and case management instance data408associated with the case management instance, to instantiate and provide access to a case management instance410. In various embodiments, a case model definition such as model definition406may include an XML file or other structured data, which the case management system is configured to parse and use to construct case instances based on the case model. For example, the hierarchical data structure may be defined, along with metadata and associated behaviors for each case node. A case management instance, such as case management instance410, may include an in memory instance of a data structure defined in case model definition406, which is used to store instance variables, such as instance data408in this example. FIG.5is a flow chart illustrating an example embodiment of a process to receive and store a case model. In some embodiments, the process ofFIG.5is used to implement step102ofFIG.1and is performed by a case management system, such as case management system206ofFIG.2, e.g., case model development module304ofFIG.3. In the example shown, an indication that a new case model is to be defined is received (502). A problem domain-specific developer interface to be used to define the case model is provided (504). For example, in some embodiments a developer may indicate in a request to define a new case model, and/or may be prompted to indicate, a “problem domain” with which the case model is associated, such as a loan application, an employment application, a product development or other business project, a healthcare or other patient, a claim for reimbursement or benefits, or a matter being handled by a professional or personal service provider, such as a lawsuit, home renovation project, etc. In various embodiments, the problem domain-specific developer interface provides access to problem domain-specific elements to assist the developer in defining the case model. For example, a loan application typically is initiated by a loan applicant submitting an application, and typically involves gathering information to verify and evaluate the applicant's identity, financial assets, income, creditworthiness, etc. 
In some embodiments, a template may be provided to be used as a starting point. The developer uses visual or other tools to customize the template as desired to define a case model. Once the developer has completed and submitted the case model definition, the case model definition is received, stored, and deployed (506). In some embodiments, a runtime representation of the definition is processed, e.g., upon submission by the developer, to generate an XML or other structured data file that embodies the case model as defined. Deployment in various embodiments includes making the case model definition available to be used to instantiate case management instances based on the case model, e.g., individual loan application cases. FIG.6is a flow chart illustrating an example embodiment of a process to receive and store a case model. In some embodiments, the process ofFIG.6is included in step506ofFIG.5. In the example shown, a definition of a hierarchical/nested data model is received (602). For example, a user interface that enables a developer to drag and drop case nodes onto a canvas and to indicate hierarchical relationships between case nodes may be provided and used by the developer to define a hierarchical/nested data model. A definition of case roles is received and stored (604). For example, a “loan application” case model may include user roles such as “loan initiator”, “underwriter”, “appraiser”, etc. For each case node in the hierarchical/nested data model, a definition of metadata, behaviors, content (e.g., documents), states/phases (and transitions between states/phases), and/or permissions (e.g., by case role) is received (606). For example, in various embodiments, a developer interface may be provided to enable a developer to select a case node and be presented with an interface to define a state machine for that case node. FIG.7is a block diagram illustrating an example of a hierarchical data model in an embodiment of a case management system. In various embodiments, a case model, such as one defined using the processes ofFIGS.5and6, may include a hierarchical/nested container model, such as the one shown inFIG.7. In the example shown, hierarchical/nested container model700includes a root node702at a first (highest) hierarchical level. At a first hierarchical level below the root node, nodes704and706are included. Finally, in a lowest hierarchical level (in this example), node704has two “case leaf nodes”708and710. In various embodiments, metadata, behaviors, permissions, etc. that have been defined for a case node extend (or in some embodiments may at the option of the case model developer be extended) to child case nodes of the case node at which such metadata, behaviors, permissions, etc. have been defined. FIG.8is a block diagram illustrating an example of a hierarchical data model in an embodiment of a case management system, such as case management system206ofFIG.2. In particular, a hierarchical/nested container model for a home loan application is illustrated. In the example shown, each instance of a “loan” case includes a root node802and two first level sub-nodes804and806, in this example one (804) for financial information of the applicant and associated processing, and another (806) for information and processing associated with the home to be purchased using the loan. The “applicant information” sub-node804includes a first case leaf node808for Forms W-2 and a second case leaf node810for the applicant's tax returns. 
“Property” sub-node806includes case leaf nodes812,814, and816for the title report, appraisal report, and home inspection report, respectively. In various embodiments, the case model definition may include for each case node a definition of metadata and/or behaviors for that case node. For case leaf nodes, such as case leaf nodes808,810,812,814, and816, the case model definition may include information regarding documents or other content objects to be associated with such nodes, including in some embodiments an identification of a storage location in which such documents are to be stored, e.g., in a content repository such as repository214ofFIG.2associated with a content management system such as content management system212ofFIG.2. FIG.9is a block diagram illustrating an example of a hierarchical data model and associated state machine in an embodiment of a case management system. In various embodiments, the hierarchical data model and associated state machine ofFIG.9may be included in a case model definition defined and/or deployed via a case management system such as case management system206ofFIGS.2and3. In the example shown, a state machine902has been defined for and associated with case node704of hierarchical/nested container model700ofFIG.7. In various embodiments, for any case node within the hierarchical/nested container model, a state machine can be defined and the actions that can be used to transition between different phases/states of the state machine defined for that case node may be specified. These actions could be used during runtime to transition between states. In the example shown inFIG.9, a state machine902has been defined and associated with a specific case node in the hierarchical model shown inFIG.7, specifically node “F11” (704). In various embodiments, a document or other content associated with node “F11”; traits, such as metadata and/or associated behavior associated with node “F11”; etc. may be transformed, reviewed, and/or otherwise involved with processing that may result, in a given case model instance, in transitions being made between states of the state machine902defined for case node “F11” in this example. In various embodiments, enabling a state machine to be defined and associated with a case node comprising a hierarchical/nested container model provides a flexible, dynamic framework within which ad hoc actions and/or information can be responded to, in a manner determined dynamically based on the circumstances of a given instance of a case, with the result that the actions and/or processing performed at a given case node, and/or the consequences of such actions and/or processing, may be different for one instance of the case model than for another instance of the case model. In various embodiments, a state machine engine may be included in a case management system, such as case management system206ofFIG.2, to enable a state machine defined for a case node, such as state machine902ofFIG.9, to be implemented and associated functionality to be provided. For example, in some embodiments, case management module310ofFIG.3may include a state machine engine. In some embodiments, the state machine engine may receive and parse state machine definition portions of a case model definition, and may use such portions to create and manage runtime data structures associated with the respective defined states (phases) of the state machine and transitions between them. 
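By way of illustration only, the following Python sketch shows one possible runtime shape for the hierarchical/nested container model and a per-node state machine, populated with the home-loan example ofFIG.8. The class name, attribute names, and document-type strings are assumptions made for this sketch rather than the schema used by the case management system.

```python
from dataclasses import dataclass, field

@dataclass
class CaseNode:
    """Illustrative runtime shape of a case node: child nodes for the nested
    container model, an optional document type for case leaf nodes, and an
    optional state machine (current phase plus allowed transitions)."""
    name: str
    children: list["CaseNode"] = field(default_factory=list)
    document_type: str | None = None     # set for case leaf nodes
    phase: str | None = None             # current state, if a state machine is defined
    transitions: dict[tuple, str] = field(default_factory=dict)  # (phase, action) -> next phase

    def apply(self, action: str) -> None:
        # Transition the node's state machine, if the action is allowed in the current phase.
        key = (self.phase, action)
        if key not in self.transitions:
            raise ValueError(f"action {action!r} not allowed in phase {self.phase!r}")
        self.phase = self.transitions[key]

# The home-loan hierarchy of FIG. 8, expressed with this structure.
loan_case = CaseNode("loan", children=[
    CaseNode("applicant information", children=[
        CaseNode("W-2 forms", document_type="w2"),
        CaseNode("tax returns", document_type="tax_return"),
    ]),
    CaseNode("property", children=[
        CaseNode("title report", document_type="title_report"),
        CaseNode("appraisal report", document_type="appraisal_report"),
        CaseNode("home inspection report", document_type="inspection_report"),
    ]),
])
```

A state machine such as the one associated with case node “F11” inFIG.9could then be attached to any node by setting its initial phase and populating the transitions mapping.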
In some embodiments, state variables associated with a current state of a case node-specific state machine for a given instance of a case model may be stored persistently with other case management instance data, for example in a case instance data store such as data store312ofFIG.3. FIG.10is a block diagram illustrating an example of a state machine defined for a case node in an embodiment of a case management system. In various embodiments, the state machine ofFIG.10may be included in a case model definition defined and/or deployed via a case management system such as case management system206ofFIGS.2and3. In the example shown, state machine1000includes an “open” state1002, associated for example with beginning processing of a received document, such as one created locally, uploaded, or otherwise provided by a user. In the example shown, a transition out of the “open” state1002may occur upon a “submit” option being selected, e.g., by a user, a business process, an external service, etc. If the item was submitted with an indication that a “review” is required (e.g., a reviewer is named or otherwise indicated), the state machine transitions to a “pending” state1004, indicating the required review is pending. If no review is required, the state machine instead transitions directly to an “accepted” state1006. If review was required and the reviewer “accepts” the item, a transition from “pending” state1004to “accepted” state1006occurs. If instead the reviewer were to “reject” the item, in this example a transition from “pending” state1004to “rejected” state1008would occur. From either “accepted” state1006or “rejected” state1008, a “close” transition to a “closed” state1010could occur. Finally, in this example, “reopen” transitions back to “open” state1002could occur from the “accepted” state1006, “rejected” state1008, and/or “closed” state1010. Note that for a given instance of a case model with which the state machine1000ofFIG.10is associated, the states through which the state machine1000ofFIG.10may transition for that instance may be different than for one or more other instances. Also, for a given instance, depending on the state machine definition included in the case model definition, the user data associated with that instance at applicable times, and potentially user actions and decisions made in response to case information, the state machine1000may be transitioned to/through a given state more than once (e.g., via the “reopen” transitions), potentially resulting in different outcomes of processing associated with that state. FIG.11is a flow chart illustrating an embodiment of a process to define a state machine for a case node in an embodiment of a case management system. In various embodiments, the process ofFIG.11may be performed by a case management system, such as case management system206ofFIGS.2and3. For example, in some embodiments, a case model development component such as case model development module304ofFIG.3may include a development tool and/or feature to enable a state machine to be defined and associated with a case node, using the process ofFIG.11. In the example shown inFIG.11, an indication to define a state machine for a case node is received (1102). For example, a developer using a case model definition tool, service, and/or interface may select a case node and provide an input indicating that a state machine is desired to be defined for and associated with the selected node. 
A developer user interface to define a state machine for the case node is provided (1104). A definition of two or more states and transition(s) between them is received (1106). A definition of permissions associated with the respective states, e.g., who may access content or metadata associated with the case node while the state machine is in that state, is received (1108). A definition of permissions associated with transitions between the respective states, e.g., who may cause each transition to occur, is received (1110). In various embodiments, a state machine defined for a case node using the process ofFIG.11may be included in the case model definition as stored and deployed, e.g., in a corresponding portion of an XML or other structured data file comprising the case model definition. A hierarchical permissions model for case models is disclosed. In various embodiments, the hierarchical permissions model is used at runtime, with respect to each case instance, to provide and control access to the case instance, associated content, and associated actions. For a given case model, in various embodiments the case model defines authorization permissions. For each case node in the case hierarchical data model, in various embodiments permissions are modeled in such a way that a) which case role in b) which (state machine) phase has c) what permissions are defined. In various embodiments, with reference to defining permissions, a case role may be a contained case role (defined at that case node) or a role defined at a parent level. Likewise, a phase could be a phase defined at that particular case node or a phase defined at a parent case node. In some embodiments, permissions define a) users from which case role in b) which phase has c) what permissions with respect to metadata and content may be modeled. In some embodiments, permissions are modeled defining a) users from which case role in b) which phase can c) add or remove users from that and other case roles. In some embodiments, permissions are modeled defining a) users from which case role can b) in what phase c) can transition a case node from that phase to possible target phase. As an example, users belonging to “checklist item reviewer” case role may in a case model be given permission to move a checklist item from “pending” to “accepted” or “rejected”. Similarly, users in a “checklist coordinator” case role may be given permission to move a checklist item from an “accepted” or “closed” state to an “open” state. FIG.12is a flow chart illustrating an embodiment of a process to define hierarchical permissions model for case management. In some embodiments, the process ofFIG.12may be implemented by a case management system, such as case management system206ofFIG.2. In the example shown, case role definitions for a case model are received (1202). In some embodiments, case role definitions may be received via a case model developer user interface. The case model developer defines a hierarchical data model for a case model. For each case node in the case model, the developer associates one or more case roles with the case node (1204). For example, the developer may drag and drop a visual representation of a case role onto a displayed representation of a case node. At each case node, for each case role that has been associated with that case node, the permissions to be associated with that case role at that case node are defined and (at runtime) set (1206). 
For example, in various embodiments, one or more of the following permissions may be set, with respect to metadata and/or content associated with a case node: create (C), read (R), update/modify (U), and delete (D). In various embodiments, permissions set for a case role with respect to a case node may be defined dynamically, e.g., by reference to the respective phases/states of a state machine associated with the case node. For example, a case role may have a first set of permissions at a case node when the case node's state machine is in a first phase/state, and a second (different) set of permissions at the case node when the case node's state machine is in a second phase/state. In some embodiments, in each state one or more permissions associated with causing transitions to be made to one or more other states may be set based on case role. In some embodiments, permissions may be set by case role to indicate which case roles will be able to assign users to case roles and/or to change such assignments. FIG.13is a flow chart illustrating an example embodiment of a process to define hierarchical permissions conditioned on case node state. In some embodiments, the process ofFIG.13may be implemented by a case management system, such as case management system206ofFIG.2. In the example shown, definition of permissions by case node phase/state begins at a first phase/state of a state machine associated with a case node (1302). Case role(s) is/are associated with the phase/state (1304), and for each case role associated with the case node phase/state, permissions are set for the case role with respect to the case node when the state machine of the case node is in that phase/state (1306). Processing continues with respect to the respective phases/states of the state machine of the case node, until permissions have been defined for each phase/state (1308,1310). In various embodiments, hierarchical and/or conditional (e.g., by case node phase/state) permissions defined as described in connection withFIGS.12and13may be embodied in a case model definition. At runtime, when a case instance is instantiated based on the case model definition, data comprising the case model definition is parsed to determine and create runtime data structures reflecting the hierarchical data model of the case model, and with respect to each node permissions are set by case role (and, as applicable, conditioned on case node phase/state). Permission-related services associated with a runtime environment in which the case instance is realized are used in some embodiments to enforce hierarchical and/or conditional (e.g., by phase/state) permissions as defined in the case model definition. For example, in some embodiments, the ability of a given user to access and/or perform requested operations with respect to metadata and/or content, to initiate transitions between states, and/or to assign or modify the assignment of users to case roles with respect to the case instance is determined by the runtime environment based on which (if any) case role(s) the user has been assigned with respect to the case instance, and the permission(s), if any, associated with such case role(s) with respect to an applicable case node in a current phase/state of a state machine associated with the case node.
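As a rough, non-authoritative sketch of how phase/state-conditioned permissions might be resolved at runtime, the case node, phase, and role names below are invented for illustration; only the general shape of the lookup is intended to reflect the description above.

# Hypothetical runtime lookup: case node -> current phase/state -> case role -> permissions (C/R/U/D).
NODE_PERMISSIONS = {
    "checklist_item": {
        "open":     {"author": {"C", "R", "U", "D"}, "reviewer": {"R"}},
        "pending":  {"author": {"R"},                "reviewer": {"R", "U"}},
        "accepted": {"author": {"R"},                "reviewer": {"R"}},
    },
}

def effective_permissions(case_node, current_phase, case_role):
    # the same case role can hold different permissions in different phases/states of the node
    return NODE_PERMISSIONS.get(case_node, {}).get(current_phase, {}).get(case_role, set())

For example, effective_permissions("checklist_item", "pending", "reviewer") would return the set {"R", "U"} under these assumed entries.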
FIG.14is a block diagram illustrating an example of a hierarchical data model and associated permissions in an embodiment of a case management system. In various embodiments, hierarchical permissions such as those shown inFIG.14may be defined via a process such as the one shown inFIG.12. In the example shown, a hierarchical permission model1400has been defined, corresponding to the hierarchical data model700ofFIG.7. The hierarchical permission model1400includes, in this example, for each case node of the hierarchical data model700ofFIG.7, a corresponding hierarchical permission model node. For example, nodes1402,1404,1406,1408, and1410of the permissions model1400correspond to nodes702,704,706,708, and710, respectively, of data model700. At each node of the hierarchical permission model1400, one or more case roles are identified as having permissions at the corresponding case node, and for each the permissions to be provided are indicated. For example, at node1402, a case role "R1" is listed as having read (R) and update (U) permissions with respect to root case node702. In various embodiments, the case role R1would, by virtue of having read and update permissions set at the root case node702, have at least the same permissions at child case nodes of node702, which in this example would include all nodes given that case node702is a root node. As illustrated by node1404of the hierarchical permissions model1400, the case role R1at some nodes may be given permissions beyond those assigned at the root node1402. In this example, the case role R1is assigned permissions to create, read, update, or delete content and/or metadata at case node704corresponding to hierarchical permission model node1404. In addition, in the example shown inFIG.14, additional case roles such as R11a, R11b, R111, R112, and R12are associated at permission model nodes1404,1406,1408, and1410with corresponding case nodes of data model700, and at each permission model node, the respective permissions set for each respective case role that has been associated are indicated. In various embodiments, a case role may have permission(s) defined at a parent case node, and may have the same permission(s) with respect to child case nodes of the parent node by virtue of the permission(s) set for the case role at the parent node or, instead, permissions defined at the parent node may be overridden with a different set of permissions defined at the child node(s) for the same case role. In some embodiments, a case role may be a "contained" case role, and may have permission(s) only with respect to the case node at which it was assigned the permission(s). For example, a set of permissions defined for a case role at a specific case node may be "contained" in the sense that the permissions defined at that node do not extend beyond that node, even if the case role exists and has a user assigned to it with respect to other case nodes.
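One possible, purely illustrative way to resolve a case role's permissions by walking up the hierarchical permission model, including a child-level override and a "contained" role that does not extend beyond the node at which it is defined, is sketched below; the parent/child relationships, node identifiers, and role identifiers are assumptions loosely modeled on FIG. 14, since the exact shape of data model 700 is not specified here.

# Hypothetical hierarchy and permission entries loosely following FIG. 14 (all values assumed).
PARENT = {"704": "702", "706": "702", "708": "706", "710": "702"}   # child case node -> parent case node
PERMISSIONS = {
    ("702", "R1"): {"R", "U"},            # defined at the root node, inherited below
    ("704", "R1"): {"C", "R", "U", "D"},  # broader set overriding the root-level entry at node 704
    ("706", "R11a"): {"R"},               # a contained role: effective only at node 706
}
CONTAINED = {("706", "R11a")}

def resolve(case_node, case_role):
    node = case_node
    while node is not None:
        entry = PERMISSIONS.get((node, case_role))
        if entry is not None and (node == case_node or (node, case_role) not in CONTAINED):
            return entry              # nearest applicable entry wins: child overrides parent
        node = PARENT.get(node)
    return set()

Under these assumed entries, resolve("708", "R1") inherits {"R", "U"} from the root, while resolve("708", "R11a") returns an empty set because the contained entry at node 706 does not extend to its children.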
FIG.15Ais a block diagram illustrating an example of a hierarchical data model and associated permission model in an embodiment of a case management system. In various embodiments, hierarchical permissions such as those shown inFIG.15Amay be defined via a process such as the one shown inFIG.12. In the example shown, a case model1500includes a hierarchical data model comprising case nodes1502,1506,1510,1514, and1518, and a corresponding hierarchical permissions model comprising nodes1504,1508,1512,1516, and1520. Each case node, in case model1500in this example, has a corresponding set of case roles and associated permissions. For example, case roles and permissions1504are associated with root node1502, and identify the case roles "Loan Admin", "Loan Underwriter", and "Loan Applicant" as having the respective permissions indicated. In this example, permissions are defined at each descendant/child case node of root node1502with references to these three case roles. At certain case nodes, one or more of them have permissions beyond those indicated at the root node1502. For example, permissions model node1508indicates that users having the role "Loan Admin" at root node Loan (1502) are granted rights to create and delete content (C, D) in addition to the read and update (R,U) permissions set at the root node1502,1504. Similarly, users having the role "Loan Applicant" at root node Loan (1502) are granted, in permission model node1512, permissions to create, read, update, or delete metadata and/or content with respect to the financial documents listed at case node1510, for example to enable such users to upload copies of personal financial documents, replace outdated documents with more recent ones, etc. In the example shown inFIG.15A, the case role "Home Inspector" does not have any permission associated with root node1502or the sub-tree beginning at node1506, but instead only has permissions with respect to the sub-tree comprising case nodes1514and1518. At "Home Details" case node1514, the "Home Inspector" case role is granted read permission (1516), and with respect to "Home Inspection Reports" case node1518the "Home Inspector" case role is granted permission to create, read, update, and delete (1520). FIG.15Bis a block diagram illustrating an example of an instance of a hierarchical data model and associated permission model in an embodiment of a case management system. In various embodiments, hierarchical permissions such as those shown inFIG.15Bmay be defined via a process such as the one shown inFIG.12. In the example shown, a case instance1540based on case model1500ofFIG.15A(in this example an instance "Loan123" of the case type/model "Loan") includes a plurality of case nodes1542,1546,1550,1554, and1558and for each a corresponding set of permissions1544,1548,1552,1556, and1560, respectively, and as applicable specific users associated with each case role. For example, case instance1540includes root node1542with associated permissions1544, identifying specific users who have been associated with the case roles indicated for purposes of the case instance1540. In the example shown, the users "Joe" and "Amy" have been associated with the "Loan Applicant" case role, and in various embodiments those users would be afforded with respect to case instance1540the privileges indicated in case model1500ofFIG.15Aas being associated with that case role, e.g., permission to create, read, update, and delete metadata/content comprising and/or otherwise associated with case node1550(see1510,1512ofFIG.15A). Similarly, the user "Harry" is assigned the case role "Home Inspector" with respect to nodes1554and1558of case instance1540(see case role/permission information at1556,1560), which would result in the user "Harry" being afforded with respect to case instance1540those permissions associated with the case role "Home Inspector" in case model1500ofFIG.15A(see1514,1516,1518, and1520ofFIG.15A).
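For illustration only, the loan example ofFIGS.15A and15Bmight be captured in data structures along the following lines; the node names, the permissions shown for roles whose grants are not spelled out above (e.g., "Loan Underwriter"), and all identifiers are assumptions rather than part of any actual case model.

# Hypothetical rendering of the "Loan" case model permissions (FIG. 15A) and the
# role assignments of case instance "Loan123" (FIG. 15B); unspecified entries are placeholders.
LOAN_MODEL_PERMISSIONS = {
    "Loan":                    {"Loan Admin": {"R", "U"}, "Loan Underwriter": {"R"}, "Loan Applicant": {"R"}},
    "Financial Information":   {"Loan Admin": {"C", "R", "U", "D"}},      # node 1506/1508 (name assumed)
    "Financial Documents":     {"Loan Applicant": {"C", "R", "U", "D"}},  # node 1510/1512
    "Home Details":            {"Home Inspector": {"R"}},                 # node 1514/1516
    "Home Inspection Reports": {"Home Inspector": {"C", "R", "U", "D"}},  # node 1518/1520
}

LOAN123_ROLE_ASSIGNMENTS = {
    "Loan Applicant": {"Joe", "Amy"},
    "Home Inspector": {"Harry"},
}

def case_roles_of(user):
    # the case roles a given user holds with respect to this case instance
    return {role for role, users in LOAN123_ROLE_ASSIGNMENTS.items() if user in users}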
While in the example shown inFIGS.15A and15B, each case role has a statically defined set of permissions at each case node, in various embodiments such permissions may be defined in the case model as being conditioned on case node-specific and/or other contextual data, such as a phase/state of a state machine associated with the case node. In addition, in various embodiments, permissions other than with respect to metadata and/or content may be defined, such as permissions regarding the ability to add or modify case role assignments with respect to a case instance, and/or permissions to cause phase/state transitions within a state machine of a case node. For example, a case role may be defined to have permission to update content and/or to make or change case role assignments when a case node is in a first state, but not when the same case node is in a second state. Similarly, within a case node and a particular phase/state thereof, a case role may be defined as having a permission to cause a state transition to a next phase/state of the case node, but only if one or more conditions defined in the case model have been satisfied with respect to the case node, such as all required content has been uploaded, all required metadata has been provided, and/or an action required to be taken by another user has been completed. FIG.16is a flow chart illustrating an embodiment of a process to associate users with case roles. In various embodiments, a case management system, such as case management system206ofFIGS.2and3, may implement the process ofFIG.16. In the example shown, an indication to associate a user with a case role with respect to a case instance is received (1602). For example, an administrative user may assign a case management system user to a case role with respect to one or more case instances, e.g., via an administrative user interface. Permissions associated with the case role, e.g., in a permissions model comprising a case model definition, are extended to the user with respect to applicable case nodes of the case instance (1604). FIG.17is a flow chart illustrating an embodiment of a process to provide and control access based on a hierarchical permissions model. In some embodiments, the process ofFIG.17is used to implement step1604ofFIG.16. In the example shown, when a user attempts to perform an action with respect to a case instance (1702), e.g., create, read, update, or delete metadata and/or content, assign a user to a case role, cause a transition to a new phase/state with respect to a case node of the case instance, etc., it is determined which case role(s), if any, the requesting user has with respect to the case instance (1704). It is determined whether any case role to which the user has been assigned with respect to the case instance has the permission required to perform the action with respect to the case instance (1706). If so (1708), the requested action is allowed to be performed by the user (1710). If not (1708), the requested operation is not allowed to be performed by the user (1712).
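A minimal sketch of the runtime check ofFIG.17, under the simplifying assumption that permissions have been flattened into a lookup keyed by case node, current phase/state, and case role, might read as follows; the function and parameter names are hypothetical, and the condition-gated transition check merely restates the description above.

# Hypothetical access check: an action is allowed only if at least one case role
# assigned to the user carries the required permission at the applicable case node
# in its current phase/state.
def is_allowed(user, case_node, current_phase, required_permission,
               role_assignments, permissions):
    user_roles = {role for role, users in role_assignments.items() if user in users}
    return any(
        required_permission in permissions.get((case_node, current_phase, role), set())
        for role in user_roles
    )

def may_cause_transition(user, case_node, current_phase, target_phase,
                         conditions_satisfied, role_assignments, permissions):
    # a transition permission may additionally be gated on model-defined conditions,
    # e.g., all required content uploaded or all required metadata provided
    return conditions_satisfied and is_allowed(
        user, case_node, current_phase, ("transition", target_phase),
        role_assignments, permissions)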
For example, in the example shown inFIGS.15A and15B, the user "Harry" is assigned the role "Home Inspector". As a result, a request by the user "Harry" to update a home inspection report document and/or metadata associated with the Home Inspection Reports node1558of case instance1540ofFIG.15Bwould be allowed (see nodes1556,1558, and1560ofFIG.15B, and corresponding nodes1518and1520ofFIG.15A), whereas a request by the same user "Harry" to read financial metadata at case node1546of case instance1540ofFIG.15Bwould not be allowed (see nodes1544,1550, and1552ofFIG.15B, and corresponding nodes1506and1508ofFIG.15A, indicating the "Home Inspector" case role has no permission at the applicable case node). Providing the ability to define a hierarchical and in some embodiments conditional permission model for case management in various embodiments enables a case model developer to define and control permissions with fine granularity and, if desired, to cause permissions to be determined dynamically, at runtime, based on conditions such as a current phase/state of a state machine associated with a case node or other context data or conditions. Although the foregoing embodiments have been described in some detail for purposes of clarity of understanding, the technology is not limited to the details provided. There are many alternative ways of implementing the technology. The disclosed embodiments are illustrative and not restrictive. | 42,460
11943226 | DETAILED DESCRIPTION The following general acronyms may be used below:
TABLE 1 - General Acronyms
API: application program interface
ARM: advanced RISC machine
CD-ROM: compact disc ROM
CMS: content management system
CoD: capacity on demand
CPU: central processing unit
CUoD: capacity upgrade on demand
DPS: data processing system
DVD: digital versatile disk
EVC: expiring virtual currency (a virtual currency having an expiration date, or subject to other virtual currency usage rules; local virtual currencies with expiration dates)
EVCU: expiring virtual currency (units)
EPROM: erasable programmable read-only memory
FPGA: field-programmable gate arrays
HA: high availability
IaaS: infrastructure as a service
I/O: input/output
IPL: initial program load
ISP: Internet service provider
ISA: instruction-set-architecture
LAN: local-area network
LPAR: logical partition
PaaS: platform as a service
PDA: personal digital assistant
PLA: programmable logic arrays
RAM: random access memory
RISC: reduced instruction set computer
ROM: read-only memory
SaaS: software as a service
SLA: service level agreement
SRAM: static random-access memory
VCUR: virtual currency usage rules
WAN: wide-area network
Data Processing System in General
FIG.1Ais a block diagram of an example DPS according to one or more embodiments. In this illustrative example, the DPS10may include communications bus12, which may provide communications between a processor unit14, a memory16, persistent storage18, a communications unit20, an I/O unit22, and a display24. The processor unit14serves to execute instructions for software that may be loaded into the memory16. The processor unit14may be a number of processors, a multi-core processor, or some other type of processor, depending on the particular implementation. A number, as used herein with reference to an item, means one or more items. Further, the processor unit14may be implemented using a number of heterogeneous processor systems in which a main processor is present with secondary processors on a single chip. As another illustrative example, the processor unit14may be a symmetric multi-processor system containing multiple processors of the same type. The memory16and persistent storage18are examples of storage devices26. A storage device may be any piece of hardware that is capable of storing information, such as, for example without limitation, data, program code in functional form, and/or other suitable information either on a temporary basis and/or a permanent basis. The memory16, in these examples, may be, for example, a random access memory or any other suitable volatile storage device. The non-volatile or persistent storage18may take various forms depending on the particular implementation. For example, the persistent storage18may contain one or more components or devices. For example, the persistent storage18may be a hard drive, a flash memory, a rewritable optical disk, a rewritable magnetic tape, or some combination of the above. The media used by the persistent storage18also may be removable. For example, a removable hard drive may be used for the persistent storage18. The communications unit20in these examples may provide for communications with other DPSs or devices. In these examples, the communications unit20is a network interface card or other form of an I/O processor. The communications unit20may provide communications through the use of either or both physical and wireless communications links. The input/output unit22may allow for input and output of data with other devices that may be connected to the DPS10.
For example, the input/output unit22may provide a connection for user input through a keyboard, a mouse, and/or some other suitable input device. Further, the input/output unit22may send output to a printer. The display24may provide a mechanism to display information to a user. Instructions for the operating system, applications and/or programs may be located in the storage devices26, which are in communication with the processor unit14through the communications bus12. In these illustrative examples, the instructions are in a functional form on the persistent storage18. These instructions may be loaded into the memory16for execution by the processor unit14. The processes of the different embodiments may be performed by the processor unit14using computer implemented instructions, which may be located in a memory, such as the memory16. These instructions are referred to as program code38(described below) computer usable program code, or computer readable program code that may be read and executed by a processor in the processor unit14. The program code in the different embodiments may be embodied on different physical or tangible computer readable media, such as the memory16or the persistent storage18. The DPS10may further comprise an interface for a network29. The interface may include hardware, drivers, software, and the like to allow communications over wired and wireless networks29and may implement any number of communication protocols, including those, for example, at various levels of the Open Systems Interconnection (OSI) seven layer model. FIG.1Afurther illustrates a computer program product30that may contain the program code38. The program code38may be located in a functional form on the computer readable media32that is selectively removable and may be loaded onto or transferred to the DPS10for execution by the processor unit14. The program code38and computer readable media32may form a computer program product30in these examples. In one example, the computer readable media32may be computer readable storage media34or computer readable signal media36. Computer readable storage media34may include, for example, an optical or magnetic disk that is inserted or placed into a drive or other device that is part of the persistent storage18for transfer onto a storage device, such as a hard drive, that is part of the persistent storage18. The computer readable storage media34also may take the form of a persistent storage, such as a hard drive, a thumb drive, or a flash memory, that is connected to the DPS10. In some instances, the computer readable storage media34may not be removable from the DPS10. Alternatively, the program code38may be transferred to the DPS10using the computer readable signal media36. The computer readable signal media36may be, for example, a propagated data signal containing the program code38. For example, the computer readable signal media36may be an electromagnetic signal, an optical signal, and/or any other suitable type of signal. These signals may be transmitted over communications links, such as wireless communications links, optical fiber cable, coaxial cable, a wire, and/or any other suitable type of communications link. In other words, the communications link and/or the connection may be physical or wireless in the illustrative examples. In some illustrative embodiments, the program code38may be downloaded over a network to the persistent storage18from another device or DPS through the computer readable signal media36for use within the DPS10. 
For instance, program code stored in a computer readable storage medium in a server DPS may be downloaded over a network from the server to the DPS10. The DPS providing the program code38may be a server computer, a client computer, or some other device capable of storing and transmitting the program code38. The different components illustrated for the DPS10are not meant to provide architectural limitations to the manner in which different embodiments may be implemented. The different illustrative embodiments may be implemented in a DPS including components in addition to or in place of those illustrated for the DPS10. Cloud Computing in General It is to be understood that although this disclosure includes a detailed description on cloud computing, implementation of the teachings recited herein are not limited to a cloud computing environment. Rather, embodiments of the present invention are capable of being implemented in conjunction with any other type of computing environment now known or later developed. Cloud computing is a model of service delivery for enabling convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, network bandwidth, servers, processing, memory, storage, applications, virtual machines, and services) that can be rapidly provisioned and released with minimal management effort or interaction with a provider of the service. This cloud model may include at least five characteristics, at least three service models, and at least four deployment models. Characteristics are as Follows On-demand self-service: a cloud consumer can unilaterally provision computing capabilities, such as server time and network storage, as needed automatically without requiring human interaction with the service's provider. Broad network access: capabilities are available over a network and accessed through standard mechanisms that promote use by heterogeneous thin or thick client platforms (e.g., mobile phones, laptops, and PDAs). Resource pooling: the provider's computing resources are pooled to serve multiple consumers using a multi-tenant model, with different physical and virtual resources dynamically assigned and reassigned according to demand. There is a sense of location independence in that the consumer generally has no control or knowledge over the exact location of the provided resources but may be able to specify location at a higher level of abstraction (e.g., country, state, or datacenter). Rapid elasticity: capabilities can be rapidly and elastically provisioned, in some cases automatically, to quickly scale out and rapidly released to quickly scale in. To the consumer, the capabilities available for provisioning often appear to be unlimited and can be purchased in any quantity at any time. Measured service: cloud systems automatically control and optimize resource use by leveraging a metering capability at some level of abstraction appropriate to the type of service (e.g., storage, processing, bandwidth, and active user accounts). Resource usage can be monitored, controlled, and reported, providing transparency for both the provider and consumer of the utilized service. Service Models are as Follows Software as a Service (SaaS): the capability provided to the consumer is to use the provider's applications running on a cloud infrastructure. The applications are accessible from various client devices through a thin client interface such as a web browser (e.g., web-based e-mail). 
The consumer does not manage or control the underlying cloud infrastructure including network, servers, operating systems, storage, or even individual application capabilities, with the possible exception of limited user-specific application configuration settings. Platform as a Service (PaaS): the capability provided to the consumer is to deploy onto the cloud infrastructure consumer-created or acquired applications created using programming languages and tools supported by the provider. The consumer does not manage or control the underlying cloud infrastructure including networks, servers, operating systems, or storage, but has control over the deployed applications and possibly application hosting environment configurations. Infrastructure as a Service (IaaS): the capability provided to the consumer is to provision processing, storage, networks, and other fundamental computing resources where the consumer is able to deploy and run arbitrary software, which can include operating systems and applications. The consumer does not manage or control the underlying cloud infrastructure but has control over operating systems, storage, deployed applications, and possibly limited control of select networking components (e.g., host firewalls). Deployment Models are as Follows Private cloud: the cloud infrastructure is operated solely for an organization. It may be managed by the organization or a third party and may exist on-premises or off-premises. Community cloud: the cloud infrastructure is shared by several organizations and supports a specific community that has shared concerns (e.g., mission, security requirements, policy, and compliance considerations). It may be managed by the organizations or a third party and may exist on-premises or off-premises. Public cloud: the cloud infrastructure is made available to the general public or a large industry group and is owned by an organization selling cloud services. Hybrid cloud: the cloud infrastructure is a composition of two or more clouds (private, community, or public) that remain unique entities but are bound together by standardized or proprietary technology that enables data and application portability (e.g., cloud bursting for load-balancing between clouds). A cloud computing environment is service oriented with a focus on statelessness, low coupling, modularity, and semantic interoperability. At the heart of cloud computing is an infrastructure that includes a network of interconnected nodes. Referring now toFIG.1B, illustrative cloud computing environment52is depicted. As shown, cloud computing environment52includes one or more cloud computing nodes50with which local computing devices used by cloud consumers, such as, for example, personal digital assistant (PDA) or cellular telephone54A, desktop computer54B, laptop computer54C, and/or automobile computer system54N may communicate. Nodes50may communicate with one another. They may be grouped (not shown) physically or virtually, in one or more networks, such as Private, Community, Public, or Hybrid clouds as described hereinabove, or a combination thereof. This allows cloud computing environment52to offer infrastructure, platforms and/or software as services for which a cloud consumer does not need to maintain resources on a local computing device. 
It is understood that the types of computing devices54A-N shown inFIG.1Bare intended to be illustrative only and that computing nodes50and cloud computing environment52can communicate with any type of computerized device over any type of network and/or network addressable connection (e.g., using a web browser). Referring now toFIG.1C, a set of functional abstraction layers provided by cloud computing environment52(FIG.1B) is shown. It should be understood in advance that the components, layers, and functions shown inFIG.1Care intended to be illustrative only and embodiments of the invention are not limited thereto. As depicted, the following layers and corresponding functions are provided: Hardware and software layer60includes hardware and software components. Examples of hardware components include: mainframes61; RISC (Reduced Instruction Set Computer) architecture based servers62; servers63; blade servers64; storage devices65; and networks and networking components66. In some embodiments, software components include network application server software67and database software68. Virtualization layer70provides an abstraction layer from which the following examples of virtual entities may be provided: virtual servers71; virtual storage72; virtual networks73, including virtual private networks; virtual applications and operating systems74; and virtual clients75. In one example, management layer80may provide the functions described below. Resource provisioning81provides dynamic procurement of computing resources and other resources that are utilized to perform tasks within the cloud computing environment. Metering and Pricing82provide cost tracking as resources are utilized within the cloud computing environment, and billing or invoicing for consumption of these resources. In one example, these resources may include application software licenses. Security provides identity verification for cloud consumers and tasks, as well as protection for data and other resources. User portal83provides access to the cloud computing environment for consumers and system administrators. Service level management84provides cloud computing resource allocation and management such that required service levels are met. Service Level Agreement (SLA) planning and fulfillment85provide pre-arrangement for, and procurement of, cloud computing resources for which a future requirement is anticipated in accordance with an SLA. Workloads layer90provides examples of functionality for which the cloud computing environment may be utilized. Examples of workloads and functions which may be provided from this layer include: mapping and navigation91; software development and lifecycle management92; virtual classroom education delivery93; data analytics processing94; transaction processing95; and application processing elements96. Any of the nodes50in the computing environment52as well as the computing devices54A-N may be a DPS10. Computer Readable Media The present invention may be a system, a method, and/or a computer readable media at any possible technical detail level of integration. The computer program product may include a computer readable storage medium (or media) having computer readable program instructions thereon for causing a processor to carry out aspects of the present invention. The computer readable storage medium can be a tangible device that can retain and store instructions for use by an instruction execution device. 
The computer readable storage medium may be, for example, but is not limited to, an electronic storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the foregoing. A non-exhaustive list of more specific examples of the computer readable storage medium includes the following: a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or Flash memory), a static random access memory (SRAM), a portable compact disc read-only memory (CD-ROM), a digital versatile disk (DVD), a memory stick, a floppy disk, a mechanically encoded device such as punch-cards or raised structures in a groove having instructions recorded thereon, and any suitable combination of the foregoing. A computer readable storage medium, as used herein, is not to be construed as being transitory signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through a waveguide or other transmission media (e.g., light pulses passing through a fiber-optic cable), or electrical signals transmitted through a wire. Computer readable program instructions described herein can be downloaded to respective computing/processing devices from a computer readable storage medium or to an external computer or external storage device via a network, for example, the Internet, a local area network, a wide area network and/or a wireless network. The network may comprise copper transmission cables, optical transmission fibers, wireless transmission, routers, firewalls, switches, gateway computers and/or edge servers. A network adapter card or network interface in each computing/processing device receives computer readable program instructions from the network and forwards the computer readable program instructions for storage in a computer readable storage medium within the respective computing/processing device. Computer readable program instructions for carrying out operations of the present invention may be assembler instructions, instruction-set-architecture (ISA) instructions, machine instructions, machine dependent instructions, microcode, firmware instructions, state-setting data, configuration data for integrated circuitry, or either source code or object code written in any combination of one or more programming languages, including an object oriented programming language such as Smalltalk, C++, or the like, and procedural programming languages, such as the “C” programming language or similar programming languages. The computer readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server. In the latter scenario, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider). 
In some embodiments, electronic circuitry including, for example, programmable logic circuitry, field-programmable gate arrays (FPGA), or programmable logic arrays (PLA) may execute the computer readable program instructions by utilizing state information of the computer readable program instructions to personalize the electronic circuitry, in order to perform aspects of the present invention. Aspects of the present invention are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer readable program instructions. These computer readable program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. These computer readable program instructions may also be stored in a computer readable storage medium that can direct a computer, a programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer readable storage medium having instructions stored therein comprises an article of manufacture including instructions which implement aspects of the function/act specified in the flowchart and/or block diagram block or blocks. The computer readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other device to cause a series of operational steps to be performed on the computer, other programmable apparatus or other device to produce a computer implemented process, such that the instructions which execute on the computer, other programmable apparatus, or other device implement the functions/acts specified in the flowchart and/or block diagram block or blocks. The flowchart and block diagrams in the Figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods, and computer program products according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the blocks may occur out of the order noted in the Figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems that perform the specified functions or acts or carry out combinations of special purpose hardware and computer instructions. 
The descriptions of the various embodiments of the present invention are presented for purposes of illustration, but are not intended to be exhaustive or limited to the embodiments disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described embodiments. The terminology used herein has been chosen to best explain the principles of the embodiments, the practical application or technical improvement over technologies found in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein.
Container and Resource Access Restriction
The following application-specific acronyms may be used below:
TABLE 2 - Application-Specific Acronyms
ACEE: Accessor Environment Element
APF: authorized program facility
AT-TLS: Application Transparent Transport Layer Security
CCTDOMA: clients that connect over TCP/IP for definition, operations, and monitoring architecture
CDC®: Change Data Capture
CICS: customer information control system
CLI: Call-level Interface
DBMS: database management system
DSN: data set name
ESDS: entry-sequenced data set
ESM: external security mechanism
IMS: information management system
IP: Internet Protocol
IPSec: IP security
JDBC: Java Database Connectivity
JES: z/OS job entry subsystem
KSDS: key-sequenced data set
LDAP: Lightweight Directory Access Protocol
LDS: linear data set
LPAR: logical partition
ODBC: (Microsoft) Open Database Connectivity
RACF®: resource access control facility
RHEL®: Red Hat Enterprise Linux®
RRDS: relative record data set
SAF: security authorization facility
SQL: structured query language
STC: started task control
TCP: Transmission Control Protocol
TLS: Transport Layer Security
VSAM®: Virtual Storage Access Method®
z/OS®: z-Architecture Operating System®
Described herein is a system and related method for a security administrator to restrict access to specific container deployments hosted at established or well-known IP addresses and to restrict the accessible resources through resource profiles permitted to a well-known user, thereby allowing well-known user access control while avoiding the need for the well-known user to have or provide security credentials. In the system disclosed herein, security validation is performed on the host system (data resident operating system) where the data being replicated (host data or source data in a datastore) resides. The host data is accessed and sent to a container by a job or a started task that is running on that host system. This job or started task is also responsible for reading the contents of DBMS-created logs that report changes to the host data. These changes are also sent to the container. The job (or started task) on the system where the source data (datastore) resides has a user ID associated with it (this user ID being that of the well-known user, which may be obtained using, e.g., operating system-provided functions). A security administrator grants rights to access the host data to be read, DBMS-created logs, and other local datasets, in order for the system to function properly. The well-known user ID may be obtained by the remote log reader (the software running in the job/started task) and the remote log reader may construct compound names consisting of the name of the remote log reader job/started task, the IP address of the container, and additional application-provided names.
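As a purely illustrative sketch (not any vendor's API), constructing the compound name described above from the job/started task name, the container's IP address, and the additional application-provided names could be as simple as the following; the example values shown in the comment are hypothetical.

# Hypothetical construction of the compound resource name checked against
# security resource profiles: <jobname>.<IP>[.<subscription>[.<object>]].
def compound_name(jobname, container_ip, subscription=None, object_name=None):
    parts = [jobname, container_ip]
    if subscription is not None:
        parts.append(subscription)
    if object_name is not None:
        parts.append(object_name)
    return ".".join(parts)

# e.g., compound_name("CDCRLOGR", "10.1.2.3", "LOANSUB", "PROD.LOANS.KSDS")
# -> "CDCRLOGR.10.1.2.3.LOANSUB.PROD.LOANS.KSDS"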
The remote log reader accesses the host security system to create an access token based on the well-known user ID, which, in the case of an SAF interface, may be created without providing a password, passphrase, etc. Using the access token (e.g., an ACEE in the case of SAF), the remote log reader may issue security calls against a configured resource profile name, providing the compound name to determine if a rule exists that allows access to the resource name for that profile. In some embodiments, CONTROL access might be sought, but this is implementation dependent. Significantly, if a resource profile exists and the well-known user ID has the proper authority, then processing is allowed to continue, otherwise processing is not allowed to continue. A first level of validation may occur by checking that the address space is running under the expected name. This keeps someone from trying to use a replacement server at the port. Profiles may start with the jobname to separate environments and ensure the address space making the SAF request is using the rules defined for it. The second level of validation occurs when a container connects to the remote log reader, which allows for a basic check to determine if this is a well-known container deployment. The IP address is obtained from the TCP/IP stack by the remote log reader, making the IP address difficult to spoof. The second level of validation, in essence, indicates that a host security administrator allows a container hosted at a specific remote IP address to connect to the remote log reader. Although spoofing IPs is common for attacks on a network, it requires someone to take exceptional steps to modify IP packets. Embodiments disclosed herein prevent a more typical scenario where someone is trying to setup a clone system and receive data they are not supposed to see. Most customer networks are on the lookout for suspicious traffic that might be spoofing, and are using firewalls and other protections to keep unknown users outside the network. An assumption here is that anyone trying to spoof the IP address has already gained access to a restricted network. The third level of validation is subscription verification. In reality, this may just be a name, and the operating system service is likely to support multiple service requests. The third level validation may be thought of as determining “is the container running at this IP address and requesting this service allowed?” In the discussion below, there may be more context because a subscription has an end destination. The subscription may be referenced by a subscription identifier. The fourth (final) level of validation may use an object name for an (any) object. By way of example, the object may be a file or a database. For a data virtualization “service” it may make sense to create various types of profiles. In one embodiment disclosed herein, a replication scenario is provided in which a third part of a constructed name described below represents a subscription. In this embodiment, a subscription comprises one or more replication objects that relate a source object to a target object that is located at a remote destination or target system. In this embodiment, the source object may be a file, such as a VSAM file, and the target object could be, for example, a relational table, a Hadoop file, Kafka queue, or determined by an application call at the target to handle or dispose of the change. This embodiment may also apply to reading the contents of the source object. 
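The layering of the four validation levels described above could be sketched, again for illustration only, as follows; the permits callback stands in for whatever resource profile check the external security manager actually performs under the well-known user ID, and every identifier shown is an assumption.

# Hypothetical layering of the validation levels: job name, container IP address,
# subscription (indirect target destination), and specific source object.
def validate_request(permits, jobname, container_ip, subscription=None, object_name=None):
    name = jobname + "." + container_ip
    if not permits(name):              # levels 1 and 2: this job, connected to by this container IP
        return False
    if subscription is not None:
        name += "." + subscription
        if not permits(name):          # level 3: this subscription allowed from that IP
            return False
        if object_name is not None:
            name += "." + object_name
            if not permits(name):      # level 4: this source object readable/capturable for that subscription
                return False
    return True

For example, validate_request(lambda n: n in allowed_names, "CDCRLOGR", "10.1.2.3", "LOANSUB") would succeed only if both the two-part and three-part names appear in a hypothetical allowed_names collection.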
The third level of validation indicates that a host administrator allows the contents of each source object referenced by the subscription to be sent to the target IP address. This may include reading the contents of the source objects and capturing their changes. The four levels of validation may correspond to a constructed name in the form <jobname>.<IP>.<subscription>.<object>. Additionally, it may be possible to construct a profile name using wildcards, in the form of, e.g., <jobname>.<IP>.** (or ending with ".*.*", ".*", or any other form representing a wildcard). This wildcard example would be interpreted as "this IP can have anything that the job can access". Thus, the constructed name may identify any access entity that is permitted access. The security administrator may set up security resource profiles234that include wildcards for matching purposes. The application (the remote log reader220in this embodiment) may construct the compound names and the ESM may perform the wildcard evaluation and determine if there is a profile entry that matches the compound name provided. A fourth part of the constructed name includes the source object name. In one embodiment, this fourth part is a VSAM data set name. This allows a host security administrator to allow/control reading/capturing changes for a specific file from a container hosted at a specific IP address targeting a specific destination (via the subscription name). In theory, these third and fourth parts of the constructed names could represent anything. In various embodiments, the container application provides these names which are "well known" to the host security administrator who has set up the required resource profiles to allow the application to function. This may include generic or masked profiles mentioned above in the wildcard form <jobname>.<IP>.**. If an unknown IP address or unknown name is provided, then whatever function the application is trying to perform is not allowed. Conceptually, the container connects to a "service" that is running on an operating system, such as z/OS. A generic validation check that may be performed is whether the container running at some IP address is allowed to use any function supported by the operating system service. IP port security may be used, but it is a much more complex process. Various embodiments disclosed herein may make using these generic resource profiles easier for administrators to set up and maintain. Various embodiments described herein focus on functional security versus simple authentication of credentials or identification of an individual for a potentially compromised account. In these embodiments, the user ID associated with the operating system remote log reader address space may be used for security profile checking purposes. No authentication is required (user-credentialless authentication access), and this user ID may be used to determine if a security administrator has created a rule (security resource profile) that has authority for that user ID for a compound resource name constructed from the operating system remote log reader job/started task name and the IP address of the container for basic access authorization purposes.
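Generic (wildcarded) profiles could be matched against a constructed name along the following illustrative lines; fnmatch is used here only as a stand-in for the pattern evaluation that the external security manager itself would perform, and the profile entries shown are invented.

import fnmatch

# Hypothetical profile table: profile name pattern -> access level granted to the
# well-known user ID. "**" is treated the same as "*" for this illustration.
PROFILES = {
    "CDCRLOGR.10.1.2.3.**": "CONTROL",      # this IP can have anything the job can access
    "CDCRLOGR.10.1.2.7.LOANSUB.*": "READ",  # one subscription only, from one specific IP
}

def matching_access(constructed_name):
    for pattern, access in PROFILES.items():
        if fnmatch.fnmatchcase(constructed_name, pattern.replace("**", "*")):
            return access
    return None   # no profile entry matches; the requested function is not allowed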
Additional authorization checking may also be performed by appending the subscription source system ID (such as a unique one- to eight-character identifier) that is used as an indirect method of controlling/authorizing the right to capture and replicate changes to a set of related objects (VSAM files in this example) to a target location. At this point, it would be very difficult for a hacker to penetrate the network, as they would need to: 1) be inside the network; 2) be altering packets to spoof the correct IP address of a permitted container; and 3) know the source system ID permitted from the correct IP address. This latter part requires either intimate knowledge of the replication environment or decoding good traffic that should be encrypted through TLS. Each step of the name implemented raises the bar for an attacker and also narrows the impact if an attacker is able to breach these systems. By way of example, on a z/OS platform, security calls may be APF authorized so that this address space may be granted a higher privilege so that it could create ACEEs and do resource profile checking. In sum, the user credentials per se are not being validated—rather the system is asked if the user is permitted to access a resource. A subscription establishes a connection between the source (container) and a target server that provides the indirection mechanism. The job or started task of the service running on the data resident operating system has a valid user ID associated with it—the valid user ID is needed for the job or started task to be running. A normal user cannot simply start a job with the same name on the OS at some other port and bypass the checks. Authorization was given based on the STC/JOB user having access to the resource profiles created with the jobname. Sites also typically reserve ports for specific job names. In an embodiment, one possibility may be to use an SAF interface option to create a security environment (for example, an ACEE) by just providing the user ID without a password. This security environment/ACEE may subsequently be used for resource profile checking, even though the security environment was created without authentication. Although described for an example implementation, there is no real requirement for control authority. When a security administrator sets up a rule that associates a user ID with a resource profile, the security administrator may identify the access level. Of significance, the remote log reader's user ID may be associated with the resource profile name(s) constructed by the remote log reader service and has the correct access level (or control) for the resource name. The resource name discussed herein may comprise at least two parts: the name of the address space where the remote log reader is running and the IP address of the container that is connecting to the remote log reader. However, additional (more granular) authorization checks may be performed by appending the subscription (short) name after the IP address. In such embodiments this may be used to “authorize” replication of the data objects to a specific target IP address that is indirectly identified by the subscription name. It thus may be possible to authorize the refresh and capture of changes to a specific destination for specific data objects themselves by appending their names after the subscription name. 
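A conceptual sketch, deliberately not the SAF or RACF API, of building a security context from the well-known user ID without a password and then asking whether it holds a required access level for a constructed resource name might read as follows; the access-level ordering, the rules layout, and all names are assumptions made only for illustration.

# Hypothetical stand-ins for the ACEE-style security context and the resource
# profile rules set up by the security administrator.
ACCESS_ORDER = ["NONE", "READ", "UPDATE", "CONTROL", "ALTER"]

class SecurityContext:
    def __init__(self, user_id):
        self.user_id = user_id          # built from the user ID alone, with no password or passphrase

def has_access(rules, context, resource_name, required="CONTROL"):
    # rules: (user ID, resource profile name) -> access level granted by the administrator
    granted = rules.get((context.user_id, resource_name), "NONE")
    return ACCESS_ORDER.index(granted) >= ACCESS_ORDER.index(required)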
This approach of appending names after the IP address can be generalized to allow the process running on the host or data resident OS (such as on the remote log reader in this embodiment) to perform various functions on behalf of the process that is running at a specific IP address. Various embodiments discussed herein may be operating system-based with the generalized security interface being provided that may be supported by all security vendors that provide operating system support. Various embodiments disclosed herein also include a method to allow an organization to control where changes get replicated without having to provide explicit user identification—these embodiments may utilize an existing security mechanism. Various embodiments further include a method for providing functional security to allow access to the operating system remote log reader only from specific IP addresses. These embodiments further provide an indirect mechanism to restrict replication of changes from specific objects to a known target location (using a subscription model). Various embodiments are security related and provide a method to determine whether a remote IP address can capture changes made to a dataset, e.g., a VSAM dataset (or access its contents for refresh purposes), and whether the contents/changes to this dataset can be “sent” to a target location (using the subscription model). The functional security mechanism may be used to restrict access to a system to specific IP addresses and control the distribution of changes (and data via refresh) to specific target IP addresses using the subscription model. FIG.2is a block diagram that illustrates, according to some embodiments, an implementation environment200having various components of a system in which various embodiments of a system and method may operate. These embodiments are not intended, however, to limit the invention in any way. The implementation environment200may be, for example, a cloud computing environment52, and comprise nodes50that each may be, e.g., a DPS10. The various components may be implemented as, e.g., application processing elements96. The components may comprise a (native) data resident operating system210(also referred to as a (data) source system), administrative and capture components240, and a target system270, where the target system270may serve as a mirror/replicator for the source data215. In a general sense, access by a remote system (i.e., a system remote from the data resident OS210, such as the administrative and capture components system240and the target system270), may be provided based on a non-user-specific security credentials that are provided in a security resource profile234. The implementation environment200may also comprise administrative tools, including management consoles280A,280B (referred to collectively as280), for implementing the clients that connect over TCP/IP for definition, operations, and monitoring architecture (CCTDOMA) described herein. One possible implementation of CCTDOMA may be the Microsoft Windows® Classic Data Architect (CDA) Change Data Capture (CDC). The management consoles280may be used to define and manage subscriptions, and may communicate with the container260(e.g., to map non-relational data and to update configuration parameters in the data) as well as the target272. 
The management consoles280may also communicate with the access server282, which stores source/target user credentials (user IDs and passwords) that can be used by the management consoles280to establish connections with the source/target servers. The data resident operating system210may comprise source data215that may be used in, e.g., a VSAM® sphere data services component that stores the data that is read by the file access services232. When applications update the VSAM® sphere data215, the changes are logged in the replication log222, which has already been read by the file access services232. The data resident operating system210may further comprise a remote log reader component220that interfaces with, e.g., a replication log222. File access services232may be a part of the remote log reader component220. Mirroring/replication is the process of continuous replication of changed data from the source data215to the target system270. The administrative and capture components240may serve to offload certain administrative and capture functionality from the data resident operating system210in order to, e.g., reduce processing load on the data resident operating system210. The administrative and capture components240may reside on their own computer, such as a DPS10, or may be a part of another computer or device. Functionally, the administrative and capture components240operate on a peer level similar to the target system270. The administrative and capture components240may comprise hardware and software infrastructure242, a host OS244(for example, Linux®, Microsoft Windows®, or any other operating system or platform), one or more containers260that operate, e.g., within a Docker® or Podman environment246. Other container-based application(s)252A,252B may also run at this location, and/or may run on the data resident OS210. The container260may comprise a capture component262as well as client services264. The file access services232may run on the operating system210, based on connections received from a capture component262running in the container260, such as a VSAM remote source for z/OS®. The container260may serve to capture changes to nonrelational mainframe data, and deliver these changes to, e.g., relational databases, producing an accurate relational replica of mainframe data on supported target databases in near real-time (from the perspective of an application creating or using the data). In this embodiment, the capture component262may use a control connection236A to communicate with the data services file access data component232to request access to specific files for refresh or capture purposes using a subscription model. The target system270has an apply engine272that applies the data replications to a target data replica274for a variety of databases, and may be connected to the capture component262with a target connection236B. The source server replication components may consist of the container260with its capture component262, and the OS started task230with its remote log reader component220, while the target system replication component is the apply engine272. The remote log reader component220architecture has the capture component262moved to run on a different operating system (host OS244running on the administrative and capture components system240) while the remote log reader component220remains on the data resident operating system210where the databases (e.g., source data215) and logs (e.g., replication log222) exist.
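For illustration, a subscription as described above (one or more replication objects relating a source object to a target object at a remote destination) could be modeled roughly as follows; every field name and example value shown is hypothetical and does not reflect any actual product configuration format.

from dataclasses import dataclass, field
from typing import List

@dataclass
class ReplicationObject:
    source_object: str       # e.g., a VSAM data set name on the data resident OS
    target_object: str       # e.g., a relational table, Hadoop file, or Kafka queue

@dataclass
class Subscription:
    subscription_id: str     # short (one- to eight-character) source system ID
    target_host: str         # indirectly identifies where captured changes may be sent
    objects: List[ReplicationObject] = field(default_factory=list)

loan_subscription = Subscription(
    subscription_id="LOANSUB",
    target_host="10.9.8.7",
    objects=[ReplicationObject("PROD.LOANS.KSDS", "LOANS_REPLICA")],
)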
In more detail, in the mirroring/replication process, when changes are made to data by an application (which are generally made by another application running in another address space running on the data resident operating system210(or some other related system if there are multiple operating systems working together)), the changes may be logged into the replication log222, and those changes may later be read by the remote log reader component220and pushed to the capture component262where data transformations may be done, and then sent to a target apply engine272to keep the target data replica274in sync with the source data215. A refresh is a process that synchronizes the target table on the target data replica274with the current contents of the source table in the source data215. Data may be pulled from a data set with minimal processing and sent to the capture component262, which may perform all columnar work, transformations, and data type changes. The data may then be replicated in a replication target apply engine272. Operating system-based replication products that include certain elements, such as the remote log reader220, a capture component262, and an apply engine component272(or those that use a remote log reader approach), may divide source engine (or server) components into a distinct data access or services component that runs on the operating system with a capture component262that resides elsewhere, e.g., on the administrative and capture components system240. The capture component(s)262may include temporary files and transaction queues staged on the administrative and capture components system240, which are also staged on the target system270(for the record being applied). One responsibility here is to cache the changes to the source databases using transaction (unit-of-recovery) semantics until a commit or rollback record is received. If the transaction is rolled back, the cached changes are discarded. If the transaction is committed, then the changes can be sent to the apply engine272. Committed changes are sent in commit sequence. One feature and advantage of these embodiments is that they do not need to convert the data from its native format into an SQL representation and then convert it to the "wire" format that the target engine is expecting. Such conversion is performed on a per-column basis and becomes very expensive when there is a large number of columns. Because of this, the remote capture architecture is advantageous. When the replication product captures non-relational data, the remote log reader component220may also support the ability to read the source database or file on the source system240since these objects may be difficult to access from a container260environment. Relational products may use a client Java Database Connectivity (JDBC) or Open Database Connectivity Call-level Interface (ODBC/CLI) driver which can directly interface with the database management system (DBMS) for refresh purposes. In relational products, one area of potential focus is on the task of defining the replication environment. In various embodiments discussed herein, the container is presented as a mechanism to isolate the data scientist from the perceived complexities of dealing with a mainframe data source. The definition of security profiles to control access to the operating system data and replication logs must be robust, but also simple enough to not slow access down or appear as a roadblock or bottleneck when defining the replication environment.
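The unit-of-recovery caching described above can be illustrated with a minimal Python sketch. The class, field names, and the callback toward the apply engine are assumptions made solely for this illustration; an actual capture component may stage changes in temporary files or queues rather than in memory.

```python
# Illustrative sketch of unit-of-recovery caching in a capture component:
# changes are staged per transaction and released only when a commit is seen.
from collections import defaultdict

class TransactionCache:
    def __init__(self, send_to_apply):
        self.pending = defaultdict(list)    # unit-of-recovery id -> staged changes
        self.send_to_apply = send_to_apply  # callback toward the apply engine

    def stage(self, ur_id, change):
        self.pending[ur_id].append(change)

    def rollback(self, ur_id):
        # Discard cached changes for a rolled-back transaction.
        self.pending.pop(ur_id, None)

    def commit(self, ur_id):
        # Send committed changes onward in commit sequence, then drop the cache.
        for change in self.pending.pop(ur_id, []):
            self.send_to_apply(change)

if __name__ == "__main__":
    sent = []
    cache = TransactionCache(sent.append)
    cache.stage("UR1", {"op": "INSERT", "key": "0001"})
    cache.stage("UR2", {"op": "UPDATE", "key": "0002"})
    cache.rollback("UR2")   # UR2's change is discarded
    cache.commit("UR1")     # only UR1's change reaches the apply engine
    print(sent)
```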
These profile-definition steps are preferably as invisible (transparent) and seamless as possible. The administration of one of these relational products that uses a remote log reader component is focused on the container260. Individuals working with the replication product (e.g., a data scientist) are typically associated with the line of business and related applications. Normally, these individuals have little or no knowledge about the operating system and typically do not have access rights to the operating system or data that needs to be replicated. The replication solution may typically collect authentication information (e.g., user ID and password) that an administrative user can use to connect to the source (capture) component262and target (apply) component272, and may also allow the specification of secondary authentication information that can be used to connect with the source database system215or target database system274. For the purposes of this disclosure, the database system215is accessed by the remote log reader and secondary authorization information is used to verify that the replication product has the rights to access the database (or file) objects that are referenced by the subscriptions (sets of application related data or grouping object (grouping of objects for data)) that have been defined. If the remote log reader220has access to the secondary authorization information, the remote log reader220could either create an ACEE using the secondary authorization information, or use an existing ACEE, to access the data source (215) or replication log (222) using the secondary authorization context or continue to use the default context. However, the remote log reader220issues security checks similar to those that are done for the source data215and/or the replication log222to see if the secondary authentication ID is authorized to access those objects. The secondary authentication information combines the well-known IP address with a known subscription name. If this compound name matches security resource profiles234defined by the site security personnel, then replication is allowed. The "secondary" nature of this information is that the user does not need explicit rules for access to the data; the fact that the user can access the IP address and subscription is adequate, because the system has been configured to accept this combination of elements as properly allowing access to this source data. In order to restrict a particular user from accessing the data, the user should be prevented from accessing the IP address. The IP address of the container260may be obtained by the remote log reader220using, for example, native APIs, for the container260that is trying to connect to the process (capture component262) that, in this case, will be accessing the source data215or the replication log streams containing the changes to the source data215. The presently disclosed system may be distinguished from a non-containerized system where all components run on the data resident OS210. With regard to secondary authentication information, due to limitations in the non-containerized implementation and concerns about the security requirements of the people that are defining and administering the subscriptions used, a better approach has been developed that is described herein. Typically, when a management console280is launched, a user ID and password must be provided. These credentials are validated against, e.g., a local Windows account or an LDAP server.
Normally, in the management console280, one may set up access to each source215and target engine/server and provide a user ID and password that may be managed by the access server282. When a subscription is defined, depending on the target, additional user ID and password information may be provided that is necessary for the target engine to connect to and perform operations against the target database. In some versions of the non-containerized implementation, the user ID and password managed by the access server may be validated using SAF calls via a security exit (provided as part of the product). However, in the container, there is no security exit, so any and all user IDs and passwords provided for a source running in the container are treated as valid. The non-containerized implementation does support traditional SQL security, so there is some control. However, security issues may remain for privileged accounts. In a container deployment, the configuration managers setting up the system (mapping and defining subscriptions) are likely to be going to a cloud target and typically have little or no knowledge of the source data and have no idea how to interact with the host (e.g., z/OS) system. Assuming the non-containerized system was able to validate the configuration manager's credentials, these configuration managers may not have a z/OS ID, and a z/OS security administrator will likely not want to grant the setup people access to the system and source databases/files. Stated differently, the Classic system described above has both security and operational issues. Unless individuals that are authorized to access the database objects also set up the replication environment, there is a need to share secrets for the secondary authorization checking to work. In the container environment, the data scientist or other data user may not have credentials for the source database on the source system. Thus, such a user may only work with a replica of the data at the target system and not be able to access the source database objects on the source system directly. This either leads to: a) sharing of secrets with someone who is not authorized, or b) an authorized individual "entering" the authorization information into the tool, which is a violation of standard security policies. Typically, the authorization information consists of a user ID and password. Also, good security practices call for the password to have a limited lifetime, after which a new password must be provided. When these passwords are stored by the replication product, replication periodically fails because the password has expired and a (new) valid one needs to be provided. Replication products also should not store this authorization information, since protecting and encrypting credentials may present areas for attack. Thus, the presently disclosed system may be distinguished from the Classic system since it provides the data resident OS210(e.g., z/OS) security administrator with a different option that is more functionally oriented—an option with which they can control where the data can be accessed from and (indirectly via the subscription) where the data can be sent and persisted as a target replica.
This is done using the "well known" ID associated with the remote log reader220, which can be obtained directly by the remote log reader220, meaning there is no need to store valid secrets for the necessary access capabilities to work (e.g., using an STC/JOB owner ID and the security resource profiles234created by the SAF administrator to protect the data). Replication products may use TCP/IP to communicate between the capture component (source)262and apply component (target)272when a remote log reader component220is used. TCP/IP administrators may use IP security to control the connection236B that can be established between the administrative and capture components system240and the target system270, as well as the connection236A that can be established between the administrative and capture components system240and the remote log reader220. The data flowing on these connections236may be encrypted using a tool, such as AT-TLS, that can be set up externally, or the replication product may provide automatic support for secure connections. Various embodiments disclosed herein may use a combination of generic operating system security authorization facility (SAF) capabilities and TCP/IP capabilities to restrict access, by a remote log reader component220that may be associated with the file access data services232of an OS started task230, to the capture component(s)262hosted at specific IP address(es). The TCP/IP connection236A that connects the capture component262with the remote log reader220may send refresh data for the source data215, or may provide insert, update, and delete operations for the files associated with a subscription from the replication log222to the capture component262. The changes read from the replication log222by the remote log reader220are sent over the TCP/IP connection236A to the capture component262of the container260where they are identified, transformed, and grouped as appropriate—the identification of log records and transformations and grouping constitutes the capture aspect. The remote log reader220may call a security resource routine that provides a security resource profile234including resource names. Various embodiments may also utilize the data resident operating system210to further restrict names of subscriptions that can be deployed on the client and operating system files that can be referenced by a subscription. Thus, an important provision is that the connection occurred from a known/expected location, and once that is accomplished, various approaches to access may be implemented at different levels of granularity. This level of control may be obtained without the need for the capture component262to provide an operating system user ID, password information, or an IP address where the container260and capture component262are hosted. In sum, the data is transferred from the source data215via the remote log reader220to the administrative and capture components system240, with the capture component262in the container260transferring the data to the target data replica274using the apply engine component272on the target system270.
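By way of a non-limiting illustration, the following Python sketch shows one possible way a listening component could accept a secured connection and record the peer IP address for later use in resource-name construction. The certificate paths and port number are placeholders, and in practice the encryption may instead be provided externally (e.g., by AT-TLS), so this sketch only indicates the shape of such a listener.

```python
# Minimal sketch of a TLS-protected listener that records the peer IP address
# of an incoming connection. Certificate paths and the port are placeholders;
# this function is not executed at import time and requires a real key pair to run.
import socket
import ssl

def serve_once(certfile="server.pem", keyfile="server.key", port=5006):
    context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    context.load_cert_chain(certfile=certfile, keyfile=keyfile)
    with socket.create_server(("", port)) as listener:
        with context.wrap_socket(listener, server_side=True) as secure_listener:
            conn, addr = secure_listener.accept()
            peer_ip = addr[0]   # later combined with the job name into a resource profile name
            print("connection accepted from", peer_ip)
            conn.close()
```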
Various embodiments provide that the data resident operating system210uses (or sets up) a security class (e.g., a default class may be SERVAUTH, which may be overridden) and defines a security resource profile234for some combination of:
a remote log reader address space name;
an IP address that can establish a connection with the remote log reader component220;
the name of a subscription that can connect from a specific authorized IP address; and
the name of each object (such as a file) that can be associated with a subscription at a given IP address.
These resource profiles234may be defined in the SAF external security manager (ESM). The SAF may provide a common interface, such as RACROUTE, that can be used to interact with the ESM. The SAF interface may be supported by IBM (RACF®) and other products, such as Top Secret® and ACF2®. A separate set of resource profiles234may be defined for each remote log reader component220deployed in a set of systems, such as Sysplex®, which is a set of systems (e.g., a set of data resident OSs210, such as z/OS logical partitions (LPAR)) communicating and cooperating with each other through multisystem hardware components and software services to, e.g., process customer workloads. The resource profiles234may be stored under the same security class or different ones. FIG.3is a flowchart that provides an example of a process300that may be performed for validation purposes, according to some embodiments. The operations performed in the process300are discussed below in the context of the components shown inFIG.2. In some embodiments, the process300may be performed, e.g., by the remote log reader component220. Although specific types of components and operations are described in the following discussion for the sake of simplicity, this discussion should be understood to relate to generic descriptions of such components and operations. In operation305, the remote log reader component220may use operating system APIs to obtain the name of the process/job (or started task) and the user ID (e.g., JES2 OWNER) associated with the address space of the process/job. For a z/OS system, e.g., the running workload is generally controlled by a job entry subsystem (which is another address space) and this subsystem tracks things that are running using a combination of job names. The remote log reader component220may also use TCP/IP services to obtain the IP address where the capture component262is running (i.e., the container, such as a Docker container245). In the example, an assembler macro IAZXJSAB306is used to obtain the name of the started task address space307that is used in constructing profile names and the user ID308that is used to determine whether access has been granted to a profile entry. Logically, next, the create user identity control block operation310creates a user identity control block, such as an SAF accessor environment element (ACEE). The user identity control block may reside, e.g., in the private storage of the OS started task230. Jobs, started tasks, or transactions on the operating system may have an associated identity. Such an identity may, for example, be a user ID that is a one-to-eight-character string. The ACEE is an example of a control block that represents the user's identity, and may contain the user ID and other related information used in establishing the identity of the user and the user's credentials.
ACEEs may be created by an external security manager, such as a resource access control facility (RACF®), that enables the protection of system resources by making access control decisions through resource managers. The security manager may run within another address space (similar to the OS started task230), and may also be responsible for managing and accessing the security resource profiles234. The ACEE may be created on request by resource managers, such as UNIX System Services, z/OS job entry subsystem (JES), customer information control system (CICS), information management system (IMS), etc. The control block may be created using, e.g., a z/OS SAF API such as RACROUTE REQUEST=VERIFY and INITACEE. For example, the ACEE may be created using the address space user ID without password validation when the initial connection236A is received (operation315below) from the capture component262. The initial connection reflects a connection between the capture component262and the remote log reader220. The ACEE may be used to validate whether an owner has access to a specific profile entry234. This may be performed, for example, using a RACROUTE verify request311. RACROUTE is a macro provided by SAF that is understood by RACF®, which is a part of the z/OS Security Server. z/OS is an operating system for IBM® z/Architecture mainframes. The profile entry may be created by a system administrator and stored as, e.g., a part of the security resource profile234. The resource profile name (specific profile entry)234is shown by way of example inFIG.3by the ENTITYX parameter327,357,377. In this example, password authorization checking is not requested (PASSCHK=NO)313. WhileFIG.3shows that the obtaining of the job/owner information operation305and the creation of the user identity control block (e.g., ACEE) operation310are done once, other implementations may choose a more "atomic" approach where this is done prior to each authorization check. Not shown inFIG.3is the file access data services component232performing other initialization functions, such as creating a TCP/IP listen session, which allows the file access data services component232to accept connections from the capture component262. In an accept connection operation315, when a connection is received by the file access data services232from, e.g., the capture component262, the control connection236A (TCP/IP connection) is accepted, and in operation320, the client240IP address324is obtained. A series of TCP/IP APIs may be used to determine the TCP/IP address of the client240in, e.g., decimal notation. This may be done, e.g., using an inet_ntoa(getpeername)322call. The various verify calls are issued to verify that the remote log reader component220user ID (identified in the user identity control block) has been granted control authority to the security class (verify connection/IP address325, verify subscription355, and verify file375) and the resource profile name provided. When a connection is received315, this represents the first opportunity that the operating system component210has to validate that the client is authorized, before any data has been received (operation325, by verifying that the IP address is an authorized one). However, it is also possible to defer validation until data has been received as part of an application handshake process. In operation325, the IP address of the client240is verified. The verification of the IP address may be performed by, e.g., calling an IP address verify routine326.
The IP address verify routine326may be performed using a macro, e.g., a RACROUTE REQUEST=AUTH326A, with a resource profile name327(e.g., ENTITYX) comprising the log reader job name327A (e.g., JOBNAME) and the capture component262IP address327B (e.g., IP_Address). Once the IP address has been obtained, the profile name is constructed and, for example, a RACROUTE call326is issued. In this example (1), the resource name (ENTITYX) is the JOBNAME327obtained from IAZXJSAB306, followed by a delimiter (here a period), and then the IP address324that was returned by the API call inet_ntoa324. In this example, the RACROUTE call326checks for CONTROL access329, however, other embodiments may choose to use other or more granular access control methods (e.g., READ access for refresh requests and CONTROL for replication). If the IP address verify routine326reports an error (330: FAIL) (or if the user identity control block, e.g, ACEE, cannot be created, as described above), then an “access denied” error may be returned (i.e., the error is reported in operation335) to the capture component262by the remote log reader component220(data access component) on the operating system security administration210, and the client240container's246connection236A is terminated. If this verify routine326is successful (330: SUCCESS), then access operations may continue. Assuming that access was granted to the capture component running at the Docker container, the capture component is allowed to continue communicating with the VSAM data services component and, at some point, will be sending a control message indicating that some operation needs to be done for a subscription. In a subscription operation350, a client subscription is verified. A security manager API call (e.g., a RACROUTE call, used herein solely for illustrative purposes) may be issued for subscription validation. The profile name has been updated to append the subscription name after the IP address, separated by a period. In this embodiment a subscription has a long name (up to sixty-four characters) and a short name (eight characters) that is unique at both the source and target, so in this embodiment, the short name (referred to as the source system ID) is used in constructing the resource name. This embodiment is illustrated using periods as separators between different name components, however, any valid character may be used. The verification of the client subscription may be performed by, e.g., calling a subscription verify routine355. The subscription verify routine355may be performed using a macro, e.g., a RACROUTE REQUEST=AUTH356A, with a resource profile name357(e.g., ENTITYX) comprising the log reader job name357A (e.g., JOBNAME), the capture component IP address357B (e.g., IP_Address), and a (short) unique subscription or grouping name357C (e.g., source system ID (SrcSysID)) associated with the subscription that may be used to indirectly control or authorize a right to capture and replicate changes to data. A grouping object may represent the subscription that is a group of data source (e.g., VSAM data sets). If the subscription verify routine356reports an error (360: FAIL), then an “access denied” error may be returned (i.e., the error is reported in operation335) to the capture component262by the remote log reader component220(data access component) on the operating system security administration210, and the client240container's246connection236A is terminated. 
If this subscription verify routine356is successful (360: SUCCESS), then access operations may continue. In a file operation370, a source data215access may be verified. In this embodiment, the third level of validation is the file (data set name) level so that when a control message is received requesting a refresh for a file or informing the VSAM data services components that changes need to be captured for a specific file, another RACROUTE call is issued to see whether that is allowed or not. Here, in this example, the DSN is appended to the profile name separated by another period. Once again, after the RACROUTE call is issued, the results should be inspected. If authority has been granted, normal processing can continue. If authority has not been granted, an implementation-defined handler may be provided to deal with this situation. The verification of the client file access may be performed by, e.g., calling a file verify routine376. The file verify routine376may be performed using a macro, e.g., a RACROUTE REQUEST=AUTH376A, with a resource profile name377(e.g., ENTITYX) comprising the log reader job name377A (e.g., JOBNAME), the capture component IP address377B (e.g., IP_Address), a (short) subscription name377C (e.g., source system ID) associated with the subscription, and a file identifier, for example a data set name (DSN), source data identifier, or base cluster name377D, appended to the end. If the file verify routine376reports an error (380: FAIL), then an “access denied” error may be returned for the associated subscription (i.e., the error is reported in operation335) to the capture component262by the remote log reader component220(data access component) on the operating system security administration210, and the client240container's260connection236may be terminated. If this file verify routine376is successful (380: SUCCESS), then access operations may continue. By way of example, the file verify may be performed in the context of a refresh, and the file type may be a VSAM file. The Virtual Storage Access Method (VSAM®) is an IBM storage access method. VSAM comprises four data set organizations: key-sequenced (KSDS), relative record (RRDS), entry-sequenced (ESDS) and linear (LDS). The KSDS, RRDS and ESDS organizations contain records, while the LDS organization (added later to VSAM) simply contains a sequence of pages with no intrinsic record structure, for use as a memory-mapped file. When a refresh is performed for a VSAM file or the log reader is informed that capture operations are required for a VSAM file, the file verify routine376may be performed. With this solution, there is no need to obtain, store, or transmit operating system authorization information from a container260environment. Adequate validation325,355,375may be done using the authorization ID (e.g., USERID that was returned from the IAZXJSAB READ request in operation306) associated with the remote log reader component220deployed on the operating system security administration210which reduces the risk of unintended exposure of this information. Requesting the creation of an ACEE310without password validation also reduces administrative exposure and simplifies system administration. Further, code within the remote log reader component220may be used to obtain the job name327A,357A,377A, authorization ID, and the client connection (the container's) IP address327B,357B,377B which reduces the administrative cost and risk. 
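The three levels of validation described above can be modeled conceptually with a short Python sketch. RACROUTE itself is an assembler macro interface, so the sketch substitutes a hypothetical check_access callback that is assumed to return True only when the log reader's user ID has the required access to the named resource profile in the chosen security class (e.g., SERVAUTH); the job name, IP address, subscription ID, and data set name shown are purely illustrative.

```python
# Conceptual model of the checks in operations 325, 355, and 375 of FIG. 3.
# check_access stands in for an external security manager authorization call.
def validate_request(check_access, job_name, peer_ip, src_sys_id=None, dsn=None):
    profile = f"{job_name}.{peer_ip}"
    if not check_access(profile):                 # operation 325: connection/IP address
        return False
    if src_sys_id is not None:
        profile = f"{profile}.{src_sys_id}"
        if not check_access(profile):             # operation 355: subscription
            return False
    if dsn is not None:
        profile = f"{profile}.{dsn}"
        if not check_access(profile):             # operation 375: file (data set name)
            return False
    return True

if __name__ == "__main__":
    # Illustrative set of resource profiles the site security personnel might define.
    allowed = {
        "LOGRDR01.192.0.2.10",
        "LOGRDR01.192.0.2.10.SUBSCR01",
        "LOGRDR01.192.0.2.10.SUBSCR01.SAMPLE.VSAM.KSDS",
    }
    ok = validate_request(allowed.__contains__, "LOGRDR01", "192.0.2.10",
                          src_sys_id="SUBSCR01", dsn="SAMPLE.VSAM.KSDS")
    print("access granted" if ok else "access denied")
```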
Various embodiments described herein provide a generic approach to resource validation that is simple to administer, using a hierarchic naming convention that may be extended to any use, as shown below:
TABLE 3
Naming Convention Components
Scope | Source Location | Object Name 1 | Object Name 2 | Object Name n
In Table 3, the Scope component is the JOBNAME that is returned from the IAZXJSAB READ request in operation306, which is the name of the address space that can be used to correlate the user ID with a specific address space deployment in the Sysplex. Both the address space name and associated user ID may be obtained independently by the remote log reader component220. The Source Location component is the IP address of the partner process whose access needs to be verified—this is the IP address of the container260where the capture component262is running. The IP address was chosen since it may be independently obtained and is relatively hard to spoof. Put differently, if an entity is able to spoof the IP address, the system is likely seriously compromised already. After this is an optional application name hierarchy that can be validated. These names are provided by the application with the assumption that the security administrator can correlate these names with something they want to restrict access to, such as a file to be accessed in the file operation370described above. In some embodiments shown inFIG.3, a period may be used to separate these different values (3x7A, 3x7B, 3x7C, and 3x7D, where x={2, 5, 7}, as shown inFIG.3) that make up the resource name being checked. However, any separator or delimiter value may be used. In some embodiments, the separator/delimiter may be required to conform to the SAF resource naming conventions. In some embodiments, the resource profiles234represent proxies for an administrative or physical object to be secured. If the data resident operating system210has granted access (in the above examples, it is the control authority—but this is not a strict requirement), the assumption is that the application has the right to continue; otherwise, the application should stop because it is not authorized. From a replication perspective, in some embodiments, the object names chosen for VSAM replication may comprise:
a subscription name, which represents a connection between capture and apply hosted at a specific remote location (an 8-character (unique) source system ID may be used for resource name construction); and
a VSAM base cluster name, which is a part of the source data215and which uniquely identifies a VSAM file in the source Sysplex.
Independently, the user ID associated with the remote log reader address space needs at least READ authority to be able to access the VSAM files for refresh purposes, and browse access to a system log reader where the changes are staged for a subscription. The system log reader may be associated with a log provider that the remote log reader accesses via log provider APIs, and the log provider (e.g., IXGLOGR on z/OS) abstracts the physical storage of the data from the reader. A user must be able to browse data from the log provider or that user cannot read logged changes for the datastore. This is similar for other datastores as well—the user needs access to the logged data and any APIs used to read that logged data. These requirements are enforced by the APIs that the remote log reader component220uses to access these objects.
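As a purely illustrative aid, the extensible hierarchy of Table 3 could be assembled by a small helper along the following lines; the function name, the example values, and the use of a period as the default delimiter are assumptions for this sketch only.

```python
# Illustrative helper for the hierarchic naming convention of Table 3:
# Scope (job name), Source Location (IP address), and any number of
# application-defined object names, joined with a site-chosen delimiter.
def build_resource_name(scope, source_location, *object_names, delimiter="."):
    return delimiter.join([scope, source_location, *object_names])

# Example resource names a security administrator might protect (all illustrative):
print(build_resource_name("LOGRDR01", "192.0.2.10"))
print(build_resource_name("LOGRDR01", "192.0.2.10", "SUBSCR01"))
print(build_resource_name("LOGRDR01", "192.0.2.10", "SUBSCR01", "SAMPLE.VSAM.KSDS"))
```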
The APIs to read files215and replication logs222also do their own access validation checking using the USERID associated with the address space where the remote log reader component220is running (the OS started task230). These authorization checks do not have the granularity of the resource profile checks and cannot distinguish between a refresh operation for subscription A (not shown) and a refresh operation for subscription B (not shown). The approach, described above and commonly referred to as IP security (IPSec), of restricting access to a specific IP address is conceptually like setting up NETACCESS security in an operating system communications manager in conjunction with the SAF, which is another mechanism that may be used by a security administrator to control which IP address can connect to a specific port number—which may be more difficult to set up than the present approach. The approach described here represents a simpler solution than using a NETACCESS approach, but produces similar results. There is also nothing precluding an embodiment from supporting NETACCESS validation instead of this solution, or in addition to it. Various embodiments use a different approach than the traditional user or group profile approach to securing access to operating system resources. This approach allows the container260to obtain operating system information permissible through these profiles, regardless of which specific user is asking for it. Furthermore, the customers may control the user's ability to make the request by limiting access to the container. These are effectively trusted internal systems with groups of users accessing the data under a secondary authentication of the trusted connection of the internal systems. In effect, this provides what customers want: group access to the data without requiring the customers to configure these users to be in a defined (e.g., privileged) group on the operating system. Various embodiments disclosed herein are advantageous in that they strike a balance between security, cost, and complexity; however, the invention is not limited to embodiments that provide these or other advantages mentioned herein. This approach provides for secure data and ensures that the data can only be accessed where access has been allowed, without requiring each operational user to have all of the profiles needed to read the data individually just to restart replication between the site's endpoints. To illustrate by way of example, a command center may exist where the user is responsible for restarting replication if there is a failure. Various embodiments disclosed herein give the customer site tools to allow such a restart without requiring each user to have all the profiles necessary on the data resident OS to access the data directly. As users change roles or new users enter the system, this all may remain static. If the users are in the command center, they can perform the operations necessary to restart replication. Advantageously, this approach allows data to be secured to another remote system based on a network address, such as an IP address, while providing more discrete control over that data than known systems, such as IPSec, offer. Although such known systems can prevent two systems from communicating and can also restrict port access, these systems do not allow for doing that while at the same time limiting the data that can be accessed by the remote peer.
This approach further provides the ability to secure the data by controlling who can access the remote system IP address. Anyone in the group may become implicitly allowed to see the data because the OS resident system may be defined to allow some subset of data to flow to this remote system.
Technical Application
The one or more embodiments disclosed herein accordingly provide an improvement to computer technology. For example, the ability to authenticate access to a non-relational data store for a user, without having to establish credentials for the user, may balance an acceptable level of computer system security with reduced administrative costs and complexity in a networked architecture.
11943227 | DETAILED DESCRIPTION System Overview FIG.1is a schematic diagram of an embodiment of an information system100that is generally configured to allow a user to interact with web pages using a user device106(e.g. a mobile device), to store the current state of one or more web pages120, and then later to securely access the web pages120in their saved state using an augmented reality device104. This process generally involves authenticating a user on their user device106before allowing the user to access a website118and then storing information for any flagged web pages120from the website118that the user would like to access in the future using an augmented reality device104. At a later time, the user authenticates themselves again using the augmented reality device104to resume interacting with the flagged web pages120. The state of the flagged web pages120is the same as when the user was previously interacting with the web pages120on the user device106. For example, a web page120may be preloaded with filter settings130that affect the appearance of the web page120or user inputs132that are applied to data fields within the web page120. This process improves information security for an information system100by providing a secure way to store and transfer sensitive user information between a user device and an augmented reality device. The information system100also provides a technical advantage by allowing the user to preserve and recover the state of a web page120that the user was previously interacting with. This means that any user inputs132or settings that were previously applied by the user to a web page120will automatically be reapplied when the user accesses the web page120using the augmented reality device104. This means that the user will spend less time occupying network resources since they do not have to reapply all of their inputs and settings before they can access the information that they need on the flagged web pages120. In one embodiment, the information system100comprises a plurality of user devices (e.g. augmented reality device104and user device106), an access control device102, and a database108that are in signal communication with each other within a network110. The access control device102may also be in signal communication with other network devices within the network110. The network110may be any suitable type of wireless and/or wired network including, but not limited to, all or a portion of the Internet, an Intranet, a private network, a public network, a peer-to-peer network, the public switched telephone network, a cellular network, a local area network (LAN), a metropolitan area network (MAN), a personal area network (PAN), a wide area network (WAN), and a satellite network. The network110may be configured to support any suitable type of communication protocol as would be appreciated by one of ordinary skill in the art. User Devices The augmented reality device104and the user device106are each generally configured to provide hardware and software resources to a user. Examples of the user device106include, but are not limited to, a smartphone, a tablet, a laptop, a computer, a smart device, or any other suitable type of device. The user device106comprises a graphical user interface (e.g. a display or a touchscreen) that allows a user to view web pages120from a website118on the user device106. 
The user device106may comprise a touchscreen, a touchpad, keys, buttons, a mouse, or any other suitable type of hardware that allows a user to provide inputs into the user device106. The user device106is configured to allow the user to flag web pages120from a website118and to send information associated with the flagged web pages120so that the user can access the flagged web pages120using the augmented reality device104. An example of this process is described inFIG.2. InFIG.1, the augmented reality device104is configured as a head-mounted wearable device. In other examples, the augmented reality device104may be integrated into a contact lens structure, an eyeglass structure, a visor structure, a helmet structure, or any other suitable structure. An example of the hardware configuration of the augmented reality device104is described inFIG.6. The augmented reality device104is configured to display the flagged web pages120to the user as virtual objects402within a virtual environment400. For example, the augmented reality device104may be configured to show the flagged web pages120as virtual objects402that are overlaid onto tangible objects within a real scene that is in front of the user. As another example, the augmented reality device104may be configured to show the flagged web pages120as virtual objects402within a self-contained virtual environment400. An example of this process is described inFIGS.3and4.
Access Control Device
Examples of an access control device102include, but are not limited to, a server, an access point, a computer, or any other suitable type of network device. In one embodiment, an access control device102comprises an access control engine112and a memory114. Additional details about the hardware configuration of the access control device102are described inFIG.5. The memory114is configured to store user profiles116, websites118, web pages120, and/or any other suitable type of data. In one embodiment, the access control engine112is generally configured to allow a user to set up and update a user profile116using their user device106(e.g. a mobile device or a computer). This process generally involves allowing the user to flag web pages120from a website118and to store information associated with the flagged web pages120in their user profile116. This process allows the user to later access the flagged web pages120using an augmented reality device104. An example of the access control engine112performing this operation is described in more detail inFIG.2. The access control engine112is further configured to provide the flagged web pages120as virtual objects402within a virtual environment400that can be displayed to the user using an augmented reality device104. An example of the access control engine112performing this operation is described in more detail inFIG.3. The user profiles116generally comprise information that is associated with known and approved users for accessing websites118within the network110. A website118comprises a plurality of web pages120that provide information to the user. In one embodiment, a website118may be configured to restrict access to information for users. For example, a user may be required to provide valid user credentials before accessing information from the website118. In other embodiments, a website118may not restrict user access. In this case, any user can access the information from the website118.
The user profiles116may comprise user identities, user credentials, account information, contact information, user device information, user permission settings, or any other suitable type of information that is associated with users. Examples of user identities include, but are not limited to, a name, an alphanumeric code, an employee number, an account number, a phone number, an email address, or any other suitable type of identifier that is uniquely associated with a user. Examples of user credentials include, but are not limited to, log-in credentials, a username and password, a token, a Personal Identification Number (PIN), an alphameric value, biometric information, or any other suitable type of information that can be used to verify the identity of a user. The user profiles116may further comprise information that is associated with web pages120that were flagged by the user. The information associated with the flagged web pages120may comprise an address128, filter settings130, user inputs132, or any other suitable type of information for the web page120. The address128comprises information for locating and accessing a web page120from a website118. As an example, the address128may comprise a Uniform Resource Locator (URL) for a web page120. The filter settings130comprise settings for customizing the appearance of a web page120. As an example, the filter settings130may comprise inputs for filtering search results on a web page120. As another example, the filter settings130may comprise inputs for modifying a layout for a web page120. The user inputs132comprise information for populating data fields within a web page120. As an example, the user inputs132may comprise text for filling in a data field on a web page120. As another example, the user inputs132may comprise selections for a drop-down menu on a web page120. In some embodiments, the user profile116may further comprise user preferences134for a virtual environment400. The user preferences134may comprise settings for visualizing virtual objects402within a virtual environment400. For example, the user preferences134may comprise instructions or settings for arranging and scaling virtual objects402within the virtual environment400.
Database
Examples of a database108include, but are not limited to, file repositories, computers, databases, memories, servers, shared folders, or any other suitable type of networking device. In some embodiments, the database108may be configured to store user profiles116, websites118, web pages120, and/or any other suitable type of information that is associated with the information system100. In this case, the access control device102may request information from the database108or store information in the database108. As shown inFIG.1, the information system100includes a single database108. In other embodiments, the information system100may comprise any other suitable number of databases108. In some embodiments, the database108may be optional and omitted.
User Profile Updating Process
FIG.2is a flowchart of an embodiment of a user profile updating process200for the information system100. The information system100may employ process200to store information associated with a current state of web pages120that a user is interacting with using a user device106(e.g. a mobile device or computer). Process200allows the user to interact with a web page120and then to save the state of the web page120so that it can be later viewed and accessed using an augmented reality device.
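Before walking through process200, the information stored for a flagged web page120(the address128, filter settings130, user inputs132, and preferences134described above) can be pictured with a small, illustrative data model. The Python field names below are assumptions made for this sketch and are not part of any claimed data format.

```python
# Illustrative data model for a user profile 116 and its flagged web page entries.
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class FlaggedWebPage:
    address: str                                                    # address 128, e.g., a URL
    filter_settings: Dict[str, str] = field(default_factory=dict)   # filter settings 130
    user_inputs: Dict[str, str] = field(default_factory=dict)       # user inputs 132

@dataclass
class UserProfile:
    user_id: str
    credentials_hash: str                                            # raw passwords are not stored
    flagged_pages: List[FlaggedWebPage] = field(default_factory=list)
    environment_preferences: Dict[str, str] = field(default_factory=dict)  # preferences 134

profile = UserProfile(
    user_id="user-001",
    credentials_hash="<hashed credentials>",
    flagged_pages=[FlaggedWebPage(
        address="https://example.com/accounts/summary",
        filter_settings={"date_range": "last-30-days"},
        user_inputs={"account_type": "checking"},
    )],
)
```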
Process200allows the user to preserve the state of the web pages120so that the user can resume interacting with the web pages120in a virtual environment400using an augmented reality device. At step202, the access control device102receives an access request122for a website118from a user device106(e.g. a mobile device). The access request122comprises user credentials for a user. Examples of user credentials include, but are not limited to, log-in credentials, a username and password, a token, a Personal Identification Number (PIN), an alphameric value, biometric information, or any other suitable type of information that can be used to verify the identity of a user. As an example, a user may access a landing web page120for the website118that prompts the user to enter their user credentials. In response to the user inputting their user credentials, the user device106sends the user credentials within an access request122to the access control device102. At step204, the access control device102determines whether a user associated with the user device106is authorized to access the website118. Here, the access control device102may compare the provided user credentials to user credentials that are stored in the user profiles116to determine whether there is a match. In this example, the access control device102may use the user credentials as a search token to determine whether a user profile116exists for the user. The access control device102determines that the user credentials are valid when the access control device102is able to identify a user profile116for the user. Otherwise, the access control device102determines that the user credentials are invalid when the access control device102is unable to identify a user profile116for the user. The access control device102terminates process200in response to determining that the user credentials are invalid. In this case, the access control device102determines that the user credentials are not associated with a known or approved user and terminates process200, which prevents the user device106from accessing the website118. The access control device102proceeds to step206in response to determining that the user credentials are valid. In this case, the access control device102determines that the user credentials are associated with an authorized user and proceeds to step206to identify information that is associated with the user. At step206, the access control device102identifies a user profile116for a user that is associated with the user device106. Here, the access control device102identifies the user profile116that is associated with the user credentials that were provided by the user device106in step202. At step208, the access control device102provides access to the website118for the user device106. After determining that the user associated with the user device106is authorized to access the website118, the access control device102provides access to the website118and its web pages120. Each web page120may be configured to provide different types of information to the user. Some web pages120may be configured to provide general information, for example, general information about an organization and its resources. Other web pages120may be configured to provide personalized information for the user. For example, a web page120may be configured to provide account information, user history information, a personal calendar, messages, or any other suitable type of information that is personalized for the user.
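The credential check of steps202through206can be sketched as a simple lookup in which the provided credentials act as a search token against the stored user profiles116. The in-memory dictionary and the use of SHA-256 hashing below are assumptions made only for this illustration.

```python
# Sketch of steps 202-206: use the provided credentials as a search token to
# find a matching user profile; a missing match means the credentials are invalid.
import hashlib

def _hash(password: str) -> str:
    return hashlib.sha256(password.encode("utf-8")).hexdigest()

# Stored profiles keyed by (user ID, hashed password); illustrative data only.
USER_PROFILES = {
    ("user-001", _hash("correct horse battery staple")): {"flagged_pages": []},
}

def authenticate(user_id: str, password: str):
    """Return the matching user profile, or None if the credentials are invalid."""
    return USER_PROFILES.get((user_id, _hash(password)))

if __name__ == "__main__":
    profile = authenticate("user-001", "correct horse battery staple")
    print("access granted" if profile is not None else "access denied")
```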
At step210, the access control device102receives a flag request124for a web page120on the website118. The flag request124identifies the web page120and comprises information for the web page120that the user would like to make accessible via an augmented reality device104. The flag request124may comprise an identifier for the web page120, an address128for the web page120, filter settings130for the web page120, user inputs132for the web page120, or any other suitable type of information for the web page120. In one embodiment, the user device106may send the flag request124in response to a user performing a specific action on the web page120. As an example, the web page120may comprise a button that is embedded within the web page120for flagging the web page120. In this example, the user may click on the button to generate the flag request124for the web page120. The web page120may be configured with executable code that detects when the user clicks the button and generates the flag request124for the web page120. In other embodiments, the user device106may use any other suitable technique for generating and sending the flag request124. At step212, the access control device102identifies an address128for the web page120. Here, the access control device102may extract an identifier for the web page120and the address128(e.g. URL address) for accessing the web page120. At step214, the access control device102identifies filter settings130for the web page120. Here, the access control device102determines whether the flag request124comprises any filter settings130that were applied by the user for the web page120. In response to determining that the flag request124comprises filter settings130, the access control device102extracts the filter settings130from the flag request124. The filter settings130comprise settings for customizing the appearance of the web page120. As an example, the filter settings130may comprise inputs for filtering search results on a web page120. As another example, the filter settings130may comprise inputs for modifying a layout for a web page120. At step216, the access control device102identifies user inputs132for the web page. Here, the access control device102determines whether the flag request124comprises user inputs132that were entered by the user for the web page120. In response to determining that the flag request124comprises user inputs132, the access control device102extracts the user inputs132from the flag request124. The user inputs132comprise information for populating data fields within a web page120. For example, the user inputs132may identify one or more data fields on the web page120and corresponding user-defined values for each of the identified data fields. As an example, the user inputs132may comprise text for filling in a data field on a web page120. As another example, the user inputs132may comprise selections for a drop-down menu on the web page120. At step218, the access control device102stores the collected information for the web page120in the user profile116. The access control device102may store an identifier for the flagged web page120, an address128for the flagged web page120, filter settings130for the flagged web page120, user inputs132for the flagged web page120, or any other suitable type of information for the flagged web page120. At step220, the access control device102determines whether any other flag requests124have been received for other web pages120. 
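The handling of a flag request124in steps210through218can be sketched as follows. The shape of the flag request and of the stored profile entry are assumptions made for this illustration and do not imply any particular message format.

```python
# Sketch of steps 210-218: extract the web page address, any filter settings,
# and any user inputs from a flag request and store them in the user profile.
def handle_flag_request(user_profile: dict, flag_request: dict) -> None:
    entry = {
        "page_id": flag_request["page_id"],
        "address": flag_request["address"],                           # address 128 (step 212)
        "filter_settings": flag_request.get("filter_settings", {}),   # filter settings 130 (step 214)
        "user_inputs": flag_request.get("user_inputs", {}),           # user inputs 132 (step 216)
    }
    user_profile.setdefault("flagged_pages", []).append(entry)        # store in profile (step 218)

if __name__ == "__main__":
    profile = {"user_id": "user-001"}
    handle_flag_request(profile, {
        "page_id": "accounts-summary",
        "address": "https://example.com/accounts/summary",
        "filter_settings": {"sort": "date-descending"},
        "user_inputs": {"search": "pending transfers"},
    })
    print(profile["flagged_pages"])
```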
As part of step220, the access control device102may continue to monitor the activity of the user on the website118to determine whether the user sends any flag requests124for other web pages120on the website118. The access control device102returns to step212in response to determining that a flag request124has been received for another web page120. In this case, the access control device102returns to step212to collect information for the flagged web page120and to store the collected information in the user's user profile116. The access control device102terminates process200in response to determining that no more flag requests have been received. In this case, the access control device102determines that the user has ended their session with the website118and that no additional information for flagged web pages120will be stored in the user's user profile116.
Data Access Process
FIG.3is a flowchart of an embodiment of a data access process300for the information system100. The information system100may employ process300to recover the state of a previously saved web page120that a user was interacting with and to present the web page120in its saved state within a virtual environment400. For example, a user may have previously saved information associated with the state of web pages120that the user was interacting with using a process similar to process200that is described inFIG.2. The information system100may then employ process300to recover the state of the previously saved web pages120and to present the web pages120as virtual objects402within a virtual environment400for an augmented reality device104. This process allows the user to view and resume interacting with the web pages120in the virtual environment400using the augmented reality device104. At step302, the access control device102receives an access request126for a website118from an augmented reality device104. The access request126comprises user credentials for the user. The access request126may comprise the same user credentials that were sent by the user device106or different user credentials that are associated with the user. As an example, a user may access the landing web page120for the website118using the augmented reality device104. Once again, the landing web page120prompts the user to enter their user credentials. In response to the user inputting their user credentials, the augmented reality device104sends the user credentials within an access request122to the access control device102. At step304, the access control device102determines whether a user associated with the augmented reality device104is authorized to access the website118. Here, the access control device102may compare the user credentials provided by the augmented reality device104to user credentials that are stored in the user profiles116to determine whether there is a match. In this example, the access control device102may use the user credentials as a search token to determine whether a user profile116exists for the user. The access control device102determines that the user credentials are valid when the access control device102is able to identify a user profile116for the user. Otherwise, the access control device102determines that the user credentials are invalid when the access control device102is unable to identify a user profile116for the user. The access control device102terminates process300in response to determining that the user credentials are invalid.
In this case, the access control device102determines that the user credentials are not associated with a known or approved user and terminates process300, which prevents the augmented reality device104from accessing the website118. The access control device102proceeds to step306in response to determining that the user credentials are valid. In this case, the access control device102determines that the user credentials are associated with an authorized user and proceeds to step306to identify information that is associated with the user. At step306, the access control device102identifies the user profile116for the user that is associated with the augmented reality device104. Here, the access control device102identifies the user profile116that is associated with the user credentials that were provided by the augmented reality device104in step302. At step308, the access control device102identifies information for any flagged web pages120that are associated with the website118from the user profile116. The information for the flagged web pages120may comprise an identifier for a flagged web page120, an address128for a flagged web page120, filter settings130for a flagged web page120, user inputs132for a flagged web page120, or any other suitable type of information for a flagged web page120. At step310, the access control device102generates a virtual environment400based on the identified information for the flagged web pages120. An example of a virtual environment400is shown inFIG.4. The virtual environment400comprises a plurality of virtual objects402. In this example, the virtual environment400comprises a virtual object402A for a web page120with a user profile, a virtual object402B for a web page120with a calendar, a virtual object402C for a web page120with charts, a virtual object402D for a web page120with general information, a virtual object402E for a web page120with navigation tools, and a virtual object402F for a web page120with graphical information. Each of the flagged web pages120may be converted into a virtual object402using the process described below. The virtual objects402can be positioned and scaled based on the user's preferences. The virtual objects402can also be repositioned and rescaled at any time by the user using hand gestures. Additional details about this process are described below in step312. In this example, each virtual object402comprises a screenshot of a flagged web page120in a configuration that was previously saved by the user. This process allows the user to quickly identify flagged web pages120and to resume interacting with the flagged web pages120. Conventional displays have physical constraints (e.g. screen sizes) that limit the amount of information that can be presented to a user at one time. In contrast, the virtual environment400provides a three-hundred and sixty-degree field of view for viewing virtual objects402. This provides a virtual environment400that surrounds the user. The user can view different sections of the virtual environment400by turning their head. Returning toFIG.3, the access control device102generates the virtual environment400by converting the flagged web pages120into virtual objects402that the user can interact with. In one embodiment, the access control device102first identifies information associated with a flagged web page120from the user profile116. The access control device102then accesses the flagged web page120using the stored address128for the flagged web page120.
The access control device102determines whether any filter settings130have been stored for the flagged web page120. In response to determining that filter settings130have been stored for the flagged web page120, the access control device102obtains the filter settings130from the user profile116and then applies the filter settings130to the flagged website120to customize the appearance of the flagged web page120. The access control device102then determines whether any user inputs132have been stored for the flagged web page120. In response to determining that user inputs132have been stored for the flagged web page120, the access control device102obtains the user inputs132from the user profile116, populates one or more data fields within the flagged web page120using their corresponding user inputs132, and then applies the entered user inputs132. After applying filter settings130and user inputs132to the flagged web page120, the access control device102then captures a screenshot of at least a portion of the flagged web page120. The captured screenshot is a virtual object402that can be embedded within the virtual environment400. The access control device102then assigns the screenshot to a location within the virtual environment400. The access control device102then associates a hyperlink with the screenshot which allows the user to access the flagged web page120in the state shown in the screenshot. In some embodiments, the user profile116may further comprise user preferences134for how data is to be presented or visualized within the virtual environment400. For example, the user preferences134may comprise instructions for arranging or scaling virtual objects within the virtual environment400. In this case, the access control device102applies the user preferences134to the virtual objects402in the virtual environment400. At step312, the access control device102provides access to the virtual environment400for the augmented reality device104. The access control device102may provide access to the virtual environment400by sending data to the augmented reality device104that allows the user to view and interact with virtual objects402within the virtual environment400. After receiving access to the virtual environment400, the user may use gestures (e.g. voice commands or hand gestures) to interact with the virtual objects402in the virtual environment400. For example, providing access to the virtual environment400may allow the user to use gestures (e.g. voice command or hand gestures) to select a virtual object402to view. In this example, the augmented reality device104may detect a gesture performed by the user that identifies a virtual object402within the virtual environment400. The augmented reality device104may then load and display the web page120that corresponds with the selected virtual object402. The web page120is prepopulated with any filter settings130and user inputs132that the user previously applied. After loading the web page120, the user may interact with the web page120when the filter settings130and user inputs132have been applied. This process allows the augmented reality device104to provide the web page120to the user in the same state that was captured when the web page120was flagged using the user device106. As another example, providing access to the virtual environment400may allow the user to use gestures to rearrange or reposition virtual objects402within the virtual environment400. 
In this example, the augmented reality device104may detect hand gestures performed by the user that identifies a virtual object402and a new location for the virtual object. For example, the augmented reality device104may detect a hand gesture that corresponds with the user selecting a virtual object402and dragging the virtual object402to a new location within the virtual environment400. The augmented reality device104may then reposition the identified virtual object402to the new location within the virtual environment400based on the detected hand gestures. As another example, providing access to the virtual environment400may allow the user to use hand gestures to rescale virtual objects402within the virtual environment400. In this example, the augmented reality device104may detect hand gestures performed by the user that identifies a virtual object402and a scale or size change for the virtual object402. For instance, the user may pinch two fingers together to indicate a decrease in the size of the virtual object402or pull two fingers apart to indicate an increase in the size of the virtual object402. The augmented reality device104may then rescale or resize the identified virtual object402based on the detected hand gestures. In other examples, the augmented reality device104may perform any other suitable type of action on virtual objects402within the virtual environment400based on detected gestures from the user. Hardware Configuration for the Access Control Device FIG.5is an embodiment of an access control device102for the information system100. As an example, the access control device102may be a server or a computer. The access control device102comprises a processor502, a memory114, and a network interface504. The access control device102may be configured as shown or in any other suitable configuration. Processor The processor502is a hardware device that comprises one or more processors operably coupled to the memory114. The processor502is any electronic circuitry including, but not limited to, state machines, one or more central processing unit (CPU) chips, logic units, cores (e.g. a multi-core processor), field-programmable gate array (FPGAs), application-specific integrated circuits (ASICs), or digital signal processors (DSPs). The processor502may be a programmable logic device, a microcontroller, a microprocessor, or any suitable combination of the preceding. The processor502is communicatively coupled to and in signal communication with the memory114and the network interface504. The one or more processors are configured to process data and may be implemented in hardware or software. For example, the processor502may be 8-bit, 16-bit, 32-bit, 64-bit, or of any other suitable architecture. The processor502may include an arithmetic logic unit (ALU) for performing arithmetic and logic operations, processor registers that supply operands to the ALU and store the results of ALU operations, and a control unit that fetches instructions from memory and executes them by directing the coordinated operations of the ALU, registers and other components. The one or more processors are configured to implement various instructions. For example, the one or more processors are configured to execute access control instructions506to implement the access control engine112. In this way, processor502may be a special-purpose computer designed to implement the functions disclosed herein. 
In an embodiment, the access control engine112is implemented using logic units, FPGAs, ASICs, DSPs, or any other suitable hardware. The access control engine112is configured to operate as described inFIGS.1-3. For example, the access control engine112may be configured to perform the steps of process200and300as described inFIGS.2and3, respectively. Memory The memory114is a hardware device that is operable to store any of the information described above with respect toFIGS.1-3along with any other data, instructions, logic, rules, or code operable to implement the function(s) described herein when executed by the processor502. The memory114comprises one or more disks, tape drives, or solid-state drives, and may be used as an over-flow data storage device, to store programs when such programs are selected for execution, and to store instructions and data that are read during program execution. The memory114may be volatile or non-volatile and may comprise a read-only memory (ROM), random-access memory (RAM), ternary content-addressable memory (TCAM), dynamic random-access memory (DRAM), and static random-access memory (SRAM). The memory114is operable to store access control instructions506, user profiles116, websites118, web pages120, and/or any other data or instructions. The access control instructions506may comprise any suitable set of instructions, logic, rules, or code operable to execute the access control engine112. The user profiles116, the websites118, and the web pages120are configured similar to the user profiles116, the websites118, and the web pages120described inFIGS.1-3, respectively. Network Interface The network interface504is a hardware device that is configured to enable wired and/or wireless communications. The network interface504is configured to communicate data between user devices (e.g. augmented reality device104and user device106) and other devices, systems, or domains. For example, the network interface504may comprise an NFC interface, a Bluetooth interface, a Zigbee interface, a Z-wave interface, a radio-frequency identification (RFID) interface, a WIFI interface, a LAN interface, a WAN interface, a PAN interface, a modem, a switch, or a router. The processor502is configured to send and receive data using the network interface504. The network interface504may be configured to use any suitable type of communication protocol as would be appreciated by one of ordinary skill in the art. Augmented Reality Device Hardware Configuration FIG.6is a schematic diagram of an embodiment of an augmented reality device104for accessing the information system100. The augmented reality device104is configured to display a virtual environment400that comprises virtual objects402overlaid onto one or more tangible objects in a real scene. The augmented reality device104comprises a processor602, a memory604, a camera606, a display608, a wireless communication interface610, a network interface612, a microphone614, a global position system (GPS) sensor616, and one or more biometric devices618. The augmented reality device104may be configured as shown or in any other suitable configuration. For example, augmented reality device104may comprise one or more additional components and/or one or more shown components may be omitted. Camera Examples of the camera606include, but are not limited to, charge-coupled device (CCD) cameras and complementary metal-oxide semiconductor (CMOS) cameras. The camera606is configured to capture images607of people, text, and objects within a real environment. 
The camera606is a hardware device that is configured to capture images607continuously, at predetermined intervals, or on-demand. For example, the camera606is configured to receive a command from a user to capture an image607. In another example, the camera606is configured to continuously capture images607to form a video stream of images607. The camera606is operable coupled to an optical character (OCR) recognition engine624and/or the gesture recognition engine626and provides images607to the OCR recognition engine624and/or the gesture recognition engine626for processing, for example, to identify gestures, text, and/or objects in front of the user. Display The display608is a hardware device that is configured to present visual information to a user in an augmented reality environment that overlays virtual or graphical objects onto tangible objects in a real scene in real-time. In an embodiment, the display608is a wearable optical head-mounted display configured to reflect projected images and allows a user to see through the display. For example, the display608may comprise display units, lens, semi-transparent mirrors embedded in an eyeglass structure, a visor structure, or a helmet structure. Examples of display units include, but are not limited to, a cathode ray tube (CRT) display, a liquid crystal display (LCD), a liquid crystal on silicon (LCOS) display, a light-emitting diode (LED) display, an active-matrix OLED (AMOLED), an organic LED (OLED) display, a projector display, or any other suitable type of display as would be appreciated by one of ordinary skill in the art upon viewing this disclosure. In another embodiment, the display608is a graphical display on a user device. For example, the graphical display may be the display of a tablet or smartphone configured to display an augmented reality environment with virtual or graphical objects402overlaid onto tangible objects in a real scene in real-time. Wireless Communication Interface Examples of the wireless communication interface610include, but are not limited to, a Bluetooth interface, a radio frequency identifier (RFID) interface, a near-field communication (NFC) interface, a LAN interface, a PAN interface, a WAN interface, a Wi-Fi interface, a ZigBee interface, or any other suitable wireless communication interface as would be appreciated by one of ordinary skill in the art upon viewing this disclosure. The wireless communication interface610is a hardware device that is configured to allow the processor602to communicate with other devices. For example, the wireless communication interface610is configured to allow the processor602to send and receive signals with other devices for the user (e.g. a mobile phone) and/or with devices for other people. The wireless communication interface610is configured to employ any suitable communication protocol. Network Interface The network interface612is a hardware device that is configured to enable wired and/or wireless communications and to communicate data through a network, system, and/or domain. For example, the network interface612is configured for communication with a modem, a switch, a router, a bridge, a server, or a client. The processor602is configured to receive data using network interface612from a network or a remote source. Microphone Microphone614is a hardware device configured to capture audio signals (e.g. voice commands) from a user and/or other people near the user. 
The microphone614is configured to capture audio signals continuously, at predetermined intervals, or on-demand. The microphone614is operably coupled to the voice recognition engine622and provides captured audio signals to the voice recognition engine622for processing, for example, to identify a voice command from the user. GPS Sensor The GPS sensor616is a hardware device that is configured to capture and to provide geographical location information. For example, the GPS sensor616is configured to provide the geographic location of a user employing the augmented reality device104. The GPS sensor616is configured to provide the geographic location information as a relative geographic location or an absolute geographic location. The GPS sensor616provides the geographic location information using geographic coordinates (i.e. longitude and latitude) or any other suitable coordinate system. Biometric Devices Examples of biometric devices618include, but are not limited to, retina scanners and finger print scanners. Biometric devices618are hardware devices that are configured to capture information about a person's physical characteristics and to output a biometric signal631based on captured information. A biometric signal631is a signal that is uniquely linked to a person based on their physical characteristics. For example, a biometric device618may be configured to perform a retinal scan of the user's eye and to generate a biometric signal631for the user based on the retinal scan. As another example, a biometric device618is configured to perform a fingerprint scan of the user's finger and to generate a biometric signal631for the user based on the fingerprint scan. The biometric signal631is used by a biometric engine630to identify and/or authenticate a person. Processor The processor602is a hardware device that is implemented as one or more CPU chips, logic units, cores (e.g. a multi-core processor), FPGAs, ASICs, or DSPs. The processor602is communicatively coupled to and in signal communication with the memory604, the camera606, the display608, the wireless communication interface610, the network interface612, the microphone614, the GPS sensor616, and the biometric devices618. The processor602is configured to receive and transmit electrical signals among one or more of the memory604, the camera606, the display608, the wireless communication interface610, the network interface612, the microphone614, the GPS sensor616, and the biometric devices618. The electrical signals are used to send and receive data and/or to control or communicate with other devices. For example, the processor602transmits electrical signals to operate the camera606. The processor602may be operably coupled to one or more other devices (not shown). The processor602is configured to process data and may be configured to implement various instructions. For example, the processor602is configured to implement a virtual overlay engine620, a voice recognition engine622, an OCR recognition engine624, a gesture recognition engine626, and a biometric engine630. In an embodiment, the virtual overlay engine620, the voice recognition engine622, the OCR recognition engine624, the gesture recognition engine626, and the biometric engine630are implemented using logic units, FPGAs, ASICs, DSPs, or any other suitable hardware. The virtual overlay engine620is configured to overlay virtual objects onto tangible objects in a real scene using the display608. 
For example, the display608may be a head-mounted display that allows a user to simultaneously view tangible objects in a real scene and virtual objects. The virtual overlay engine620is configured to process data to be presented to a user as an augmented reality virtual object on the display608. An example of overlay virtual objects onto tangible objects in a virtual environment400is shown inFIG.4. The voice recognition engine622is configured to capture and/or identify voice patterns using the microphone614. For example, the voice recognition engine622is configured to capture a voice signal from a person and to compare the captured voice signal to known voice patterns or commands to identify the person and/or commands provided by the person. For instance, the voice recognition engine622is configured to receive a voice signal to authenticate a user and/or to identify a selected option or an action indicated by the user. The OCR recognition engine624is configured to identify objects, object features, text, and/or logos using images607or video streams created from a series of images607. In one embodiment, the OCR recognition engine624is configured to identify objects and/or text within an image607captured by the camera606. In another embodiment, the OCR recognition engine624is configured to identify objects and/or text in about real-time on a video stream captured by the camera606when the camera606is configured to continuously capture images607. The OCR recognition engine624employs any suitable technique for implementing object and/or text recognition. The gesture recognition engine626is configured to identify gestures performed by a user and/or other people. Examples of gestures include, but are not limited to, hand movements, hand positions, finger movements, head movements, and/or any other actions that provide a visual signal from a person. For example, gesture recognition engine626is configured to identify hand gestures provided by a user to indicate various commands such as a command to initiate a request for an augmented reality overlay for a document. The gesture recognition engine626employs any suitable technique for implementing gesture recognition. The biometric engine630is configured to identify a person based on a biometric signal631generated from the person's physical characteristics. The biometric engine630employs one or more biometric devices618to identify a user based on one or more biometric signals631. For example, the biometric engine630receives a biometric signal631from the biometric device618in response to a retinal scan of the user's eye and/or a fingerprint scan of the user's finger. The biometric engine630compares biometric signals631from the biometric device618to previously-stored biometric signals631for the user to authenticate the user. The biometric engine630authenticates the user when the biometric signals631from the biometric devices618substantially matches (e.g. is the same as) the previously stored biometric signals631for the user. Memory The memory604is a hardware device that comprises one or more disks, tape drives, or solid-state drives, and may be used as an over-flow data storage device, to store programs when such programs are selected for execution, and to store instructions and data that are read during program execution. The memory604may be volatile or non-volatile and may comprise ROM, RAM, TCAM, DRAM, and SRAM. 
The memory604is operable to store images607, virtual overlay instructions632, voice recognition instructions634, OCR recognition instructions636, gesture recognition instructions638, biometric instructions642, and any other data or instructions. Images607comprises images captured by the camera606and images607from other sources. In one embodiment, images607comprises images used by the augmented reality device104when performing optical character recognition. Images607can be captured using camera606or downloaded from another source such as a flash memory device or a remote server via an Internet connection. Biometric signals631are signals or data that are generated by a biometric device618based on a person's physical characteristics. Biometric signals631are used by the augmented reality device104to identify and/or authenticate an augmented reality device104user by comparing biometric signals631captured by the biometric devices618with previously stored biometric signals631. The virtual overlay instructions632, the voice recognition instructions634, the OCR recognition instructions636, the gesture recognition instructions638, and the biometric instructions642each comprise any suitable set of instructions, logic, rules, or code operable to execute the virtual overlay engine620, the voice recognition engine622, the OCR recognition engine624, the gesture recognition engine626, and the biometric engine630, respectively. While several embodiments have been provided in the present disclosure, it should be understood that the disclosed systems and methods might be embodied in many other specific forms without departing from the spirit or scope of the present disclosure. The present examples are to be considered as illustrative and not restrictive, and the intention is not to be limited to the details given herein. For example, the various elements or components may be combined or integrated with another system or certain features may be omitted, or not implemented. In addition, techniques, systems, subsystems, and methods described and illustrated in the various embodiments as discrete or separate may be combined or integrated with other systems, modules, techniques, or methods without departing from the scope of the present disclosure. Other items shown or discussed as coupled or directly coupled or communicating with each other may be indirectly coupled or communicating through some interface, device, or intermediate component whether electrically, mechanically, or otherwise. Other examples of changes, substitutions, and alterations are ascertainable by one skilled in the art and could be made without departing from the spirit and scope disclosed herein. To aid the Patent Office, and any readers of any patent issued on this application in interpreting the claims appended hereto, applicants note that they do not intend any of the appended claims to invoke 35 U.S.C. § 112(f) as it exists on the date of filing hereof unless the words “means for” or “step for” are explicitly used in the particular claim. | 47,987 |
11943228 | DETAILED DESCRIPTION In the following detailed description, numerous specific details are set forth in order to provide a thorough understanding of the disclosed example embodiments. However, it will be understood by those skilled in the art that the principles of the example embodiments may be practiced without every specific detail. Well-known methods, procedures, and components have not been described in detail so as not to obscure the principles of the example embodiments. Unless explicitly stated, the example methods and processes described herein are not constrained to a particular order or sequence, or constrained to a particular system configuration. Additionally, some of the described embodiments or elements thereof can occur or be performed simultaneously, at the same point in time, or concurrently. The techniques of developing least-privilege profiles for network entities addressed herein overcome several important technical problems in the fields of data security and network communications. Rather than relying on analysis of the usage data of an entity, which could be missing or incomplete, the techniques discussed below may use an iterative process to identify a minimal valid set of permissions. For example, the iterative process may start with an initial set of permissions, evaluate the permission set, and iteratively reduce the permission set until an iteration termination condition has been met. In this manner, the iterative process can identify the minimal valid set of permissions for the entity without relying on the usage data of the entity. Reference will now be made in detail to the disclosed embodiments, examples of which are illustrated in the accompanying drawings. FIG.1is a block diagram of an example system100in accordance with disclosed embodiments. As shown, system100includes a permission optimizer102, which may include one or more computing devices configured to iteratively develop a least-privilege profile for an entity104(e.g., a process, a program, a user, an organization, etc.) so that entity104can perform an action, e.g., access a resource108via a communication channel106, using the least-privilege profile. In some embodiments, permission optimizer102may be implemented as a component of a computing device accessible to entity104. Alternatively, in some embodiments, permission optimizer102may be implemented as a component of resource108or as a separate component. In some embodiments, permission optimizer102may be provided as a service, and in some embodiments, permission optimizer102may operate in a decentralized manner. Furthermore, in some embodiments, permission optimizer102may be hosted in a cloud-based network (e.g., built on virtualized infrastructure from AWS™, Azure™, IBM Cloud™, VMWare™, or others). Entity104may perform the action, e.g., access resource108, using a computing device. The computing device may be a handheld device (e.g., a mobile phone, a tablet, or a notebook), a wearable device (e.g., a smart watch, smart jewelry, an implantable device, a fitness tracker, smart clothing, a head-mounted display, etc.), an IoT device (e.g., smart home device, industrial device, etc.), personal computer (e.g., a desktop or laptop), or various other devices capable of processing and/or receiving data. Exemplary components of the computing device are further discussed below in connection withFIG.2. The computing device may be in communication with resource108via communication channel106. 
Communication channel106may include a bus, a cable, a wireless communication channel, a radio-based communication channel, the Internet, a local area network (LAN), a wireless local area network (WLAN), a wide area network (WAN), a cellular communication network, or any Internet Protocol (IP), Secure Shell (SSH), Hypertext Transfer Protocol (HTTP), or Representational State Transfer (REST) based communication network and the like. In some embodiments, communication channel106may be based on public cloud infrastructure, private cloud infrastructure, hybrid public/private cloud infrastructure, or no cloud infrastructure. In such differing embodiments, permission optimizer102and entity104may each be in the same, or in different, networks or network segments. In some embodiments, entity104may be equipped with one or more compatible communication interfaces configured to support communications with permission optimizer102via communication channel106. In some embodiments, entity104may be a network entity connected to a network through the one or more compatible communication interfaces. The communication interfaces are not shown inFIG.1for illustrative simplicity. Entity104may utilize permission optimizer102to iteratively develop a least-privilege profile. Entity104may then use the least-privilege profile to perform an action, including, e.g., accessing resource108. In some embodiments, to enforce the principle of least privilege, entity104may be required to use the least-privilege profile developed by permission optimizer102to perform the action. FIG.2illustrates a block diagram of an exemplary computing device200in accordance with disclosed embodiments. Referring toFIG.2, computing device200may include a communication interface202, a processor204, and a memory206, among potentially various other components. The communication interface202may facilitate communications between computing device200and other computing devices or resources, including, e.g., computing devices utilized by permission optimizer102, entity104, and resource108(shown inFIG.1). In some embodiments, communication interface202may be configured to support one or more communication standards, such as an Internet standard or protocol, an Integrated Services Digital Network (ISDN) standard, and the like. In some embodiments, communication interface202may include one or more of a LAN card, a cable modem, a satellite modem, a data bus, a cable, a wireless communication channel, a radio-based communication channel, a cellular communication channel, an Internet Protocol, a SSH, a HTTP, or a REST-based communication device, or other communication devices for wired and/or wireless communications. In some embodiments, communication interface202may be based on public cloud infrastructure, private cloud infrastructure, or hybrid public/private cloud infrastructure. Processor204may include one or more dedicated processing units, application-specific integrated circuits (ASICs), field-programmable gate arrays (FPGAs), or various other types of processors or processing units. Processor204may be coupled with memory206and configured to execute instructions stored in memory206. Memory206may store processor-executable instructions and data. 
Memory206may include any type of volatile or non-volatile memory devices, or a combination thereof, such as a static random-access memory (SRAM), an electrically erasable programmable read-only memory (EEPROM), an erasable programmable read-only memory (EPROM), a programmable read-only memory (PROM), a read-only memory (ROM), a magnetic memory, a flash memory, or a magnetic or optical disk. When the instructions in memory206are executed by processor204, computing device200may perform operations for iteratively developing a least-privilege profile for entity104, as discussed below in connection withFIG.3. Referring now toFIG.3, an exemplary flowchart showing a process300for iteratively developing least-privilege profiles for network entities is shown. In accordance with above embodiments, process300may be implemented in system100as depicted inFIG.1. For example, process300may be performed by one or more computing devices utilized by permission optimizer102, entity104, or resource108. In some embodiments, at least some of the steps depicted in process300may be carried out in a sandboxed environment based on replicated or simulated instances of one or more entities and resources, including, e.g., entity104and resource108. It is contemplated that carrying out process300in such a sandboxed environment may further improve security and may provide a more time efficient development of the least-privilege profiles for the entities, allowing process300to take into consideration all potential and relevant use cases for an entity in the sandboxed environment prior to deployment in production. At step302, process300may access a set of permissions associated with a network entity, e.g., entity104. In some embodiments, the set of permissions associated with entity104may be represented as an N-dimensional vector, where N is the number of all possible permissions in a platform (e.g., system100) that includes entity104. In some embodiments, if the platform includes different categories of entities (e.g., different categories of users, accounts, machines, applications, or the like), N may be the number of all permissions associated with a particular entity category that includes entity104. For example, if the platform includes “users” (a first entity category) and “administrators” (a second entity category), N may be the number of all permissions associated with “users” if entity104is one of the “users” (and not one of the “administrators”). In some embodiments, the N-dimensional vector may be implemented as a binary vector, decimal vector, hexadecimal vector, or various other types of numerical, alphabetical, or alphanumeric vectors. In the example of a binary vector, the ith bit in the vector may be set to 0 if permission number i is not granted to entity104. Otherwise, if entity104is granted permission number i, the ith bit in the vector may be set to 1. For example, suppose there are seven possible permissions that can be set for entity104, and suppose that entity104is granted permissions to all but the 1st of the seven possible permissions, then the N-dimensional vector representing this set of permissions may be set as follows:

                 Bits
Vector Name      1  2  3  4  5  6  7
Starting Vector  0  1  1  1  1  1  1

For illustrative purposes, the vector illustrated above may be referred to as a Starting Vector, which may serve as the starting point for the iterative process that can be utilized to develop the least-privilege profile for entity104.
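For a concrete picture of the Starting Vector, the permission set can be held as a plain list of 0/1 flags, one per possible permission. The permission names in the sketch below are invented for illustration and are not part of the disclosure.

```python
# Sketch: the Starting Vector as a binary permission vector.
# Permission names are illustrative; position i corresponds to permission i+1.
PERMISSIONS = ["read", "write", "delete", "list", "share", "export", "admin"]  # N = 7

starting_vector = [0, 1, 1, 1, 1, 1, 1]   # all but permission 1 granted

def granted(vector):
    """Return the names of the permissions whose bit is set to 1."""
    return [name for name, bit in zip(PERMISSIONS, vector) if bit]

print(granted(starting_vector))  # ['write', 'delete', 'list', 'share', 'export', 'admin']
```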
In some embodiments, entity104may be granted the permissions shown above by an administrator (e.g., a human IT manager) at the beginning of the iterative process. Alternatively, entity104may be granted the permissions randomly or systematically (e.g., by setting the permissions according to a default setting) at the beginning of the iterative process. It is to be understood that the permissions shown above are depicted merely for illustrative purposes and are not meant to be limiting. For example, in some embodiments, entity104may be granted permissions to all seven possible permissions at the beginning of the iterative process or to some of the seven possible permissions at the beginning of the iterative process. At step304, process300may obtain a set of permission vectors for the network entity based on the set of permissions represented by the Starting Vector. In some embodiments, process300may obtain the set of permission vectors by generating the set of permission vectors, and in some embodiments, the set of permission vectors may be randomly generated. For example, process300may obtain the set of permission vectors by randomly changing (e.g., flipping) one or more bits of the Starting Vector. Continuing with the example above, process300may obtain a set of four permission vectors, referred to as Candidates 1, 2, 3, and 4, as shown below:

                 Bits
Vector Name      1  2  3  4  5  6  7
Candidate 1      0  1  0  1  1  0  1
Candidate 2      0  1  1  1  0  0  1
Candidate 3      0  1  0  1  1  1  1
Candidate 4      0  0  1  1  1  1  1

In this example, Candidate 1 may be obtained by randomly flipping some bits, e.g., bits3and6, of the Starting Vector. Similarly, Candidate 2 may be obtained by randomly flipping some bits, e.g., bits5and6, of the Starting Vector. Likewise, Candidate 3 may be obtained by flipping bit3of the Starting Vector and Candidate 4 may be obtained by flipping bit2of the Starting Vector. It is contemplated that process300may obtain additional permission vectors in similar manners. At step306, process300may evaluate each permission within the set of permission vectors. In some embodiments, process300may carry out the evaluation at least partially in terms of least-privilege fitness of each permission within the set of permission vectors, based on whether each permission within the set of permission vectors provides sufficient privileges for entity104to perform an action, including, e.g., accessing resource108. For example, process300may determine whether granting entity104permissions according to permission vector Candidate 1 (e.g., granting entity104permissions to all but the 1st and 3rd of the seven possible permissions) provides sufficient privileges for entity104to perform the action. If so, process300may recognize permission vector Candidate 1 as a successful candidate, which can be kept and utilized to facilitate subsequent creations of additional candidates. On the other hand, if process300determines that granting entity104permissions according to a permission vector, e.g., Candidate 4, does not provide sufficient privileges for entity104to perform the action, process300may recognize permission vector Candidate 4 as an unsuccessful candidate, which can be eliminated and not utilized to facilitate subsequent creations of candidates. In some embodiments, process300may also carry out the evaluation based on the number of permissions in the set of permission vectors. Continuing with the example above, suppose Candidates 1, 2, and 3 are all recognized as successful candidates.
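The least-privilege fitness check of step306may be sketched as below. The set of bits the action truly requires is hypothetical (here permissions 2 and 4, which is consistent with how the example ultimately converges on Candidate 8); a real evaluation would attempt the action itself, for example in the sandboxed environment mentioned above.

```python
# Sketch of the fitness check in step 306: a candidate is "successful" only if it
# still grants every permission the action needs. REQUIRED_BITS is hypothetical
# (0-indexed bits 1 and 3, i.e. permissions 2 and 4).
REQUIRED_BITS = {1, 3}

def action_succeeds(vector):
    """Stand-in for actually attempting the action with this permission set."""
    return all(vector[i] == 1 for i in REQUIRED_BITS)

candidates = {
    "Candidate 1": [0, 1, 0, 1, 1, 0, 1],
    "Candidate 2": [0, 1, 1, 1, 0, 0, 1],
    "Candidate 3": [0, 1, 0, 1, 1, 1, 1],
    "Candidate 4": [0, 0, 1, 1, 1, 1, 1],
}
for name, vector in candidates.items():
    status = "successful" if action_succeeds(vector) else "unsuccessful"
    print(name, status, "- permissions granted:", sum(vector))
```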
In that situation, process300may still determine that Candidates 1 and 2 are better candidates than Candidate 3 because Candidates 1 and 2 each have four bits set to 1 while Candidate 3 has five bits set to 1. In other words, Candidates 1 and 2 each require less privilege compared to Candidate 3, making Candidates 1 and 2 better candidates for satisfying the least-privilege fitness. At step308, process300may select a group of the set of permission vectors based on the evaluation. In some embodiments, process300may select the group of the set of permission vectors by selecting a determined number or a determined percentage of the set of permission vectors. In some embodiments, process300may select the group of the set of permission vectors from permission vectors that have been recognized as successful. Continuing with the example above, suppose process300is configured to select 50% of permission vectors that have been recognized as successful, then process300may select Candidates 1 and 2 because Candidates 1 and 2 have been recognized as successful candidates and they require less privilege compared to other successful candidates that have been recognized as successful, including, e.g., Candidate 3 and the Starting Vector. At step310, process300may create a new set of permission vectors for entity104based on at least the selected group of the set of permission vectors. Continuing with the example above, process300may use the selected group of the set of permission vectors, Candidates 1 and 2, to create a new set of three permission vectors, referred to as Candidates 5, 6, and 7, shown below:

                 Bits
Vector Name      1  2  3  4  5  6  7
Candidate 5      0  1  0  1  1  0  0
Candidate 6      0  1  1  1  0  0  0
Candidate 7      0  1  1  0  0  0  1

In this example, Candidate 5 may be obtained by randomly flipping some bits, e.g., bit7, of Candidate 1. Similarly, Candidate 6 may be obtained by flipping bit7of Candidate 2 and Candidate 7 may be obtained by flipping bit4of Candidate 2. It is contemplated that process300may obtain additional permission vectors in similar manners. At step312, process300may iterate the evaluation for the new set of permission vectors. In some embodiments, process300may iterate the evaluation step306described above. For example, process300may determine whether granting entity104permissions according to permission vector Candidate 5 provides sufficient privileges for entity104to perform the action. If so, process300may recognize permission vector Candidate 5 as a successful candidate, which can be kept and utilized to facilitate subsequent creations of additional candidates. On the other hand, if process300determines that granting entity104permissions according to a permission vector, e.g., Candidate 7, does not provide sufficient privileges for entity104to perform the action, process300may recognize permission vector Candidate 7 as an unsuccessful candidate, which can be eliminated and not utilized to facilitate subsequent creations of candidates. Process300may also evaluate the new set of permission vectors at least partially based on the number of permissions in the set of permission vectors. Continuing with the example above, suppose Candidates 5 and 6 are both recognized as successful candidates. In this example, process300may determine that Candidates 5 and 6 are better candidates than Candidates 1 and 2 because Candidates 5 and 6 each have three bits set to 1 while Candidates 1 and 2 each have four bits set to 1.
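Steps308and310may be sketched as selecting the better half of the successful candidates (fewest granted permissions first) and deriving a new generation from them by flipping randomly chosen bits. As before, action_succeeds and its required bits are hypothetical placeholders.

```python
import math
import random

# Sketch of steps 308-310: keep the better half of the successful candidates,
# then create a new generation by mutating the survivors.
REQUIRED_BITS = {1, 3}                                   # hypothetical required permissions

def action_succeeds(vector):
    return all(vector[i] == 1 for i in REQUIRED_BITS)

def select(candidates, keep_fraction=0.5):
    successful = [c for c in candidates if action_succeeds(c)]   # Candidate 4 drops out
    successful.sort(key=sum)                                     # fewer 1-bits first
    return successful[:math.ceil(len(successful) * keep_fraction)]

def next_generation(parents, size=3, rng=random):
    children = []
    for _ in range(size):
        child = rng.choice(parents)[:]
        child[rng.randrange(len(child))] ^= 1            # flip one randomly chosen bit
        children.append(child)
    return children

candidates = [[0, 1, 0, 1, 1, 0, 1], [0, 1, 1, 1, 0, 0, 1],
              [0, 1, 0, 1, 1, 1, 1], [0, 0, 1, 1, 1, 1, 1]]   # Candidates 1-4
parents = select(candidates)         # Candidates 1 and 2 survive the selection
print(parents)
print(next_generation(parents))      # e.g. vectors resembling Candidates 5-7
```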
In other words, Candidates 5 and 6 each require less privilege compared to Candidates 1 and 2, making Candidates 5 and 6 better candidates for satisfying the least-privilege fitness. Process300may continue through steps308and310. For example, process300may select Candidates 5 and 6 as the group of the set of permission vectors based on the evaluation and create a new set of permission vectors for entity104based on Candidates 5 and 6. For illustrative purposes, suppose the new set of permission vectors includes one member, Candidate 8, obtained by flipping bit5of Candidate 5, as shown below:

                 Bits
Vector Name      1  2  3  4  5  6  7
Candidate 8      0  1  0  1  0  0  0

Process300may then iterate the evaluation for the new set of permission vectors and determine whether granting entity104permissions according to permission vector Candidate 8 provides sufficient privileges for entity104to perform the action. If so, process300may recognize permission vector Candidate 8 as a successful candidate, which can be kept and utilized to facilitate subsequent creations of additional candidates. On the other hand, if process300determines that granting entity104permissions according to Candidate 8 does not provide sufficient privileges for entity104to perform the action, process300may recognize permission vector Candidate 8 as an unsuccessful candidate, which can be eliminated and not utilized to facilitate subsequent creations of candidates. Process300may also evaluate the new set of permission vectors at least partially based on the number of permissions in the set of permission vectors. Continuing with the example above, suppose Candidate 8 is recognized as a successful candidate. In that case, process300may determine that Candidate 8 is a better candidate than Candidates 5 and 6 because Candidate 8 only has two bits set to 1 while Candidates 5 and 6 each have three bits set to 1. In other words, Candidate 8 requires less privilege compared to Candidates 5 and 6, making Candidate 8 a better candidate for satisfying the least-privilege fitness. Subsequently, process300may continue to create a new set of permission vectors based on Candidate 8 and iterate the evaluation for the new set of permission vectors as described above for one or more additional iterations. In some embodiments, following at least one instance of the iteration, process300may determine, at step314, whether an iteration termination condition has been met. In some embodiments, process300may determine whether the iteration termination condition has been met based on the number of permissions in the set of permission vectors ceasing to decrease. For example, if process300cannot create any new successful candidate that requires less privilege compared to Candidate 8 for a predetermined number of iterations or a predetermined period of time, process300may determine that an iteration termination condition has been met. And if so, at step316, process300may terminate the iteration based on the iteration termination condition being met. In some embodiments, upon termination of the iteration, process300may identify a successful candidate that requires the least privilege compared to all successful candidates as the least privilege candidate. And in some embodiments, process300may set the least-privilege profile for entity104according to the identified least privilege candidate. Continuing with the example above, suppose process300cannot create any new successful candidate that requires less privilege compared to Candidate 8 for a predetermined number of iterations.
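Taken together, steps304through316can be pictured as a loop that keeps mutating the best surviving candidate and stops once no smaller successful candidate has appeared for a predetermined number of iterations. The sketch below is one minimal way to express that loop; the population size, patience value, and required bits are all invented for illustration.

```python
import random

def evolve(start, action_succeeds, max_stale=20, rng=random):
    """Shrink a permission vector until no smaller successful candidate has been
    found for max_stale consecutive iterations (the termination check in step 314)."""
    def mutate(v):
        c = v[:]
        c[rng.randrange(len(c))] ^= 1                 # flip one randomly chosen bit
        return c

    best, stale = start[:], 0
    while stale < max_stale:
        population = [mutate(best) for _ in range(8)]                  # step 310
        improved = [c for c in population
                    if action_succeeds(c) and sum(c) < sum(best)]      # step 312
        if improved:
            best, stale = min(improved, key=sum), 0   # smaller valid set found
        else:
            stale += 1                                # no progress this iteration
    return best

required = {1, 3}   # hypothetical: the bits the action genuinely needs
result = evolve([0, 1, 1, 1, 1, 1, 1], lambda v: all(v[i] for i in required))
print(result)       # converges toward [0, 1, 0, 1, 0, 0, 0], i.e. Candidate 8
```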
In that case, process300may terminate the iteration based on the iteration termination condition being met and identify Candidate 8 as the candidate that requires the least privilege. Process300may therefore determine that granting entity104permissions according to permission vector Candidate 8 (e.g., granting entity104permissions to only the 2ndand 4th of the seven possible permissions) can provide sufficient privileges for entity104to perform the action. Process300may then set the least-privilege profile for entity104according to permission vector Candidate 8. As described above, process300can use the iterative solution to identify the minimal valid set of permissions needed for entity104. In each iteration, process300may evaluate a set of permission vectors, select one or more vectors that provide the best least-privilege fitness for that iteration, and continue to use the selected vectors to create additional candidate vectors that can help further minimize the permissions needed for entity104. Process300may repeat this iterative solution until process300determines that an iteration termination condition has been met, allowing process300to identify a least privilege candidate that can be used to set the least-privilege profile for entity104. It is to be understood that process300may utilize various techniques to create candidate vectors at steps304and/or310. For example, as described above, process300may take a candidate vector recognized in a previous iteration and randomly change (e.g., flip) one or more bits of the candidate vector to create one or more additional candidate vectors. Alternatively, process300may select two or more candidate vectors and combine permissions from the selected vectors to create additional candidate vectors. Additionally, process300may select two or more candidate vectors recognized in a previous iteration as parents and swap parts of selected parents to produce new candidate vectors. For example, while the illustration above described Candidate 8 as having been created by flipping bit5of Candidate 5, Candidate 8 may also be created by selecting Candidates 5 and 6 as parents and swapping the last three bits of Candidate 5 with the last three bits of Candidate 6. In other words, process300may concatenate permissions from Candidates 5 and 6 by concatenating the first four bits of Candidate 5 with the last three bits of Candidate 6. Furthermore, process300may utilize other techniques to create candidate vectors without departing from the spirit and scope of the present disclosure. It is also to be understood that process300may utilize various techniques to evaluate the permission vectors at step306. For example, process300may utilize a genetic algorithm optimization technique and evaluate each permission within the set of permission vectors in terms of least-privilege fitness. And as described above, in some embodiments, the evaluation may be based on whether each permission within the set of permission vectors provides sufficient privileges for the network entity to perform an action, and/or a number of permissions in the set of permission vectors (less being better). Alternatively, process300may utilize a gradient descent optimization technique, in which process300may start from an initial point and iteratively proceed in a direction of the lowest gradient until process300gets to a minimal point. 
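The parent-swapping alternative described above is essentially a one-point crossover. The sketch below reproduces Candidate 8 by concatenating the first four bits of Candidate 5 with the last three bits of Candidate 6, exactly as in the example; the function itself is illustrative only.

```python
def crossover(parent_a, parent_b, cut):
    """One-point crossover: bits [0, cut) from parent_a, bits [cut, N) from parent_b."""
    return parent_a[:cut] + parent_b[cut:]

candidate_5 = [0, 1, 0, 1, 1, 0, 0]
candidate_6 = [0, 1, 1, 1, 0, 0, 0]

# Swapping the last three bits of Candidate 5 for those of Candidate 6
# reproduces Candidate 8 from the example above.
candidate_8 = crossover(candidate_5, candidate_6, cut=4)
print(candidate_8)   # [0, 1, 0, 1, 0, 0, 0]
```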
For example, process300may take a function of N variables (N being the number of possible permissions that can be set for entity104) and start a gradient descent from an initial point represented by the Starting Vector described above. Process300may then take repeated steps in the opposite direction of the gradient (or approximate gradient) of the function to iteratively minimize the value of the function. In some embodiments, process300may repeat the process a few times from different starting points to avoid arriving at a local minimum. In this manner, process300may utilize a gradient descent optimization technique or a genetic algorithm optimization technique to find a global minimum, which can be utilized to set the least-privilege profile for entity104, as described above. In some embodiments, process300may identify a failure condition based on iterating the evaluation for the new set of permission vectors. For example, as described above, process300may identify Candidate 4 as a failure because Candidate 4 does not provide sufficient privileges for entity104to perform the action. In some embodiments, if all candidates created for a particular iteration fail to provide sufficient privileges for entity104to perform the action, process300may recognize that iteration as a failure and revert, based on the identified failure condition, to a previous iteration of the evaluation for the new set of permission vectors. In this manner, process300can create a different set of permission vectors and evaluate the permission vectors again. Furthermore, in some embodiments, process300may add at least one permission attribute to the previous iteration of the evaluation for the new set of permission vectors. For instance, if a candidate vector fails to produce any successful offspring in subsequent iterations, process300may randomly change one or more bits of the candidate vector from 0 to 1 in an effort to facilitate subsequent creations of additional candidates. It is to be understood that the references to the action of accessing resource108are presented as examples and are not meant to be limiting. It is contemplated that process300may be utilized to iteratively develop least-privilege profiles for entities configured to perform various types of actions, including actions to be performed locally at the entities or remotely on other resources. It is also to be understood that the references to entity104and resource108described in the examples above are not meant to be limiting. It is contemplated that system100may be configured to support multiple entities and multiple resources without departing from the spirit and scope of the present disclosure. It is to be understood that the disclosed embodiments are not necessarily limited in their application to the details of construction and the arrangement of the components and/or methods set forth in the following description and/or illustrated in the drawings and/or the examples. The disclosed embodiments are capable of variations, or of being practiced or carried out in various ways. For instance, in some embodiments, process300may be utilized to iteratively develop profiles that may not necessarily be the least-privilege, but may satisfy certain considerations or predefined rules that make these profiles useful. 
For example, as will be described further below, in some embodiments, predefined rules may be established to facilitate finding of an optimal profile that is simple and efficient to set up, even if the profile may grant more permissions compared to a true least-privilege profile. In another example, the predefined rules may be established based on similarities between a candidate profile being evaluated and one or more existing profiles that have already been approved. In this manner, the process may facilitate finding of a profile similar or identical to an approved profile, which can further improve efficiencies. In some embodiments, to facilitate finding of an optimal profile, the predefined rules may be established based on the number of permission entries specified in a candidate profile and the number of permissions granted by the candidate profile. Furthermore, in some embodiments, the predefined rules may be established to work in conjunction with a fitness function to find an optimal profile that has the lowest combined sum of the number of permission entries and permissions granted. For example, to facilitate finding of an optimal profile in a cloud-based network such as AWS™, Azure™ and the like, various candidate profiles may be created to express permissions in terms of services and actions. Suppose that one of the candidate profiles grants an entity (e.g., a user) permissions to perform actions "a" and "b" for service "S3" and action "a" for service "EC2," that profile may express these permissions as {Service:Action} key-value pairs: {S3:a}, {S3:b}, and {EC2:a}. In this example, the number of permission entries (i.e., the number of key-value pairs) is 3, and the number of permissions granted is 3, resulting in a sum of 6, which may be utilized as the fitness score of this particular candidate profile. Continuing with the example above, suppose that another candidate profile grants the entity permissions to perform all actions with respect to service "S3" but no actions with respect to service "EC2," that candidate profile may express these permissions as: {S3:*}. In this example, the number of permission entries is 1, and the number of permissions granted equals the total number of all possible actions users can perform with respect to service "S3." Suppose that the total number of all possible actions users can perform with respect to service "S3" is 20 (actions "a" and "b" being 2 out of the 20 possible actions that can be performed with respect to service "S3"), then the fitness score of this candidate profile is 1+20=21. Continuing with the example above, suppose that still another candidate profile grants the entity permissions to perform 18 specific actions with respect to service "S3" and no actions with respect to service "EC2," that candidate profile may express these permissions as 18 key-value pairs: {S3:a}, {S3:b}, . . . , {S3:r}. In this example, the number of permission entries is 18, and the number of permissions granted is also 18, resulting in a sum of 36, representing the fitness score of this particular candidate profile. For illustrative purposes, suppose that each of these three candidate profiles described above may provide sufficient privileges for the entity to perform its intended operations, then process300may consider the first candidate profile, expressed as {S3:a}, {S3:b}, and {EC2:a}, to be a better fit than the other two because it has the lowest fitness score amongst the three candidate profiles.
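The scoring rule used in this example, fitness = number of permission entries + number of permissions granted, may be sketched as below. The per-service action counts are hypothetical (20 actions for "S3" and 10 for "EC2", chosen only so the numbers match the example above).

```python
# Sketch: fitness = number of permission entries + number of permissions granted.
# ACTION_COUNTS is a hypothetical catalogue of how many actions each service supports.
ACTION_COUNTS = {"S3": 20, "EC2": 10}

def permissions_granted(entries):
    """Count concrete permissions, expanding a wildcard to every action of that service."""
    total = 0
    for service, action in entries:
        total += ACTION_COUNTS[service] if action == "*" else 1
    return total

def fitness(entries):
    return len(entries) + permissions_granted(entries)

profile_1 = [("S3", "a"), ("S3", "b"), ("EC2", "a")]   # 3 entries, 3 permissions -> 6
profile_2 = [("S3", "*")]                              # 1 entry, 20 permissions  -> 21
profile_3 = [("S3", chr(c)) for c in range(ord("a"), ord("r") + 1)]  # 18 entries -> 36

for p in (profile_1, profile_2, profile_3):
    print(fitness(p))    # 6, 21, 36; lower is a better fit
```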
It is to be understood that defining the fitness function as the sum of the number of permission entries specified in a candidate profile and the number of permissions granted by the candidate profile is merely provided as one example and is not meant to be limiting. It is contemplated that the fitness function, denoted as F( ), may be generally defined as F(X_in) = Σ a_i·x_i, where a_i represents a weight and x_i represents a factor to be considered in calculating the fitness score of the profile. In the above example, the weight a_i was set to 1 and x_i was used to represent the number of key-value pairs expressed in the candidate profiles and the number of permissions granted in these candidate profiles. It is to be understood that the weight a_i may vary based on the action types or other considerations. For example, in some embodiments, a weight is assigned to at least one of the number of permission entries represented by each permission vector and the number of permissions granted by each permission vector, and in some embodiments, a read action may carry a small (or zero) weight, a write action may carry a greater weight compared to a read action, a delete action may carry a greater weight compared to a write action, and a grant of all actions ("*") may carry an even greater weight compared to a delete action, and so on. Likewise, it is to be understood that the factors x_i may also vary. Such factors may include, e.g., the number of resources the entity is granted access to and the sensitivity of each resource. Other factors may also include, e.g., certain qualitative measurements. For example, in some embodiments, the fitness function F( ) may be defined to favor profiles that use a smaller number of permission entries (e.g., key-value pairs) to express. In this manner, a profile that grants all read access to service "S3," expressed as {S3:get*}, may receive a lower (i.e., better) fitness score compared to a profile that lists the read access permissions more granularly as {S3:getA}, {S3:getB}, {S3:getC}, {S3:getD}. And in some embodiments, process300may divide the permissions into a few categories, including, e.g., read, write, execute, delete, etc. In this manner, for each given candidate profile, process300may determine how many permissions are granted within each category and score the candidate profiles accordingly. In some embodiments, candidate profiles may be expressed using permission vectors, which may be iteratively created and evaluated as described above. For example, in some embodiments, a permission vector may include a bit for each action that can be granted for each service. For illustrative purposes, suppose there are two services, "S3" and "EC2," and each service allows actions "a" and "b" as the only possible actions that can be granted, then a corresponding candidate permission vector may include 4 bits representing permissions granted to the 4 actions:

Bits             1      2      3      4
Service:Action   S3:a   S3:b   EC2:a  EC2:b
Vector           0/1    0/1    0/1    0/1

In some embodiments, the permission vector may also be defined to include permissions granted more broadly, e.g., using all actions ("*") or all read actions ("get*") and the like.
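One possible instantiation of the weighted form F(X_in) = Σ a_i·x_i is sketched below, with heavier weights for riskier action types. The specific weight values are invented for illustration and are not prescribed by the disclosure.

```python
# Sketch of a weighted fitness function F(X) = sum(a_i * x_i).
# The weights are illustrative: riskier action types cost more.
WEIGHTS = {"read": 1, "write": 2, "delete": 4, "*": 8, "entry": 1}

def weighted_fitness(entries):
    score = 0
    for service, action in entries:
        score += WEIGHTS["entry"]                   # each key-value pair costs a little
        if action == "*":
            score += WEIGHTS["*"]
        elif action.startswith("get"):
            score += WEIGHTS["read"]
        elif action.startswith("delete"):
            score += WEIGHTS["delete"]
        else:
            score += WEIGHTS["write"]
    return score

granular = [("S3", "getA"), ("S3", "getB"), ("S3", "getC"), ("S3", "getD")]
broad    = [("S3", "get*")]
print(weighted_fitness(granular), weighted_fitness(broad))   # 8 vs. 2 with these weights
```

With this particular weighting, the compact {S3:get*} profile scores better than listing each read action separately, which mirrors a fitness function that favors fewer key-value pairs; raising the wildcard weight would reverse that preference.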
Continuing with the example above, suppose both services "S3" and "EC2" support grant of all actions ("*"); then a corresponding candidate permission vector may include 6 bits representing permissions granted to the 4 specific actions plus the 2 all-actions grants:

Bits             1      2      3      4      5      6
Service:Action   S3:a   S3:b   EC2:a  EC2:b  S3:*   EC2:*
Vector           0/1    0/1    0/1    0/1    0/1    0/1

In this manner, a profile that grants {S3:*} may be expressed as Vector A below:

Bits             1      2      3      4      5      6
Service:Action   S3:a   S3:b   EC2:a  EC2:b  S3:*   EC2:*
Vector A         0      0      0      0      1      0

which would provide the same amount of access as a profile that grants {S3:a} and {S3:b}, expressed as Vector B below:

Bits             1      2      3      4      5      6
Service:Action   S3:a   S3:b   EC2:a  EC2:b  S3:*   EC2:*
Vector B         1      1      0      0      0      0

However, the profile that grants {S3:*} only has one permission entry specified therein, whereas the profile that grants {S3:a} and {S3:b} has two permission entries. Therefore, the two profiles may score differently, depending on how the fitness function is defined. If the fitness function favors profiles that use a smaller number of key-value pairs, the profile that grants {S3:*} (expressed as Vector A) may score lower (i.e., better) compared to the profile that grants {S3:a} and {S3:b} individually (expressed as Vector B). On the other hand, if the fitness function assigns a significant weight to a grant of all actions ("*"), i.e., the fitness function favors a more granular definition of permissions, then the profile that grants {S3:a} and {S3:b} individually may score lower (i.e., better) compared to the profile that grants {S3:*}.

In some embodiments, process 300 may also create a translated vector if a permission is granted using all actions ("*"). In some embodiments, the translated vector may expand the permissions granted using all actions ("*"), allowing process 300 to determine the specific number of permissions granted by the original vector. Continuing with the example above, process 300 may create a translated vector corresponding to Vector A as follows, allowing process 300 to determine that the specific number of permissions granted by Vector A is 2:

Bits                  1      2      3      4      5      6
Service:Action        S3:a   S3:b   EC2:a  EC2:b  S3:*   EC2:*
Translated Vector A   1      1      0      0      N/A    N/A

It is to be understood that because Vector B does not grant any permissions using all actions ("*"), process 300 does not need to create a translated vector for Vector B. In some embodiments, process 300 may determine the number of permission entries specified in a candidate profile based on the number of bits set to "1" in the corresponding permission vector that represents the candidate profile. Process 300 may also determine the number of permissions granted by the permission vector representing the candidate profile based on the number of bits set to "1" in the translated vector. In this manner, process 300 may utilize the predefined rules in conjunction with the fitness function to find an optimal profile. For example, in some embodiments, process 300 may perform an evaluation in terms of a fitness of each permission vector within a set of candidate permission vectors to facilitate a selection of a permission vector with a smaller sum of the number of permission entries represented and the number of permissions granted, as described above. Additionally and/or alternatively, in some embodiments, the predefined rules may be established based on similarities between each candidate profile being evaluated and one or more approved profiles. 
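For illustration only, the translated-vector expansion and the entry/permission scoring just described may be sketched as follows; the bit layout mirrors the 6-bit example above, and nothing beyond that example is implied.

```python
# Sketch of the translated-vector expansion and scoring described above.
SPECIFIC = ["S3:a", "S3:b", "EC2:a", "EC2:b"]
WILDCARDS = {"S3:*": ["S3:a", "S3:b"], "EC2:*": ["EC2:a", "EC2:b"]}
LAYOUT = SPECIFIC + list(WILDCARDS)      # [S3:a, S3:b, EC2:a, EC2:b, S3:*, EC2:*]

def translate(vector):
    """Expand any wildcard bits into the specific actions they grant."""
    granted = set()
    for bit, name in zip(vector, LAYOUT):
        if bit:
            granted.update(WILDCARDS.get(name, [name]))
    return [1 if name in granted else 0 for name in SPECIFIC]

def score(vector):
    """Entries set in the original vector plus permissions in the translated vector."""
    return sum(vector) + sum(translate(vector))

vector_a = [0, 0, 0, 0, 1, 0]   # {S3:*}:          1 entry + 2 permissions = 3
vector_b = [1, 1, 0, 0, 0, 0]   # {S3:a}, {S3:b}:  2 entries + 2 permissions = 4
print(score(vector_a), score(vector_b))
```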
For example, in some embodiments, process 300 may keep creating new permission vectors during the iterative process, which may determine a similarity of each new permission vector to one or more approved permission vectors. If one of the new vectors matches a vector that has already been used to create a user profile, that match may trigger a termination condition for process 300. In this manner, process 300 may be able to replicate or create a similar profile based on the matching vector, further improving the efficiency of process 300. It is to be understood that the match referenced above does not need to be exact. For example, in some embodiments, a threshold (e.g., the number of bits that are allowed to be different) may be defined such that if the difference between a new permission vector and one that has already been used to create a user profile is under the threshold, the match may be considered close (or close enough) to trigger the termination condition. In some embodiments, the match may be performed against one or more existing groups of approved vectors, and in some embodiments, depending on whether the new vector matches with any of the groups (and if so, which group the new vector matches with), process 300 may determine whether the termination condition is triggered. For example, in some embodiments, if a new vector does not match with any of the groups of approved vectors, the new vector may be eliminated immediately. On the other hand, if the new vector matches with a group of approved vectors, process 300 may determine whether there exists another group that grants even fewer permissions. If so, process 300 may continue its iterative process because a potentially better permission vector (in terms of fitness scores) may exist. Otherwise (i.e., the new vector matches with a group that already grants fewer permissions compared to other groups), process 300 may determine that the termination condition has been triggered, and create a profile based on the new vector accordingly.

It is to be understood that the disclosed embodiments may be implemented in a system, a method, and/or a computer program product. The computer program product may include a computer readable storage medium (or media) having computer readable program instructions thereon for causing a processor to carry out aspects of the present disclosure. The computer readable storage medium can be a tangible device that can retain and store instructions for use by an instruction execution device. The computer readable storage medium may be, for example, but is not limited to, an electronic storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the foregoing. A non-exhaustive list of more specific examples of the computer readable storage medium includes the following: a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or Flash memory), a static random access memory (SRAM), a portable compact disc read-only memory (CD-ROM), a digital versatile disk (DVD), a memory stick, a floppy disk, a mechanically encoded device such as punch-cards or raised structures in a groove having instructions recorded thereon, and any suitable combination of the foregoing. 
A computer readable storage medium, as used herein, is not to be construed as being transitory signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through a waveguide or other transmission media (e.g., light pulses passing through a fiber-optic cable), or electrical signals transmitted through a wire. Computer readable program instructions described herein can be downloaded to respective computing/processing devices from a computer readable storage medium or to an external computer or external storage device via a network, for example, the Internet, a local area network, a wide area network and/or a wireless network. The network may comprise copper transmission cables, optical transmission fibers, wireless transmission, routers, firewalls, switches, gateway computers and/or edge servers. A network adapter card or network interface in each computing/processing device receives computer readable program instructions from the network and forwards the computer readable program instructions for storage in a computer readable storage medium within the respective computing/processing device. Computer readable program instructions for carrying out operations of the present disclosure may be assembler instructions, instruction-set-architecture (ISA) instructions, machine instructions, machine dependent instructions, microcode, firmware instructions, state-setting data, or either source code or object code written in any combination of one or more programming languages, including an object oriented programming language such as Smalltalk, C++ or the like, and conventional procedural programming languages, such as the “C” programming language or similar programming languages. The computer readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server. In the latter scenario, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider). In some embodiments, electronic circuitry including, for example, programmable logic circuitry, field-programmable gate arrays (FPGA), or programmable logic arrays (PLA) may execute the computer readable program instructions by utilizing state information of the computer readable program instructions to personalize the electronic circuitry, in order to perform aspects of the present disclosure. Aspects of the present disclosure are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the disclosure. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer readable program instructions. 
These computer readable program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. These computer readable program instructions may also be stored in a computer readable storage medium that can direct a computer, a programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer readable storage medium having instructions stored therein comprises an article of manufacture including instructions which implement aspects of the function/act specified in the flowchart and/or block diagram block or blocks. The computer readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other device to cause a series of operational steps to be performed on the computer, other programmable apparatus or other device to produce a computer implemented process, such that the instructions which execute on the computer, other programmable apparatus, or other device implement the functions/acts specified in the flowchart and/or block diagram block or blocks. The flowcharts and block diagrams in the Figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowcharts or block diagrams may represent a software program, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems that perform the specified functions or acts, or combinations of special purpose hardware and computer instructions. The descriptions of the various embodiments of the present disclosure have been presented for purposes of illustration, but are not intended to be exhaustive or limited to the embodiments disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described embodiments. The terminology used herein was chosen to best explain the principles of the embodiments, the practical application or technical improvement over technologies found in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein. 
It is expected that during the life of a patent maturing from this application many relevant virtualization platforms, virtualization platform environments, trusted cloud platform resources, cloud-based assets, protocols, communication networks, security tokens and authentication credentials will be developed, and the scope of these terms is intended to include all such new technologies a priori. It is appreciated that certain features of the disclosure, which are, for clarity, described in the context of separate embodiments, may also be provided in combination in a single embodiment. Conversely, various features of the disclosure, which are, for brevity, described in the context of a single embodiment, may also be provided separately or in any suitable subcombination or as suitable in any other described embodiment of the disclosure. Certain features described in the context of various embodiments are not to be considered essential features of those embodiments, unless the embodiment is inoperative without those elements. Although the disclosure has been described in conjunction with specific embodiments thereof, it is evident that many alternatives, modifications and variations will be apparent to those skilled in the art. Accordingly, it is intended to embrace all such alternatives, modifications and variations that fall within the spirit and broad scope of the appended claims. | 48,632 |
11943229 | In the accompanying drawings, an underlined number is employed to represent an item over which the underlined number is positioned or an item to which the underlined number is adjacent. A non-underlined number relates to an item identified by a line linking the non-underlined number to the item. When a number is non-underlined and accompanied by an associated arrow, the non-underlined number is used to identify a general item at which the arrow is pointing. DETAILED DESCRIPTION OF EMBODIMENTS The following detailed description illustrates embodiments of the present disclosure and ways in which they can be implemented. Although some modes of carrying out the present disclosure have been disclosed, those skilled in the art would recognize that other embodiments for carrying out or practicing the present disclosure are also possible. In one aspect, an embodiment of the present disclosure provides a system for managing access to a plurality of remote digital platforms, wherein the system comprises a plurality of platform databases, wherein a given platform database in the plurality of platform databases is associated with a given remote digital platform and stores metadata related thereto, the system further comprising:
a user device, wherein an existing user associated with the user device generates a user-request for accessing a given remote digital platform, and provides a remote digital platform identifier for the given remote digital platform;
an access-control database comprising information relating to roles and permissions associated with a plurality of users;
a key-store database comprising private keys associated with the plurality of users; and
a server arrangement, wherein the server arrangement:
identifies a given remote digital platform server associated with the remote digital platform identifier using the plurality of platform databases;
obtains credentials from the existing user via the user device and verifies the credentials;
determines roles and permissions associated with the existing user by accessing the access-control database;
retrieves a private key associated with the existing user by accessing the key-store database;
verifies the private key associated with the existing user with a public key stored at the given remote digital platform server; and
enables a data communication network between the given remote digital platform server and the user device. 
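Purely for illustration, the sequence of checks performed by the server arrangement may be sketched as follows; all function names, data shapes, and the placeholder credential and key checks below are hypothetical and are not part of the disclosed system.

```python
# Illustrative sketch only; every name, data shape, and check below is hypothetical.
PLATFORM_DATABASES = {"orders.example.com": {"server": "srv-1", "public_key": "PUB-alice"}}
ACCESS_CONTROL_DATABASE = {"alice": {"role": "standard", "permissions": {"access", "file_transfer"}}}
KEY_STORE_DATABASE = {"alice": "PRIV-alice"}

def key_pair_matches(private_key, public_key):
    # Placeholder stand-in for real key verification (e.g., a signed challenge).
    return private_key.replace("PRIV", "PUB") == public_key

def handle_user_request(user, credentials, platform_identifier, request_kind):
    platform = PLATFORM_DATABASES.get(platform_identifier)    # identify the platform server
    if platform is None:
        raise LookupError("unknown remote digital platform")
    if credentials != "secret":                               # stand-in for credential checks / 2FA
        raise PermissionError("credential verification failed")
    entry = ACCESS_CONTROL_DATABASE[user]                     # roles and permissions
    if request_kind not in entry["permissions"]:
        raise PermissionError("request type not permitted for this role")
    if not key_pair_matches(KEY_STORE_DATABASE[user], platform["public_key"]):
        raise PermissionError("key verification failed")
    return f"data communication network enabled with {platform['server']} for {user}"

print(handle_user_request("alice", "secret", "orders.example.com", "access"))
```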
In another aspect, an embodiment of the present disclosure provides a method of managing access to a plurality of remote digital platforms, using a system comprising a plurality of platform databases, wherein a given platform database in the plurality of platform databases is associated with a given remote digital platform and stores metadata related thereto, wherein the method comprises:
generating a user-request for accessing a given remote digital platform via a user-device associated with an existing user, and obtaining a remote digital platform identifier for the given remote digital platform via the user-device associated with the existing user;
identifying a given remote digital platform server associated with the remote digital platform identifier using the plurality of platform databases;
obtaining credentials from the existing user via the user device and verifying the credentials;
determining roles and permissions associated with the existing user by accessing an access-control database;
retrieving a private key associated with the existing user by accessing a key-store database;
verifying the private key associated with the existing user with a public key stored at the given remote digital platform server; and
enabling a data communication network between the given remote digital platform server and the user device.

The present disclosure provides an efficient approach for managing access to a plurality of remote digital platforms. Furthermore, the system disclosed herein applies authentication and verification at multiple levels and ensures secured access to the plurality of remote digital platforms. Moreover, the invention disclosed herein substantially reduces user effort in memorizing multiple passwords for each of the plurality of remote digital platforms. In addition, the invention disclosed herein works in an efficient manner by predetermining a bandwidth requirement of the plurality of users so that network bandwidth is allocated and utilized effectively among the plurality of users. Moreover, the invention disclosed herein is compatible with existing hardware and software infrastructure and has a low time and processing complexity. Therefore, management of access to the plurality of digital platforms is achieved in a cost-efficient manner.

In recent years, using multiple digital platforms for different purposes, such as ordering food, storing pictures and the like, has become an indispensable part of people's everyday lives. Notably, people need to have credentials for each of the multiple digital platforms in order to use them. However, remembering credentials for each of the digital platforms becomes bothersome and difficult. Therefore, there is a need for a solution that eliminates the need to remember credentials for each of the multiple digital platforms. Disclosed herein is the system for managing access to the plurality of remote digital platforms. The system manages access to the plurality of remote digital platforms by eliminating the need to remember different credentials for each of the plurality of digital platforms, wherein the plurality of remote digital platforms relate to cloud/internet based applications that provide services such as online tools (storage, processing and so forth), services (ordering food, clothes, chatting and so forth), platforms and the like. Furthermore, the plurality of digital platforms require credentials (such as passwords, security pins, security questions, digital signatures and the like) to allow access to services provided thereby. 
In addition, the plurality of digital platforms may charge a remuneration (such as digital money, tokens, fiat money and the like) in exchange for services provided thereby. Alternatively, the plurality of digital platforms may provide services to users without charging any remuneration. Moreover, the plurality of digital platforms are accessed using the user device (such as a smart phone, a laptop, a tablet, a phablet and so forth).

Furthermore, the system comprises the plurality of platform databases, wherein the given platform database in the plurality of platform databases is associated with the given remote digital platform and stores metadata related thereto. The plurality of platform databases include data associated with the plurality of remote digital platforms. Furthermore, the plurality of platform databases are an organized body of digital information regardless of the manner in which the data or the organized body thereof is represented. Optionally, the plurality of platform databases may be hardware, software, firmware and/or any combination thereof. For example, the organized body of data related to the plurality of digital platforms may be in the form of a table, a map, a grid, a packet, a datagram, a file, a document, a list or in any other form. The plurality of platform databases include any data storage software and systems, such as, for example, a relational database like IBM DB2 and Oracle 9. Furthermore, the plurality of platform databases refers to the software program for creating and managing one or more databases related to data associated with the plurality of remote digital platforms. Optionally, the plurality of platform databases may be operable to support relational operations, regardless of whether strict adherence to the relational model is enforced, as understood by those of ordinary skill in the art. Additionally, the plurality of platform databases are populated by data elements. The terms data elements, data records, bits of data, and cells are used interchangeably herein and are all intended to mean information stored in cells of a database. The data elements stored in the plurality of platform databases are metadata related to the plurality of remote digital platforms, wherein each of the plurality of platform databases includes data related to one of the plurality of remote digital platforms. Furthermore, metadata related to the given remote digital platform may include a domain name server name, an IP address, a Uniform Resource Locator (URL) and so forth.

As mentioned previously, the system further comprises the access-control database comprising information relating to roles and permissions associated with the plurality of users. The access-control database has a similar organization and architecture as elaborated earlier with respect to the plurality of platform databases. Furthermore, the access-control database includes information associated with the plurality of users of the system, wherein each of the plurality of users is an existing user of the system. 
The data stored in the access-control database includes information related with roles and permissions of each of the plurality of users, wherein permissions related to the plurality of users refer to: permission to read contents of different databases within the system, permission to write in different databases within the system, permission to modify the content of the in different databases within the system, permission to delete from content of the different databases within the system and so forth. Moreover, roles of the plurality of users relate to a stature assigned to the plurality of users. Notably, a specific role of each of the plurality of users allow them to make certain changes in the system. Optionally, roles associated with the plurality of users is any one of: an administrator, a standard user, a privileged user. The administrator is any of the plurality of users having administrative rights in the system. In addition, the administrator is a first user of the system and manages the system. Moreover, any of the plurality of users is the standard user, wherein the standard user gets his/her rights and permissions from the administrator. The standard user possesses the right to access the given remote digital platform. In addition, the standard user possesses the right to transfer files to and from the remote digital platform. Moreover, the privileged user has all the rights of the standard user along with certain additional rights given by the administrator. Such additional rights include getting access to the given remote digital platform in an instance of conflict. Optionally, the administrator: adds users to the system, provides permissions to the users, delete users from the system, adds remote digital platforms to the system. The administrator manages the system by adding new users to the system, assigning the new users respective roles thereof, providing the new users permissions to read, write, modify and make changes within the system, removing existing users from the system, deleting data related to the removed users. Moreover, the administrator further makes changes in the plurality of platform databases by adding and removing the plurality of remote digital platforms and metadata related thereto from the plurality of platform databases. More optionally, the system has more than one administrator. Moreover, the system further comprises the user device, wherein the user device relates to hardware and software-based devices having a user interface that is used to interact with the user device and provide instructions thereto. Furthermore, the user device may be programmable or non-programmable. In addition, the user device has hardware and software components to communicate with other user devices, servers, communicating nodes and the like. Furthermore, the user device is configured to connect to the plurality of remote digital platforms. Examples of user device includes computer, laptop, smart phone, table, phablet and so forth. Furthermore, the existing user associated with the user device generates the user-request for accessing the given remote digital platform, and provides the remote digital platform identifier for the given remote digital platform. The existing user is a user (namely, a person, an organization and the like) who has previously registered with the plurality of remote digital platform and has previously used the system to access the plurality of remote digital platforms. 
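As one possible illustration of the administrator, standard user, and privileged user roles described above, a role-to-permission mapping might be recorded as follows; the specific permission names and groupings here are assumptions made only for the sake of the sketch.

```python
# Illustrative mapping of the roles described above to permissions; the exact
# permission names and groupings are assumptions, not a prescribed scheme.
ROLE_PERMISSIONS = {
    "administrator": {"read", "write", "modify", "delete",
                      "manage_users", "manage_platforms"},
    "privileged_user": {"read", "write", "access", "file_transfer",
                        "access_on_conflict"},
    "standard_user": {"read", "write", "access", "file_transfer"},
}

def is_permitted(role, permission):
    return permission in ROLE_PERMISSIONS.get(role, set())

print(is_permitted("standard_user", "manage_users"))          # False: only the administrator adds users
print(is_permitted("privileged_user", "access_on_conflict"))  # True: extra right granted by the administrator
```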
The plurality of platform database has metadata associated with the plurality of remote digital platforms related to the existing user. The plurality of remote digital platforms related to the user are digital platforms that the existing user has used previously. The existing user uses the user interface of the user device to generate the user-request for accessing the given remote digital platform. Notably, the given remote digital platform is any one of the plurality of remote digital platforms. It will be appreciated that the given remote digital platform is a remote digital platform that the existing user wants to access. In an example, the user-request may be generated by way of a command. In another example, the user-request may be generated by way of a keyboard input. Moreover, the existing user provides the system the remote digital identifier for the given remote digital platform. The remote digital identifier for the given remote digital platform is IP address, Uniform Resource Locator (URL), a domain name and so forth associated with the given remote digital platform. Optionally, the user-request is: an access request, a file transfer request. The existing user generates a request to connect to the given remote digital platform. The request to connect to the remote digital platform is either access request or file transfer request. The existing user that generates access request gets communicably coupled to the given remote digital platform via a communication channel allowing access of data that is reading content of the remote digital platform. In an exemplary embodiment the communication channel for access request is implemented via Secure shell tunneling protocol (SSH) (namely, SSH port forwarding). Notably, SSH tunneling is a method of transporting arbitrary networking data over an encrypted SSH connection. It is used to add encryption to data communication. It is also be used to implement VPNs (Virtual Private Networks) and access intranet services across different networks and firewalls. Moreover, the existing user that generates file transfer request gets communicably coupled to the given remote digital platform via a communication channel allowing transfer of files to and from the remote digital platform. In an exemplary embodiment, the file transfer request of the existing user is implemented via Secure File Transfer Protocol (sFTP). Notably, the sFTP runs over the SSH tunneling protocol. It supports full security and authentication functionality of SSH tunneling protocol. Beneficially, defining a type of user-request helps the system in determining a network bandwidth requirement of the existing user. Notably, the access request requires a lower bandwidth as compared to the file transfer request. As mentioned previously the system further comprises the server arrangement, wherein the server arrangement relates to a structure and/or module that include programmable and/or non-programmable components configured to store, process and/or share information. Optionally, the server arrangement includes any arrangement of physical or virtual computational entities capable of enhancing information to perform various computational tasks. Furthermore, it should be appreciated that the server arrangement may be both single hardware server and/or plurality of hardware servers operating in a parallel or distributed architecture. 
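Because the access request and the file transfer request described above imply different bandwidth needs, a system might record a coarse bandwidth requirement per request type, as in the sketch below; the numeric values are purely illustrative assumptions.

```python
# Illustrative sketch: deriving a coarse bandwidth requirement from the type of
# user-request; the figures are assumptions chosen only for illustration.
REQUEST_BANDWIDTH_MBPS = {
    "access": 10,           # SSH-tunnelled access: comparatively low bandwidth
    "file_transfer": 100,   # sFTP transfer: higher bandwidth requirement
}

def bandwidth_for(request_kind):
    try:
        return REQUEST_BANDWIDTH_MBPS[request_kind]
    except KeyError as exc:
        raise ValueError(f"unknown request kind: {request_kind!r}") from exc

print(bandwidth_for("access"), bandwidth_for("file_transfer"))   # 10 100
```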
In an example, the server arrangement may include components such as memory, a processor, a network adapter and the like, to store, process and/or share information with other computing components, such as user device/user equipment. Optionally, the server arrangement is implemented as a computer program that provides various services (such as database service) to other devices, modules or apparatus. Moreover, the user-request generated by the existing user is communicated to the server arrangement, wherein the server arrangement, in operation, processes the user-request. The server arrangement identifies the given remote digital platform server associated with the remote digital platform identifier using the plurality of platform databases. The server arrangement receives the user-request including the remote digital platform identifier having at least one of: an IP address, a Uniform Resource Locator (URL), a domain name and the like associated with the given remote digital platform. Subsequently, the server arrangement matches the remote digital platform identifier with the metadata associated with the plurality of remote digital platforms stored in the plurality of platform databases. The server arrangement identifies one of the plurality of platform databases that contains metadata matching with the remote digital platform identifier provided by the existing user. The platform database identified by the server arrangement has one or more remote digital platform servers associated therewith. The server arrangement identifies one of the remote digital platform servers that is available as the given remote digital platform server. Notably, remote digital platform servers among the one or more digital platform server associated with the identified remote digital platform may be in use by other existing users of the system. Optionally, the administrator adds remote digital platform server to the system, delete remote digital platform server from the system. The administrator grants access to standard user and privileged user and makes changes in the system to add a new remote digital platform server of any of the plurality of remote digital platforms or to add a remote digital platform server of a new remote digital platform. Moreover, the administrator removes data associated with a non-working remote digital platform server or a remote digital platform server of a remote digital platform that is no longer a part of the system. The administrator works as an access controller within the system. Optionally, the administrator accesses the system as the standard user or the privileged user. Moreover, the administrator works in three ways within the system namely, grant access, work as the standard user to access the given remote digital platform, maintain the plurality of platform databases. The administrator maintains the plurality of platform databases by adding and removing the plurality of remote digital platforms and creating and deleting remote digital platform servers. The administrator is configured to communicate with the remote digital platforms using command line interface for maintain the plurality of platform databases. Moreover, the server arrangement obtains credentials from the existing user via the user device and verifies the credentials. The server arrangement directs the existing user via the user device to provide credentials for using the system to access the plurality of remote digital platforms. 
In other words, the server arrangement directs the existing user to provide credentials for accessing the given remote digital platform. Furthermore, the credentials provided by the existing user is a password, answer for a security question, an OTP (one-time password), a thumb impression, an optical password, a scanned input or any other way of establishing authentication. Optionally, the server arrangement verifies the credentials provided by the existing user using two factor authentication (2FA). Notably, two factor authentication (namely, two-step verification or dual factor authentication) is a security process in which the existing user provides two different authentication factors to verify himself/herself to protect the credentials as well as the system and the given remote digital platform. Furthermore, Two-factor authentication method rely on the existing user providing a password as well as a second factor, such as a security token or a biometric factor like a fingerprint or facial scan. The server arrangement works as an authorizer to allow the existing user to access the given remote digital platform. Furthermore, the server arrangement determines roles and permissions associated with the existing user by accessing the access-control database. The server arrangement, after verifying authentication of the existing user accesses the access-control database for determining a role and permissions associated therewith. Notably, the server arrangement determines roles and permission associated with the existing user to ascertain if the user-request generated by the existing user can be granted. In an example, a standard user may not be allowed to add a new user to the system. Moreover, the system further comprises the key-store database comprising private key associated with the plurality of users. Furthermore, the server arrangement retrieves the private key associated with the existing user by accessing the key-store database. Notably, the system applies asymmetric encryption technique for establishing communication between the user device and the remote digital platform server, wherein the private associated with the existing user is stored in key-store database. Notably, the key-store database includes private keys associated with the plurality of users of the system. Optionally, the key-store database is not accessible to the plurality of users of the system. Alternatively, optionally, a given user of the system is allowed to access a private key related thereto however the given user is not allowed to access a private associated with any other user of the system. The key-store database stores the private key related to the plurality of users of the system by enlisting the private keys corresponding to a user identifier of the existing user. The server arrangement identifies the private key related to the existing user by way of obtaining the user identifier from the existing user. The private key associated with the existing user has to be verified with corresponding public key stored at the given remote digital platform server. The server arrangement communicates with the given remote digital platform server and retrieves the public key stored therewith. Furthermore, the server arrangement verifies the private key associated with the existing user with the public key stored at the given remote digital platform server. 
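One possible way (an assumption, not the only way the disclosure contemplates) to verify that the private key retrieved from the key-store database corresponds to the public key held by the platform server is to sign a challenge and verify the signature, as sketched below using the third-party "cryptography" package.

```python
# Sketch: verify a private/public key pair via a signed challenge; the challenge
# flow and key type (Ed25519) are illustrative assumptions.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Stand-ins for the key retrieved from the key-store database and the public key
# stored at the given remote digital platform server.
private_key = Ed25519PrivateKey.generate()
server_public_key = private_key.public_key()

challenge = b"session-nonce-12345"
signature = private_key.sign(challenge)
try:
    server_public_key.verify(signature, challenge)
    print("key pair verified: enable the data communication network")
except InvalidSignature:
    print("key verification failed: deny access")
```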
Beneficially, verifying the private key associated with the existing user with public key stored at the given remote digital platform server ensures authentication of both the communicating parties (namely, the user-device and the given remote digital platform server). Moreover, the server arrangement enables the data communication network between the given remote digital platform server and the user device. Notably, the server arrangement allows the user device to communicate with the given remote digital platform server via the data communication network enabled thereby. The server arrangement keeps the data communication functioning for an active session of the existing user, wherein active session relates to an ongoing functioning of the existing user over the data communication network. The data communication network between the user device and the given remote digital platform server relates to a communication channel therebetween that is used to access the remote digital platform and transfer data therebetween. Beneficially, the system provides the existing user with a single gateway to access the plurality of remote digital platforms. Therefore, the existing user does not need to connect with multiple gateways to communicate with remote digital platforms of different networks. Optionally, the system disables the data communication network after a predefined time period for which the existing user remain inactive over the data communication network. Optionally, the data communication network established by the system is a virtual private network (VPN). Furthermore, the virtual private network (VPN) extends a private network across a public network, and enables the existing user to send and receive data across shared or public networks as if the user device is directly connected to the private network. Optionally, the server arrangement enables simultaneous data communication network between the user device and one or more remote digital platforms. In an instance, the existing user generates more than one user-request to access more than one remote digital platform. In such an instance, the server arrangement enables separate channels in the data communication network to enable communication between the user-device and one or more remote digital platforms. Beneficially, providing simultaneous access to one or more remote digital platforms allows the existing user to access multiple remote digital platforms within a small period of time. Therefore, the existing user spends less time in accessing multiple remote digital platforms. Optionally, the system further includes a log database having entry for data communication between the user-device and each of the plurality of remote digital platforms. The log database includes an entry for each active session of the existing user. In an instance, the log database includes multiple entries for the existing user if the existing user is accessing multiple remote digital platforms. An entry of the given remote digital platform is removed from the log database after the existing user terminates an active session related thereto. Optionally, the server arrangement further accesses the log database to determine the active session for the existing user. The system does not require credential and authentication of the existing user having an entry in the log database. The server arrangement accesses the log database to determine if the existing user requires to provide credentials and authentication. 
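For illustration, the log database holding one entry per active session, together with the inactivity timeout described above, might be sketched as follows; the 15-minute timeout value is an assumed example.

```python
# Illustrative sketch of a log database with one entry per active session and an
# inactivity timeout; the timeout value is an assumed example.
import time

INACTIVITY_TIMEOUT_S = 900
log_database = {}   # (user, platform) -> timestamp of last activity

def record_activity(user, platform):
    log_database[(user, platform)] = time.time()

def has_active_session(user, platform):
    last = log_database.get((user, platform))
    if last is None:
        return False
    if time.time() - last > INACTIVITY_TIMEOUT_S:
        del log_database[(user, platform)]      # session expired: remove the entry
        return False
    return True

record_activity("alice", "photo-storage")
print(has_active_session("alice", "photo-storage"))   # True: re-authentication can be skipped
print(has_active_session("alice", "food-ordering"))   # False: full authentication is required
```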
Alternatively, when an entry for active session associated with the existing user is found in the log database, the user authentication is not performed instead credentials for the given remote digital platform are verified by the server arrangement and after checking plurality of platform databases, access-control database and verifying private key and public key, the data communication network between the given remote digital platform server and the user device is enabled. In an implementation example, an existing user associated with a user device such as a laptop generates a user-request having a given remote digital platform identifier (namely, an access request) to access a given remote digital platform. The server arrangement analyses the user-request and identifies a given remote digital platform server associated with the given remote digital platform identifier using the plurality of platform databases. Furthermore, the server arrangement checks the log database to determine if the existing user has any active session associated therewith. In an instance, the existing user does not have an entry in the log database. In such an instance, the server arrangement obtains credentials from the existing user via the user device and verifies the credentials in order to authenticate the existing user. Subsequently, the server arrangement, determines roles and permissions associated with the existing user by accessing the access-control database. The server arrangement determines that the existing user is a standard user and therefore the access request is eligible for execution by the system. Furthermore, the server arrangement, retrieves a private key associated with the given digital platform server by accessing the key-store database and verifies the private key associated with the existing user with a public key stored at the given remote digital platform server. The server arrangement, after verifying both communicating parties namely, the user device and the given remote digital platform server, enables a data communication network between the given remote digital platform server and the user device. Consequently, the existing user is enabled to communicate with the given remote digital platform. Moreover, the existing user generates another user-request having a remote digital platform identifier, for file transfer using the user device, laptop. The server arrangement identifies a given remote digital platform server associated with the remote digital platform identifier using the plurality of platform databases. Subsequently, the server arrangement accesses the log database to determine if the existing user has any active session associated therewith. The server arrangement finds an active session associated with the existing user and grants the existing user permission to perform file transfer with the remote digital platform. The present disclosure also relates to the method as described above. Various embodiments and variants disclosed above apply mutatis mutandis to the method Optionally, in the method, roles associated with the existing user is any one of: an administrator, a standard user, a privileged user. Optionally, in the method, the user-request is: an access request, a file transfer request. Optionally, the method further includes a log database having entry for data communication between the user-device and each of the plurality of remote digital platforms. 
Optionally, the method further includes verifying credentials provided by the existing user using two factor authentication. Optionally, the method further includes accessing the log database to determine an active session for the existing user. Optionally, the method further includes enabling simultaneous data communication network between the user device and one or more of remote digital platforms. Optionally, the method further includes allowing the administrator for: adding users to the system, providing permissions to the users, deleting users from the system, adding remote digital platforms to the system, adding remote digital platform server to the system, deleting remote digital platform server from the system. DETAILED DESCRIPTION OF THE DRAWINGS Referring toFIG.1, illustrated is a schematic diagram of a system100for managing access to a plurality of remote digital platforms, in accordance with an embodiment of the present disclosure. The system for managing access to a plurality of remote digital platforms102,104and106, wherein the system100comprises a plurality of platform databases108,110and112wherein a given platform database108in the plurality of platform databases108,110and112is associated with a given remote digital platform102and stores metadata related thereto. The system further comprises a user device114, wherein an existing user “U” associated with the user device114generates a user request for accessing a given remote digital platform104, and provides a remote digital platform identifier for the given remote digital platform104. Moreover, the system further comprises an access-control database116comprising information relating to roles and permissions associated with a plurality of users (not shown). Furthermore, the system comprises a key-store database118comprising private key associated with the plurality of users. The system further comprises a server arrangement120, wherein the server arrangement identifies a given remote digital platform server122associated with the remote digital platform identifier using the plurality of platform databases108,110and112, wherein remote digital platform servers122,124and126are associated with the remote digital platforms102,104and106respectively. Subsequently, the server arrangement120obtains credentials from the existing user “U” via the user device114and verifies the credentials. Moreover, the server arrangement120determines roles and permissions associated with the existing user “U” by accessing the access-control database116. Furthermore, the server arrangement120retrieves a private key associated with the given digital platform server122by accessing the key-store database118. The server arrangement120verifies the private key associated with the existing user “U” with a public key stored at the given remote digital platform server122. Subsequently, the server arrangement120enables a data communication network128between the given remote digital platform server122and the user device114. FIG.1is merely an example, which should not unduly limit the scope of the claims herein. It is to be understood that the simplified illustration of the system100for managing access to a plurality of remote digital platforms is provided as an example and is not to be construed as limiting the system100to specific numbers, types, or arrangements of the processing arrangement. A person skilled in the art will recognize many variations, alternatives, and modifications of embodiments of the present disclosure. 
Referring toFIG.2A-B, illustrated are steps of a method200of managing access to a plurality of remote digital platforms, using a system comprising a plurality of platform databases, wherein a given platform database in the plurality of platform databases is associated with a given remote digital platform and stores metadata related thereto. At step202, a user-request is generated for accessing a given remote digital platform via a user-device associated with an existing user and obtains a remote digital platform identifier for the given remote digital platform via the user-device associated with the existing user. At step204, a given remote digital platform server associated with the remote digital platform identifier is identified using the plurality of platform databases. At step206, credentials from the existing user are obtained via the user device and subsequently the credentials are verified. At step208, roles and permissions associated with the existing user are determined by accessing an access-control database. At step210, a private key associated with the given digital platform server is retrieved by accessing a key-store database. At step212, the private key associated with the existing user is verified with a public key stored at the given remote digital platform server. At step214, a data communication network is enabled between the given remote digital platform server and the user device. The steps202,204,206,208,210,212and214are only illustrative and other alternatives can also be provided where one or more steps are added, one or more steps are removed, or one or more steps are provided in a different sequence without departing from the scope of the claims herein. Referring toFIG.3, illustrated is a flowchart300of steps followed by a server arrangement ofFIG.1, in accordance with an embodiment of the present disclosure. At step302, an existing user generates a user-request that is an access request. Alternatively, at step304, the existing user generates a user request that is a file transfer request. At step306, the server arrangement checks a log database308to find an entry for an active session associated with the existing user. In an instance, an entry for active session associated with the existing user is not found in the log database, an authentication of the existing user is performed at a step310. Subsequently, at step312a new entry for a new session is added to the log database. Alternatively, in another instance, when an entry for active session associated with the existing user is found in the log database, credential verification for remote digital platform is performed at a step314. Moreover, at step316, platform databases are checked by the server arrangement. At step318, access-control database is checked to determine roles and permissions associated with the existing user. At step320, key-store database is accesses to retrieve and verify private key associated with the existing user with public key stored at a given remote digital platform server. Moreover, at step322, the server arrangement routes a communication channel for access request. At step324, the server arrangement routes a communication channel for file transfer request. Furthermore, at step326the server arrangement enables data communication network, wherein at step328, the server arrangement enables separate data communication channel for access request and at step330the server arrangement enables separate data communication channel for file transfer request. 
Referring toFIG.4, illustrated is a flowchart400of step followed by an administrator, in accordance with an embodiment of the present disclosure. At step402, the administrator login is performed by the administrator. At step404, the administrator selects an option for further operations to be performed. At step406, the administrator selects option: platform database and communicates with plurality of remote digital platforms to obtain remote digital platform metadata. At step410, the administrator selects options: access request and proceeds as a standard server to access remote digital platforms. At step408, the server grants access to users. At step416, the administrator updates the key-store database after granting roles and permission to users. At step418, the administrator updates access-control database after granting roles and permissions to the users. Modifications to embodiments of the present disclosure described in the foregoing are possible without departing from the scope of the present disclosure as defined by the accompanying claims. Expressions such as “including”, “comprising”, “incorporating”, “have”, “is” used to describe and claim the present disclosure are intended to be construed in a non-exclusive manner, namely allowing for items, components or elements not explicitly described also to be present. Reference to the singular is also to be construed to relate to the plural. | 38,209 |
11943230 | DETAILED DESCRIPTION Existing solutions allowing orchestration of service set or a Stock Keeping Unit (SKU) over a cloud network requires a user, specifically a network administrator, to manually select the service set. The network administrator selects the service set solely based on a number of users required to access the service set. Such approach is inefficient as all the users in an organization do not have similar hardware and network requirements or similar privilege to use hardware and network. Hardware requirements includes hardware components that are physically needed to store data and process the data, e.g. processing and storage requirements. The hardware components required to store data may include different types of data storage elements, such as optical data storage elements such as Compact Disc (CD), Digital Video Disc (DVD), Blu-Ray disc, magnetic data storage elements such as Hard Disk Drives (HDD), flash memories such as Solid State Drive (SSD), and holographic memories. The hardware components required to process data may include different types of data processing elements, such as Application Specific Integrated Circuit (ASIC). Field Programmable Gate Array (FPGA), and Digital Signal Processor (DSP). Network requirements include thresholds for data flow, e.g. bandwidth and throughput requirements. Data flow may depend on several factors including, but not limited to, type of network devices (such as network routers and network switches) used, connection topology of the network devices, and data transfer privileges configured on the network devices. For example, a group of 500 users of an engineering team may require access to sufficiently more amount of hardware or advanced hardware and network performance compared to 500 users of a finance team. An advanced hardware means a hardware having superior computing capability, such as a processor core of 3.2 GHz compared to a processor core of 2.8 GHz. This is because the users of the engineering team might be required to perform computationally expensive tasks, such as accessing and performing analytics over a vast dataset in an encrypted manner. In comparison, users of the finance team might mostly be required to raise invoices, which might be a computationally inexpensive task compared to the task performed the users of engineering team. Therefore, a service set purchased or licensed merely based on the user count might under-serve requirements of the engineering team. Further, the same service set might prove to be superfluous than the actual requirements of the finance team. In order to address this technical problem, examples disclosed herein include a method and a system that leverages user role based licensing to dynamically orchestrate virtual gateways in cloud networks. Compute resource consumption of users on network devices like network controllers or gateways may be a function of the users' roles. For example, in an Enterprise deployment, the users having research and development roles require certain capabilities like an encryption service (WPA3, WPA2-Enterprise), and users having a guest role would just use open authentication without encryption. For such reason, the users having research and development role might require more hardware and network capabilities than the users having a guest role. 
The systems and methods disclosed herein determines hardware and network capabilities corresponding to the user role and a number of users associated with the role, using a repository storing such information. Such repository may be built from learning gathered from previous implementations and instructions associated with the service sets, released by agencies managing the cloud networks. The proposed systems and methods also include determining a service set that would be sufficient to provide the identified hardware and network capabilities. Successively, such service set may be licensed over the cloud network for the users. Further, such service set may be modified during changes of users' roles and/or change in the number of users having such roles. FIG.1illustrates a network architecture of a system for configuring resources over a network cloud, in accordance with an embodiment of the present disclosure. An organization's network may include a plurality of user devices operated by users associated with different roles. Further, a network administrator may operate a network administrator device102using which he may license a service set over a network cloud104to serve hardware and network requirements of the users operating their user devices within the organization. The user device may correspond to different computing devices, such as laptop, desktop, smart phones, and mobile tablets. The network administrator device102includes a memory106, a processor108, and a communication module110. The memory106is configured to store program instructions generated as a result of the commands entered by the network administrator for operating the network administrator device102. Such program instructions will be executed by the processor108. The communication module110is configured to transmit commands for configuration of the service set and data (attributes related to the user roles) to the administrator device102and receive any response from the network cloud104. The network cloud104includes a stack of memory112, a stack of processor114, and a communication module116. The service set suiting requirements of the users present in the organization's network might be licensed and configured over one or more memory of the stack of memory112and/or one or more processor of the stack of processor114. The communication module116is configured to receive commands for configuration of the service set and the data (attributes related to the user roles) from the network administrator device102and transmit responses to the network administrator device102. Communication of the commands and data between the network administrator device102and the network cloud104would occur via secure communication sessions. Such secure communication sessions may correspond to Virtual Private Network (VPN) tunnels established over a public network118, such as internet. Further, access to the network cloud104may be controlled by a virtual gateway120. Although the virtual gateway120is illustrated as a separate network device different from elements of the network cloud104, the virtual gateway120may also be implemented over the network cloud104. In one implementation, the virtual gateway120may be implemented on a networking device including a memory configured to store access control information, and a processor configured to execute commands for providing access of one or more elements of the network cloud104, based on the access control information. 
Upon gaining access to the virtual gateway120, the network administrator may input attributes related to user roles. Such attributes may include categories of roles of the users, network cloud based services associated with each category, and the number of users associated with each category. The network cloud based services are hardware and/or software services hosted over the network cloud104and may belong to one of several categories of services including Infrastructure as a Service (IaaS), Platform as a Service (PaaS), Software as a Service (SaaS), Enterprise Resource Planning (ERP), and managed services. Based on such attributes, the virtual gateway120may determine the required hardware and network capabilities, using a repository storing such information. Thereupon, the virtual gateway120may determine a service set that would be sufficient to provide the identified hardware and/or network capabilities. Successively, such a service set may be licensed over the cloud network for the users. In some examples, the service set may be represented by Stock-Keeping Units (SKUs) where each SKU represents a different service set. A non-transitory computer-readable storage medium122may be used to store program instructions responsible for providing a User Interface (UI) on the network administrator device102for entering inputs (attributes) and viewing output results, for establishing communication between the network administrator device102and the network cloud104, and for managing operations over the network cloud104for implementing and enabling functioning of service sets. FIG.2illustrates a detailed implementation of a cloud management model, in accordance with an embodiment of the present disclosure. For licensing a service set to support functions of different user groups present in an enterprise network, such as an engineering group202, a finance group204, and a guest group206, the network administrator may communicate with an orchestrator module208configured over the network cloud104. Along with the orchestrator module208, a Resource Compute Element (RCE)210and a repository212may also be present over the network cloud104. From the orchestrator module208, the RCE210may receive attributes belonging to the user groups202through206, provided by the network administrator. Based on the attributes, the RCE210may first determine hardware capabilities and network capabilities corresponding to the attributes by querying the repository212, and thereupon may determine a service set capable of providing the hardware capabilities and the network capabilities by querying the repository212again. Once information about the service set is provided to the orchestrator module208, the orchestrator module208may request configuration of the service set over the network cloud104, through the virtual gateway120. The service set may include one or more of several network cloud based services provided by the network cloud104, such as Infrastructure as a Service (IaaS), Platform as a Service (PaaS), Software as a Service (SaaS), Enterprise Resource Planning (ERP), and managed services. FIG.3illustrates a data flow diagram showing information exchange for configuring resources over the network cloud104, in accordance with an embodiment of the present disclosure. At instance302, using the network administrator device102, the network administrator may send a request to the orchestrator module208for configuring resources on the network cloud104.
Such resources may need to be configured to serve the hardware and network requirements of users operating their user devices within the organization. The network administrator may be a person responsible for managing licenses or subscriptions of services purchased over the network cloud104. The network administrator may submit the request into a User Interface (UI) provided by the orchestrator module208. It must be understood that different types of UIs may be used for receiving the request from the network administrator. For example, a UI providing a drop down menu, a UI allowing selection of an input from a list of predefined inputs, or a UI allowing the request to be entered manually may be used. The request may include attributes related to user roles. Specifically, the attributes may include categories of user roles, network cloud based services, and a number of users associated with each category. The network cloud based services may correspond to hardware and/or software services hosted over the network cloud104. Additionally, other attributes such as type of encryption, manner of authentication of users' requests, categories for classification of data, bandwidth requirement, throughput requirement, and service up-time may also be provided through the request. In one implementation, the request may be provided as a command of predefined format, such as "ENG_ROLE_DOT1X_AES_5GBPS_1000." Such input indicates multiple attributes including the role of the users as Engineering (ENG_ROLE), the authentication type as WPA2-Enterprise (DOT1X), the encryption scheme as Advanced Encryption Standard (AES), the bandwidth as 5 GBPS, and the user count as 1000. At instance304, the orchestrator module208may forward the attributes to the RCE210. The RCE210may determine hardware capabilities and network capabilities corresponding to the attributes. The hardware capabilities may comprise the amount of processing power required and memory requirements, such as of HDD or SSD, and RAM. The network capabilities may comprise bandwidth, latency, and throughput. The RCE210may determine the hardware capabilities and the network capabilities from data stored in the repository212, by submitting a query at instance306and receiving a response to the query at instance308. The data stored in the repository212corresponds to learning gathered from previous implementations and instructions associated with the service sets, released by the agency managing the network cloud104. Further, such data may be stored in one of several suitable formats, such as a Look Up Table (LUT) or a decision tree. In a decision tree, data may be stored with the root and intermediate nodes corresponding to the attributes and the leaf nodes corresponding to the hardware capabilities and the network capabilities. FIG.4illustrates requirements determined by the RCE210for different network configuration requests. For the request "ENG_ROLE_DOT1X_AES_5GBPS_1000", the RCE210determines that 4 Central Processing Units (CPUs) and 6 GB of Random Access Memory (RAM) would be required. For another request "ENG_ROLE_DOT1X_AES_10GBPS_1000", the RCE210determines that 6 Central Processing Units (CPUs) and 16 GB of Random Access Memory (RAM) would be required. For yet another request "GUEST_ROLE_OPEN_10GBPS_1000", the RCE210determines that 5 Central Processing Units (CPUs) and 8 GB of Random Access Memory (RAM) would be required. Although in the current examples the network capabilities, i.e.
the bandwidth, are mentioned as being received within the request, it is quite possible that the network capabilities are not provided within the request, and the RCE210determines them from other received attributes, such as the user roles. For example, the RCE210may determine that for 1000 Engineering (ENG_ROLE) users, 5 Gbps bandwidth would be sufficient, and for 1000 Guest (GUEST_ROLE) users, 1 Gbps bandwidth would be sufficient. After determining the hardware capabilities and the network capabilities, the RCE210may determine an appropriate service set capable of providing the hardware capabilities and the network capabilities. The service set could be understood as a service package/module or Stock-Keeping Unit (SKU) designed by network cloud service providers for licensing to organizations. The RCE210may determine the suitable service set from data stored in the repository212, by submitting a query at instance310and receiving a response to the query at instance312. For example, the data stored in the repository212may be present as a LUT, as illustrated below.

S. No. | Service set | User support count | Total vCPU (hyperthreaded) | Memory (GB) | Flash/Disk (GB)
1      | MC-VA-10    | 256                | 3                          | 4           | 6
2      | MC-VA-50    | 800                | 4                          | 6           | 6
3      | MC-VA-250   | 4000               | 5                          | 8           | 8
4      | MC-VA-1K    | 16000              | 6                          | 16          | 16
5      | MC-VA-4K    | 65496              | 12                         | 48          | 48
6      | MC-VA-6K    | 63977              | 14                         | 64          | 64

In one instance, when the RCE210queries the repository212to determine a service set providing 5 virtual Central Processing Units (CPUs), 8 GB of memory, and 8 GB of flash memory as the hardware capabilities, MC-VA-250 may be identified as the suitable service set. Further, from the data, the number of users supported by the service set may also be determined. For example, the number of users supported by the service set MC-VA-250 may be identified as 4000. Although the data used for determining the service set is illustrated as a LUT, it is equally possible to store the data in other formats, such as a decision tree comprising root and intermediate nodes corresponding to the hardware capabilities and leaf nodes representing the service sets. At instance314, the RCE210may communicate details of an identified service set to the orchestrator module208. At instance316, the orchestrator module208may send details of the identified service set to the Virtual Gateway120for configuring the resources over the network cloud104to implement the service set for use by the users of the organization. Configuring the resources over the network cloud104means reserving and customizing the resources for implementing the service set, and thereby making the required services accessible to the users. Thus, the orchestrator module208gets the service set implemented through the Virtual Gateway120. After the service set is configured over the network cloud104, the orchestrator module208may send a confirmation message to notify the network administrator, at instance318. Further, in certain implementations, immediately before configuring the service set over the network cloud104, the network administrator may be required to make payment towards purchasing/licensing the service set. Upon making such payment, credentials for accessing the service set may be shared with the network administrator. Later, when the categories of the user roles change or the number of users associated with each category of role changes, the service set already configured for the users may be reconfigured or a new service set may be determined using the above described process. Such a new service set may be determined and configured by accessing the UI provided by the orchestrator module208.
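By way of example and not limitation, the two repository lookups performed by the RCE210can be sketched in Python. The sketch below is not part of the disclosure: the request parser, the capability LUT, the selection rule, and all function names are simplifying assumptions, and the SKU rows simply restate the table above.

SKU_TABLE = [
    # (service set, user support count, total vCPU, memory GB, flash/disk GB)
    ("MC-VA-10",    256,  3,  4,  6),
    ("MC-VA-50",    800,  4,  6,  6),
    ("MC-VA-250",  4000,  5,  8,  8),
    ("MC-VA-1K",  16000,  6, 16, 16),
    ("MC-VA-4K",  65496, 12, 48, 48),
    ("MC-VA-6K",  63977, 14, 64, 64),
]

# Hardware capabilities keyed by (role, bandwidth, user count), mirroring FIG.4.
CAPABILITY_LUT = {
    ("ENG", "5GBPS", 1000):    {"vcpu": 4, "memory_gb": 6},
    ("ENG", "10GBPS", 1000):   {"vcpu": 6, "memory_gb": 16},
    ("GUEST", "10GBPS", 1000): {"vcpu": 5, "memory_gb": 8},
}

def parse_request(command: str) -> dict:
    """Split a command such as 'ENG_ROLE_DOT1X_AES_5GBPS_1000' into attributes."""
    parts = command.split("_")
    return {
        "role": parts[0],
        "auth_and_encryption": parts[2:-2],   # e.g. ['DOT1X', 'AES'] or ['OPEN']
        "bandwidth": parts[-2],
        "user_count": int(parts[-1]),
    }

def select_sku(vcpu: int, memory_gb: int, flash_gb: int = 0):
    """Return the first (smallest) service set satisfying the required capabilities."""
    for name, users, sku_vcpu, sku_mem, sku_flash in SKU_TABLE:
        if sku_vcpu >= vcpu and sku_mem >= memory_gb and sku_flash >= flash_gb:
            return name, users
    return None

request = parse_request("ENG_ROLE_DOT1X_AES_5GBPS_1000")
caps = CAPABILITY_LUT[(request["role"], request["bandwidth"], request["user_count"])]
print(select_sku(caps["vcpu"], caps["memory_gb"]))   # smallest SKU meeting 4 vCPU / 6 GB
print(select_sku(5, 8, 8))                           # ('MC-VA-250', 4000), as in the example above

In this sketch the first lookup maps the parsed attributes to hardware capabilities, and the second lookup picks the smallest SKU that meets those capabilities, mirroring the MC-VA-250 example above.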
Reconfiguring or newly determining the service set in this manner enables dynamic update of the service set during changes in the requirements of an enterprise. Implementing the methodologies explained above, the current disclosure allows dynamic orchestration of virtual gateways, i.e. configuring a service set over a network cloud based on the optimal compute (hardware capabilities and network capabilities) determined for a group of users or an organization. Specifically, the dynamic orchestration of virtual gateways over the network cloud is performed based on the roles of the users and the number of users associated with each role. The disclosure also allows dynamic re-orchestration of virtual gateways during changes in the number of users and/or changes of user roles. In this manner, under-utilization of a service set or over-purchase of a service set is avoided. Referring now toFIG.5, a method for configuring resources over a network cloud for serving users' requirements is described with reference to the flowchart500. In this regard, each block may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that in some alternative implementations, the functions noted in the blocks may occur out of the order noted in the drawings. For example, two blocks shown in succession inFIG.5may in fact be executed substantially concurrently or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. Any process descriptions or blocks in flow charts should be understood as representing modules, segments, or portions of code which include one or more executable instructions for implementing specific logical functions or steps in the process, and alternate implementations are included within the scope of the example embodiments in which functions may be executed out of order from that shown or discussed, including substantially concurrently or in reverse order, depending on the functionality involved. In addition, the process descriptions or blocks in flow charts should be understood as representing decisions made by a hardware structure such as a state machine. At block502, attributes related to user roles may be obtained from a network administrator responsible for managing the network of an enterprise. The attributes may include categories of user roles, network cloud based services associated with each category, and a number of users associated with each category. The network cloud based services correspond to hardware and/or software services hosted over a network cloud. At block504, hardware capabilities and network capabilities corresponding to the attributes may be determined from a mapping table stored in a repository. The hardware capabilities comprise processing power and memory requirements. The network capabilities comprise bandwidth, latency, and throughput requirements. At block506, a service set capable of providing the hardware capabilities and the network capabilities may be determined from the mapping table stored in the repository. The service set could be understood as a service package/module or Stock-Keeping Unit (SKU) designed by network cloud service providers for licensing to organizations. At block508, suitable resources may be configured over the network cloud to implement the service set, for serving the users' requirements. The service set may also be reconfigured based on changes in the attributes.
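A compact way to picture how blocks502through508and the re-orchestration path fit together is sketched below. This is a minimal, hypothetical illustration: the Repository and Gateway classes are stubs standing in for the repository212and the virtual gateway120, and the method names are assumptions rather than part of the disclosure.

class Repository:
    """Hypothetical in-memory repository holding the mapping tables of blocks 504 and 506."""

    def __init__(self, capability_lut, sku_lut):
        self.capability_lut = capability_lut
        self.sku_lut = sku_lut

    def capabilities_for(self, attributes):   # block 504
        return self.capability_lut[attributes]

    def service_set_for(self, capabilities):  # block 506
        return self.sku_lut[capabilities]

class Gateway:
    """Stub standing in for the virtual gateway that configures resources (block 508)."""

    def configure(self, sku):
        print(f"configuring resources for service set {sku}")

def orchestrate(attributes, repository, gateway, current_sku=None):
    """Blocks 502-508, plus re-orchestration: reconfigure only when the SKU changes."""
    capabilities = repository.capabilities_for(attributes)
    sku = repository.service_set_for(capabilities)
    if sku != current_sku:
        gateway.configure(sku)
    return sku

repo = Repository(
    capability_lut={("ENG", 1000): (5, 8, 8)},
    sku_lut={(5, 8, 8): "MC-VA-250"},
)
gateway = Gateway()
sku = orchestrate(("ENG", 1000), repo, gateway)                   # attributes obtained at block 502
sku = orchestrate(("ENG", 1000), repo, gateway, current_sku=sku)  # unchanged attributes: no reconfiguration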
An embodiment of the disclosure may be an article of manufacture in which a machine-readable medium (such as microelectronic memory) has stored thereon instructions which program one or more data processing components (generically referred to here as a “processor”) to perform the operations described above. In other embodiments, some of these operations might be performed by specific hardware components that contain hardwired logic (e.g., dedicated digital filter blocks and state machines). Those operations might alternatively be performed by any combination of programmed data processing components and fixed hardwired circuit components. The detailed description set forth below in connection with the appended drawings is intended as a description of various embodiments of the present disclosure and is not intended to represent the only embodiments in which details of the present disclosure may be implemented. Each embodiment described in this disclosure is provided merely as an example or illustration, and should not necessarily be construed as preferred or advantageous over other embodiments. Any combination of the above features and functionalities may be used in accordance with one or more embodiments. In the foregoing specification, embodiments have been described with reference to numerous specific details that may vary from implementation to implementation. The specification and drawings are, accordingly, to be regarded in an illustrative rather than a restrictive sense. The sole and exclusive indicator of the scope of the disclosure, and what is intended by the applicants to be the scope of the disclosure, is the literal and equivalent scope of the set as claimed in claims that issue from this application, in the specific form in which such claims issue, including any subsequent correction. A network cloud may be implemented using multiple Data centres that can support the distributed computing environment. The data centres include a cloud computing platform, racks, and nodes (e.g., computing devices, processing units, or blades) in each rack. The virtual gateway can be implemented with a cloud computing platform that runs cloud services across different data centres and geographic regions. The cloud computing platform can implement an allocator component for provisioning and managing resource allocation, deployment, upgrade, and management of cloud services. Typically, the cloud computing platform acts to store data or run service applications in a distributed manner. The cloud computing platform may be a public cloud, a private cloud, or a dedicated cloud. A non-transitory computer-readable storage medium includes program instructions to implement various operations embodied by a computing device such as a laptop, desktop, or a server. The medium may also include, alone or in combination with the program instructions, data files, data structures, and the like. The medium and program instructions may be those specially designed and constructed for the purposes, or they may be of the kind well-known and available to those having skill in the computer software arts. 
Examples of non-transitory computer-readable storage media include magnetic media such as hard disks, floppy disks, and magnetic tape; optical media such as Compact Disc Read-Only Memory (CD-ROM) disks and Digital Video Discs (DVD); magneto-optical media such as floptical disks; and hardware devices that are especially configured to store and perform program instructions, such as Read Only Memory (ROM), Random Access Memory (RAM), flash memory, and the like. Examples of program instructions include both machine code, such as produced by a compiler, and files containing higher level code that may be executed by the computer using an interpreter. The described hardware devices may be configured to act as one or more software modules in order to perform the operations of the above-described embodiments. Modules as used herein, such as the communication module and the orchestrator module, are intended to encompass any collection or set of program instructions executable over the network cloud so as to perform the required task. The term “software” as used herein is intended to encompass such instructions stored in a storage medium such as RAM, a hard disk, an optical disk, or so forth, and is also intended to encompass so-called “firmware” that is software stored on a ROM or so forth. Such software may be organized in various ways, and may include software components organized as libraries, Internet-based programs stored on a remote server or so forth, source code, interpretive code, object code, directly executable code, and so forth. It is contemplated that the software may invoke system-level code or calls to other software residing on a server or other location to perform certain functions. A processor may include one or more general purpose processors (e.g., INTEL® or Advanced Micro Devices® (AMD) microprocessors) and/or one or more special purpose processors (e.g., digital signal processors or Xilinx® System On Chip (SOC) Field Programmable Gate Array (FPGA) processors), a MIPS/ARM-class processor, a microprocessor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a microcontroller, a state machine, or any type of programmable logic array. A memory may include, but is not limited to, non-transitory machine-readable storage devices such as hard drives, magnetic tape, floppy diskettes, optical disks, Compact Disc Read-Only Memories (CD-ROMs), and magneto-optical disks; semiconductor memories such as ROMs, Random Access Memories (RAMs), Programmable Read-Only Memories (PROMs), Erasable PROMs (EPROMs), Electrically Erasable PROMs (EEPROMs), and flash memory; magnetic or optical cards; or other types of media/machine-readable medium suitable for storing electronic instructions. The terms “or” and “and/or” as used herein are to be interpreted as inclusive, meaning any one or any combination. Therefore, “A, B or C” or “A, B and/or C” means “any of the following: A; B; C; A and B; A and C; B and C; A, B and C.” An exception to this definition will occur only when a combination of elements, functions, steps or acts is in some way inherently mutually exclusive. | 27,001
11943231 | DETAILED DESCRIPTION Embodiments of the invention can incorporate a cloud based token processing system and method for performing transactions. The types of transactions in which embodiments of the invention can be used include, but are not limited to, access transactions (e.g., when a user attempts to enter a location, venue, or building), payment transactions, and data request transactions. In one embodiment of the invention, a mobile communication device (e.g., a smartphone) belonging to a user may capture a Quick Response (QR) code displayed on a merchant POS terminal. The QR code may encode transaction specific information, location information, merchant-related information (e.g., merchant identifier), and any other relevant information. Upon capturing the QR code, the mobile communication device may request a token and a cryptogram from a remote server computer (which may be a cloud based token generation and verification system) using at least the information encoded in the QR code. After the mobile communication device obtains the token and the cryptogram, the mobile communication device may then pass the token and the cryptogram to the POS terminal. The POS terminal may then generate and send an authorization request message that includes at least some of the information encoded in the QR code, the token, and the cryptogram to an issuer computer operated by the issuer of the account number that was used to create the token and/or to a transaction processing computer. One or both of these computers then receives the authorization request message and validates the cryptogram using information in the authorization request message and a shared encryption key. The issuer computer or the transaction processing computer may then determine a risk level for the transaction by (i) matching the transaction data in the cryptogram and the authorization request message, (ii) ensuring that the device is in a reasonable proximity to the merchant terminal (e.g., POS terminal), and/or (iii) determining whether a recent transaction count associated with the device is reasonable. Embodiments of the invention improve transaction security relative to conventional methods and systems. For example, the cryptogram that is generated according to embodiments of the invention limits the transaction to one that has the corresponding transaction-specific information that may be passed in an authorization request message (e.g., transaction amount, merchant location, time, date, account identifier, expiration date, CVV, etc.), as well as information surrounding the interaction between the mobile device and the access device at the point of sale. Before discussing embodiments of the invention in detail, some terms are described in further detail below. A “mobile communication device” or a “mobile device” may comprise any suitable electronic device that may be transported and operated by a user, which may also provide remote communication capabilities to a network, Examples of remote communication capabilities include using a mobile phone (wireless) network, wireless data network (e.g. 3G, 4G or similar networks), Wi-Fi, Wi-Max, or any other communication medium that may provide access to a network such as the Internet or a private network. Examples of mobile devices include mobile phones (e.g. cellular phones), PDAs, tablet computers, net books, laptop computers, personal music players, hand-held specialized readers, etc. 
Further examples of mobile devices include wearable devices, such as smart watches, fitness bands, ankle bracelets, rings, earrings, etc., as well as automobiles with remote communication capabilities. A mobile device may comprise any suitable hardware and software for performing such functions, and may also include multiple devices or components (e.g. when a device has remote access to a network by tethering to another device—i.e. using the other device as a modem—both devices taken together may be considered a single mobile device). A “payment device” may include any suitable device that may be used to conduct a financial transaction, such as to provide payment credentials to a merchant. The payment device may be a software object, a hardware object, or a physical object. Suitable payment devices can be hand-held and compact so that they can fit into a user's wallet and/or pocket (e.g., pocket-sized). Example payment devices may include smart cards, magnetic stripe cards, keychain devices (such as the Speedpass™ commercially available from Exxon-Mobil Corp.), etc. Such devices can operate in either a contact or contactless mode. In some embodiments, a mobile device can function as a payment device (e.g., a mobile device can store and be able to transmit payment credentials for a transaction). “Communication device data” may include any suitable data associated with a communication device. Such data may be stored within a communication device, and in some cases, it may exist independent of any communication with an access device at a resource provider. Examples of communication device data may include account identifiers stored on the communication device, device identifiers, authentication data relating to authentication processes performed by a communication device (e.g., biometric templates, shared secrets such as passwords, etc.), timestamps created by the communication device, etc. A “credential” may be any suitable information that serves as reliable evidence of worth, ownership, identity, or authority. A credential may be a string of numbers, letters, or any other suitable characters, as well as any object or document that can serve as confirmation. Examples of credentials include value credentials, identification cards, certified documents, access cards, passcodes and other login information, etc. A “value credential” may be information associated with worth. Examples of value credentials include payment credentials, coupon identifiers, information needed to obtain a promotional offer, etc. “Payment credentials” may include any suitable information associated with an account (e.g. a payment account and/or payment device associated with the account). Such information may be directly related to the account or may be derived from information related to the account. Examples of account information may include a PAN (primary account number or “account number”), user name, expiration date, CVV (card verification value), dCVV (dynamic card verification value), CVV2 (card verification value 2), CVC3 card verification values, etc. Payment credentials may be any information that identifies or is associated with a payment account. Payment credentials may be provided in order to make a payment from a payment account. Payment credentials can also include a user name, an expiration date, a gift card number or code, and any other suitable information. An “application” may be computer code or other data stored on a computer readable medium (e.g.
memory element or secure element) that may be executable by a processor to complete a task. A “digital wallet” can include an electronic device that allows an individual to conduct electronic commerce transactions. A digital wallet may store user profile information, payment credentials, bank account information, one or more digital wallet identifiers and/or the like and can be used in a variety of transactions, such as but not limited to eCommerce, social networks, money transfer/personal payments, mobile commerce, proximity payments, gaming, and/or the like for retail purchases, digital goods purchases, utility payments, purchasing games or gaming credits from gaming websites, transferring funds between users, and/or the like. A digital wallet may allow the user to load one or more payment cards onto the digital wallet so as to make a payment without having to enter an account number or present a physical card. A “digital wallet provider” may include an entity, such as an issuing bank or third party service provider, that issues a digital wallet to a user that enables the user to conduct financial transactions. A digital wallet provider may provide standalone user-facing software applications that store account numbers, or representations of the account numbers (e.g., payment tokens), on behalf of a cardholder (or other user) to facilitate payments at more than one unrelated merchant, perform person-to-person payments, or load financial value into the digital wallet. A digital wallet provider may enable a user to access its account via a personal computer, mobile device or access device. Additionally, a digital wallet provider may also provide one or more of the following functions: storing multiple payment cards and other payment products on behalf of a user, storing other information including billing address, shipping addresses, and transaction history, and initiating a transaction by one or more methods. A “token” may be a substitute value for a credential. A token may be a string of numbers, letters, or any other suitable characters. Examples of tokens include payment tokens; access tokens, personal identification tokens, etc. A “payment token” may include an identifier for a payment account that is a substitute for an account identifier, such as a primary account number (PAN). For example, a token may include a series of alphanumeric characters that may be used as a substitute for an original account identifier. For example, a token “4900 0000 0000 0001” may be used in place of a PAN “4147 0900 0000 1234.” In some embodiments, a token may be “format preserving” and may have a numeric format that conforms to the account identifiers used in existing transaction processing networks (e.g., ISO 8583 financial transaction message format). In some embodiments, a token may be used in place of a PAN to initiate, authorize, settle or resolve a payment transaction or represent the original credential in other systems where the original credential would typically be provided. In some embodiments, a token value may be generated such that the recovery of the original PAN or other account identifier from the token value may not be computationally derived. Further, in some embodiments, the token format may be configured to allow the entity receiving the token to identify it as a token and recognize the entity that issued the token. “Tokenization” is a process by which data is replaced with substitute data. 
For example, a payment account identifier (e.g., a primary account number (PAN)) may be tokenized by replacing the primary account identifier with a substitute number (e.g. a token) that may be associated with the payment account identifier. Further, tokenization may be applied to any other information that may be replaced with a substitute value (i.e., token). Tokenization may be used to enhance transaction efficiency and improve transaction security. A “tokenization computer” or “token service system” can include a system that services payment tokens. In some embodiments, a token service system can facilitate requesting, determining (e.g., generating) and/or issuing tokens, as well as maintaining an established mapping of tokens to primary account numbers (PANS) in a repository (e.g. token vault). In some embodiments, the token service system may establish a token assurance level for a given token to indicate the confidence level of the token to PAN binding. The token service system may include or be in communication with a token vault where the generated tokens are stored. The token service system may support token processing of payment transactions submitted using tokens by de-tokenizing the token to obtain the actual PAN. In some embodiments, a token service system may include a tokenization computer alone, or in combination with other computers such as a transaction processing network computer. Various entities of a tokenization ecosystem may assume the roles of the token service provider. For example, payment networks and issuers or their agents may become the token service provider by implementing the token services. A “token domain” may indicate an area and/or circumstance in which a token can be used. Examples of the token domain may include, but are not limited to, payment channels (e.g., e-commerce, physical point of sale, etc.), POS entry modes (e.g., contactless, magnetic stripe, etc.), and merchant identifiers to uniquely identify where the token can be used. A set of parameters (i.e. token domain restriction controls) may be established as part of token issuance by the token service provider that may allow for enforcing appropriate usage of the token in payment transactions. For example, the token domain restriction controls may restrict the use of the token with particular presentment modes, such as contactless or e-commerce presentment modes. In some embodiments, the token domain restriction controls may restrict the use of the token at a particular merchant that can be uniquely identified. Some exemplary token domain restriction controls may require the verification of the presence of a token cryptogram that is unique to a given transaction. In some embodiments, a token domain can be associated with a token requestor. A “token expiry date” may refer to the expiration date/time of the token. The token expiry date may be passed among the entities of the tokenization ecosystem during transaction processing to ensure interoperability. The token expiration date may be a numeric value (e.g. a 4-digit numeric value). A “token request message” or “token request” may be an electronic message for requesting a token. In some embodiments, a token request message may include information usable for identifying a payment account or digital wallet, and/or information for generating a token. For example, a token request message may include payment credentials, mobile device identification information (e.g. 
a phone number or MS ISDN), a digital wallet identifier, information identifying a tokenization service provider, a merchant identifier, a cryptogram, and/or any other suitable information. Information included in a token request message can be encrypted (e.g., with an issuer-specific key). In some embodiments, a token request message may include a flag or other indicator specifying that the message is a token request message. A “token response message” or “token response” may be a message that responds to a token request. A token response message may include an indication that a token request was approved or denied. A token response message may also include a token, mobile device identification information (e.g. a phone number or MSISDN), a digital wallet identifier, information identifying a tokenization service provider, a merchant identifier, a cryptogram, and/or any other suitable information. Information included in a token response message can be encrypted (e.g., with an issuer-specific key). In some embodiments, a token response message may include a flag or other indicator specifying that the message is a token response message. A “cryptogram” may include encrypted characters. Cryptograms can be of any suitable length and may be formed using any suitable data transformation process. Exemplary data transformation processes include encryption, and encryption processes such as DES, triple DES, AES, and ECC may be used. Keys used with such encryption process can be of any appropriate length and may have any suitable characteristics. A “resource provider” may be an entity that can provide a resource such as goods, services, information, and/or access. Examples of a resource provider include merchants, access devices, secure data access points, etc. A “merchant” may typically be an entity that engages in transactions and can sell goods or services, or provide access to goods or services. An “acquirer” may typically be a business entity (e.g., a commercial bank) that has a business relationship with a particular merchant or other entity. Some entities can perform both issuer and acquirer functions. Some embodiments may encompass such single entity issuer-acquirers. An acquirer may operate an acquirer computer, which can also be generically referred to as a “transport computer”. An “authorizing entity” may be an entity that authorizes a request. Examples of an authorizing entity may be an issuer, a governmental agency, a document repository, an access administrator, etc. An “issuer” may typically refer to a business entity (e.g., a bank) that maintains an account for a user. An issuer may also issue payment credentials stored on a user device, such as a cellular telephone, smart card, tablet, or laptop to the consumer. An authorizing entity may operate an authorizing computer. An “access device” may be any suitable device that provides access to a remote system. An access device may also be used for communicating with a merchant computer, a transaction processing computer, an authentication computer, or any other suitable system. An access device may generally be located in any suitable location, such as at the location of a merchant. An access device may be in any suitable form. 
Some examples of access devices include POS or point of sale devices (e.g., POS terminals), cellular phones, PDAs, personal computers (PCs), tablet PCs, hand-held specialized readers, set-top boxes, electronic cash registers (ECRs), automated teller machines (ATMs), virtual cash registers (VCRs), kiosks, security systems, access systems, and the like. An access device may use any suitable contact or contactless mode of operation to send or receive data from, or associated with, a user mobile device. In some embodiments, where an access device may comprise a POS terminal, any suitable POS terminal may be used and may include a reader, a processor, and a computer-readable medium. A reader may include any suitable contact or contactless mode of operation. For example, exemplary card readers can include radio frequency (RF) antennas, optical scanners, bar code readers, or magnetic stripe readers to interact with a payment device and/or mobile device. In some embodiments, a cellular phone, tablet, or other dedicated wireless device used as a POS terminal may be referred to as a mobile point of sale or an “mPOS” terminal. “Access device data” may include any suitable data obtained from an access device. Examples of access device data may include a merchant identifier, transaction amount, location information (e.g., a GPS location of the access device), transaction timestamp, an access device identifier, etc. An “authorization request message” may be an electronic message that requests authorization for a transaction. In some embodiments, it is sent to a transaction processing computer and/or an issuer of a payment card to request authorization for a transaction. An authorization request message according to some embodiments may comply with ISO 8583, which is a standard for systems that exchange electronic transaction information associated with a payment made by a user using a payment device or payment account. The authorization request message may include an issuer account identifier that may be associated with a payment device or payment account. An authorization request message may also comprise additional data elements corresponding to “identification information” including, by way of example only: a service code, a CVV (card verification value), a dCVV (dynamic card verification value), a PAN (primary account number or “account number”), a payment token, a user name, an expiration date, etc. An authorization request message may also comprise “transaction information,” such as any information associated with a current transaction, such as the transaction amount, merchant identifier, merchant location, acquirer bank identification number (BIN), card acceptor ID, information identifying items being purchased, etc., as well as any other information that may be utilized in determining whether to identify and/or authorize a transaction. An “authorization response message” may be a message that responds to an authorization request. In some cases, it may be an electronic message reply to an authorization request message. The authorization response message may be generated by an issuing financial institution or a transaction processing computer. The authorization response message may include, by way of example only, one or more of the following status indicators: Approval—transaction was approved; Decline—transaction was not approved; or Call Center—response pending more information, merchant must call the toll-free authorization phone number. 
The authorization response message may also include an authorization code, which may be a code that a credit card issuing bank returns in response to an authorization request message in an electronic message (either directly or through the transaction processing computer) to the merchant's access device (e.g. POS equipment) that indicates approval of the transaction. The code may serve as proof of authorization. As noted above, in some embodiments, a transaction processing computer may generate or forward the authorization response message to the merchant. A “server computer” may include a powerful computer or cluster of computers. For example, the server computer can be a large mainframe, a minicomputer cluster, or a group of servers functioning as a unit. In one example, the server computer may be a database server coupled to a Web server. The server computer may be coupled to a database and may include any hardware, software, other logic, or combination of the preceding for servicing the requests from one or more client computers. A “processor” may include a central processing unit (CPU). A processor can include a single-core processor, a plurality of single-core processors, a multi-core processor, a plurality of multi-core processors, or any other suitable combination of hardware configured to perform arithmetical, logical, and/or input/output operations of a computing device. FIG.1shows a block diagram and a flow diagram of a token processing system100according to embodiments of the invention. The token processing system100may be used to perform payment transactions. The token processing system100comprises a mobile device115that can interact with an access device125. The access device125may communicate with an authorizing computer160via a resource provider computer130, a transport computer140, and a transaction processing computer150. The mobile device115may also communicate with the tokenization computer170. Each of the computers and devices shown inFIG.1may communicate using any suitable communications network. Suitable communications networks may be any one and/or the combination of the following: a direct interconnection; the Internet; a Local Area Network (LAN); a Metropolitan Area Network (MAN); an Operating Missions as Nodes on the Internet (OMNI); a secured custom connection; a Wide Area Network (WAN); a wireless network (e.g., employing protocols such as, but not limited to, Wireless Application Protocol (WAP), I-mode, and/or the like); and/or the like. Messages between the computers, networks, and devices described herein may be transmitted using secure communications protocols such as, but not limited to, File Transfer Protocol (FTP); HyperText Transfer Protocol (HTTP); Secure Hypertext Transfer Protocol (HTTPS); Secure Socket Layer (SSL); ISO (e.g., ISO 8583); and/or the like. FIG.2shows a detailed block diagram of a mobile device115according to an embodiment of the invention. Mobile device115may include circuitry that is used to enable certain device functions, such as telephony. The functional elements responsible for enabling those functions may include a processor115A that can execute instructions that implement the functions and operations of the device. Processor115A may access memory115E (or another suitable data storage region or element) to retrieve instructions or data used in executing the instructions, such as provisioning scripts and mobile applications.
Data input/output elements115C, such as a biometric scanner (e.g., a fingerprint or retinal scanner), keyboard, or touchscreen, can allow a user to operate the mobile device115and input data (e.g., user authentication data). Other data input elements can include a camera, which can be used to capture a two dimensional code (e.g., a QR code) that is displayed on an access device or other device. Data input/output elements may also be configured to output data (via a speaker, for example). An example of an output element may include a display115B that may be used to output data to a user. Communications element115D may be used to enable data transfer between mobile device115and a wired or wireless network (via antenna115H, for example) to assist in connectivity to the Internet or other network, and to enable data transfer functions. Mobile device115may also include contactless element interface115F to enable data transfer between contactless element115G and other elements of the device, where contactless element115G may include a secure memory and a near field communications data transfer element (or another form of short range communications technology). As noted, a cellular phone or similar device is an example of a mobile device115that may be used in accordance with embodiments of the present invention. However, other forms or types of devices may be used without departing from the underlying concepts of the invention. For example, the mobile device115may alternatively be in the form of a payment card, a key fob, a tablet computer, a wearable device, etc. The memory115E may comprise a digital wallet application115E-1, a tokenization module115E-2, an authentication module115E-3, and any other suitable module or data. The mobile device115may have any number of mobile applications installed or stored on the memory115E and is not limited to that shown inFIG.7. The memory115E may also comprise code, executable by the processor115A for implementing a method comprising: receiving access device data from an access device; generating a token request including the access device data and communication device data; sending the token request to a server computer, wherein the server computer thereafter determines a token associated with the user and generates a cryptogram, wherein the cryptogram was generated using the access device data and the communication device data; receiving the token and the cryptogram; and providing the token and the cryptogram to the access device, wherein the access device forwards the cryptogram and the token to the server computer, which verifies the cryptogram and processes the token. In some embodiments, the memory115E may also comprise a secure element, which may store encryption keys, account identifiers, and/or tokens and cryptograms. The digital wallet application115E-1may provide a user interface for the user110to provide input and initiate, facilitate, and manage transactions using the mobile device115. The digital wallet application115E-1may be able to store and/or access a payment token and/or payment credentials. The digital wallet application115E-1may also store an issuer-specific key, or any other suitable encryption means. The digital wallet application115E-1may be able to cause the mobile device115to transmit the payment token and/or payment credentials in any suitable manner (e.g., NFC, QR code, etc.).
The digital wallet application115E-1may be associated with and/or provided by a wallet provider computer, the authorizing computer160, the transaction processing computer150, the transport computer140, the resource provider computer130, or any other suitable entity. The tokenization module115E-2may be a module of the digital wallet application115E-1or a separate application on the mobile device115. The tokenization module115E-2may comprise code that causes the processor115A to obtain payment tokens. For example, the tokenization module115E-2may contain logic that causes the processor115A to request a token from the tokenization computer170or any other suitable tokenization service provider or system (e.g., the authorizing computer160or the transaction processing computer150). In some embodiments, the mobile device115may be able to communicate over-the-air with the tokenization computer170, and may be able to send a direct request to the tokenization computer170. The authentication module115E-3may comprise code that causes the processor115A to conduct an authentication process to authenticate the user110. The authentication module115E-3may include biometric information of the user, secrets of the user, etc., which may be used to authenticate the user110. FIG.3shows a detailed block diagram of an access device125according to an embodiment of the invention. The access device125may comprise a processor125A operatively coupled to at least a network interface125B, a device communication interface125C, and a computer readable medium125D. The computer readable medium125D comprises a machine readable code generation module125D-1and an authorization processing module125D-2. The code generation module125D-1may comprise code, which when executed by the processor125A, may cause the access device125to generate a machine readable code, such as a one or two-dimensional bar code. The authorization processing module125D-2may comprise code, which when executed by the processor125A, may cause the access device125to generate authorization request messages, transmit authorization request messages, receive authorization response messages, and process authorization response messages. The computer readable medium125D may also comprise code, which when executed by the processor125A may implement a method comprising providing access device data to a mobile communication device, wherein the mobile communication device generates a token request including the access device data and communication device data and sends the token request to a server computer, which returns a token and a token cryptogram to the mobile communication device; receiving, by the access device, the token and the cryptogram from the mobile communication device; generating, by the access device, an authorization request message; and transmitting, by the access device, the authorization request message to the server computer, wherein the server computer verifies the cryptogram and processes the transaction using the token. FIG.4shows a detailed block diagram of a tokenization computer170according to an embodiment of the invention. The tokenization computer170comprises a processor170A, a network interface170B, a token record database170C, and a computer readable medium170D. The computer readable medium170D may comprise a tokenization module170D-1, a detokenization module170D-2, a security module170D-3, and any other suitable software module. The tokenization module170D-1may comprise code that causes the processor170A to provide payment tokens.
For example, the tokenization module170D-1may contain logic that causes the processor170A to generate a payment token and/or associate the payment token with a set of payment credentials. A token record may then be stored in the token record database170C indicating that the payment token is associated with a certain user110or a certain set of payment credentials. The detokenization module170D-2may comprise code that causes the processor170A to detokenize payment tokens. For example, the detokenization module170D-2may contain logic that causes the processor170A to identify a token record associated with a payment token in the token record database170C. A set of payment credentials associated with the payment token (as indicated in the token record) can then be identified. The security module170D-3may comprise code that causes the processor170A to validate token requests before a payment token is provided. For example, the security module170D-3may contain logic that causes the processor170A to confirm that a token request message is authentic by decrypting a cryptogram included in the message, by confirming that the payment credentials are authentic and associated with the requesting user110, by assessing risk associated with the requesting resource provider computer130, or by using any other suitable information. If the payment credentials are encrypted, the security module170D-3may be able to decrypt the encrypted payment credentials (e.g. via an issuer-specific key or merchant-acquirer encryption models). Also, the security module170D-3may include keys and algorithms (e.g., DES, TDES, AES) that can be used with the processor170A to generate transaction specific cryptograms and decrypt such cryptograms. The security module170D-3may also be programmed to cause the processor170A to distribute keys to other entities that may perform such cryptogram encryption/decryption functions. A method for performing a transaction can be described with reference toFIGS.1-4. The transaction that is being performed may be a payment transaction at a resource provider such as a merchant. Initially, the user110may select goods or services to purchase at the resource provider. At step S1, after the goods or services are selected, the access device125(e.g., a merchant terminal such as a POS terminal) may generate and display a unique QR code on a display. The QR code may encode transaction-specific information including, but not limited to, access device data such as a merchant identifier, transaction amount, location information (e.g., a GPS location of the access device), transaction timestamp, etc. Other examples of the transaction-specific information may include, but are not limited to, a number of purchases in a past time period, location information of the merchant terminal, an access device ID, a token request timestamp, a mode of QR code scan (e.g., voice or manual), etc. At step S2, the user may scan the QR code displayed on the access device125using his/her mobile device115. For example, the mobile device115may be a smartphone with a camera. The camera may capture the QR code and store an image of the QR code within memory of the device for further processing. In other embodiments, the access data may be transferred from the access device125to the mobile device115using a wireless connection such as WiFi, Bluetooth, or NFC, or even a contact mode of interaction.
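The access device data exchanged at steps S1 and S2 can be pictured as a small structured payload. The following Python sketch is illustrative only; the field names, the example values, and the use of JSON as the QR payload encoding are assumptions, since the disclosure does not prescribe a particular payload format.

import json
import time

# Hypothetical access device data to be encoded into the QR code at step S1.
access_device_data = {
    "merchant_id": "MERCHANT-001",            # illustrative value
    "transaction_amount": "25.00",
    "access_device_id": "POS-17",
    "access_device_gps": [37.7749, -122.4194],
    "transaction_timestamp": int(time.time()),
    "scan_mode": "manual",
}

# The access device would render this payload as a QR code; the mobile device
# decodes the captured image back into the same structure at step S2.
qr_payload = json.dumps(access_device_data)
decoded = json.loads(qr_payload)
assert decoded["merchant_id"] == "MERCHANT-001"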
In some embodiments, the user110authenticates to the mobile device115before obtaining transaction information (e.g., information encoded in a QR code displayed by an access device) from the access device125. Alternatively, in some embodiments, the user110may use a second device (e.g., a wearable device) communicatively coupled to the mobile device115to capture the QR code (e.g., a head mounted display connected to a smartphone device). The user may authenticate to either device by, e.g., issuing a voice command to validate the user with the device (e.g., voice authentication). In some cases, a head mounted device may capture the QR code displayed at the access device125(e.g., by using a camera connected to the head mounted device such as Google Glass™). The head mounted device sends the captured QR code information to the mobile device115. This may be sent over a secure connection supported over a wireless data protocol, such as Bluetooth, Wi-Fi, etc. In some embodiments, the system may determine whether the transaction amount in the QR code is more than the threshold transaction amount set by the authorizing entity (e.g., an issuer) operating the authorizing computer160and/or the user110. For example, if the transaction amount encoded in the QR code is $500, but the threshold transaction amount is $250, the system (e.g., any of the computers or devices inFIG.1, such as the combination of the mobile device115and the access device125) may alert the authorizing entity operating the authorizing computer160, the resource provider operating the resource provider computer130, and/or the user110. In this case, further steps may be required to continue with the transaction. For example, if the threshold transaction amount is less than the transaction amount in the QR code, then the user110may be prompted by the access device125for a challenge PIN code, by communicating with the mobile device115. In this case, the user110may enter the PIN code into the access device125to continue with the transaction. Returning toFIG.1, at step S3, the mobile device115may send a token request message to the tokenization computer170, which may be in or itself be a cloud based network. Prior to or concurrently with sending the token request message, the mobile device115may authenticate with the tokenization computer170using, for example, user credentials, a mobile device ID or any other suitable information. The token request message may request a token and may include the transaction-specific information described above. For example, the token request may include a merchant ID, a transaction amount, location information of the user110and/or the resource provider operating the resource provider computer130, a token request timestamp, and a resource provider (e.g., merchant) initiate timestamp. In some embodiments, the mobile device115may send communication device data including transaction information, GPS information, timestamps, a transaction count, a merchant ID, and any user authentication methods to the tokenization computer170. In some embodiments, the mobile device115may encrypt the above information prior to sending it to the tokenization computer170. The information may be encrypted in a manner such that it can be only be decrypted by the tokenization computer170. At step S4, upon receiving the token request message and transaction-specific information from the mobile device115, the tokenization computer170may generate a token for the mobile device115, a token expiration date, and a transaction cryptogram. 
Appropriate encryption keys and known algorithms such as AES, TDES, and DES may be used. The transaction cryptogram may include authentication information, which may be derived from at least the transaction-specific information including the merchant ID, the transaction amount, the locations of the user and the merchant, and a timestamp. The token may be associated with a PAN of a payment account belonging to the user110of the mobile device115. At step S5, the tokenization computer170may transmit the generated token and cryptogram in a token response message to the mobile device115. The generation of the token and cryptogram may be conditional upon authentication of the mobile device115and the user110by the tokenization computer170. In some embodiments, the token and cryptogram may be sent to the mobile device115using ISO 8583 fields. In some embodiments, the tokenization computer170may return a token provisioned for the mobile device115and a uniquely generated cryptogram to be used for validation and token assurance. The following information may be sent to the mobile device115and then to the access device125conforming to ISO 8583 fields: the token, token expiration date, and the uniquely generated cryptogram (e.g., the token cryptogram). The token cryptogram may include the following information: a user's (e.g., device's) GPS location information, a number of purchases in a past time frame (specified by the issuer), a merchant ID, a GPS location of the merchant terminal, transaction initiation timestamp, a transaction amount, token request timestamp, and a mode of QR code scan initiation (e.g., voice initiated, manual screen unlock, manual head mounted display initiation, etc.). At step S6, after the mobile device115receives the token and the cryptogram, the user110may interact with the access device125using his/her mobile device. For example, the user110may tap his/her mobile device115to the access device125to initiate an NFC connection. In another example, the user110may select the access device125via the mobile device115to establish a short range connection such as a Bluetooth™ connection. In yet another example, the mobile device115may generate a unique QR code that encodes the information received from the tokenization computer170. The QR code may then be scanned by the access device125. Regardless of the method of interacting with the access device125, upon initiating a connection with the access device125, the access device125may receive at least the token, the token expiration date, and the transaction cryptogram from the mobile device115. In some embodiments, the user110presents the token and the token cryptogram to the access device125at the resource provider. The token and token cryptogram may be presented to the access device125by, for example, tapping the mobile device115on an NFC reader associated with the access device125. An application running on the mobile device115may transmit the token, token expiration date, and the token cryptogram to the access device125. In steps S7, S8, and S9, the access device125may generate and send an authorization request message to the transaction processing computer150via the transport computer140and the resource provider computer130. The authorization request message may also include the token, the token expiration date, and the transaction cryptogram. 
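The specification does not give a concrete cryptogram format, so the following is only a sketch of one way the transaction-specific fields generated at steps S4-S5 could be sealed into a transaction cryptogram with AES, here using AES-GCM from the third-party cryptography package; the field names and the JSON encoding are assumptions added for illustration.

```python
import json
import os

from cryptography.hazmat.primitives.ciphers.aead import AESGCM


def generate_cryptogram(key, transaction_info):
    """Seal the transaction-specific fields into an authenticated blob."""
    aesgcm = AESGCM(key)
    nonce = os.urandom(12)
    plaintext = json.dumps(transaction_info, sort_keys=True).encode()
    return {"nonce": nonce.hex(),
            "ciphertext": aesgcm.encrypt(nonce, plaintext, None).hex()}


def open_cryptogram(key, cryptogram):
    """Recover (and implicitly authenticate) the transaction-specific fields."""
    aesgcm = AESGCM(key)
    plaintext = aesgcm.decrypt(
        bytes.fromhex(cryptogram["nonce"]),
        bytes.fromhex(cryptogram["ciphertext"]),
        None,
    )
    return json.loads(plaintext)


key = AESGCM.generate_key(bit_length=128)
cryptogram = generate_cryptogram(key, {
    "merchant_id": "MERCH-001",
    "amount": "42.50",
    "device_gps": [37.7749, -122.4194],
    "terminal_gps": [37.7750, -122.4195],
    "token_request_timestamp": "2024-01-01T12:00:00Z",
    "scan_mode": "manual",
})
assert open_cryptogram(key, cryptogram)["merchant_id"] == "MERCH-001"
```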
At step S10, the transaction processing computer150may transmit the authorization request message to the tokenization computer170or may provide the token and the token cryptogram to the tokenization computer170. The transaction processing computer150may then decrypt the cryptogram and may also determine the real account identifier from the token. Encryption keys and/or lookup tables may be used to perform these functions. For example, a real account identifier may be obtained from a token through a lookup table. In another example, a real account identifier may be mathematically derived from a token. The tokenization computer170may determine a token assurance level based on the transaction-specific information contained within the transaction cryptogram. For example, the token assurance level may be based in part on (i) matching the transaction data in the cryptogram and the authorization request message, (ii) ensuring reasonable proximity of the user110and the access device125(based on the location information), and/or (iii) determining that a recent transaction count is reasonable. The token assurance level may be based on other transaction-specific information contained within the token cryptogram, as described above. The tokenization computer170may compare the transaction-specific information obtained from the decrypted cryptogram with the information in the authorization request message. The token assurance level may also be based at least in part on identification and verification scoring that can be performed based on the method used to initiate the QR code scan. For example, if both voice and PIN entry are used, a higher assurance value may be assigned. If only one of voice entry or PIN entry is used, then a medium assurance value may be assigned. If neither voice nor PIN is used, then the issuer can decide the assurance level based solely on the mobile device that is used and any predetermined cloud authentication methods. In some embodiments, the tokenization computer170may also determine if the token is subject to any domain restrictions. For example, as noted above, a single account may have many tokens associated with it. Each token may be used for a different domain. For example, one token associated with an account may be used for e-commerce transactions, while another token associated with the account may be used for in-person transactions with a chip card. If the received token is outside of its intended domain, then the transaction can be rejected. In some embodiments, the tokenization computer170, the transaction processing computer150, and/or the authorizing entity (e.g., an issuer) operating the authorizing computer160may use the transaction information in the authorization request and match it against the transaction information in the cryptogram. An authorization decision may be made based at least in part on the token assurance level derived from this information. As an illustration, the issuer or the transaction processing computer150may verify that the transaction amount and the merchant ID in the authorization request message match the information derived from the token cryptogram. Additionally, the issuer or the transaction processing computer150may ensure that the GPS locations of the user (e.g., device) and the merchant terminal indicate that they are within a reasonable proximity of each other. Further, the issuer or the transaction processing computer150may verify that the number of purchases associated with the PAN within a given time frame is reasonable. 
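The checks just listed lend themselves to a compact scoring routine. The sketch below assumes the decrypted cryptogram fields from the earlier example and hard-codes illustrative thresholds (proximity radius, recent transaction count); neither the function names nor the numeric limits come from the specification.

```python
import math


def haversine_km(a, b):
    """Approximate great-circle distance in kilometers between (lat, lon) pairs."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
    h = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371.0 * math.asin(math.sqrt(h))


def token_assurance_level(cryptogram_fields, auth_request, recent_txn_count,
                          used_voice, used_pin,
                          max_distance_km=0.5, max_recent_txns=20):
    # (i) transaction data in the cryptogram matches the authorization request
    data_match = (cryptogram_fields["merchant_id"] == auth_request["merchant_id"]
                  and cryptogram_fields["amount"] == auth_request["amount"])
    # (ii) the user's device and the access device are reasonably close
    nearby = haversine_km(cryptogram_fields["device_gps"],
                          cryptogram_fields["terminal_gps"]) <= max_distance_km
    # (iii) the recent transaction count is reasonable
    count_ok = recent_txn_count <= max_recent_txns
    if not (data_match and nearby and count_ok):
        return "low"
    if used_voice and used_pin:
        return "high"
    if used_voice or used_pin:
        return "medium"
    return "issuer-policy"  # fall back to device and cloud authentication methods
```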
The issuer or the transaction processing computer150may then return an authorization response message to the merchant terminal indicating whether the transaction is approved or rejected. At step S11, if the tokenization computer170verifies the cryptogram, the tokenization computer170can return the real account identifier associated with the token to the transaction processing computer150. If the tokenization computer170did not modify the authorization request message to include the real account identifier, the transaction processing computer150may modify the authorization request message to include the real account identifier. At step S12, the transaction processing computer150may then transmit the authorization request message to the authorizing computer160. After receiving the authorization request message, the authorizing computer160may then determine if the transaction is authorized. It may do this by determining if there are sufficient funds or credit in the account associated with the account identifier in the authorization request message. It may also conduct any independent fraud screening on the transaction. At step S13, the authorizing computer160may transmit an authorization response message comprising the account identifier back to the transaction processing computer150. At step S14, the transaction processing computer150can then transmit the authorization response message or the information therein to the tokenization computer170. At step S15, the tokenization computer170may then return the authorization response message including the token or the token itself. If the transaction processing computer150only receives the token, it may then modify the authorization response message to include the token and may transmit the authorization response message comprising the token to the transport computer140in step S16. In steps S17and S18, the authorization response message comprising the token is transmitted to the access device125via the resource provider computer130. The resource provider may then store the token instead of the real account identifier. This improves data security, since the token is stored at the resource provider instead of the real account identifier. If the resource provider experiences a data breach and the token is improperly obtained, it is of limited or no use. At the end of the day or at any other suitable time period, a clearing and settlement process, as is known in the art, may be performed between the transport computer140, the transaction processing computer150, and the authorizing computer160. AlthoughFIG.1shows and describes a transaction processing computer150that detokenizes a payment token and a transaction cryptogram and that determines a token assurance level, it is understood that these functions could alternatively be performed by other computers including the transport computer140, the transaction processing computer150, and/or the authorizing computer160. Although the above-described embodiment relates to payments, embodiments of the invention are not limited to payments.FIG.5shows a block diagram of a second transaction system according to embodiments of the invention. Steps in a process flow for a user obtaining access to a building are shown inFIG.5. InFIG.5, the mobile device115, the access device125, the resource provider computer130, and the tokenization computer170are described above, and the descriptions need not be repeated here. However,FIG.5also shows a user110that wishes to enter a building180, where access to the building is restricted by the access device125. 
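Before turning to the access-control variant ofFIG.5, the token/identifier substitution performed around steps S11-S16 can be sketched as two small message rewrites. This sketch reuses the illustrative TokenVault from earlier and treats the authorization messages as plain dictionaries, which is an assumption; actual messages would follow ISO 8583 field layouts.

```python
def detokenize_authorization_request(auth_request, vault):
    # Steps S11-S12: swap the token for the real account identifier before
    # the request is forwarded to the authorizing computer.
    message = dict(auth_request)
    credentials = vault.detokenize(message.pop("token"))
    message["account_identifier"] = credentials["pan"]
    return message


def retokenize_authorization_response(auth_response, token):
    # Steps S15-S18: strip the real account identifier and restore the token
    # before the response travels back toward the resource provider.
    message = dict(auth_response)
    message.pop("account_identifier", None)
    message["token"] = token
    return message
```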
At step S111, the access device125may generate transaction information such as an access device identifier, a transaction request time, and an access device location. This information may be provided to the mobile device115in step S112. At step S113, the mobile device115may transmit the information from the access device125and information from the mobile device115(e.g., the location of the mobile device115, a mobile device identifier, a real account identifier, a request time, etc.) to the tokenization computer170. At step S114, the tokenization computer170may then generate a cryptogram and obtain a token for the user110, and may transmit them back to the mobile device115in step S115. Once the mobile device115obtains the token and the cryptogram, these may be transmitted to the access device125in step S116. At step S117, the access device125may transmit the token and the cryptogram to the tokenization computer170and the tokenization computer170may determine that the token and the cryptogram are valid and may provide this indication to the access device125(step S118). After the access device125receives this information, the user110may be allowed to pass into the building180as shown in step S120. A computer system may be used to implement any of the entities or components described above. The subsystems in such a system may be interconnected via a system bus. Additional subsystems include a printer, keyboard, fixed disk, and monitor, which is coupled to a display adapter. Peripherals and input/output (I/O) devices, which couple to an I/O controller, can be connected to the computer system by any number of means known in the art, such as a serial port. For example, a serial port or external interface can be used to connect the computer apparatus to a wide area network such as the Internet, a mouse input device, or a scanner. The interconnection via system bus allows the central processor to communicate with each subsystem and to control the execution of instructions from system memory or the fixed disk, as well as the exchange of information between subsystems. The system memory and/or the fixed disk may embody a computer-readable medium. Embodiments of the invention have a number of advantages. For example, by providing a cryptogram with embedded information including data from an access device and a communication device, a token that is used with that cryptogram can be verified for its authenticity. Further, because the cryptogram is created using transaction data, the characteristics of the transaction can be verified to ensure that the transaction characteristics were not tampered with by a resource provider. As described, the inventive service may involve implementing one or more functions, processes, operations or method steps. In some embodiments, the functions, processes, operations or method steps may be implemented as a result of the execution of a set of instructions or software code by a suitably-programmed computing device, microprocessor, data processor, or the like. The set of instructions or software code may be stored in a memory or other form of data storage element which is accessed by the computing device, microprocessor, etc. In other embodiments, the functions, processes, operations or method steps may be implemented by firmware or a dedicated processor, integrated circuit, etc. It should be understood that the present invention as described above can be implemented in the form of control logic using computer software in a modular or integrated manner. 
Based on the disclosure and teachings provided herein, a person of ordinary skill in the art will know and appreciate other ways and/or methods to implement the present invention any combination of hardware and software. Any of the software components or functions described herein may be implemented as software code to be executed by a processor using any suitable computer language such as, for example, Java, C++ or Perl using, for example, conventional or object-oriented techniques. The software code may be stored as a series of instructions, or commands on a computer-readable medium, such as a random access memory (RAM), a read-only memory (ROM), a magnetic medium such as a hard-drive or a floppy disk, or an optical medium such as a CD-ROM. Any such computer-readable medium may reside on or within a single computational apparatus, and may be present on or within different computational apparatuses. While certain exemplary embodiments have been described in detail and shown in the accompanying drawings, it is to be understood that such embodiments are merely illustrative of and not intended to be restrictive of the broad invention, and that this invention is not to be limited to the specific arrangements and constructions shown and described, since various other modifications may occur to those with ordinary skill in the art. | 53,429 |
11943232 | DETAILED DESCRIPTION According to exemplary embodiments, a portable electronic device is recognized, securely provisioned, and conveniently operated on networked systems in different physical locations. Multiple devices may be used simultaneously by multiple users networked through the same real-time application. Embodiments of the disclosed system and processes support various use cases, such as users bringing their personal augmented or virtual reality headsets and/or controllers to arcades or family entertainment centers to engage with on-premise experience and events as active players/participants, passive spectators, or active spectators (spectators who aren't players but who can influence or participate in events/activities to some limited extent). Users in any of these roles can also rent equipment or be provided equipment as part of entry to an activity/event (see below). See for exampleFIG.1. Entertainment providers may maintain an inventory of equipment available on-premise from which devices are assigned to user, system, game, or experience on-demand. The inventory may comprise, for example, rental devices, devices issued as part of admission to a facility or event, and devices issued to authorized participants in an event or activity. In one use case, a customer is issued an augmented or virtual reality headset (or glasses) and controller (which may be their phone or a tablet computer) upon entry to the facility, and the system enable dynamical configuration and usage of those devices for interaction with a choice from among multiple experiences, games, and activities. Rather than maintaining “dedicated” or static assignments of devices to specific attractions or activities, the facility operator maintains an inventory or depot of devices that the system dynamically assigns and configures to attractions and activities on-demand. The system may provide application program interfaces (API's), plug-ins, and/or other interfaces to the temporarily provisioned devices. The managed devices may comprise one or more processor, networking functionality, and related peripheral components, such as hand controllers. The provisioning is typically temporary, but may also be set as a longer-term persistent setting for the system. One system embodiment comprises an augmented or virtual reality headset or other device, an on-site management system and private network, and a mobile device such as a cellular phone or tablet. Certain user information and usage statistics may be persistently managed between play locations to which the augmented or virtual reality device is transported. A player profile is established in the system that enables consistent management of play statistics and facilitates networked play amongst multiple users, both from home and when they travel to an interactive application facility. The technology incorporates enterprise networking technology, combined with “in-application” features and a management system utilized at the remote interactive application facility. The technology may be integrated into new or existing software applications using a Software Development Kit (“SDK”). Each application developed with the SDK may comprise: a system application (logic of the interactive application facility's private network), a device application (logic of a user's augmented or virtual reality device or personal mobile device), and a cloud application (a third-party service hosted on a network accessible online server). 
The system application, compatible computer, and network hardware cooperate to temporarily enable secure access to the system components and to a private data network. The device application communicates with the system application using physical symbols (such as QR codes) to initiate an access protocol to the system and network. Temporary provisioning enables network access and delivery of exclusive content or activity for a limited period. This content (the payload) may be delivered or activated using logic incorporated into the SDK. The access period is defined via configuration in the system application. An exemplary use-case of the technology enables temporary access to exclusive activities and content, for example for promotional, hygienic, or technological benefits. The following example outlines a process from the perspective of a consumer (e.g., device owner or operator) and an operator who controls the system application, who in some cases could also be the consumer. The consumer transports their augmented or virtual reality all-in-one headset (the "device") to a local entertainment venue (the "facility"). The facility integrates the device into an attraction system, e.g., a private data network. The facility operator enables the consumer to use the device to engage with an interactive experience. The consumer logs into the device application and presents a QR code to the operator. This QR code comprises the identity of the device to temporarily provision on the attraction system. The operator scans the QR code into the system application. The system application configures the network equipment (e.g., WiFi router) and system application with the device identity. The system application then presents a QR code to the consumer. This QR code comprises the encrypted security credentials of the network to join. The consumer uses the device to scan the QR code. The device application prompts the consumer to join the network and accept the payload. The operator then operates the system application to execute the activity corresponding to the payload, for example an augmented or virtual reality game. At this point the operator may end the temporary provisioning or enable additional activities for the same device. If the operator takes no affirmative action to re-provision, the temporary provisioning expires after the specified time (e.g., 30 minutes). At a high level, the core technology comprises three components that carry out a unique three-way handshake: a system application, a device application, and a cloud application. The specialized handshake performs a function similar to Wi-Fi Protected Setup (WPS), where a device is enabled to join a network through a push-button or PIN code. The handshake may be implemented in software to operate on any WiFi network device with DHCP and a published API or SNMP 3.0. The computer(s) executing the system application utilize the API/SNMP to enable access to the device. The security of this technology is enforced by three principles: physical device proximity (a two-way physical scanning medium), remote device authentication (cloud application storage), and an enforced network time-out period (which removes network access). FIG.2depicts a system200in one embodiment. At a high level, the system200comprises an augmented or virtual reality device202(Oculus, HoloLens, etc.), a device application204executing on a mobile programmable device206, a system application computer208executing a system application210, and a cloud application server212executing a cloud application214. 
The mobile programmable device206, system application computer208, and cloud application server212communicate with one another over a network216, which may be multiple networks (e.g., a cellular network and the Internet, and/or a local area network and the Internet). The mobile programmable device206and the augmented or virtual reality device202may communicate with one another wirelessly, e.g., using Bluetooth or other short-range communication technologies. The components cooperate to configure a router218on a private data network220and to authenticate the augmented or virtual reality device202to a wireless access point222on the private data network220, to enable the augmented or virtual reality device202to participate in an interactive game or other interactive activity. Certain components of the system200may in some embodiments be implemented as software comprising instructions executed on one or more programmable device. By way of example, components of the disclosed systems may be implemented as an application, an app, drivers, or services. For example, in one particular embodiment, the cloud application214is implemented as a service that executes as one or more processes, modules, subroutines, or tasks on the cloud application server212so as to provide the described capabilities to the device application204of the mobile programmable device206and the system application210of the system application computer208over the network216. The system200as depicted comprises various computer hardware devices and software modules coupled by a network216in one embodiment. Each device includes a native operating system, typically pre-installed on its non-volatile RAM, and a variety of software applications or apps for performing various functions. The mobile programmable device206comprises a native operating system224and various apps or applications (e.g., device application204and device app226). The device application204may communicate directly with the augmented or virtual reality device202or may communicate with the augmented or virtual reality device202through another app or application, such as the device app226(which may be specially designed for interacting with the augmented or virtual reality device202). A system application computer208also includes an operating system228that may include one or more library of native routines to run executable software on that device. The system application computer208also includes various executable applications (e.g., system application210and application230). The mobile programmable device206and system application computer208are configured as clients on the network216. A cloud application server212is also provided and includes an operating system232with native routines specific to providing a service (e.g., service234and cloud application214) available to the networked clients in this configuration. As is well known in the art, an application, an app, or a service may be created by first writing computer code to form a computer program, which typically comprises one or more computer code sections or modules. Computer code may comprise instructions in many forms, including source code, assembly code, object code, executable code, and machine language. Computer programs often implement mathematical functions or algorithms and may implement or utilize one or more application program interfaces. 
A compiler is typically used to transform source code into object code and thereafter a linker combines object code files into an executable application, recognized by those skilled in the art as an "executable". The distinct file comprising the executable would then be available for use by the system application computer208, mobile programmable device206, and/or cloud application server212. Any of these devices may employ a loader to place the executable and any associated library in memory for execution. The operating system executes the program by passing control to the loaded program code, creating a task or process. An alternate means of executing an application or app involves the use of an interpreter (e.g., interpreter236). In addition to executing applications ("apps") and services, the operating system is also typically employed to execute drivers to perform common tasks such as connecting to third-party hardware devices (e.g., printers, displays, input devices), storing data, interpreting commands, and extending the capabilities of applications. For example, a driver238or driver240on the mobile programmable device206or system application computer208(e.g., driver242and driver244) might enable inputs from and outputs to the augmented or virtual reality device202. Any of the devices may read and write data from and to files (e.g., file246or file248) and applications or apps may utilize one or more plug-in (e.g., plug-in250) to extend their capabilities (e.g., to encode or decode video files). The network216in the system200can be of a type understood by those skilled in the art, including a Local Area Network (LAN), Wide Area Network (WAN), Transmission Control Protocol/Internet Protocol (TCP/IP) network, and so forth. The protocols used by the network216dictate the mechanisms by which data is exchanged between devices. FIG.3depicts a routine300for configuring the augmented or virtual reality device202to interact with other devices on the private data network220, in one embodiment. In block302, a physical address of an augmented or virtual reality device is communicated to a cloud application via a mobile device. In block304, the cloud application generates a scan code (QR code, bar code, etc.) based on the physical address of the augmented or virtual reality device, and possibly additional information from an account of the user of the augmented or virtual reality device. In other embodiments, the cloud application communicates to the device application the account information to encode in the scan code along with the physical address, and the device application generates the scan code. The scan code is a "one time code" that expires after a preconfigured amount of time once used. The scan code may be formed from various information about the user of the augmented or virtual reality device202, such as their unique player token (e.g., a temporary JSON web token) and the physical address of the augmented or virtual reality device202. The player token may be generated by hashing information (e.g., user's email address, unique player id, handle or user name, authentication token from user login to profile, etc.) from the player profile stored on the cloud application server212. In block306, the scan code is input (e.g., via optical scanning) to a system application. In block308, the system application verifies the scan code (i.e., authenticates the user of the augmented or virtual reality device202) using the cloud application. 
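One way the block302-306scan code could be assembled is sketched below. The specification only says the player token may be produced by hashing profile fields; the exact fields, the SHA-256 choice, the JSON envelope, and the nonce/expiry handling here are all assumptions added for illustration (a real deployment might issue a signed JSON web token instead).

```python
import hashlib
import json
import time
import uuid


def make_player_token(profile):
    """Hash selected player-profile fields into an opaque player token."""
    material = "|".join([profile["email"], profile["player_id"],
                         profile["handle"], profile["auth_token"]])
    return hashlib.sha256(material.encode()).hexdigest()


def make_scan_code_payload(profile, device_mac, ttl_seconds=300):
    """Build the one-time payload the device application shows as a QR code
    and the system application scans (blocks 302-306)."""
    payload = {
        "player_token": make_player_token(profile),
        "device_mac": device_mac,
        "nonce": uuid.uuid4().hex,                 # makes the code single-use
        "expires_at": int(time.time()) + ttl_seconds,
    }
    return json.dumps(payload)


payload = make_scan_code_payload(
    {"email": "player@example.com", "player_id": "p-42",
     "handle": "player42", "auth_token": "session-abc"},
    device_mac="AA:BB:CC:DD:EE:FF",
)
```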
The verification in block308may be done by passing the user token and the augmented or virtual reality device physical address extracted from the scan code to the cloud application. In block310, the system application may communicate, to the cloud application, location information indicating where the scan code is being used. The location information may be used to authenticate the request to verify the scan code and for tracking, in the cloud application, where the scan code is being used. In block312of routine300, the system application configures a router for a private data network with the physical address of the augmented or virtual reality device (e.g., using SNMP), in response to verifying the scan code. This configuration "whitelists" (i.e., prevents blocking of) the augmented or virtual reality device202on the private data network. In block314, the system application communicates to the cloud application an SSID and a passphrase for the private data network, and an IP address for the private data network, in response to verifying the scan code. In block316, the cloud application communicates to the mobile application (an app or application executing on the user's mobile programmable device206) the SSID and the passphrase for the private data network, and the IP address for the system server. As an alternative to block316, the cloud application may present a new scan code on the screen of the system application comprising the SSID, passphrase, and system server IP address at block324. The mobile device comprising the mobile application, or the augmented or virtual reality device comprising the device application, may then scan the scan code to retrieve the private data network credentials. Unless otherwise indicated, it should be understood that the mobile device may comprise logic to perform some or all aspects attributed to the device application, and vice versa. In block318, the augmented or virtual reality device is authenticated on the private data network using the SSID and the passphrase. In block320, the augmented or virtual reality device operates on the private data network using the IP address for the private data network. After some configured period of time, the system application may revoke access by the augmented or virtual reality device to the private data network by removing the configuration of the physical address of the augmented or virtual reality device from the private data network router. In block322, following a configured timeout period, the cloud application removes the stored data used to create the scan codes from the system. The system application removes the physical address and IP address allocation from the network device (e.g., router), disconnecting the augmented or virtual reality device from the private data network. FIG.4depicts an embodiment of an operating sequence400between a device application402, cloud application404, and system application406to configure an augmented or virtual reality device308to operate on a private data network. The device application402communicates a physical address of the augmented or virtual reality device408to the cloud application404. In one embodiment this causes the cloud application404to generate a scan code based on the physical address of the augmented or virtual reality device410and the cloud application404communicates the scan code412back to the device application402. 
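The router-side portion of this flow (the whitelist in block312and the timed removal in block322) is sketched below against a hypothetical management interface; the RouterClient methods are placeholders, since the specification only requires a published API or SNMP 3.0 rather than any particular vendor call, and the SSID, passphrase, and IP values are illustrative. The description of theFIG.4operating sequence resumes after the sketch with an alternative way of producing the scan code.

```python
import threading


class RouterClient:
    """Hypothetical stand-in for the private network router's published API
    or SNMP 3.0 interface; a real deployment would issue SNMP sets or REST
    calls here instead of mutating an in-memory set."""

    def __init__(self):
        self.whitelisted = set()

    def whitelist(self, mac_address):
        self.whitelisted.add(mac_address.lower())

    def remove(self, mac_address):
        self.whitelisted.discard(mac_address.lower())


def provision_device(router, mac_address, access_seconds=30 * 60):
    # Block 312: whitelist the headset's physical address so the private
    # data network does not block it.
    router.whitelist(mac_address)

    # Block 322 (simplified): after the configured timeout, remove the
    # address again, disconnecting the device from the private network.
    timer = threading.Timer(access_seconds, router.remove, args=[mac_address])
    timer.daemon = True
    timer.start()

    # Blocks 314-316: credentials handed to the cloud application, which
    # relays them (or a new scan code) to the device application.
    return {"ssid": "FACILITY-ATTRACTION", "passphrase": "per-event-secret",
            "server_ip": "10.0.0.10"}
```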
In another embodiment, the cloud application404communicates the information to encode into the scan code to the device application402, and the device application402generates the scan code. The device application402may then display the scan code414so that the system application406can scan the scan code416. The system application406communicates the authentication credentials from the scan code418to the cloud application404, and the cloud application404communicates a verification signal420to the system application406and also a physical address of the augmented or virtual reality device422that was authenticated. The system application406applies the physical address to configure the router of the private data network with the physical address of the augmented or virtual reality device424, and communicates an SSID and passphrase for the private data network, and the IP address for interacting on the private data network426to the cloud application404(e.g., IP address of the facility server). The cloud application404communicates the SSID and the passphrase for the private data network, and the IP address provided by the system application428to the device application402. The device application402may then pass the SSID and passphrase to the augmented or virtual reality device430so that the augmented or virtual reality device430can authenticate308on the private data network. Alternatively, the cloud application404may generate a new scan code comprising the SSID and passphrase for the private data network and the IP address provided by the system application330. This scan code is scanned directly by the augmented or virtual reality device430to initiate network authentication. Once authenticated on the private data network using the SSID and the passphrase432, and using the IP address, the augmented or virtual reality device430may now undertake interaction on the private data network using the IP address provided by the system application434. In this scenario the user of the augmented or virtual reality device430may never gain information about the passphrase, which enhances the security of the private data network. After some configured period of time the system application406may revoke the access by the augmented or virtual reality device430to the private data network by removing configuration of the physical address of the augmented or virtual reality device430from the private data network router. FIG.5illustrates a perspective view of an embodiment of a wearable augmented or virtual reality device500that may be dynamically provisioned to different systems in accordance with the techniques disclosed herein. The device500in this embodiment is a computing device in the form of a wearable headset. The device500comprises a headpiece502, which is a headband, arranged to be worn on the wearer's head. The headpiece502has a central portion504intended to fit over the nose bridge of a wearer, and has an inner curvature intended to wrap around the wearer's head above their ears. The headpiece502supports a left optical component506and a right optical component508, which are waveguides. For ease of reference herein an optical component will be considered to be either a left or right component, because in the described embodiment the components are essentially identical apart from being mirror images of each other. Therefore, all description pertaining to the left-hand component also pertains to the right-hand component. The device500comprises augmented reality device logic700that is depicted inFIG.7. 
The augmented reality device logic700comprises a graphics engine702, which may comprise a micro display and imaging optics in the form of a collimating lens (not shown). The micro display can be any type of image source, such as liquid crystal on silicon (LCOS) displays, transmissive liquid crystal displays (LCD), matrix arrays of LED's (whether organic or inorganic) and any other suitable display. The display is driven by circuitry known in the art to activate individual pixels of the display to generate an image. Substantially collimated light, from each pixel, falls on an exit pupil of the graphics engine702. At the exit pupil, the collimated light beams are coupled into each of the left optical component506and the right optical component508into a respective left in-coupling zone510and rightin-coupling zone512. In-coupled light is then guided, through a mechanism that involves diffraction and TIR, laterally of the optical component in a respective left intermediate zone514and right intermediate zone532, and also downward into a respective left exit zone516and right exit zone518where it exits towards the users' eye. The collimating lens collimates the image into a plurality of beams, which form a virtual version of the displayed image, the virtual version being a virtual image at infinity in the optics sense. The light exits as a plurality of beams, corresponding to the input beams and forming substantially the same virtual image, which the lens of the eye projects onto the retina to form a real image visible to the user. In this manner, the left optical component506and the right optical component508project the displayed image onto the wearer's eyes. The various optical zones can, for example, be suitably arranged diffractions gratings or holograms. Each optical component has a refractive index n which is such that total internal reflection takes place to guide the beam from the light engine along the respective intermediate expansion zone, and down towards respective the exit zone. Each optical component is substantially transparent, whereby the wearer can see through it to view a real-world environment in which they are located simultaneously with the projected image, thereby providing an augmented reality experience. To provide a stereoscopic image, i.e. that is perceived as having 3D structure by the user, slightly different versions of a 2D image can be projected onto each eye for example from multiple graphics engines702(i.e. two micro displays), or from the same light engine (i.e. one micro display) using suitable optics to split the light output from the single display. The device500is just one exemplary configuration. For instance, where two light-engines are used, these may instead be at separate locations to the right and left of the device (near the wearer's ears). Moreover, whilst in this example, the input beams that form the virtual image are generated by collimating light from the display, an alternative light engine based on so-called scanning can replicate this effect with a single beam, the orientation of which is fast modulated whilst simultaneously modulating its intensity and/or colour. A virtual image can be simulated in this manner that is equivalent to a virtual image that would be created by collimating light of a (real) image on a display with collimating optics. 
Alternatively, a similar AR experience can be provided by embedding substantially transparent pixels in a glass or polymer plate in front of the wearer's eyes, having a similar configuration to the left optical component506and right optical component508though without the need for the zone structures. Other headpiece502embodiments are also within the scope of the subject matter. For instance, the display optics can equally be attached to the users head using a frame (in the manner of conventional spectacles), helmet or other fit system. The purpose of the fit system is to support the display and provide stability to the display and other head borne systems such as tracking systems and cameras. The fit system can be designed to meet user population in anthropometric range and head morphology and provide comfortable support of the display system. The device500also comprises one or more cameras704—for example left stereo camera520and right stereo camera522mounted on the headpiece502and configured to capture an approximate view (“field of view”) from the user's left and right eyes respectfully in this example. The cameras are located towards either side of the user's head on the headpiece502, and thus capture images of the scene forward of the device form slightly different perspectives. In combination, the stereo camera's capture a stereoscopic moving image of the real-world environment as the device moves through it. A stereoscopic moving image means two moving images showing slightly different perspectives of the same scene, each formed of a temporal sequence of frames to be played out in quick succession to replicate movement. When combined, the two images give the impression of moving 3D structure. A left microphone524and a right microphone526are located at the front of the headpiece (from the perspective of the wearer), and left and right channel speakers, earpiece or other audio output transducers are to the left and right of the headpiece502. These are in the form of a pair of bone conduction audio transducers functioning as a left speaker528and right speaker530audio channel output. FIG.6depicts an augmented or virtual reality device600in additional aspects, according to one embodiment. The augmented or virtual reality device600comprises processing units602, input devices604, memory606, output devices608, storage devices610, a network interface612, and various logic614,616,618,620configured to carry out aspects of the techniques disclosed herein (e.g., aspects of the routine300and operating sequence400). The input devices604comprise transducers that convert physical phenomenon into machine internal signals, typically electrical, optical or magnetic signals. Signals may also be wireless in the form of electromagnetic radiation in the radio frequency (RF) range but also potentially in the infrared or optical range. Examples of input devices604are keyboards which respond to touch or physical pressure from an object or proximity of an object to a surface, mice which respond to motion through space or across a plane, microphones which convert vibrations in the medium (typically air) into device signals, scanners which convert optical patterns on two or three dimensional objects into device signals. The signals from the input devices604are provided via various machine signal conductors (e.g., busses or network interfaces) and circuits to memory606. 
The memory606provides for storage (via configuration of matter or states of matter) of signals received from the input devices604, instructions and information for controlling operation of the processing units602, and signals from storage devices610. The memory606may in fact comprise multiple memory devices of different types, for example random access memory devices and non-volatile (e.g., FLASH memory) devices. Information stored in the memory606is typically directly accessible to the processing units602of the device. Signals input to the Augmented or virtual reality device600cause the reconfiguration of the internal material/energy state of the memory606, creating logic that in essence forms a new machine configuration, influencing the behavior of the Augmented or virtual reality device600by affecting the behavior of the processing units602with control signals (instructions) and data provided in conjunction with the control signals. The storage devices610may provide a slower but higher capacity machine memory capability. Examples of storage devices610are hard disks, optical disks, large capacity flash memories or other non-volatile memory technologies, and magnetic memories. The processing units602may cause the configuration of the memory606to be altered by signals in the storage devices610. In other words, the processing units602may cause data and instructions to be read from storage devices610in the memory606from which may then influence the operations of processing units602as instructions and data signals, and from which it may also be provided to the output devices608. The processing units602may alter the content of the memory606by signaling to a machine interface of memory606to alter the internal configuration, and then converted signals to the storage devices610to alter its material internal configuration. In other words, data and instructions may be backed up from memory606, which is often volatile, to storage devices610, which are often non-volatile. Output devices608are transducers which convert signals received from the memory606into physical phenomenon such as vibrations in the air, or patterns of light on a machine display, or vibrations (i.e., haptic devices) or patterns of ink or other materials (i.e., printers and 3-D printers). The network interface612receives signals from the memory606or processing units602and converts them into electrical, optical, or wireless signals to other machines, typically via a machine network. The network interface612also receives signals from the machine network and converts them into electrical, optical, or wireless signals to the memory606or processing units602. FIG.7depicts components of an exemplary augmented reality device logic700. The augmented reality device logic700comprises a graphics engine702, a camera704, processing units706, including one or more CPU708and/or GPU710, a WiFi712wireless interface, a Bluetooth714wireless interface, speakers716, microphones718, and one or more memory720. In one embodiment, the memory720may comprise instructions that when applied to the processing units706, configure the processing units706to carry out aspects of the techniques disclosed herein (e.g., aspects of routine300and operating sequence400). The processing units706may in some cases comprise programmable devices such as bespoke processing units optimized for a particular function, such as AR related functions. 
The augmented reality device logic700may comprise other components that are not shown, such as dedicated depth sensors, additional interfaces etc. Some or all of the components inFIG.7may be housed in an AR headset. In some embodiments, some of these components may be housed in a separate housing connected or in wireless communication with the components of the AR headset. For example, a separate housing for some components may be designed to be worn or a belt or to fit in the wearer's pocket, or one or more of the components may be housed in a separate computer device (smartphone, tablet, laptop or desktop computer etc.) which communicates wirelessly with the display and camera apparatus in the AR headset, whereby the headset and separate device constitute the full augmented reality device logic700. The memory720comprises logic722to be applied to the processing units706to execute. In some cases, different parts of the logic722may be executed by different components of the processing units706. The logic722typically comprises code of an operating system, as well as code of one or more applications configured to run on the operating system to carry out aspects of the processes disclosed herein. FIG.8depicts a functional block diagram of an embodiment of augmented or virtual reality device logic800. The augmented or virtual reality device logic800comprises the following functional modules: a rendering engine802, local augmentation logic804, local modeling logic806, device tracking logic808, an encoder810, and a decoder812. Each of these functional modules may be implemented in software, dedicated hardware, firmware, or a combination of these logic types. The rendering engine802controls the graphics engine814to generate a stereoscopic image visible to the wearer, i.e. to generate slightly different images that are projected onto different eyes by the optical components of a headset substantially simultaneously, so as to create the impression of 3D structure. The stereoscopic image is formed by rendering engine802rendering at least one virtual display element (“augmentation”), which is perceived as a 3D element, i.e. having perceived 3D structure, at a real-world location in 3D space by the user. An augmentation is defined by an augmentation object stored in the memory816. The augmentation object comprises: location data defining a desired location in 3D space for the virtual element (e.g. as (x,y,z) Cartesian coordinates); structural data defining 3D surface structure of the virtual element, i.e. a 3D model of the virtual element; and image data defining 2D surface texture of the virtual element to be applied to the surfaces defined by the 3D model. The augmentation object may comprise additional information, such as a desired orientation of the augmentation. The perceived 3D effects are achieved though suitable rendering of the augmentation object. To give the impression of the augmentation having 3D structure, a stereoscopic image is generated based on the 2D surface and 3D augmentation model data in the data object, with the augmentation being rendered to appear at the desired location in the stereoscopic image. A 3D model of a physical object is used to give the impression of the real-world having expected tangible effects on the augmentation, in the way that it would a real-world object. 
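As a concrete reading of the augmentation object described above, the sketch below captures its fields as a small data structure; the type names and the vertex/face mesh representation are illustrative choices, not taken from the specification. Note that this structure describes the augmentation itself; the 3D model of the surrounding real-world environment discussed next is maintained separately by the local modeling logic.

```python
from dataclasses import dataclass, field
from typing import List, Optional, Tuple

Vec3 = Tuple[float, float, float]


@dataclass
class AugmentationObject:
    """Location data, 3D surface structure, 2D texture, and an optional
    desired orientation, mirroring the augmentation object described above."""
    location: Vec3                                   # desired (x, y, z) in world space
    mesh_vertices: List[Vec3] = field(default_factory=list)
    mesh_faces: List[Tuple[int, int, int]] = field(default_factory=list)
    texture_id: Optional[str] = None                 # reference to the 2D surface texture
    orientation_rpy: Optional[Vec3] = None           # optional pitch/roll/yaw


table_top_label = AugmentationObject(
    location=(0.0, 0.9, -1.5),
    mesh_vertices=[(0, 0, 0), (0.2, 0, 0), (0.2, 0.1, 0), (0, 0.1, 0)],
    mesh_faces=[(0, 1, 2), (0, 2, 3)],
    texture_id="label.png",
)
```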
The 3D model represents structure present in the real world, and the information it provides about this structure allows an augmentation to be displayed as though it were a real-world 3D object, thereby providing an immersive augmented reality experience. The 3D model is in the form of a 3D mesh. For example, based on the model of the real-world, an impression can be given of the augmentation being obscured by a real-world object that is in front of its perceived location from the perspective of the user; dynamically interacting with a real-world object, e.g. by moving around the object; statically interacting with a real-world object, say by sitting on top of it etc. Whether or not real-world structure should affect an augmentation can be determined based on suitable rendering criteria. For example, by creating a 3D model of the perceived AR world, which includes the real-world surface structure and any augmentations, and projecting it onto a plane along the AR user's line of sight as determined using pose tracking (see below), a suitable criterion for determining whether a real-world object should be perceived as partially obscuring an augmentation is whether the projection of the real-world object in the plane overlaps with the projection of the augmentation, which could be further refined to account for transparent or opaque real-world structures. Generally the criteria can depend on the location and/or orientation of the augmented reality device and/or the real-world structure in question. An augmentation can also be mapped to the mesh, in the sense that its desired location and/or orientation is defined relative to a certain structure(s) in the mesh. Should that structure move and/or rotate causing a corresponding change in the mesh, when rendered properly this will cause a corresponding change in the location and/or orientation of the augmentation. For example, the desired location of an augmentation may be on, and defined relative to, a table top structure; should the table be moved, the augmentation moves with it. Object recognition can be used to this end, for example to recognize a known shape of table and thereby detect when the table has moved using its recognizable structure. Such object recognition techniques are known in the art. An augmentation that is mapped to the mesh in this manner, or is otherwise associated with a particular piece of surface structure embodied in a 3D model, is referred to as an "annotation" to that piece of surface structure. In order to annotate a piece of real-world surface structure, it is necessary to have that surface structure represented by the 3D model in question—without this, the real-world structure cannot be annotated. The local modeling logic806generates a local 3D model "LM" of the environment in the memory816, using the AR device's own sensor(s) e.g. cameras818and/or any dedicated depth sensors etc. The local modeling logic806and sensor(s) constitute sensing apparatus. The device tracking logic808tracks the location and orientation of the AR device, e.g. a headset, using local sensor readings captured from the AR device. The sensor readings can be captured in a number of ways, for example using the cameras818and/or other sensor(s) such as accelerometers. The device tracking logic808determines the current location and orientation of the AR device and provides this information to the rendering engine802, for example by outputting a current "pose vector" of the AR device. 
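Before returning to the pose vector produced by the device tracking logic808, the projection-overlap occlusion criterion described earlier in this passage can be illustrated with a deliberately simplified sketch that projects bounding boxes onto the view plane and tests them for overlap; a real system would work with full meshes, depth ordering, and transparency rather than axis-aligned rectangles, and all names here are invented for illustration.

```python
def projected_bbox(points_2d):
    """Axis-aligned bounding box of already-projected (u, v) points."""
    us = [p[0] for p in points_2d]
    vs = [p[1] for p in points_2d]
    return (min(us), min(vs), max(us), max(vs))


def rectangles_overlap(a, b):
    ax0, ay0, ax1, ay1 = a
    bx0, by0, bx1, by1 = b
    return ax0 <= bx1 and bx0 <= ax1 and ay0 <= by1 and by0 <= ay1


def may_occlude(real_object_proj, augmentation_proj):
    """Coarse version of the criterion above: the real-world object is a
    candidate occluder when its projection overlaps the augmentation's."""
    return rectangles_overlap(projected_bbox(real_object_proj),
                              projected_bbox(augmentation_proj))


# Example: a table edge projected in front of a virtual label.
print(may_occlude([(0.1, 0.1), (0.4, 0.3)], [(0.3, 0.2), (0.6, 0.5)]))  # True
```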
The pose vector is a six-dimensional vector, for example (x, y, z, P, R, Y) where (x,y,z) are the device's Cartesian coordinates with respect to a suitable origin, and (P, R, Y) are the device's pitch, roll and yaw with respect to suitable reference axes. The rendering engine802adapts the local model based on the tracking, to account for the movement of the device, i.e., to maintain the perception of the augmentations as 3D elements occupying the real world, for example to ensure that static augmentations appear to remain static (which will in fact be achieved by scaling or rotating them as, from the AR user's perspective, the environment is moving relative to them). The encoder810receives image data from the cameras818and audio data from the microphones820and possibly other types of data (e.g., annotation or text generated by the user of the AR device using the local augmentation logic804) and transmits that information to other devices, for example the devices of collaborators in the AR environment. The decoder812receives an incoming data stream from other devices, and extracts audio, video, and possibly other types of data (e.g., annotations, text) therefrom. Machine Embodiments FIG.9depicts a diagrammatic representation of a machine900in the form of a computer system within which logic may be implemented to cause the machine to perform any one or more of the functions or methods disclosed herein, according to an example embodiment. For example, any one of the mobile programmable device206, cloud application server212, and system application computer208could be implemented in a manner similar to the depicted machine900. Specifically,FIG.9depicts a machine900comprising instructions902(e.g., a program, an application, an applet, an app, or other executable code stored in non-volatile static memory918) for causing the machine900to perform any one or more of the functions or methods discussed herein. For example, the instructions902may cause the machine900to carry out aspects of the routine300or operating sequence400, and/or to implement the device application402, cloud application404, and/or system application406. The instructions902configure a general, non-programmed machine into a particular machine900programmed to carry out said functions and/or methods. In alternative embodiments, the machine900operates as a standalone device or may be coupled (e.g., networked) to other machines. In a networked deployment, the machine900may operate in the capacity of a server machine or a client machine in a server-client network environment, or as a peer machine in a peer-to-peer (or distributed) network environment. The machine900may comprise, but not be limited to, a server computer, a client computer, a personal computer (PC), a tablet computer, a laptop computer, a netbook, a set-top box (STB), a PDA, an entertainment media system, a cellular telephone, a smart phone, a mobile device, a wearable device (e.g., a smart watch), a smart home device (e.g., a smart appliance), other smart devices, a web appliance, a network router, a network switch, a network bridge, or any machine capable of executing the instructions902, sequentially or otherwise, that specify actions to be taken by the machine900. Further, while only a single machine900is depicted, the term "machine" shall also be taken to include a collection of machines that individually or jointly execute the instructions902to perform any one or more of the methodologies or subsets thereof discussed herein. 
The machine900may include processors904, memory906, and I/O components908, which may be configured to communicate with each other such as via one or more bus910. In an example embodiment, the processors904(e.g., a Central Processing Unit (CPU), a Reduced Instruction Set Computing (RISC) processor, a Complex Instruction Set Computing (CISC) processor, a Graphics Processing Unit (GPU), a Digital Signal Processor (DSP), an ASIC, a Radio-Frequency Integrated Circuit (RFIC), another processor, or any suitable combination thereof) may include, for example, one or more processor (e.g., processor912and processor914) to execute the instructions902. The term “processor” is intended to include multi-core processors that may comprise two or more independent processors (sometimes referred to as “cores”) that may execute instructions contemporaneously. AlthoughFIG.9depicts multiple processors904, the machine900may include a single processor with a single core, a single processor with multiple cores (e.g., a multi-core processor), multiple processors with a single core, multiple processors with multiples cores, or any combination thereof. The memory906may include one or more of a main memory916, a static memory918, and a storage unit920, each accessible to the processors904such as via the bus910. The main memory916, the static memory918, and storage unit920may be utilized, individually or in combination, to store the instructions902embodying any one or more of the functionality described herein. The instructions902may reside, completely or partially, within the main memory916, within the static memory918, within a machine-readable medium922within the storage unit920, within at least one of the processors904(e.g., within the processor's cache memory), or any suitable combination thereof, during execution thereof by the machine900. The I/O components908may include a wide variety of components to receive input, provide output, produce output, transmit information, exchange information, capture measurements, and so on. The specific I/O components908that are included in a particular machine will depend on the type of machine. For example, portable machines such as mobile phones will likely include a touch input device or other such input mechanisms, while a headless server machine will likely not include such a touch input device. It will be appreciated that the I/O components908may include many other components that are not shown inFIG.9. The I/O components908are grouped according to functionality merely for simplifying the following discussion and the grouping is in no way limiting. In various example embodiments, the I/O components908may include output components924and input components926. The output components924may include visual components (e.g., a display such as a plasma display panel (PDP), a light emitting diode (LED) display, a liquid crystal display (LCD), a projector, or a cathode ray tube (CRT)), acoustic components (e.g., speakers822), haptic components (e.g., a vibratory motor, resistance mechanisms), other signal generators, and so forth. 
The input components926may include alphanumeric input components (e.g., a keyboard, a touch screen configured to receive alphanumeric input, a photo-optical keyboard, or other alphanumeric input components), point-based input components (e.g., a mouse, a touchpad, a trackball, a joystick, a motion sensor, or another pointing instrument), tactile input components (e.g., a physical button, a touch screen that provides location and/or force of touches or touch gestures, or other tactile input components), audio input components (e.g., a microphone), one or more cameras for capturing still images and video, and the like. In further example embodiments, the I/O components908may include biometric components928, motion components930, environmental components932, or position components934, among a wide array of possibilities. For example, the biometric components928may include components to detect expressions (e.g., hand expressions, facial expressions, vocal expressions, body gestures, or eye tracking), measure bio-signals (e.g., blood pressure, heart rate, body temperature, perspiration, or brain waves), identify a person (e.g., voice identification, retinal identification, facial identification, fingerprint identification, or electroencephalogram-based identification), and the like. The motion components930may include acceleration sensor components (e.g., accelerometer), gravitation sensor components, rotation sensor components (e.g., gyroscope), and so forth. The environmental components932may include, for example, illumination sensor components (e.g., photometer), temperature sensor components (e.g., one or more thermometers that detect ambient temperature), humidity sensor components, pressure sensor components (e.g., barometer), acoustic sensor components (e.g., one or more microphones that detect background noise), proximity sensor components (e.g., infrared sensors that detect nearby objects), gas sensors (e.g., gas detection sensors to detection concentrations of hazardous gases for safety or to measure pollutants in the atmosphere), or other components that may provide indications, measurements, or signals corresponding to a surrounding physical environment. The position components934may include location sensor components (e.g., a GPS receiver component), altitude sensor components (e.g., altimeters or barometers that detect air pressure from which altitude may be derived), orientation sensor components (e.g., magnetometers), and the like. Communication may be implemented using a wide variety of technologies. The I/O components908may include communication components936operable to couple the machine900to a network938or devices940via a coupling942and a coupling944, respectively. For example, the communication components936may include a network interface component or another suitable device to interface with the network938. In further examples, the communication components936may include wired communication components, wireless communication components, cellular communication components, Near Field Communication (NFC) components, Bluetooth® components (e.g., Bluetooth® Low Energy), Wi-Fi® components, and other communication components to provide communication via other modalities. The devices940may be another machine or any of a wide variety of peripheral devices (e.g., a peripheral device coupled via a USB). Moreover, the communication components936may detect identifiers or include components operable to detect identifiers. 
For example, the communication components936may include Radio Frequency Identification (RFID) tag reader components, NFC smart tag detection components, optical reader components (e.g., an optical sensor to detect one-dimensional bar codes such as Universal Product Code (UPC) bar code, multi-dimensional bar codes such as Quick Response (QR) code, Aztec code, Data Matrix, Dataglyph, MaxiCode, PDF417, Ultra Code, UCC RSS-2D bar code, and other optical codes), or acoustic detection components (e.g., microphones to identify tagged audio signals). In addition, a variety of information may be derived via the communication components936, such as location via Internet Protocol (IP) geolocation, location via Wi-Fi® signal triangulation, location via detecting an NFC beacon signal that may indicate a particular location, and so forth. Instruction and Data Storage Medium Embodiments The various memories (i.e., memory906, main memory916, static memory918, and/or memory of the processors904) and/or storage unit920may store one or more sets of instructions and data structures (e.g., software) embodying or utilized by any one or more of the methodologies or functions described herein. These instructions (e.g., the instructions902), when executed by processors904, cause various operations to implement the disclosed embodiments. As used herein, the terms “machine-storage medium,” “device-storage medium,” “computer-storage medium” mean the same thing and may be used interchangeably in this disclosure. The terms refer to a single or multiple storage devices and/or media (e.g., a centralized or distributed database, and/or associated caches and servers) that store executable instructions and/or data. The terms shall accordingly be taken to include, but not be limited to, solid-state memories, and optical and magnetic media, including memory internal or external to processors and internal or external to computer systems. Specific examples of machine-storage media, computer-storage media and/or device-storage media include non-volatile memory, including by way of example semiconductor memory devices, e.g., erasable programmable read-only memory (EPROM), electrically erasable programmable read-only memory (EEPROM), FPGA, and flash memory devices; magnetic disks such as internal hard disks and removable disks; magneto-optical disks; and CD-ROM and DVD-ROM disks. The terms “machine-storage media,” “computer-storage media,” and “device-storage media” specifically exclude carrier waves, modulated data signals, and other such intangible media, at least some of which are covered under the term “signal medium” discussed below. Communication Network Embodiments In various example embodiments, one or more portions of the network938may be an ad hoc network, an intranet, an extranet, a VPN, a LAN, a WLAN, a WAN, a WWAN, a MAN, the Internet, a portion of the Internet, a portion of the PSTN, a plain old telephone service (POTS) network, a cellular telephone network, a wireless network, a Wi-Fi® network, another type of network, or a combination of two or more such networks. For example, the network938or a portion of the network938may include a wireless or cellular network, and the coupling942may be a Code Division Multiple Access (CDMA) connection, a Global System for Mobile communications (GSM) connection, or another type of cellular or wireless coupling. 
In this example, the coupling 942 may implement any of a variety of types of data transfer technology, such as Single Carrier Radio Transmission Technology (1xRTT), Evolution-Data Optimized (EVDO) technology, General Packet Radio Service (GPRS) technology, Enhanced Data rates for GSM Evolution (EDGE) technology, third Generation Partnership Project (3GPP) including 3G, fourth generation wireless (4G) networks, Universal Mobile Telecommunications System (UMTS), High Speed Packet Access (HSPA), Worldwide Interoperability for Microwave Access (WiMAX), Long Term Evolution (LTE) standard, others defined by various standard-setting organizations, other long range protocols, or other data transfer technology. The instructions 902 and/or data generated by or received and processed by the instructions 902 may be transmitted or received over the network 938 using a transmission medium via a network interface device (e.g., a network interface component included in the communication components 936) and utilizing any one of a number of well-known transfer protocols (e.g., hypertext transfer protocol (HTTP)). Similarly, the instructions 902 may be transmitted or received using a transmission medium via the coupling 944 (e.g., a peer-to-peer coupling) to the devices 940. The terms “transmission medium” and “signal medium” mean the same thing and may be used interchangeably in this disclosure. The terms “transmission medium” and “signal medium” shall be taken to include any intangible medium that is capable of storing, encoding, or carrying the instructions 902 for execution by the machine 900, and/or data generated by execution of the instructions 902, and/or data to be operated on during execution of the instructions 902, and includes digital or analog communications signals or other intangible media to facilitate communication of such software. Hence, the terms “transmission medium” and “signal medium” shall be taken to include any form of modulated data signal, carrier wave, and so forth. The term “modulated data signal” means a signal that has one or more of its characteristics set or changed in such a manner as to encode information in the signal.
Listing of Drawing Elements

200 system
202 augmented or virtual reality device
204 device application
206 mobile programmable device
208 system application computer
210 system application
212 cloud application server
214 cloud application
216 network
218 router
220 private data network
222 wireless access point
224 operating system
226 device app
228 operating system
230 application
232 operating system
234 service
236 interpreter
238 driver
240 driver
242 driver
244 driver
246 file
248 file
250 plug-in
300 routine
302 block
304 block
306 block
308 block
310 block
312 block
314 block
316 block
318 block
320 block
322 block
324 block
400 operating sequence
402 device application
404 cloud application
406 system application
408 physical address of the augmented or virtual reality device
410 generate a scan code based on the physical address of the augmented or virtual reality device
412 scan code
414 display the scan code
416 scan the scan code
418 authentication credentials from the scan code
420 verification signal
422 physical address of the augmented or virtual reality device
424 configure the router of the private data network with the physical address of the augmented or virtual reality device
426 SSID and passphrase for the private data network, and the IP address for interacting on the private data network
428 the SSID and the passphrase for the private data network, and the IP address provided by the system application
430 augmented or virtual reality device
432 authenticated on the private data network using the SSID and the passphrase
434 interaction on the private data network using the IP address provided by the system application
500 device
502 headpiece
504 central portion
506 left optical component
508 right optical component
510 left in-coupling zone
512 right in-coupling zone
514 left intermediate zone
516 left exit zone
518 right exit zone
520 left stereo camera
522 right stereo camera
524 left microphone
526 right microphone
528 left speaker
530 right speaker
532 right intermediate zone
600 augmented or virtual reality device
602 processing units
604 input devices
606 memory
608 output devices
610 storage devices
612 network interface
614 logic
616 logic
618 logic
620 logic
700 augmented reality device logic
702 graphics engine
704 camera
706 processing units
708 CPU
710 GPU
712 WiFi
714 Bluetooth
716 speakers
718 microphones
720 memory
722 logic
800 augmented or virtual reality device logic
802 rendering engine
804 local augmentation logic
806 local modeling logic
808 device tracking logic
810 encoder
812 decoder
814 graphics engine
816 memory
818 cameras
820 microphones
822 speakers
900 machine
902 instructions
904 processors
906 memory
908 I/O components
910 bus
912 processor
914 processor
916 main memory
918 static memory
920 storage unit
922 machine-readable medium
924 output components
926 input components
928 biometric components
930 motion components
932 environmental components
934 position components
936 communication components
938 network
940 devices
942 coupling
944 coupling

“Algorithm” refers to any set of instructions configured to cause a machine to carry out a particular function or process. “App” refers to a type of application with limited functionality, most commonly associated with applications executed on mobile devices. Apps tend to have a more limited feature set and simpler user interface than applications as those terms are commonly understood in the art. “Application” refers to any software that is executed on a device above a level of the operating system. An application will typically be loaded by the operating system for execution and will make function calls to the operating system for lower-level services. An application often has a user interface but this is not always the case.
Therefore, the term ‘application’ includes background processes that execute at a higher level than the operating system. “Application program interface” refers to instructions implementing entry points and return values to a module. “Assembly code” refers to a low-level source code language comprising a strong correspondence between the source code statements and machine language instructions. Assembly code is converted into executable code by an assembler. The conversion process is referred to as assembly. Assembly language usually has one statement per machine language instruction, but comments and statements that are assembler directives, macros, and symbolic labels may also be supported. “Compiled computer code” refers to object code or executable code derived by executing a source code compiler and/or subsequent tools such as a linker or loader. “Compiler” refers to logic that transforms source code from a high-level programming language into object code or in some cases, into executable code. “Computer code” refers to any of source code, object code, or executable code. “Computer code section” refers to one or more instructions. “Computer program” refers to another term for ‘application’ or ‘app’. “Driver” refers to low-level logic, typically software, that controls components of a device. Drivers often control the interface between an operating system or application and input/output components or peripherals of a device, for example. “Executable” refers to a file comprising executable code. If the executable code is not interpreted computer code, a loader is typically used to load the executable for execution by a programmable device. “Executable code” refers to instructions in a ready-to-execute form by a programmable device. For example, source code instructions in non-interpreted execution environments are not executable code because they must usually first undergo compilation, linking, and loading by the operating system before they have the proper form for execution. Interpreted computer code may be considered executable code because it can be directly applied to a programmable device (an interpreter) for execution, even though the interpreter itself may further transform the interpreted computer code into machine language instructions. “File” refers to a unitary package for storing, retrieving, and communicating data and/or instructions. A file is distinguished from other types of packaging by having associated management metadata utilized by the operating system to identify, characterize, and access the file. “Instructions” refers to symbols representing commands for execution by a device using a processor, microprocessor, controller, interpreter, or other programmable logic. Broadly, ‘instructions’ can mean source code, object code, and executable code. ‘instructions’ herein is also meant to include commands embodied in programmable read-only memories (EPROM) or hard coded into hardware (e.g., ‘micro-code’) and like implementations wherein the instructions are configured into a machine memory or other hardware component at manufacturing time of a device. “Interpreted computer code” refers to instructions in a form suitable for execution by an interpreter. “Interpreter” refers to an interpreter is logic that directly executes instructions written in a source code scripting language, without requiring the instructions to a priori be compiled into machine language. 
An interpreter translates the instructions into another form, for example into machine language, or into calls to internal functions and/or calls to functions in other software modules. “Library” refers to a collection of modules organized such that the functionality of all the modules may be included for use by software using references to the library in source code. “Linker” refers to logic that inputs one or more object code files generated by a compiler or an assembler and combines them into a single executable, library, or other unified object code output. One implementation of a linker directs its output directly to machine memory as executable code (performing the function of a loader as well). “Loader” refers to logic for loading programs and libraries. The loader is typically implemented by the operating system. A typical loader copies an executable into memory and prepares it for execution by performing certain transformations, such as on memory addresses. “Logic” refers to machine memory circuits and non-transitory machine readable media comprising machine-executable instructions (software and firmware), and/or circuitry (hardware) which by way of its material and/or material-energy configuration comprises control and/or procedural signals, and/or settings and values (such as resistance, impedance, capacitance, inductance, current/voltage ratings, etc.), that may be applied to influence the operation of a device. Magnetic media, electronic circuits, electrical and optical memory (both volatile and nonvolatile), and firmware are examples of logic. Logic specifically excludes pure signals or software per se (however does not exclude machine memories comprising software and thereby forming configurations of matter). “Machine language” refers to instructions in a form that is directly executable by a programmable device without further translation by a compiler, interpreter, or assembler. In digital devices, machine language instructions are typically sequences of ones and zeros. “Module” refers to a computer code section having defined entry and exit points. Examples of modules are any software comprising an application program interface, drivers, libraries, functions, and subroutines. “Object code” refers to the computer code output by a compiler or as an intermediate output of an interpreter. Object code often takes the form of machine language or an intermediate language such as register transfer language (RTL). “Operating system” refers to logic, typically software, that supports a device's basic functions, such as scheduling tasks, managing files, executing applications, and interacting with peripheral devices. In normal parlance, an application is said to execute “above” the operating system, meaning that the operating system is necessary in order to load and execute the application and the application relies on modules of the operating system in most cases, not vice-versa. The operating system also typically intermediates between applications and drivers. Drivers are said to execute “below” the operating system because they intermediate between the operating system and hardware components or peripheral devices. “Plug-in” refers to software that adds features to an existing computer program without rebuilding (e.g., changing or re-compiling) the computer program. Plug-ins are commonly used for example with Internet browser applications. “Process” refers to software that is in the process of being executed on a device. 
“Programmable device” refers to any logic (including hardware and software logic) whose operational behavior is configurable with instructions. “Service” refers to a process configurable with one or more associated policies for use of the process. Services are commonly invoked on server devices by client devices, usually over a machine communication network such as the Internet. Many instances of a service may execute as different processes, each configured with a different or the same policies, each for a different client. “Software” refers to logic implemented as instructions for controlling a programmable device or component of a device (e.g., a programmable processor, controller). Software can be source code, object code, executable code, or machine language code. Unless otherwise indicated by context, software shall be understood to mean the embodiment of said code in a machine memory or hardware component, including “firmware” and micro-code. “Source code” refers to a high-level textual computer language that requires either interpretation or compilation in order to be executed by a device. “Subroutine” refers to a module configured to perform one or more calculations or other processes. In some contexts the term 'subroutine' refers to a module that does not return a value to the logic that invokes it, whereas a 'function' returns a value. However herein the term 'subroutine' is used synonymously with 'function'. “Task” refers to one or more operations that a process performs. Various functional operations described herein may be implemented in logic that is referred to using a noun or noun phrase reflecting said operation or function. For example, an association operation may be carried out by an “associator” or “correlator”. Likewise, switching may be carried out by a “switch”, selection by a “selector”, and so on. Within this disclosure, different entities (which may variously be referred to as “units,” “circuits,” other components, etc.) may be described or claimed as “configured” to perform one or more tasks or operations. This formulation—[entity] configured to [perform one or more tasks]—is used herein to refer to structure (i.e., something physical, such as an electronic circuit). More specifically, this formulation is used to indicate that this structure is arranged to perform the one or more tasks during operation. A structure can be said to be “configured to” perform some task even if the structure is not currently being operated.
A “credit distribution circuit configured to distribute credits to a plurality of processor cores” is intended to cover, for example, an integrated circuit that has circuitry that performs this function during operation, even if the integrated circuit in question is not currently being used (e.g., a power supply is not connected to it). Thus, an entity described or recited as “configured to” perform some task refers to something physical, such as a device, circuit, memory storing program instructions executable to implement the task, etc. This phrase is not used herein to refer to something intangible. The term “configured to” is not intended to mean “configurable to.” An unprogrammed FPGA, for example, would not be considered to be “configured to” perform some specific function, although it may be “configurable to” perform that function after programming. Reciting in the appended claims that a structure is “configured to” perform one or more tasks is expressly intended not to invoke 35 U.S.C. § 112(f) for that claim element. Accordingly, claims in this application that do not otherwise include the “means for” [performing a function] construct should not be interpreted under 35 U.S.C. § 112(f). As used herein, the term “based on” is used to describe one or more factors that affect a determination. This term does not foreclose the possibility that additional factors may affect the determination. That is, a determination may be solely based on specified factors or based on the specified factors as well as other, unspecified factors. Consider the phrase “determine A based on B.” This phrase specifies that B is a factor that is used to determine A or that affects the determination of A. This phrase does not foreclose that the determination of A may also be based on some other factor, such as C. This phrase is also intended to cover an embodiment in which A is determined based solely on B. As used herein, the phrase “based on” is synonymous with the phrase “based at least in part on.” As used herein, the phrase “in response to” describes one or more factors that trigger an effect. This phrase does not foreclose the possibility that additional factors may affect or otherwise trigger the effect. That is, an effect may be solely in response to those factors, or may be in response to the specified factors as well as other, unspecified factors. Consider the phrase “perform A in response to B.” This phrase specifies that B is a factor that triggers the performance of A. This phrase does not foreclose that performing A may also be in response to some other factor, such as C. This phrase is also intended to cover an embodiment in which A is performed solely in response to B. As used herein, the terms “first,” “second,” etc. are used as labels for nouns that they precede, and do not imply any type of ordering (e.g., spatial, temporal, logical, etc.), unless stated otherwise. For example, in a register file having eight registers, the terms “first register” and “second register” can be used to refer to any two of the eight registers, and not, for example, just logical registers 0 and 1. When used in the claims, the term “or” is used as an inclusive or and not as an exclusive or. For example, the phrase “at least one of x, y, or z” means any one of x, y, and z, as well as any combination thereof. Having thus described illustrative embodiments in detail, it will be apparent that modifications and variations are possible without departing from the scope of the invention as claimed. 
The scope of inventive subject matter is not limited to the depicted embodiments but is rather set forth in the following Claims.
11943233 | DETAILED DESCRIPTION (1) An electronic control unit according to an aspect of the present disclosure is an electronic control unit connected to an in-vehicle network bus in an in-vehicle network system, the in-vehicle network system including a plurality of apparatuses that perform communication of frames via the in-vehicle network bus, the electronic control unit including: a first control circuit; and a second control circuit, wherein the first control circuit is connected to the in-vehicle network bus via the second control circuit over wired communication and/or wireless communication, wherein the second control circuit performs a first determination process on a received frame that is received from the in-vehicle network bus, to which the second control circuit is connected, to determine a conformity of the received frame with a first rule related to at least a reception interval, and, upon determining that the received frame conforms to the first rule, executes a predetermined process based on content of the received frame, and wherein the first control circuit performs a second determination process on the received frame, received via the second control circuit, to determine a conformity of the received frame with a second rule that is different from the first rule. This makes it possible to determine, for example, whether or not a frame whose content is designated by the first control circuit in accordance with, for example, an application program or the like is unauthorized in terms of (does not conform to) rules in the in-vehicle network system (such as criteria for the allowable range of data values, the transmission period, and how frequently transmission occurs). This determination is useful for reducing delivery of an unauthorized frame to the bus. (2) In the aspect described above, the first rule may specify a condition for a period in which the received frame is received or how frequently the received frame is received. This makes it possible to determine whether or not a frame received from the bus is unauthorized in terms of (does not conform to) rules in the in-vehicle network system (such as criteria for the allowable range of data values, the reception period, and how frequently reception occurs). This determination is useful for improving security in the in-vehicle network system. Additionally, the result of the determination may be available for the analysis of whether or not the reception of an unauthorized frame has caused delivery of the unauthorized frame to the bus, and may also be useful for reducing delivery of an unauthorized frame to the bus. (3) In the aspect described above, upon determining that the received frame conforms to the first rule, the second control circuit may transmit the received frame to the first control circuit, and, upon determining that the received frame does not conform to the first rule, the second control circuit may not transmit the received frame to the first control circuit. This may prevent a frame determined not to conform to a rule by one control circuit from being subjected to some processing by the other control circuit, and may reduce the processing burden on each control circuit. (4) In the aspect described above, the first control circuit may be a semiconductor integrated circuit including a first microprocessor and a first memory, and may execute a program stored in the first memory by using the first microprocessor to perform the second determination process. 
The second control circuit may be a semiconductor integrated circuit including a second microprocessor and a second memory, the second microprocessor having a lower processing performance than the first microprocessor, and may execute a program stored in the second memory by using the second microprocessor to perform the first determination process. This may provide appropriate sharing of processing according to the processing performance of each microprocessor. (5) In the aspect described above, the electronic control unit may transmit, to an external server, information indicating a result of the first determination process and a result of the second determination process. This allows the server device to analyze the conformity of a frame with a rule, and makes it possible to achieve an improvement in security for the in-vehicle network system by using the result of the analysis. (6) In the aspect described above, the electronic control unit may record, on a predetermined recording medium, information indicating a result of the first determination process and a result of the second determination process. This allows the analysis of information recorded on the recording medium, and makes it possible to achieve an improvement in security for the in-vehicle network system by using the result of the analysis. (7) In the aspect described above, the plurality of apparatuses may include the electronic control unit and the plurality of apparatuses may perform the communication of frames in accordance with a Controller Area Network (CAN) protocol. This makes it possible to achieve a reduction in delivery of an unauthorized frame to the bus in the in-vehicle network system in which communication is performed in accordance with the CAN protocol. (8) An in-vehicle network system according to an aspect of the present disclosure is an in-vehicle network system including a plurality of apparatuses that perform communication of frames via an in-vehicle network bus, wherein at least one of the plurality of apparatuses comprises an electronic control unit connected to the in-vehicle network bus, wherein the electronic control unit at least includes a first control circuit and a second control circuit, wherein the first control circuit is connected to the in-vehicle network bus via the second control circuit over wired communication and/or wireless communication, wherein the second control circuit performs a first determination process on a received frame that is received from the in-vehicle network bus, to which the second control circuit is connected, to determine a conformity of the received frame with a first rule related to at least a reception interval, and, upon determining that the received frame conforms to the first rule, executes a predetermined received-frame-based process, based on a content of the received frame, and wherein the first control circuit performs a second determination process on the received frame received via the second control circuit to determine a conformity of the received frame with a second rule that is different from the first rule. This makes it possible to reduce delivery of an unauthorized frame to the bus. 
(9) A vehicle communication method according to an aspect of the present disclosure is a vehicle communication method for use in an in-vehicle network system including a plurality of apparatuses that perform communication of frames via an in-vehicle network bus, the plurality of apparatuses including a vehicle communication apparatus, the vehicle communication apparatus including a first control circuit and a second control circuit configured to exchange information on frames with the first control circuit via wired communication and/or wireless communication, the vehicle communication method including: performing, by the second control circuit, a first determination process on a received frame that is received from the in-vehicle network bus, to which the second control circuit is connected, to determine a conformity of the received frame with a first rule related to at least a reception interval; upon determining that the received frame conforms to the first rule, executing, by the second control circuit, a predetermined received-frame-based process based on a content of the received frame; and performing, by the first control circuit, a second determination process on the received frame, received via the second control circuit, to determine a conformity of the received frame with a second rule that is different from the first rule. The determination in the vehicle communication method is useful for reducing delivery of an unauthorized frame to the bus. It should be noted that these general or specific aspects may be implemented as a system, a method, an integrated circuit, a computer program, or a computer-readable recording medium such as a compact disc read-only memory (CD-ROM), or may be implemented as any combination of the system, the method, the integrated circuit, the computer program, or the recording medium. In the following, an in-vehicle network system including a vehicle communication apparatus according to embodiments will be described with reference to the drawings. Each of the embodiments described below shows a specific example of the present disclosure. Thus, the numerical values, constituent elements, the way in which the constituent elements are arranged and connected, steps (processes), the processing order of the steps, etc. shown in the following embodiments are mere examples, and do not limit the scope of the present disclosure. Among the constituent elements in the following embodiments, constituent elements not recited in any one of the independent claims are constituent elements that can be optionally added. In addition, the drawings are schematic and not representative of exact proportions or dimensions. First Embodiment Hereinafter, as an embodiment of the present disclosure, an in-vehicle network system10including a vehicle communication apparatus (a head unit100) that performs a vehicle communication method for determining the conformity of frames with a rule, the frames including a transmit frame and the like to be delivered to a bus, will be described with reference to the drawings. 1.1 Overall Configuration of In-Vehicle Network System10 FIG.1is a diagram illustrating an overall configuration of the in-vehicle network system10according to a the first embodiment. The in-vehicle network system10is an example of a network communication system in which communication is performed in accordance with the CAN protocol, and is a network communication system in an automobile provided with various devices such as a control device and a sensor. 
The in-vehicle network system10includes a plurality of apparatuses that perform communication of frames via a bus and adopts a vehicle communication method. Specifically, as illustrated inFIG.1, the in-vehicle network system10is configured to include a bus200, a head unit100, and nodes connected to the bus200, called ECUs, such as ECUs400ato400dconnected to various devices. While the in-vehicle network system10may include numerous ECUs other than the head unit100and the ECUs400ato400d, the description will be given here focusing on the head unit100and the ECUs400ato400d, for convenience, Each ECU is an apparatus including, for example, digital circuits such as a processor (microprocessor) and a memory, analog circuits, a communication circuit, and so on. The memory is a ROM, a RAM, or the like, and is capable of storing a control program (computer program) to be executed by the processor. For example, the processor operates in accordance with the control program (computer program), which results in the ECU implementing various functions. The computer program is constituted by a plurality of instruction codes indicating instructions for the processor to achieve a predetermined function. The ECUs400ato400dare connected to the bus200, and are connected to an engine310, brakes320, a door open/close sensor330, and a window open/close sensor340, respectively, Each of the ECUs400ato400dacquires the state of the device connected thereto (such as the engine310), and regularly transmits a frame indicating the state (a data frame described below) or the like to a network (that is, the bus200). The head unit100has a function of receiving frames transmitted from the ECUs400ato400dand displaying various states on a display (not illustrated) to present the states to a user. The head unit100further has a function of generating a frame indicating each piece of information acquired by the head unit100and transmitting the frame to one or more ECUs via the bus200. The head unit100further has a function of determining the conformity of a frame to be transmitted or received with a rule to, for example, identify whether or not the frame is an unauthorized frame (that is, a frame that does not conform to the rule) and performing filtering of the frame, if necessary. The head unit100can also have functions such as car navigation, playing music, reproducing moving images, displaying webpages, operating in coordination with a smartphone, and downloading and executing application programs. The head unit100is also a kind of ECU. In the in-vehicle network system10, the ECUs, including the head unit100, exchange frames in accordance with the CAN protocol. There are the following frames in the CAN protocol: a data frame, a remote frame, an overload frame, and an error frame. The description will here focus on the data frame and the error frame, for convenience of illustration. 1.2 Data Frame Format A description will now be given of the data frame, which is a frame used in a network compliant with the CAN protocol. FIG.2is a diagram illustrating the format of a data frame specified in the CAN protocol. In this figure there is illustrated a data frame in the standard ID format specified in the CAN protocol. 
The data frame is made up of the following fields: SOF (Start Of Frame), ID field, RTR (Remote Transmission Request), IDE (Identifier Extension), reserved bit “r”, DLC (Data Length Code), data field, CRC (Cyclic Redundancy Check) sequence, CRC delimiter “DEL”, ACK (Acknowledgement) slot, ACK delimiter “DEL”, and EOF (End Of Frame). The SOF is made up of one dominant bit. The recessive value is set for a state where the bus is idle, and is changed to the dominant value by the SOF to provide notification of the start of frame transmission. The ID field is a field made up of 11 bits for storing an ID (message ID) that is a value indicating a type of data. The ID field is designed such that a high priority is placed on a frame whose ID has a small value in order to use the ID field to arbitrate communication when a plurality of nodes simultaneously start transmission. The RTR is a value for identifying a data frame and a remote frame, and is made up of one dominant bit for a data frame. The IDE and “r” are both made up of one dominant bit. The DLC is made up of 4 bits, and is a value indicating the length of the data field. The IDE, “r”, and the DLC are collectively referred to as a control field. The data field is a value made up of up to 64 bits, indicating the content of data to be transmitted. The length is adjustable every 8 bits. The specification of data to be sent is not specified in the CAN protocol and is defined in the in-vehicle network system10. Accordingly, the specification is dependent on the type of vehicle, the manufacturer (producer), and so on. The CRC sequence is made up of 15 bits. The CRC sequence is calculated by using transmission values of the SOF, the ID field, the control field, and the data field. The CRC delimiter is a delimiter made up of one recessive bit, indicating the end of the CRC sequence. The CRC sequence and the CRC delimiter are collectively referred to as a CRC field. The ACK slot is made up of 1 bit. A transmitting node sets the recessive value in the ACK slot when transmitting the frame. A receiving node sets the dominant value in the ACK slot and transmits the frame if the receiving node has been able to correctly receive the frame up to the CRC sequence. Since the dominant value overrides the recessive value, if the ACK slot is constituted by the dominant value after transmission, the transmitting node can confirm that any receiving node has been successful in receiving the frame. The ACK delimiter is a delimiter made up of one recessive bit, indicating the end of the ACK. The EOF is made up of 7 bits, and indicates the end of the data frame. 1.3 Error Frame Format FIG.3is a diagram illustrating the format of an error frame specified in the CAN protocol. The error frame is constituted by an error flag (primary), an error flag (secondary), and an error delimiter. The error flag (primary) is used to inform any other node of the occurrence of an error, A node that has detected an error transmits 6 consecutive dominant bits in order to inform any other node of the occurrence of the error. This transmission violates a bit-stuffing rule (in which the same value should not be transmitted over 6 or more consecutive bits) in the CAN protocol, and induces the transmission of an error frame (secondary) from any other node. The error flag (secondary) is made up of 6 consecutive dominant bits, which is used to inform any other node of the occurrence of an error. 
All the nodes that have received the error flag (primary) and detected the violation of the bit-stuffing rule transmit an error flag (secondary). The error delimiter “DEL” is made up of 8 consecutive recessive bits, and indicates the end of the error frame. 1.4 Configuration of Head Unit100 The head unit100is a vehicle communication apparatus, and is a kind of ECU disposed on, for example, an instrument panel or the like of an automobile, including a display device such as a liquid crystal display (LCD) for displaying information to be viewed by a driver, an input means for accepting the operation of the driver, and so on. FIG.4is a configuration diagram of the head unit100. The head unit100is configured to include a multimedia control unit150(first control unit) and a system control unit110(second control unit). The system control unit110and the multimedia control unit150are each a chip (microchip) or the like that is a packaged semiconductor integrated circuit, for example, and are configured to be capable of communicating with each other via wired or wireless connection. Communication via wireless connection is performed by transmission of signals using electromagnetic waves, and includes optical communication. The system control unit110mainly takes on control over coordination with other in-vehicle devices (the ECUs400ato400d), that is, takes on control of communication via the bus200. The system control unit110is configured to include a frame transceiving unit111, a received frame interpretation unit112, a received-ID judgment unit113, a received-ID list holding unit114, a frame determination unit115, a determination rule holding unit116, a unit-to-unit communication processing unit117, and a transmit frame generation unit118. These constituent elements are functional ones, and each of their functions is implemented by elements integrated on a chip, such as a communication circuit, a memory, a processor (microprocessor) that executes a control program stored in the memory, and other circuits. The frame transceiving unit111transmits and receives a frame compliant with the CAN protocol to and from the bus200. The frame transceiving unit111receives a frame from the bus200bit-by-bit, and transfers the frame to the received frame interpretation unit112. Further, the frame transceiving unit111transmits the content of a frame reported by the transmit frame generation unit118to the bus200bit-by-bit. The received frame interpretation unit112receives the values of the frame from the frame transceiving unit111, and interprets the values so as to map the values into the respective fields in the frame formats specified in the CAN protocol. The received frame interpretation unit112transfers a value judged to correspond to the ID field to the received-ID judgment unit113. In accordance with a determination result reported from the received-ID judgment unit113, the received frame interpretation unit112decides whether to transfer the value in the ID field and the data field that appears after the ID field to the frame determination unit115or to abort reception of the frame (that is, abort interpretation of the frame) after the determination result has been received. Further, the received frame interpretation unit112notifies the transmit frame generation unit118of a request to transmit an error frame if the frame is judged not to comply with the CAN protocol, for example, if the values of the CRC do not match or if an item whose value should be fixed to the dominant value has the recessive value. 
Further, when an error frame is received, that is, when an error frame is interpreted to have started from a value in the received frame, the received frame interpretation unit112discards the subsequent part of the frame, that is, aborts interpretation of the frame. For example, in a case where an error frame is interpreted to have started in the middle of the data frame, the interpretation of the data frame is aborted and a particular process is not performed according to the data frame. The received-ID judgment unit113receives the value in the ID field reported from the received frame interpretation unit112, and determines whether or not to receive the respective fields of the frame after the ID field, in accordance with a list of message IDs held in the received-ID list holding unit114. The received-ID judgment unit113notifies the received frame interpretation unit112of the determination result. The received-ID list holding unit114holds a received-ID list that is a list of IDs (message IDs) which the head unit100receives.FIG.5illustrates an example of the received-ID list. The frame determination unit115receives the values of the frame received from the received frame interpretation unit112or receives from the unit-to-unit communication processing unit117values that are the content of a frame to be transmitted. Further, the frame determination unit115acquires a determination rule from the determination rule holding unit116, and decides whether or not to notify the transmit frame generation unit118or the unit-to-unit communication processing unit117of the values of the frame in accordance with the result of a frame determination process (system-control-unit frame determination process) based on the determination rule. This frame determination process (system-control-unit frame determination process) is a process for determining the conformity of a frame with the determination rule (such as determining whether or not the frame is unauthorized), and will be described in detail below with reference toFIG.15. The determination rule holding unit116holds a determination rule to be used to perform a frame determination process on a frame which the head unit100transmits or receives. The determination rule is a rule with which frames to be exchanged over the bus200in the in-vehicle network system10are to comply.FIG.6illustrates an example of the determination rule. The unit-to-unit communication processing unit117performs a communication process between different units. Specifically, the unit-to-unit communication processing unit117exchanges information with the multimedia control unit150via wired or wireless communication (transmission or reception). In accordance with a notification of instructions from the received frame interpretation unit112to transmit an error frame, the transmit frame generation unit118forms an error frame and notifies the frame transceiving unit111of the error frame for transmission. Further, the transmit frame generation unit118generates a data frame to be transmitted by using the ID, data, etc. reported from the frame determination unit115. The multimedia control unit150mainly takes on the process of executing and controlling an application program for implementing various functions (for example, functions such as car navigation, playing music, reproducing moving images, displaying webpages, and operating in coordination with a smartphone) on the head unit100. 
The multimedia control unit150is configured to include a unit-to-unit communication processing unit151, a frame determination unit152, a determination rule holding unit153, and an application execution unit154. These constituent elements are functional ones, and each of their functions is implemented by elements integrated on a chip, such as a communication circuit, a memory a processor that executes a control program stored in the memory, and other circuits. The unit-to-unit communication processing unit151performs a communication process between different units. Specifically, the unit-to-unit communication processing unit151exchanges information with the system control unit110via wired or wireless communication (transmission or reception). The frame determination unit152receives from the application execution unit154values that are the content of a frame to be transmitted or receives the values of the frame received from the unit-to-unit communication processing unit151. Further, the frame determination unit152acquires a determination rule from the determination rule holding unit153, and decides whether or not to notify the application execution unit154or the unit-to-unit communication processing unit151of the values of the frame in accordance with the result of a frame determination process (multimedia-control-unit frame determination process) based on the determination rule. This frame determination process (multimedia-control-unit frame determination process) is a process for determining the conformity of a frame with the determination rule (such as determining whether or not the frame is unauthorized), and will be described in detail below with reference toFIG.16. The determination rule holding unit153holds a determination rule to be used to perform a frame determination process on a frame which the head unit100transmits or receives. The determination rule is a rule with which frames to be exchanged over the bus200in the in-vehicle network system10are to comply. Note that the determination rule held in the determination rule holding unit153of the multimedia control unit150and the determination rule held in the determination rule holding unit116of the system control unit110may be the same or different so long as it is sufficient that each determination rule include at least information, such as criteria and conditions, necessary for the determination performed in the corresponding one of the frame determination unit152and the frame determination unit115. The application execution unit154executes and controls an application program for implementing various functions (for example, functions such as navigation, reproducing moving images, playing music, and web browsing) of the head unit100. For example, the application program is executed and controlled in accordance with an operation of the driver (user) which is accepted through an input means, and the application program is executed and controlled to allow, for example, information which is to be presented to the user to be displayed on the display device. The application program is downloaded via, for example, an external network other than the bus200and is executed on an execution environment such as a predetermined operating system (OS) operating on a processor of the multimedia control unit150by the processor to perform operation. 1.5 Example Received-ID List in Head Unit100 FIG.5is a diagram illustrating an example of the received-ID list held in the received-ID list holding unit114of the head unit100. 
The received-ID list illustrated by way of example inFIG.5is used to selectively receive and process a frame including a message ID whose ID (message ID) value is any of “1”, “2”, “3”, and “4”. The head unit100receives a frame (message) whose message ID is “1” from the ECU400aconnected to the engine310, a frame whose message ID is “2” from the ECU400bconnected to the brakes320, a frame whose message ID is “3” from the ECU400cconnected to the door open/close sensor330, and a frame whose message ID is “4” from the ECU400dconnected to the window open/close sensor340. 1.6 Example Determination Rule FIG.6is a diagram illustrating an example of the determination rule held in the determination rule holding unit116and the determination rule holding unit153of the head unit100. The determination rule illustrated by way of example inFIG.6indicates, for each ID (message ID), a rule (criterion) with which a message (frame) having the message ID is to comply. The determination rule includes, for each message ID, the following items: a transmission/reception type, a data length, a data range, a period (ms), a margin (ms), the presence or absence of an event, and a threshold frequency-of-occurrence value (the number of occurrences/sec). In the example inFIG.6, the transmission/reception type represents the value 1 for a received frame that is a frame transmitted from any other ECU and acquired by the head unit100via the bus200, and represents the value 0 for a transmit frame that is a frame for transmission from the head unit100to the bus200to discriminate them. The data length represents a criterion for the length (the number of bytes) of a data field included in a frame (data frame). The data range represents a criterion for the range of values that the data can take byte-by-byte for bytes1to8of the data field. As a range of values that the data can take, the sign “**” inFIG.6indicates that any value can be taken. Further, values connected by the sign “,” indicate that either value can be taken, and values connected by “-” (hyphen) indicate that any value within a range of the respective values as the upper and lower limits can be taken. The period represents a criterion for the period in which a frame is transmitted or received when the frame is repeatedly transmitted or received. InFIG.6, the period is represented by a value expressed in milliseconds. InFIG.6, the sign “-” as a period indicates that there is no frame to be periodically transmitted or received as a frame having the corresponding message ID. The margin represents a deviation from a period allowed for the determination of the conformity with the criterion for the period (allowable range of error). InFIG.6, the margin is represented by a value expressed in milliseconds. The presence or absence of an event represents a criterion for whether or not an event frame to be transmitted or received separately from the period is likely to be present as a frame having the corresponding message ID. InFIG.6, the value 1 indicates that such an event frame is likely to be present, and the value 0 indicates that such an event frame is absent. The threshold frequency-of-occurrence value represents a criterion for how frequently a frame is transmitted or received when an event frame to be transmitted or received is likely to be present as a frame having the corresponding message ID. InFIG.6, the threshold frequency-of-occurrence value is represented by the number of transmissions or receptions per second. 
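By way of illustration only, the determination rule items just described (data length, per-byte data range, period, margin, presence of an event, and threshold frequency-of-occurrence value) can be applied to each frame as a simple per-ID check. The following Python sketch is not the implementation of the frame determination units 115 and 152; it merely illustrates, under assumed names (RULES, conforms, last_seen_ms, event_times) and illustrative rule values, and ignoring the transmission/reception type for brevity, how conformity of a received frame with such a rule might be tested.

from collections import defaultdict, deque

RULES = {
    # Message ID 1: 1 data byte in the range 0-180, nominal period 50 ms with
    # a 5 ms margin, event frames permitted at most twice per second.
    # All numeric values here are illustrative, not taken from FIG. 6.
    1: {"length": 1, "ranges": [(0, 180)], "period_ms": 50,
        "margin_ms": 5, "event": True, "max_events_per_sec": 2},
}

last_seen_ms = {}                 # message ID -> time of the previous frame
event_times = defaultdict(deque)  # message ID -> recent off-period timestamps

def conforms(msg_id, data, now_ms):
    """Return True if a received frame conforms to its determination rule."""
    rule = RULES.get(msg_id)
    if rule is None:
        return False                                # unknown message ID
    if len(data) != rule["length"]:
        return False                                # data length criterion
    for value, (lo, hi) in zip(data, rule["ranges"]):
        if not lo <= value <= hi:
            return False                            # data range criterion
    ok = True
    prev = last_seen_ms.get(msg_id)
    if prev is not None:
        off_period = abs((now_ms - prev) - rule["period_ms"]) > rule["margin_ms"]
        if off_period:
            if not rule["event"]:
                ok = False                          # unexpected extra frame
            else:
                times = event_times[msg_id]
                times.append(now_ms)
                while times and now_ms - times[0] > 1000:
                    times.popleft()                 # keep a one-second window
                if len(times) > rule["max_events_per_sec"]:
                    ok = False                      # event frames too frequent
    last_seen_ms[msg_id] = now_ms
    return ok

# Example: a 1-byte value of 60 arriving 50 ms after the previous frame with
# message ID 1 would conform; a value of 200, or a burst of extra frames, would not.

In a division of labor of the kind described above for the first and second determination processes, the reception-interval portion of such a check could, for example, be applied by one control unit and the remaining criteria by the other; the split shown here is only one possibility.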
The example illustrated inFIG.6is merely an example of the determination rule, and the determination rule may have any content so long as the determination rule includes criteria and the like defined as requirements of a frame. InFIG.6, for example, rules (criteria) for data ranges are expressed in bytes. However, the rules may not necessarily be expressed in bytes. In addition, the rule categories (criteria) for a transmit frame may not necessarily match the rule categories (criteria) for a received frame. The determination rule may be expressed in table form or may be expressed by mathematical expression, program (command sequence), or the like. 1.7 Configuration of ECU400a FIG.7is a configuration diagram of the ECU400a. The ECU400ais configured to include a frame transceiving unit460, a received frame interpretation unit450, a received-ID judgment unit430, a received-ID list holding unit440, a frame processing unit410, a transmit frame generation unit420, and a data acquisition unit470. These constituent elements are functional ones, and each of their functions is implemented by elements in the ECU400a, such as a communication circuit, a processor that executes a control program stored in a memory, or a digital circuit. The frame transceiving unit460transmits and receives a frame compliant with the CAN protocol to and from the bus200. The frame transceiving unit460receives a frame from the bus200bit-by-bit, and transfers the frame to the received frame interpretation unit450. Further, the frame transceiving unit460transmits the content of a frame reported by the transmit frame generation unit420to the bus200. The received frame interpretation unit450receives the values of the frame from the frame transceiving unit460, and interprets the values so as to map the values into the respective fields in the frame formats specified in the CAN protocol. The received frame interpretation unit450transfers a value judged to correspond to the ID field to the received-ID judgment unit430. In accordance with a determination result reported from the received-ID judgment unit430, the received frame interpretation unit450decides whether to transfer the value in the ID field and the data field that appears after the ID field to the frame processing unit410or to abort reception of the frame (that is, abort interpretation of the frame) after the determination result has been received. Further, the received frame interpretation unit450notifies the transmit frame generation unit420of a request to transmit an error frame if the frame is judged not to comply with the CAN protocol. Further, when an error frame is received, that is, when an error frame is interpreted to have started from a value in the received frame, the received frame interpretation unit450discards the subsequent part of the frame, that is, aborts interpretation of the frame. The received-ID judgment unit430receives the value in the ID field reported from the received frame interpretation unit450, and determines whether or not to receive the respective fields of the frame after the ID field, in accordance with a list of message IDs held in the received-ID list holding unit440. The received-ID judgment unit430notifies the received frame interpretation unit450of the determination result. The received-ID list holding unit440holds a received-ID list that is a list of IDs (message IDs) which the ECU400areceives.FIG.8illustrates an example of the received-ID list.
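The ID filtering performed by the received-ID judgment unit430amounts to a membership test against the held list. The following C sketch is an illustration only; the list contents follow the example ofFIG.8, and the function name is hypothetical.

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* Hypothetical received-ID list (the example of FIG. 8 uses IDs "1" to "7"). */
static const uint32_t received_id_list[] = {1, 2, 3, 4, 5, 6, 7};

/* Membership test applied once the ID field of an incoming frame has been
 * interpreted: reception of the remaining fields continues only when the ID
 * is listed; otherwise interpretation of the frame is aborted. */
static bool id_is_to_be_received(uint32_t message_id)
{
    for (size_t i = 0; i < sizeof received_id_list / sizeof received_id_list[0]; i++) {
        if (received_id_list[i] == message_id) {
            return true;
        }
    }
    return false;
}
```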
The frame processing unit410performs a process related to a function that is different from ECU to ECU in accordance with the data of the received frame. For example, the ECU400aconnected to the engine310has a function of sounding an alarm when a door is open while the vehicle speed is over 30 km per hour. The ECU400aincludes, for example, a speaker or the like for sounding an alarm. The frame processing unit410of the ECU400amanages data (for example, information indicating the state of the doors) received from any other ECU, and performs a process such as sounding an alarm in a certain condition on the basis of the average speed per hour acquired from the engine310. The data acquisition unit470acquires data indicating the state of the elements connected to the ECU400a, such as devices and sensors, and notifies the transmit frame generation unit420of the data. In accordance with a notification of instructions to transmit an error frame, which are reported from the received frame interpretation unit450, the transmit frame generation unit420forms an error frame and notifies the frame transceiving unit460of the error frame for transmission. Further, the transmit frame generation unit420adds a predetermined message ID to the value of the data reported from the data acquisition unit470to form a frame, and notifies the frame transceiving unit460of the frame. Each of the ECUs400bto400dalso has a configuration basically similar to that of the ECU400adescribed above. The received-ID list held in the received-ID list holding unit440may have content different from ECU to ECU, or may have the same or substantially the same content. Furthermore, the content of the process performed by the frame processing unit410differs from ECU to ECU. For example, the content of the process performed by the frame processing unit410in the ECU400cincludes a process related to a function of sounding an alarm if a door is opened while the brakes are not applied. For example, the frame processing units410in the ECU400band the ECU400ddo not perform a special process. Each ECU may have functions other than those described here for illustrative purposes. The content of respective frames transmitted from the ECUs400ato400dwill be described below with reference toFIGS.9to12. 1.8 Example Received-ID List in ECUs400ato400d FIG.8is a diagram illustrating an example of the received-ID list held in each of the ECU400a, the ECU400b, the ECU400c, and the ECU400d. The received-ID list illustrated by way of example inFIG.8is used to selectively receive and process a frame including a message ID whose ID (message ID) value is any of “1”, “2”, “3”, “4”, “5”, “6”, and “7”. 1.9 Example Transmit Frame from Engine-Related ECU400a FIG.9is a diagram illustrating an example of an ID (message ID) and a data field (data) in a frame transmitted from the ECU400aconnected to the engine310. The ECU400atransmits a frame whose message ID is “1”. The data represents the average speed per hour (km/h), taking a value in the range from a minimum speed of 0 (km/h) to a maximum speed of 180 (km/h), and has a length of 1 byte.FIG.9illustrates, from top to bottom, message IDs and data corresponding to frames transmitted sequentially from the ECU400a, by way of example, and depicts acceleration, increasing the speed from 0 km/h in increments of 1 km/h. 1.10 Example Transmit Frame from Brake-Related ECU400b FIG.10is a diagram illustrating an example of an ID (message ID) and a data field (data) in a frame transmitted from the ECU400bconnected to the brakes320. 
The ECU400btransmits a frame whose message ID is "2". The data represents the degree to which the brakes are applied, expressed as a percentage (%), and has a length of 1 byte. A percentage of 0 (%) indicates a state where the brakes are not applied at all and 100 (%) indicates a state where the brakes are maximally applied.FIG.10illustrates, from top to bottom, message IDs and data corresponding to frames transmitted sequentially from the ECU400b, by way of example, and depicts a gradual easing off of the brakes from 100%. 1.11 Example Transmit Frame from Door-Open/Close-Sensor-Related ECU400c FIG.11is a diagram illustrating an example of an ID (message ID) and a data field (data) in a frame transmitted from the ECU400cconnected to the door open/close sensor330. The ECU400ctransmits a frame whose message ID is "3". The data represents the open or closed state for the door, and has a length of 1 byte. The data has the value "1" for a door-open state and the value "0" for a door-closed state.FIG.11illustrates, from top to bottom, message IDs and data corresponding to frames transmitted sequentially from the ECU400c, by way of example, and depicts a gradual transition from the door-open state to the closed state. 1.12 Example Transmit Frame from Window-Open/Close-Sensor-Related ECU400d FIG.12is a diagram illustrating an example of an ID (message ID) and a data field (data) in a frame transmitted from the ECU400dconnected to the window open/close sensor340. The ECU400dtransmits a frame whose message ID is "4". The data represents the open or closed state for the window, expressed as a percentage (%), and has a length of 1 byte. A percentage of 0 (%) indicates a state where the window is completely closed and 100 (%) indicates a state where the window is completely open.FIG.12illustrates, from top to bottom, message IDs and data corresponding to frames transmitted sequentially from the ECU400d, by way of example, and depicts a gradual transition from the window-closed state to the open state. 1.13 Frame Reception Operation Performed by Head Unit100 FIG.13is a flowchart illustrating an example of a frame reception process performed in the head unit100. The operation performed by the head unit100when a data frame is received will now be described in accordance withFIG.13. The head unit100receives a frame appearing on the bus200by using the frame transceiving unit111of the system control unit110(step S1100). When the frame transceiving unit111receives the ID portion of the frame, the received frame interpretation unit112and the received-ID judgment unit113refer to the received-ID list held in the received-ID list holding unit114to identify whether or not the frame is a frame including an ID to be received, thereby determining whether or not to receive the subsequent part of the frame (step S1200). If the received frame includes an ID not contained in the received-ID list, the frame reception process ends. If the ID of the received frame is included in the received-ID list in step S1200, the frame is received and the frame determination unit115performs a system-control-unit frame determination process (step S1300). The system-control-unit frame determination process S1300will be described below with reference toFIG.15.
If the determination result of the system-control-unit frame determination process is OK (normal) (that is, if it is determined that the received frame conforms to the determination rule) (step S1400), the system control unit110notifies the multimedia control unit150of the content of the frame (received frame) by using a unit-to-unit communication process (step S1500). If the determination result of the system-control-unit frame determination process is not OK in step S1400, the system control unit110does not notify the multimedia control unit150of the frame, and then ends the frame reception process. Then, in the multimedia control unit150, when the notification of the received frame is received, the frame determination unit152performs a multimedia-control-unit frame determination process (step S1600). The multimedia-control-unit frame determination process S1600will be described below with reference toFIG.16. In the multimedia-control-unit frame determination process, if the determination result is OK (that is, if it is determined that the received frame conforms to the determination rule) (step S1700), the frame determination unit152notifies the application execution unit154of the content of the received frame. Then, the application execution unit154executes an application program for implementing the various functions of the head unit100to perform a process corresponding to the content of the received frame (step S1800). Examples of the process corresponding to the content of the received frame (received-frame-based process) include computation based on the content of the data field in the received frame, and the output of a control signal based on the result of the computation (for example, the output of a control signal to display information on the display device). If the determination result of the multimedia-control-unit frame determination process is not OK in step S1700, the multimedia control unit150does not perform a process corresponding to the content of the received frame in accordance with the application program.
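Before turning to the transmission operation, the reception-side filtering described above can be summarized in code. The following C sketch collapses steps S1100to S1800into a single call chain; the frame layout and the helper names are assumptions for illustration, and in the embodiment the two determination processes run on separate chips (the system control unit110and the multimedia control unit150) rather than in one function.

```c
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical frame representation and helper names standing in for the
 * units of FIG. 13; they are not part of the patent text. */
typedef struct {
    uint32_t id;        /* message ID                */
    uint8_t  dlc;       /* data length code          */
    uint8_t  data[8];   /* data field                */
} can_frame_t;

bool id_in_received_id_list(uint32_t id);            /* step S1200 */
bool system_unit_frame_ok(const can_frame_t *f);     /* step S1300 */
bool multimedia_unit_frame_ok(const can_frame_t *f); /* step S1600 */
void notify_multimedia_unit(const can_frame_t *f);   /* step S1500 */
void run_application_process(const can_frame_t *f);  /* step S1800 */

/* Two-stage filtering on reception: the system control unit checks the frame
 * first and forwards it only when the result is OK; the multimedia control
 * unit checks it again before handing it to the application. */
void on_frame_received(const can_frame_t *f)         /* step S1100 */
{
    if (!id_in_received_id_list(f->id)) {
        return;              /* ID not to be received: reception ends here */
    }
    if (!system_unit_frame_ok(f)) {
        return;              /* NG: frame is not forwarded between units   */
    }
    notify_multimedia_unit(f);

    /* On the multimedia control unit side: */
    if (!multimedia_unit_frame_ok(f)) {
        return;              /* NG: the application process is not invoked */
    }
    run_application_process(f);
}
```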
1.14 Frame Transmission Operation Performed by Head Unit100 FIG.14is a flowchart illustrating an example of a frame transmission process performed in the head unit100. The operation performed by the head unit100for transmitting a data frame will now be described in accordance withFIG.14. It is assumed here that the application program executed by the application execution unit154includes content (command sequence) for providing a transmission instruction by specifying the content (for example, the ID and the content of the data field) of a frame (transmit frame) to be transmitted at constant intervals or when necessary for implementing the function. The application execution unit154executes the application program, and an instruction is given to transmit a data frame in accordance with the application program (step S2100). When a transmission instruction is provided by specifying the content of the transmit frame, the frame determination unit152performs a multimedia-control-unit frame determination process upon receipt of the notification of the specified content of the transmit frame from the application execution unit154(step S1600). In the illustrated example, it is assumed that a multimedia-control-unit frame determination process S1600, which is the same or substantially the same as that performed in the frame reception process described above (FIG.13), is performed in the frame transmission process. In the multimedia-control-unit frame determination process, if the determination result is OK (that is, if it is determined that the transmit frame conforms to the determination rule) (step S2200), the multimedia control unit150notifies the system control unit110of the content of the frame (transmit frame) by using a unit-to-unit communication process (step S2300). If the determination result of the multimedia-control-unit frame determination process is not OK in step S2200, the multimedia control unit150does not notify the system control unit110of the content of the transmit frame, and then ends the frame transmission process. Then, in the system control unit110, when the notification of the content of the transmit frame is received, the frame determination unit115performs a system-control-unit frame determination process (step S1300). In the illustrated example, it is assumed that a system-control-unit frame determination process S1300, which is the same or substantially the same as that performed in the frame reception process described above (FIG.13), is performed in the frame transmission process. In the system-control-unit frame determination process, if the determination result is OK (that is, if it is determined that the transmit frame conforms to the determination rule) (step S2400), the transmit frame generation unit118generates a transmit frame (data frame) on the basis of the content of the transmit frame (the content designated by the multimedia control unit150in accordance with the application program) (step S2500). Then, the frame transceiving unit111delivers the transmit frame generated by the transmit frame generation unit118to the bus200to transmit a data frame (step S2600). If the determination result of the system-control-unit frame determination process is not OK in step S2400, the system control unit110does not deliver the transmit frame to the bus200. That is, if the transmit frame conforms to the rule, the head unit100delivers the transmit frame to the bus200, whereas if the transmit frame does not conform to the rule (if nonconformity exists), the transmit frame is prevented from being delivered to the bus200. 1.15 System-Control-Unit Frame Determination Process FIG.15is a flowchart illustrating an example of the frame determination process (system-control-unit frame determination process) performed in the system control unit110. The system-control-unit frame determination process S1300, which is performed by the frame determination unit115of the system control unit110, will now be described in accordance withFIG.15. In the system-control-unit frame determination process S1300, the determination rule (seeFIG.6) held in the determination rule holding unit116is referenced to determine the conformity of a frame with the determination rule. If the frame conforms to the determination rule, the determination result is regarded as OK (normal), whereas, if the frame does not conform to the determination rule, the determination result is regarded as NG (unauthorized). It is assumed here that the determination rule includes the ID of each frame that the head unit100transmits or receives. The frame determination unit115identifies whether the target to be determined is a transmit frame or a received frame (step S1301). If the target to be determined is a transmit frame, the frame determination unit115regards the determination result as OK, and then ends the system-control-unit frame determination process (step S1302).
If the target to be determined is a received frame, the process proceeds to step S1303. Upon being notified of data (a received frame) by the received frame interpretation unit112, the frame determination unit115identifies the target to be determined as a received frame. Upon being notified of data (the content of a transmit frame) by the unit-to-unit communication processing unit117, the frame determination unit115identifies the target to be determined as a transmit frame. In step S1303, the frame determination unit115checks whether or not an ID that matches the ID of the frame to be determined is included in the determination rule (step S1303). If the ID is included, the frame determination unit115acquires the individual categories in the determination rule regarding the ID (step S1304). If an ID that matches the ID of the frame to be determined is not included in the determination rule, the frame determination unit115regards the determination result as NG (step S1306), and then ends the system-control-unit frame determination process. The frame determination unit115determines whether or not the frame to be determined conforms to the criteria for the categories, namely, the data length and the data range, in the determination rule, which are acquired in step S1304(step S1305). If the frame to be determined does not conform to one of the criterion for the data length and the criterion for the data range, the frame determination unit115regards the determination result as NG (step S1306), and then ends the system-control-unit frame determination process. If it is determined in step S1305that the frame to be determined conforms to both the criterion for the data length and the criterion for the data range, the frame determination unit115identifies whether or not the frame to be determined is a periodically transmitted or received frame (step S1307). Specifically, a frame for which the category for the period in the determination rule does not indicate the absence of a periodically transmitted or received frame and for which the category for the presence or absence of an event indicates the absence of an event frame is identified as a periodically transmitted or received frame, and a frame otherwise is identified as not being a periodically transmitted or received frame. If the frame to be determined is identified as a periodically transmitted or received frame, the frame determination unit115performs a frame period determination process (step S1330). If the frame to be determined is identified as not being a periodically transmitted or received frame, the frame determination unit115performs a frame frequency-of-occurrence determination process (step S1350). The frame period determination process S1330and the frame frequency-of-occurrence determination process S1350will be described below. 1.16 Multimedia-Control-Unit Frame Determination Process FIG.16is a flowchart illustrating an example of the frame determination process (multimedia-control-unit frame determination process) performed in the multimedia control unit150. The multimedia-control-unit frame determination process S1600performed by the frame determination unit152of the multimedia control unit150will now be described in accordance withFIG.16. In the multimedia-control-unit frame determination process S1600, the determination rule (seeFIG.6) held in the determination rule holding unit153is referenced to determine the conformity of a frame with the determination rule. 
If the frame conforms to the determination rule, the determination result is regarded as OK (normal), whereas, if the frame does not conform to the determination rule, the determination result is regarded as NG (unauthorized). The frame determination unit152identifies whether the target to be determined is a transmit frame or a received frame (step S1601). If the target to be determined is a received frame, the frame determination unit152regards the determination result as OK, and then ends the multimedia-control-unit frame determination process (step S1602). If the target to be determined is a transmit frame, the process proceeds to step S1303. Upon being notified of data (a received frame) by the unit-to-unit communication processing unit151, the frame determination unit152identifies the target to be determined as a received frame. Upon being notified of data (the content of a transmit frame) by the application execution unit154, the frame determination unit152identifies the target to be determined as a transmit frame. In the multimedia-control-unit frame determination process S1600, steps S1303, S1304, S1305, S1306, S1307, S1330, and S1350have the same or substantially the same content as those in the system-control-unit frame determination process S1300(FIG.15) described above, and are not described herein. 1.17 Frame Period Determination Process FIG.17is a flowchart illustrating an example of the frame period determination process S1330. The frame determination unit (namely, the frame determination unit115or the frame determination unit152) executes the frame period determination process S1330. The frame period determination process S1330is a process for determining whether or not the frame to be determined conforms to the criterion for periodicity, that is, whether or not the frame is being transmitted or received in a correct period. In the frame period determination process S1330, first, the frame determination unit acquires the current time (step S1331). For example, the frame determination unit115acquires the current time by using a time counting mechanism (such as a timer) in the system control unit110, and the frame determination unit152acquires the current time by using a time counting mechanism (such as a timer) in the multimedia control unit150. Alternatively, each frame determination unit may acquire the current time from a time counting mechanism (such as a timer) that is a circuit separate from the system control unit110and the multimedia control unit150in the head unit100. Then, the frame determination unit calculates a difference between the acquired current time and a saved reference time (step S1332). When the frame period determination process S1330is executed for the first time, there is no saved reference time. Thus, as an exception, the determination result is regarded as OK, for example, and the current time is saved as a reference time. Subsequently to step S1332, the frame determination unit identifies whether or not the difference (referred to as difference time) between the current time and the reference time falls within the range of period±margin (step S1333). This identification is performed by using the period and margin, corresponding to the ID of the frame to be determined by the frame determination unit, in the determination rule (seeFIG.6).
If the difference time is identified in step S1333as being within the range of period±margin, the frame determination unit regards the determination result as OK (normal) (step S1334), and updates the reference time by using the current time acquired in step S1331(step S1337). Then, the frame period determination process S1330ends. If the difference time is identified in step S1333as not being within the range of period±margin, the frame determination unit regards the determination result as NG (unauthorized) (step S1335), and identifies whether or not the difference time is larger than the value given by period+margin (step S1336). If the difference time is identified in step S1336as not being larger than the value given by period+margin, the frame determination unit ends the frame period determination process S1330. If the difference time is identified as being larger than the value given by period+margin, the frame determination unit updates the reference time by using the current time acquired in step S1331(step S1337), and then ends the frame period determination process S1330. 1.18 Frame Frequency-of-Occurrence Determination Process FIG.18is a flowchart illustrating an example of the frame frequency-of-occurrence determination process S1350. The frame determination unit (namely, the frame determination unit115or the frame determination unit152) executes the frame frequency-of-occurrence determination process S1350. The frame frequency-of-occurrence determination process S1350is a process for determining whether or not the frame to be determined conforms to the criterion for the frequency of occurrence, that is, whether or not the frame is being transmitted or received with a correct frequency (with a frequency less than the threshold frequency-of-occurrence value). In the frame frequency-of-occurrence determination process S1350, first, the frame determination unit identifies whether or not a frequency-of-occurrence check timer exceeds a set time (which is set here to one second) (step S1351). For example, the frame determination unit115causes a time counting mechanism (such as a timer) in the system control unit110to function as a frequency-of-occurrence check timer, and the frame determination unit152causes a time counting mechanism (such as a timer) in the multimedia control unit150to function as a frequency-of-occurrence check timer. Alternatively, each frame determination unit may implement the function of the frequency-of-occurrence check timer by using a time counting mechanism (such as a timer) that is a circuit separate from the system control unit110and the multimedia control unit150in the head unit100. Then, the frame determination unit clears (resets) the frequency-of-occurrence check timer when the frequency-of-occurrence check timer exceeds the set time, and clears a frequency-of-occurrence counter serving as a variable for counting the frequency of occurrence (step S1352). If the frequency-of-occurrence check timer does not exceed the set time in step S1351and when the frequency-of-occurrence check timer is reset in step S1352, the frame determination unit increments (increases by 1) the frequency-of-occurrence counter (step S1353). Then, the frame determination unit identifies whether or not the frequency-of-occurrence counter is smaller than the threshold frequency-of-occurrence value, corresponding to the ID of the frame to be determined by the frame determination unit, in the determination rule (seeFIG.6) (step S1354). 
If the frequency-of-occurrence counter is identified in step S1354as being smaller than the threshold frequency-of-occurrence value, the frame determination unit regards the determination result as OK (normal) (step S1355), and then ends the frame frequency-of-occurrence determination process S1350. If the frequency-of-occurrence counter is identified in step S1354as not being smaller than the threshold frequency-of-occurrence value, the frame determination unit regards the determination result as NG (unauthorized) (step S1356), and then ends the frame frequency-of-occurrence determination process S1350. 1.19 Advantageous Effects of First Embodiment In the in-vehicle network system10according to the first embodiment, whether or not a frame conforms to a specified rule is determined by using a plurality of rule categories (criteria) such as how frequently a frame is transmitted or received or the period in which a frame is transmitted or received, and the subsequent processing is filtered (prevented from taking place) if the frame does not conform to the specified rule. This reduces the delivery of an unauthorized frame to the bus200according to an unauthorized application program. In addition, the respective frame determination units of the system control unit110, which is a chip that mainly takes charge of communication using the bus200, and the multimedia control unit150, which is a chip that mainly takes charge of operations such as executing and controlling an application program, share the determination for a transmit frame and a received frame to determine whether or not the frames conform to rules. This prevents a frame that does not conform to a rule from being transmitted between the chips, thereby reducing the processing load. 1.20 First Modification of First Embodiment There will now be described a frame period determination process S3330, which is obtained by partially modifying the frame period determination process S1330(seeFIG.17) described above. FIG.19illustrates a flowchart of the frame period determination process S3330. The frame determination unit (namely, the frame determination unit115or the frame determination unit152) executes the frame period determination process S3330instead of the frame period determination process S1330described above. In the frame period determination process S3330, steps S1331to S1336have the same or substantially the same content as those in the frame period determination process S1330(FIG.17) described above and are not described, as appropriate. If the difference time is identified in step S1333as being within the range of period±margin, the frame determination unit regards the determination result as OK (normal) (step S1334), and saves, as the latest reception time, the time at which the frame to be determined was received (step S3331). After step S1335or step S3331, the frame determination unit identifies whether or not the difference time is larger than a value given by period+margin (step S1336). If the difference time is identified in step S1336as not being larger than the value given by period+margin, the frame determination unit ends the frame period determination process S3330. If the difference time is identified as being larger than the value given by period+margin, the frame determination unit updates the reference time by using the latest reception time saved in step S3331(step S3332). 
In step S3332, the reference time is updated by using the latest (last) among reception times of a plurality of frames that were received over a range of times for which the determination result was regarded as OK. Instead of this, for example, the reference time may be updated by using the initial reception time among reception times of a plurality of frames that were received over a range of times for which the determination result was regarded as OK. Alternatively, the reference time may be updated by using a reception time among the reception times of the plurality of frames that is the closest to a time obtained by adding the period to the original reference time before the update. Subsequently to step S3332, the frame determination unit saves, as the latest reception time, a value given by reference time+period (step S3333), and then ends the frame period determination process S3330. In step S3333, instead of the value given by reference time+period, a certain time within a range from a value given by reference time+period−margin to a value given by reference time+period+margin may be saved as the latest reception time. According to a first modification of the first embodiment, the determination results for all the frames within the range of reference time+period±margin are regarded as OK, which may prevent a frame that conforms to a rule from being erroneously determined to be unauthorized. 1.21 Second Modification of First Embodiment The frame determination unit115of the system control unit110and the frame determination unit152of the multimedia control unit150may share the frame determination process (the determination of conformity with a determination rule) in a way different from that described above (seeFIG.15andFIG.16).FIG.20andFIG.21illustrate another example of sharing according to a second modification of the first embodiment. FIG.20is a flowchart illustrating a system-control-unit frame determination process S4300as a modification of the system-control-unit frame determination process S1300. In the system-control-unit frame determination process S4300, steps S1301to S1306have the same or substantially the same content as those in the system-control-unit frame determination process S1300(FIG.15) described above, and are not described herein. If it is determined in step S1305that the frame to be determined conforms to both the criterion for the data length and the criterion for the data range, the frame determination unit115regards the determination result as OK (normal) (step S4301), and then ends the system-control-unit frame determination process. The system-control-unit frame determination process S4300does not include the frame period determination process S1330or the frame frequency-of-occurrence determination process S1350of the system-control-unit frame determination process S1300(FIG.15). FIG.21is a flowchart illustrating a multimedia-control-unit frame determination process S4600as a modification of the multimedia-control-unit frame determination process S1600. In the multimedia-control-unit frame determination process S4600, steps S1303to S1307, S1601, S1330, and S1350have the same or substantially the same content as those in the multimedia-control-unit frame determination process S1600(FIG.16) described above and are not described, as appropriate. The frame determination unit152checks whether or not an ID that matches the ID of the frame to be determined is included in the determination rule (seeFIG.6) (step S1303).
If the ID is included, the frame determination unit152acquires the individual categories in the determination rule regarding the ID (step S1304). If an ID that matches the ID of the frame to be determined is not included in the determination rule, the frame determination unit152regards the determination result as NG (step S1306), and then ends the multimedia-control-unit frame determination process. The frame determination unit152identifies whether the target to be determined is a transmit frame or a received frame (step S1601). If the target to be determined is a transmit frame, the frame determination unit152determines whether or not the frame to be determined conforms to the criteria for the categories, namely, the data length and the data range, in the determination rule, which are acquired in step S1304(step S1305). Upon being notified of data (a received frame) by the unit-to-unit communication processing unit151, the frame determination unit152identifies the target to be determined as a received frame. Upon being notified of data (the content of a transmit frame) by the application execution unit154, the frame determination unit152identifies the target to be determined as a transmit frame. If the frame to be determined does not conform to one of the criterion for the data length and the criterion for the data range in step S1305, the frame determination unit152regards the determination result as NG (step S1306), and then ends the multimedia-control-unit frame determination process. If it is determined in step S1305that the frame to be determined conforms to both the criterion for the data length and the criterion for the data range or if the target to be determined is identified as a received frame in step S1601, the frame determination unit152identifies whether or not the frame to be determined is a periodically transmitted or received frame (step S1307). If the frame to be determined is identified as a periodically transmitted or received frame, the frame determination unit152performs a frame period determination process (step S1330). If the frame to be determined is identified as not being a periodically transmitted or received frame, the frame determination unit152performs a frame frequency-of-occurrence determination process (step S1350). In the second modification of the first embodiment, accordingly, in the multimedia-control-unit frame determination process S4600, not only a transmit frame but also a received frame is subjected to either of the frame period determination process S1330and the frame frequency-of-occurrence determination process S1350. That is, in the second modification, as compared to the first embodiment, the role of part of processing for a received frame (such as the frame period determination process S1330and the frame frequency-of-occurrence determination process S1350) is shifted from the system control unit110to the multimedia control unit150. The second modification is effective when, for example, the multimedia control unit has higher performance than the system control unit. Accordingly, it is useful to distribute the load of a process for determining the conformity of a frame with a rule in accordance with the processing capacities of each control unit.
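Since the sharing arrangements discussed above shift the frame period determination process S1330and the frame frequency-of-occurrence determination process S1350between the system control unit110and the multimedia control unit150, a compact view of the two checks may be helpful. The following C sketch follows the flow ofFIG.17andFIG.18; the state layout, the millisecond time source, and the handling of the very first frame are assumptions made for illustration and are not prescribed by the embodiment.

```c
#include <stdbool.h>
#include <stdint.h>

/* Per-message-ID rule values and check state; names are illustrative, and
 * now_ms is assumed to come from a millisecond timer in the control unit. */
typedef struct {
    uint32_t period_ms;        /* expected period from the determination rule */
    uint32_t margin_ms;        /* allowed deviation from the period           */
    uint32_t max_per_second;   /* threshold frequency-of-occurrence value     */
    uint32_t reference_ms;     /* saved reference time                        */
    bool     has_reference;    /* false until the first frame is seen         */
    uint32_t window_start_ms;  /* start of the current one-second window      */
    uint32_t counter;          /* frequency-of-occurrence counter             */
} frame_check_state_t;

/* Frame period determination (S1330): OK when the time elapsed since the
 * reference time falls within period +/- margin. */
bool period_check(frame_check_state_t *s, uint32_t now_ms)
{
    if (!s->has_reference) {               /* first execution: treated as OK  */
        s->has_reference = true;
        s->reference_ms = now_ms;
        return true;
    }
    uint32_t diff = now_ms - s->reference_ms;
    bool ok = diff + s->margin_ms >= s->period_ms &&     /* diff >= period - margin */
              diff <= s->period_ms + s->margin_ms;       /* diff <= period + margin */
    if (ok || diff > s->period_ms + s->margin_ms) {
        s->reference_ms = now_ms;          /* update the reference time (S1337) */
    }
    return ok;
}

/* Frame frequency-of-occurrence determination (S1350): OK while the number of
 * frames counted in the current one-second window stays below the threshold. */
bool frequency_check(frame_check_state_t *s, uint32_t now_ms)
{
    if (now_ms - s->window_start_ms > 1000u) {  /* check timer exceeded: reset */
        s->window_start_ms = now_ms;
        s->counter = 0;
    }
    s->counter++;
    return s->counter < s->max_per_second;
}
```

Updating the reference time whenever the difference time exceeds the value given by period+margin corresponds to step S1337 inFIG.17.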
1.22 Third Modification of First Embodiment There will now be described a third modification, as still another example, in which the frame determination unit115of the system control unit110and the frame determination unit152of the multimedia control unit150share the frame determination process (the determination of conformity with a determination rule) (seeFIG.15andFIG.16) in a different way. FIG.22is a flowchart illustrating a system-control-unit frame determination process S5300as another modification of the system-control-unit frame determination process S1300. In the system-control-unit frame determination process S5300, steps S1301to S1304, S1306, S1307, S1330, and S1350have the same or substantially the same content as those in the system-control-unit frame determination process S1300(FIG.15) described above, and are not described herein, as appropriate. After step S1304, the frame determination unit115identifies whether or not the frame to be determined is a periodically transmitted or received frame (step S1307). The system-control-unit frame determination process S5300does not include step S1305of the system-control-unit frame determination process S1300(FIG.15) for determining whether or not the frame to be determined conforms to the criteria for the data length and the data range in the determination rule. FIG.23is a flowchart illustrating a multimedia-control-unit frame determination process S5600as another modification of the multimedia-control-unit frame determination process S1600. In the multimedia-control-unit frame determination process S5600, steps S1303to S1307, S1601, S1602, S1330, and S1350have the same or substantially the same content as those in the multimedia-control-unit frame determination process S1600(FIG.16) described above, and are not described herein, as appropriate. The frame determination unit152checks whether or not an ID that matches the ID of the frame to be determined is included in the determination rule (seeFIG.6) (step S1303). If the ID is included, the frame determination unit152acquires the individual categories in the determination rule regarding the ID (step S1304). If an ID that matches the ID of the frame to be determined is not included in the determination rule, the frame determination unit152regards the determination result as NG (step S1306), and then ends the multimedia-control-unit frame determination process. Subsequently to step S1304, the frame determination unit152determines whether or not the frame to be determined conforms to the criteria for the categories, namely, the data length and the data range, in the determination rule (step S1305). If the frame to be determined does not conform to one of the criterion for the data length and the criterion for the data range in step S1305, the frame determination unit152regards the determination result as NG (step S1306), and then ends the multimedia-control-unit frame determination process. If it is determined in step S1305that the frame to be determined conforms to both the criterion for the data length and the criterion for the data range, the frame determination unit152identifies whether the target to be determined is a transmit frame or a received frame (step S1601). If the target to be determined is a received frame rather than a transmit frame, the frame determination unit152regards the determination result as OK (step S1602), and then ends the multimedia-control-unit frame determination process. 
If the target to be determined is a transmit frame, the process proceeds to step S1307. In the third modification of the first embodiment, accordingly, in the multimedia-control-unit frame determination process S5600, not only a transmit frame but also a received frame is subjected to step S1305for determining whether or not the frame to be determined conforms to the criteria for the data length and the data range in the determination rule. That is, in the third modification, as compared to the first embodiment, the role of part of processing for a received frame (step S1305) is shifted from the system control unit110to the multimedia control unit150. The third modification is effective when, for example, the multimedia control unit has higher performance than the system control unit. In addition, out of the system control unit110and the multimedia control unit150, the system control unit110, which earlier processes a received frame from the bus200, determines the conformity of the received frame with the rule category (criterion) for the period or the frequency of occurrence. This allows the determination to be performed accurately without being affected by a delay and the like caused by transmission between the two units. Accordingly, it is useful to appropriately distribute the load of a process for determining the conformity of a frame with a rule in accordance with the positional relationship between each control unit and the bus200and in accordance with the processing capacities of each control unit. In the third modification, for example, a program stored in a memory on a chip that is the multimedia control unit150(first control unit) of the head unit100is executed by a processor to implement the function of a first determination unit that determines whether or not a transmit frame conforms to a first predetermined rule (such as the data length and the data range, or the period and the frequency of occurrence). If the first determination unit determines that the transmit frame does not conform to the first predetermined rule, information on the transmit frame is prevented from being transmitted from the first control unit to the system control unit110(second control unit). In addition, a program stored in a memory on a chip that is the system control unit110(second control unit) of the head unit100is executed by a processor to implement the function of a second determination unit that determines whether or not a received frame conforms to a second predetermined rule (such as the period and the frequency of occurrence). Furthermore, a program stored in a memory on a chip that is the multimedia control unit150is executed by a processor to implement the function of a third determination unit that determines whether or not a received frame conforms to a third predetermined rule (such as the data length and the data range). Note that the first determination unit and the third determination unit are constituent elements of the frame determination unit152and the second determination unit is a constituent element of the frame determination unit115. Second Embodiment Hereinafter, as an embodiment of the present disclosure, an in-vehicle network system including a vehicle communication apparatus (head unit2100) that performs a vehicle communication method for determining the conformity of frames with a rule, the frames including a transmit frame and the like to be delivered to a bus, and saving the determination result as a log will be described with reference to the drawings.
The in-vehicle network system according to this embodiment is constructed by replacing the head unit100of the in-vehicle network system10illustrated in the first embodiment with a head unit2100. 2.1 Configuration of Head Unit2100 The head unit2100is a vehicle communication apparatus, similarly to the head unit100, and is a kind of ECU disposed on, for example, an instrument panel or the like of an automobile, including a display device for displaying information to be viewed by a driver, an input means for accepting the operation of the driver, and so on. FIG.24is a configuration diagram of the head unit2100. As illustrated inFIG.24, the head unit2100is configured to include a system control unit2110and a multimedia control unit2150. Constituent elements identical to or substantially identical to those illustrated in the first embodiment are assigned the same numerals, and are not described herein. The system control unit2110is obtained by partially modifying the system control unit110illustrated in the first embodiment, and the multimedia control unit2150is obtained by partially modifying the multimedia control unit150illustrated in the first embodiment. The system control unit2110mainly takes on control of communication via the bus200, and is configured to include the frame transceiving unit111, the received frame interpretation unit112, the received-ID judgment unit113, the received-ID list holding unit114, a frame determination unit2115, the determination rule holding unit116, the unit-to-unit communication processing unit117, the transmit frame generation unit118, and a determination result holding unit2119. The frame determination unit2115receives the values of the frame received from the received frame interpretation unit112or receives from the unit-to-unit communication processing unit117values that are the content of a frame to be transmitted. Further, the frame determination unit2115acquires a determination rule from the determination rule holding unit116, and notifies the determination result holding unit2119of information concerning the result of a frame determination process (system-control-unit frame determination process) based on the determination rule. The content of the system-control-unit frame determination process is the same or substantially the same as that illustrated in the first embodiment (seeFIG.15). Further, the frame determination unit2115notifies the transmit frame generation unit118or the unit-to-unit communication processing unit117of the values of the frame regardless of the result of the system-control-unit frame determination process. The determination result holding unit2119is implemented by including a storage medium such as a memory, and, upon being notified of information concerning the result of the frame determination process by the frame determination unit2115, saves the information in the storage medium as a log. The information concerning the result of the frame determination process is information including the content of a frame to be determined and the determination result. The multimedia control unit2150mainly takes on the process of executing and controlling an application program, and is configured to include the unit-to-unit communication processing unit151, a frame determination unit2152, the determination rule holding unit153, the application execution unit154, and a determination result holding unit2155. 
The frame determination unit2152receives from the application execution unit154values that are the content of a frame to be transmitted or receives the values of the frame received from the unit-to-unit communication processing unit151. Further, the frame determination unit2152acquires a determination rule from the determination rule holding unit153, and notifies the determination result holding unit2155of information concerning the result of a frame determination process (multimedia-control-unit frame determination process) based on the determination rule. The content of the multimedia-control-unit frame determination process is the same or substantially the same as that illustrated in the first embodiment (seeFIG.16). Further, the frame determination unit2152notifies the application execution unit154or the unit-to-unit communication processing unit151of the values of the frame regardless of the result of the multimedia-control-unit frame determination process. The determination result holding unit2155is implemented by including a storage medium such as a memory, and, upon being notified of information concerning the result of the frame determination process (information including the content of a frame to be determined and the determination result) by the frame determination unit2152, saves the information in the storage medium as a log.
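The information saved by the determination result holding unit2119and the determination result holding unit2155is described only as including the content of the frame to be determined and the determination result, so the concrete layout is left open. The following C sketch shows one possible log entry and save routine; all names, sizes, and the overwrite policy are assumptions made for illustration.

```c
#include <stdint.h>

/* Hypothetical layout of one log entry: the content of the determined frame
 * plus the determination result. */
typedef struct {
    uint32_t timestamp_ms;   /* when the determination was made              */
    uint32_t message_id;     /* ID of the determined frame                   */
    uint8_t  dlc;            /* data length of the determined frame          */
    uint8_t  data[8];        /* data field of the determined frame           */
    uint8_t  result_ok;      /* 1 = OK (conforms), 0 = NG (unauthorized)     */
} determination_log_entry_t;

#define LOG_CAPACITY 256u

static determination_log_entry_t log_buffer[LOG_CAPACITY];
static uint32_t log_next;

/* Save one determination result; the oldest entries are overwritten when the
 * buffer is full (the embodiment only requires that the information be saved
 * as a log, so the storage policy here is an illustrative design choice). */
void save_determination_result(const determination_log_entry_t *entry)
{
    log_buffer[log_next % LOG_CAPACITY] = *entry;
    log_next++;
}
```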
2.2 Frame Reception Operation Performed by Head Unit2100 FIG.25is a flowchart illustrating an example of a frame reception process performed in the head unit2100. As illustrated inFIG.25, the frame reception process performed in the head unit2100includes the same or substantially the same processing as that in steps S1100, S1200, S1300, S1500, S1600, and S1800of the frame reception process performed in the head unit100(seeFIG.13). The same or substantially the same processing is not described herein, as appropriate. Subsequently to the system-control-unit frame determination process S1300, the frame determination unit2115in the system control unit2110of the head unit2100notifies the determination result holding unit2119of information concerning the determination result (information including the content of a frame to be determined and the determination result) to cause the information to be saved as a log (step S11001). Then, the system control unit2110notifies the multimedia control unit2150of the content of the frame (received frame) by using a unit-to-unit communication process (step S1500). Further, subsequently to the multimedia-control-unit frame determination process S1600, the frame determination unit2152in the multimedia control unit2150of the head unit2100notifies the determination result holding unit2155of information concerning the determination result (information including the content of a frame to be determined and the determination result) to cause the information to be saved as a log (step S11002). Then, the frame determination unit2152notifies the application execution unit154of the content of the received frame. 2.3 Frame Transmission Operation Performed by Head Unit2100 FIG.26is a flowchart illustrating an example of a frame transmission process performed in the head unit2100. As illustrated inFIG.26, the frame transmission process performed in the head unit2100includes the same or substantially the same processing as that in steps S2100, S1600, S2300, S1300, S2500, and S2600of the frame transmission process performed in the head unit100(seeFIG.14). The same or substantially the same processing is not described herein, as appropriate. Subsequently to the multimedia-control-unit frame determination process S1600, the frame determination unit2152in the multimedia control unit2150of the head unit2100causes the determination result holding unit2155to save information concerning the determination result (information including the content of a frame to be determined and the determination result) in a log (step S11003). Then, the multimedia control unit2150notifies the system control unit2110of the content of the frame (transmit frame) by using a unit-to-unit communication process (step S2300). Further, subsequently to the system-control-unit frame determination process S1300, the frame determination unit2115in the system control unit2110of the head unit2100causes the determination result holding unit2119to save information concerning the determination result (information including the content of a frame to be determined and the determination result) in a log (step S11004). Then, the transmit frame generation unit118generates a transmit frame (data frame) on the basis of the content of the transmit frame (the content designated by the multimedia control unit2150in accordance with the application program) (step S2500). 2.4 Advantageous Effects of Second Embodiment In the head unit2100of the in-vehicle network system according to the second embodiment, the conformity of a frame to be transmitted or received with a plurality of rule categories (criteria) such as the frequency of occurrence and the period is determined, and the results of the determination are saved as logs. The saved logs can be collected and analyzed for, for example, reassessment of the individual criteria in the determination rule, providing an improvement in the accuracy of frame determination, and so on. The result of the reassessment of the individual criteria in the determination rule may be utilized for the manufacture of an apparatus and the like constituting an in-vehicle network system or for the update of information used in the apparatus, such as data and programs, for example. Third Embodiment Hereinafter, as an embodiment of the present disclosure, an in-vehicle network system including a vehicle communication apparatus (head unit3100) that performs a vehicle communication method for determining the conformity of frames with a rule, the frames including a transmit frame and the like to be delivered to a bus, and providing (transmitting) notification of the determination result to a server (computer) located outside the vehicle will be described with reference to the drawings. An in-vehicle network system10aaccording to this embodiment is constructed by replacing the head unit100of the in-vehicle network system10illustrated in the first embodiment with a head unit3100. 3.1 Overall Configuration of Network System30 FIG.27is a diagram illustrating an overall configuration of a network system30according to the present disclosure. The network system30is configured to include an in-vehicle network system10amounted in a vehicle, and a server3500located outside the vehicle. The in-vehicle network system10ais obtained by partially modifying the in-vehicle network system10, and is configured to include the bus200, the head unit3100, and nodes connected to the bus200, called ECUs, such as the ECUs400ato400dconnected to various devices.
Among the constituent elements of the in-vehicle network system10a, constituent elements identical to or substantially identical to those of the in-vehicle network system10(seeFIG.1) are assigned the same numerals, and are not described herein. In the in-vehicle network system10a, the ECUs, including the head unit3100, exchange frames in accordance with the CAN protocol. The head unit3100is a vehicle communication apparatus, similarly to the head unit100, and is a kind of ECU disposed on, for example, an instrument panel or the like of an automobile, including a display device for displaying information to be viewed by a driver, an input means for accepting the operation of the driver, and so on. The head unit3100has a function of receiving frames transmitted from the ECUs400ato400dand displaying various states on a display (not illustrated) to present the states to a user. The head unit3100further has a function of generating a frame indicating each piece of information acquired by the head unit3100and transmitting the frame to one or more ECUs via the bus200. The head unit3100further has a function of determining the conformity of a frame to be transmitted or received with a rule to, for example, identify whether or not the frame is an unauthorized frame (that is, a frame that does not conform to the rule), and providing (transmitting) notification of information concerning the determination result to the server3500. The head unit3100is also a kind of ECU. The server3500is a device (server device) capable of communicating with the head unit3100, and is a computer having a function of receiving and collecting information concerning the determination results of frames from the head unit3100. 3.2 Configuration of Head Unit3100 FIG.28is a configuration diagram of the head unit3100. The head unit3100is configured to include a system control unit3110, a multimedia control unit3150, and an external communication control unit3170. Constituent elements identical to or substantially identical to those illustrated in the first embodiment are assigned the same numerals, and are not described herein. The system control unit3110is obtained by partially modifying the system control unit110illustrated in the first embodiment, and the multimedia control unit3150is obtained by partially modifying the multimedia control unit150illustrated in the first embodiment. The system control unit3110mainly takes on control of communication via the bus200, and is configured to include the frame transceiving unit111, the received frame interpretation unit112, the received-ID judgment unit113, the received-ID list holding unit114, a frame determination unit3115, the determination rule holding unit116, the unit-to-unit communication processing unit117, and the transmit frame generation unit118. The frame determination unit3115receives the values of the frame received from the received frame interpretation unit112or receives from the unit-to-unit communication processing unit117values that are the content of a frame to be transmitted. Further, the frame determination unit3115acquires a determination rule from the determination rule holding unit116, and notifies the unit-to-unit communication processing unit117of information concerning the result of a frame determination process (system-control-unit frame determination process) based on the determination rule to notify the external communication control unit3170of the information.
The content of the system-control-unit frame determination process is the same or substantially the same as that illustrated in the first embodiment (seeFIG.15). Further, the frame determination unit3115notifies the transmit frame generation unit118or the unit-to-unit communication processing unit117of the values of the frame regardless of the result of the system-control-unit frame determination process. The multimedia control unit3150mainly takes on the process of executing and controlling an application program, and is configured to include the unit-to-unit communication processing unit151, a frame determination unit3152, the determination rule holding unit153, and the application execution unit154. The frame determination unit3152receives from the application execution unit154values that are the content of a frame to be transmitted or receives the values of the frame received from the unit-to-unit communication processing unit151. Further, the frame determination unit3152acquires a determination rule from the determination rule holding unit153, and notifies the unit-to-unit communication processing unit151of information concerning the result of a frame determination process (multimedia-control-unit frame determination process) based on the determination rule to notify the external communication control unit3170of the information. The content of the multimedia-control-unit frame determination process is the same or substantially the same as that illustrated in the first embodiment (seeFIG.16). Further, the frame determination unit3152notifies the application execution unit154or the unit-to-unit communication processing unit151of the values of the frame regardless of the result of the multimedia-control-unit frame determination process. The external communication control unit3170is a chip different from the system control unit3110and the multimedia control unit3150, for example, and is configured to include, as functional constituent elements, a unit-to-unit communication processing unit3171and an external communication processing unit3172. These functional constituent elements are each implemented by elements integrated on a chip, such as a communication circuit, a memory, a processor that executes a control program stored in the memory, and other circuits. The unit-to-unit communication processing unit3171performs a communication process between different units. Specifically, the unit-to-unit communication processing unit3171exchanges information with the system control unit3110or the multimedia control unit3150via wired or wireless communication. The external communication processing unit3172has a function to perform wireless communication with the server3500located outside the vehicle and, upon being notified of information concerning the determination result of a frame by the unit-to-unit communication processing unit3171, provides (transmits) notification of the information to the server3500. 3.3 Frame Reception Operation Performed by Head Unit3100 FIG.29is a flowchart illustrating an example of a frame reception process performed in the head unit3100. As illustrated inFIG.29, the frame reception process performed in the head unit3100includes the same or substantially the same processing as that in steps S1100, S1200, S1300, S1500, S1600, and S1800of the frame reception process performed in the head unit100(seeFIG.13). The same or substantially the same processing is not described herein, as appropriate. 
Subsequently to the system-control-unit frame determination process S1300, the frame determination unit3115in the system control unit3110of the head unit3100notifies the external communication control unit3170of information concerning the determination result (information including the content of a frame to be determined and the determination result) via the unit-to-unit communication processing unit117(step S21001). In response to this notification, the external communication control unit3170notifies the server3500of the information concerning the determination result through the external communication processing unit3172(step S21002). Then, the system control unit3110notifies the multimedia control unit3150of the content of the frame (received frame) by using a unit-to-unit communication process (step S1500). Subsequently to the multimedia-control-unit frame determination process S1600, the frame determination unit3152in the multimedia control unit3150of the head unit3100notifies the external communication control unit3170of information concerning the determination result (information including the content of a frame to be determined and the determination result) via the unit-to-unit communication processing unit151(step S21003). In response to this notification, the external communication control unit3170notifies the server3500of the information concerning the determination result through the external communication processing unit3172(step S21004). Then, the frame determination unit3152notifies the application execution unit154of the content of the received frame. 3.4 Frame Transmission Operation Performed by Head Unit3100 FIG.30is a flowchart illustrating an example of a frame transmission process performed in the head unit3100. As illustrated inFIG.30, the frame transmission process performed in the head unit3100includes the same or substantially the same processing as that in steps S2100, S1600, S2300, S1300, S2500, and S2600of the frame transmission process performed in the head unit100(seeFIG.14). The same or substantially the same processing is not described herein, as appropriate. Subsequently to the multimedia-control-unit frame determination process S1600, the frame determination unit3152in the multimedia control unit3150of the head unit3100notifies the external communication control unit3170of information concerning the determination result (information including the content of a frame to be determined and the determination result) via the unit-to-unit communication processing unit151(step S21003). In response to this notification, the external communication control unit3170notifies the server3500of the information concerning the determination result through the external communication processing unit3172(step S21004). Then, the multimedia control unit3150notifies the system control unit3110of the content of the frame (transmit frame) by using a unit-to-unit communication process (step S2300). Subsequently to the system-control-unit frame determination process S1300, the frame determination unit3115in the system control unit3110of the head unit3100notifies the external communication control unit3170of information concerning the determination result (information including the content of a frame to be determined and the determination result) via the unit-to-unit communication processing unit117(step S21001). 
In response to this notification, the external communication control unit3170notifies the server3500of the information concerning the determination result through the external communication processing unit3172(step S21002). Then, the transmit frame generation unit118generates a transmit frame (data frame) on the basis of the content of the transmit frame (the content designated by the multimedia control unit3150in accordance with the application program) (step S2500). 3.5 Advantageous Effects of Third Embodiment The head unit3100of the in-vehicle network system10aaccording to the third embodiment determines the conformity of a frame to be transmitted or received with a plurality of rule categories (criteria) such as the frequency of occurrence and the period, and transmits information concerning the result of the determination to the server3500. The server3500can collect and analyze the information concerning the result of the determination for, for example, reassessment of the individual criteria in the determination rule, providing an improvement in the accuracy of frame determination, and so on. The result of the reassessment of the individual criteria in the determination rule may be utilized for the manufacture of an apparatus and the like constituting an in-vehicle network system or for the update of information used in the apparatus, such as data and programs, for example. Other Embodiments As described above, Embodiments 1 to 3 have been described as illustrative examples of the technique according to the present disclosure. However, the technique according to the present disclosure is not limited to these embodiments and is also applicable to embodiments in which modifications, replacements, additions, omissions, and others are made as desired. For example, the following modifications are also included in embodiments of the present disclosure. (1) In the embodiments described above, a data frame in the CAN protocol is configured in the standard ID format. The data frame may be in an extended ID format. In the extended ID format, an ID (message ID) is expressed in 29 bits in which the base ID at the ID position in the standard ID format and an ID extension are combined. This 29-bit ID may be handled as an ID (message ID) in the embodiments described above. (2) In the embodiments described above, the number of margins to be used in a frame period determination process is specified to be one. However, a plurality of margins may be used. In addition, determination results obtained by using the individual margins may be handled as being identical, or determination results obtained by using the individual margins may be assigned weights. In addition, processing after the determination results have been obtained may differ depending on the margin. For example, the determination result of a frame that falls within a first margin range may be merely recorded as a log, and the determination result of a frame that falls within a second margin range may be used for filtering and the transmission or reception process of the frame may be prevented when the frame is determined to be unauthorized. In addition, the value of a margin in a determination rule is not limited to a fixed value, and may be a computational formula. Furthermore, the value of a margin may be dynamically changed in accordance with the total number of frame processing operations for all the IDs. 
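By way of a non-limiting illustration of modification (2), the following Python sketch shows one possible way to evaluate a frame's deviation from its expected period against two margins, where a frame in the outer margin range merely has its determination result logged and a frame beyond it becomes a candidate for filtering. The function names, the margin values, the tiering of the two ranges, and the dynamic-margin formula are illustrative assumptions and are not taken from the embodiments.

```python
# Illustrative sketch only; names, values, and the ordering of the two margin
# ranges are assumptions, not the disclosed implementation.

def determine_period(elapsed_ms, expected_period_ms, margin_inner_ms=5, margin_outer_ms=30):
    """Classify one frame's deviation from its expected reception period
    using two margins (one possible reading of modification (2))."""
    deviation = abs(elapsed_ms - expected_period_ms)
    if deviation <= margin_inner_ms:
        return "ok"            # conforms to the period criterion
    if deviation <= margin_outer_ms:
        return "log_only"      # determination result merely recorded as a log
    return "unauthorized"      # result may be used for filtering the frame

def dynamic_margin(base_margin_ms, total_frames_processed, scale=0.001):
    """A margin need not be a fixed value; here it widens with the total
    number of frame processing operations for all the IDs."""
    return base_margin_ms * (1.0 + scale * total_frames_processed)
```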
(3) In the embodiments described above, the number of threshold values (threshold frequency-of-occurrence values) to be used in a frame frequency-of-occurrence determination process is specified to be one. However, a plurality of threshold values may be used. In addition, determination results obtained by using the individual threshold values may be handled as being identical, or determination results obtained by using the individual threshold values may be assigned weights. In addition, processing after the determination results have been obtained may differ depending on the threshold value. For example, the determination result of a frame using a first threshold value may be merely recorded as a log, and the determination result of a frame that corresponds to a second threshold value may be used for filtering and the transmission or reception process of the frame may be prevented when the frame is determined to be unauthorized. In addition, a threshold frequency-of-occurrence value in a determination rule is not limited to a fixed value, and may be a computational formula. Furthermore, a threshold frequency-of-occurrence value may be dynamically changed in accordance with the total number of frame processing operations for all the IDs. (4) In the embodiments described above, two types of determination results, namely, OK (normal) and NG (unauthorized), are used for the determination of the conformity of frames with the criterion for the period. Alternatively, more than two types of determination results may be used. For example, as in the first modification of the first embodiment, in a case where the reception of a plurality of frames is allowed within a certain margin range, a notification of reception of the plurality of frames may be included in a determination result. Alternatively, the conformity with each category (criterion) in a determination rule may be represented by the degree of conformity with respect to a certain evaluation criterion (such as a conformity of 100% or a conformity of 80%), and the resulting degree of conformity may be used as a determination result. (5) In the embodiments described above, the frame determination process is divided into a section to be performed by a system control unit and a section to be performed by a multimedia control unit. Alternatively, the frame determination process may be performed by only one of them. In addition, a vehicle communication apparatus that is an ECU, such as a head unit, may not necessarily determine the conformity of a received frame with a rule so long as the vehicle communication apparatus determines the conformity of a transmit frame with a rule. In the embodiments described above, furthermore, the frame determination process is divided into sections to be performed by separate units on the basis of the discrimination of a received frame and a transmit frame, by way of example. However, the frame determination process may be divided on the basis of the discrimination of the ID, the timing, or the like. Alternatively, portions that perform frame determination (the determination of the conformity of a frame with a rule) may be located at a plurality of separate locations in the same unit. For example, frame determination may be divided on a per-application-program basis, or may be divided on a per-OS basis when a plurality of operating systems (OSs) operate on the same chip. 
Alternatively, a plurality of locations at which frame determination is feasible may be provided and a location at which frame determination is to be actually performed may be dynamically selected in accordance with the processing load on the entire system, how frequently frame processing is performed, and so on. In addition, the multimedia control unit and the system control unit may share the determination of the conformity of a received frame or a transmit frame with a rule in any way. That is, each unit may take charge of the determination regarding any rule category (criterion). It is sufficient that a vehicle communication apparatus that is an ECU, such as a head unit, which is connected to a bus in an in-vehicle network system include a multimedia control unit (first control unit) that identifies a transmit frame which is a frame to be delivered to the bus, and a system control unit (second control unit) that is capable of communicating with the first control unit via wired or wireless connection to exchange information on frames, and it is also sufficient that at least one of the first control unit and the second control unit be configured to determine the conformity of a transmit frame or the like with a rule. In addition, the first control unit may include a first determination unit that determines whether or not a transmit frame conforms to a first predetermined rule (for example, a criterion for any one of the categories in the determination rule inFIG.6), and the second control unit may include a second determination unit that determines whether or not a received frame conforms to a second predetermined rule (for example, a criterion for any one of the categories in the determination rule inFIG.6). (6) In the embodiments described above, the numbers of periods and margins in a determination rule to be used in a frame period determination process are each specified to be one for each ID. However, the numbers of periods and margins may be each specified for each group of a plurality of IDs. In addition, but not limited to, a frame period determination process is performed once for each transmission or reception of one frame. Alternatively, a plurality of frame period determination processes having different determination content, such as a frame period determination process for all the IDs and a frame period determination process for each group, may be performed in combination. (7) In the embodiments described above, the number of threshold values (threshold frequency-of-occurrence values) in a determination rule to be used in a frame frequency-of-occurrence determination process is specified to be one for each ID. However, the number of threshold values (threshold frequency-of-occurrence values) may be specified for each group of a plurality of IDs. In addition, but not limited to, a frame frequency-of-occurrence determination process is performed once for each transmission or reception of one frame. Alternatively, a plurality of frame frequency-of-occurrence determination processes having different determination content, such as a frame frequency-of-occurrence determination process for all the IDs and a frame frequency-of-occurrence determination process for each group, may be performed in combination. (8) In the second embodiment described above, each unit (each of a system control unit and a multimedia control unit) includes a determination result holding unit. However, this configuration is not restrictive. 
Only one of them may include a determination result holding unit, or only a different independent unit (such as a chip) in the head unit may include a determination result holding unit. In addition, a storage medium (recording medium) in which the determination result holding unit saves a log may be an internal memory of the determination result holding unit or any other device such as a memory or a hard disk implemented by an external circuit, apparatus, or the like. (9) In the second embodiment described above, it may be feasible to switch whether or not to save the determination result in a log in accordance with the determination result of the frame. Alternatively, the determination results of all the IDs may be saved or only the determination result of a specific ID may be saved. It may also be feasible to switch whether or not to save the determination result for each unit. (10) In the third embodiment described above, the external communication control unit3170is used for communicating with a server. However, this configuration is not restrictive. Either or both of the system control unit3110and the multimedia control unit3150may include a configuration (such as an integrated circuit) for communicating with a server. In addition, information concerning the determination result of a frame may be transmitted from the system control unit3110not directly to the external communication control unit3170but via the multimedia control unit3150(the unit-to-unit communication processing unit151). (11) In the third embodiment described above, it may be feasible to switch whether or not to notify a server of the determination result in accordance with the determination result of the frame. Alternatively, the server may be notified of the determination results of all the IDs or the server may be notified of only the determination result of a specific ID. It may also be feasible to switch whether or not to notify the server of the determination result for each unit. (12) In the third embodiment described above, each unit notifies a server of the determination result each time the unit performs a frame determination process. Alternatively, each unit may collectively notify the server of determination results of the frame determination process when a certain number of determination results are accumulated. Alternatively, each unit may notify the server of the determination result of a frame regularly at certain time intervals. (13) The determination rule (seeFIG.6) illustrated in the embodiments described above is merely an example, and may include categories other than the rule categories described in the illustrative examples or may have values different from the values described in the illustrative examples. The determination rule may be set during the manufacture of a vehicle communication apparatus (for example, a head unit) or during shipment or the like, or may be set during shipment of a vehicle in which the in-vehicle network system is to be mounted. The determination rule may be updated during operation of the in-vehicle network system. The determination rule may also be set and updated through communication with an external device, set by using various recording media and the like, or set by using tools or the like. (14) The CAN protocol illustrated in the embodiments described above may have a broad meaning including its derivative protocols, such as TTCAN (Time-Triggered CAN) and CAN FD (CAN with Flexible Data Rate). 
(15) A head unit as an example of a vehicle communication apparatus in the embodiments described above is designed to include, for example, a chip of a semiconductor integrated circuit including a communication circuit, a memory, a processor, and other circuits, and so on, but may include other hardware constituent elements such as a hard disk device, a display, a keyboard, and a mouse. In addition, instead of a control program stored in a memory being executed by a processor to implement functions in software, the functions may be implemented by an integrated circuit without using a control program. The chip may not necessarily be packaged. (16) Some or all of the constituent elements included in each device in the embodiments described above may be constituted by a single system LSI (Large Scale Integration). The system LSI is a super-multifunctional LSI manufactured by integrating a plurality of configuration units on a single chip, and is specifically a computer system configured to include a microprocessor, a ROM, a RAM, and so on. The RAM has recorded thereon a computer program. The microprocessor operates in accordance with the computer program, thereby allowing the system LSI to achieve its function. In addition, constituent units included in each device may be integrated into individual chips or into a single chip that includes some or all of the units. While the system LSI is used here, an integrated circuit may also be referred to as an IC, an LSI, a super LSI, or an ultra LSI depending on the difference in the degree of integration. In addition, a technique for forming an integrated circuit is not limited to the LSI, and may be implemented by using a dedicated circuit or a general-purpose processor. An FPGA (Field Programmable Gate Array) that can be programmed after the manufacture of the LSI or a reconfigurable processor capable of reconfiguring connection or setting of circuit cells in the LSI may be used. Additionally, if a technique for forming an integrated circuit is introduced in place of the LSI along with development in semiconductor technology or other derivative technology, it is a matter of course that the technique may be used for the integration of functional blocks. One potential approach is to apply biotechnology, for example. (17) Some or all of the constituent elements included in each of the devices described above may be constituted by an IC card removably set in each device or a stand-alone module. The IC card or the module is a computer system constituted by a microprocessor, a ROM, a RAM, and so on. The IC card or the module may include the super-multifunctional LSI described above. The microprocessor operates in accordance with a computer program, thereby allowing the IC card or the module to achieve its function. This IC card or module may be tamper-resistant. (18) An aspect of the present disclosure may provide the vehicle communication method described above. An aspect of the present disclosure may also provide a computer program for implementing this method by using a computer, or a digital signal including the computer program. In an aspect of the present disclosure, furthermore, the computer program or the digital signal may be recorded on a computer-readable recording medium such as a flexible disk, a hard disk, a CD-ROM, an MO, a DVD, a DVD-ROM, a DVD-RAM, a BD (Blu-ray (registered trademark) Disc), or a semiconductor memory. An aspect of the present disclosure may also provide the digital signal recorded on such recording media. 
In an aspect of the present disclosure, furthermore, the computer program or the digital signal may be transmitted via a telecommunication line, a wireless or wired communication line, a network represented by the Internet, data broadcasting, or the like. A further aspect of the present disclosure may provide a computer system including a microprocessor and a memory, in which the memory has recorded thereon the computer program described above and the microprocessor operates in accordance with the computer program. Moreover, the program or the digital signal may be recorded on the recording medium and transported, or the program or the digital signal may be transported via the network or the like, so as to be performed by any other independent computer system. (19) Embodiments achieved by any combination of constituent elements and functions illustrated in the embodiments described above and the modifications described above also fall within the scope of the present disclosure. The present disclosure can be used for reducing delivery of unauthorized frames to a bus within an in-vehicle network system. | 115,043 |
11943234 | DETAILED DESCRIPTION This disclosure provides solutions to the aforementioned and other problems of previous technology by determining a volatile file based on a selection factor.FIG.1is a schematic diagram of an example system for determining a volatile file based on a selection factor.FIG.2is a block diagram of an example user device of the system ofFIG.1.FIG.3is a flow diagram illustrating an example operation of the system ofFIG.1. Example System for Determining a Volatile File Based on a Selection Factor FIG.1illustrates a schematic diagram of an example system100for determining a volatile file based on a selection factor. The system100may include a first user device102, a first entity device104, and a server106, wherein a first user108is associated with the first user device102, and wherein a first entity110is associated with the first entity device104. The system100may be communicatively coupled to a communication network112and may be operable to transmit data between the first user device102, first entity device104, and the server106through the communication network112. In general, the server106may perform an identification process with the first user device102. In particular embodiments, this process utilizes user preferences114to determine which one of one or more files116, contained within a digital folder118associated with the first user108, the first user108is to utilize in an interaction with the first entity110. The determined one of the one or more files116may be transmitted to the first user device102as an indication to be displayed for the first user108. For example, the first user108may be attempting to conduct an interaction with the first entity110via the first entity device104. In another example, the first user108may be attempting to conduct an interaction with the first entity110via a card120(e.g., a credit card, debit card, or any other suitable card) associated with the first user108. In both examples, the first entity device104and the card120may be associated with the digital folder118of the first user108, wherein the first user108may be attempting to utilize one of the one or more files116contained within the digital folder118to conduct the interaction with the first entity110. Without limitations, the one or more files116stored in the digital folder118may include various digital assets (for example, Bitcoin, Ethereum, Solana, Cardano, Ripple, and the like). In embodiments, a digital asset may be a collection of binary data which is designed to work as a medium of exchange and is built on blockchain technology protocols. Records of each digital asset may be stored in a digital ledger, such as a computerized database using cryptography to secure a change in the records. As the value of each digital asset may be volatile, the first user108may not know which one of the various digital assets stored in the digital folder118has the greatest value of utility to be utilized in the interaction with the first entity110(for example, which one has the highest value in view of an acceptable reference, such as USD). In this example, the first entity device104may receive a signal122transmitted by the first user device102requesting to initiate an interaction session with the first entity device104. The first user device102may be communicatively coupled to the first entity device104. The first user device102may establish a peer-to-peer connection with the first entity device104through near field communications (NFC), Bluetooth, Wi-Fi, or combinations thereof. 
In another embodiment, the first user device102may be operable to scan an identification item near the first entity device104to transmit the signal122, wherein the identification item is at least one of a barcode, a Quick Response (QR) code, a coded image, or a coded text. The first entity device104may then transmit a signal124verifying an established interaction session with the first user device102to the server106. The server106may determine which one of the one or more files116, associated with the first user108, for the first user108to utilize in the interaction with the first entity110within the established interaction session. The server106may be operable to generate a file vector126based on the user preferences114stored in the server106and on file information128for each of the one or more files116. The file information128may be transmitted to the server106from an external exchange130operable to generate and maintain the file information128. Upon receiving the transmitted file information128, the server106may store the file information128within a database132. In embodiments, the server106may receive and store file information128from the external exchange130in periodic time intervals, in real-time, when the signal124verifying an established interaction session is received, and any combination thereof. Without limitations, the file information128may include a value of each of the one or more files116in view of an acceptable reference (i.e., USD) at the time of transmission to the server106. The server106may initially sort the file vector126based on applying a selection factor to the received file information128for each of the one or more files116. In embodiments, the selection factor may be a spot-value at approximately a time of the interaction session. For example, the file vector126may be sorted to display a listing of the one or more files116, wherein a first file116acomprises the highest spot-value and each subsequent file116comprises the next highest spot-value in a descending order, at a time of the server106receiving the signal124verifying an established interaction session. Once generated, the server106may determine which one of the one or more files116in the file vector126to be utilized by the first user108by performing secondary rearranging or sorting based on the stored user preferences114. For example, the user preferences114may include absolute conditions, such as to never utilize Ethereum while conducting an interaction with the first entity110. The user preferences114may further include relative conditions between the one or more files116, such as to prioritize using Ripple over Solana for an interaction with the first entity110. In further embodiments, the user preferences114may include technical analysis conditions related to trends, chart patterns, volume indicators, momentum indicators, oscillators (for example, relative strength index), moving averages, support and resistance levels, and any combination thereof. For example, the user preferences114may include rearranging the generated file vector126in a descending order from the first file116acomprising the highest percentage value above its moving average (i.e., at a spot-value 15% above the moving average). In other embodiments, user preferences114may include choosing which file116would maximize potential rewards, points, and/or benefits. 
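As a non-limiting illustration of the selection-factor sorting and the preference-based rearrangement described above, the following Python sketch generates a file vector ordered by spot-value and then applies an absolute condition, a technical-analysis condition (percentage of the spot-value above a moving average), and a relative condition. All identifiers, data structures, and values (including the example assets and prices) are illustrative assumptions and do not reproduce the actual implementation of the server106.

```python
# Illustrative sketch only; names and values are assumptions.

def generate_file_vector(file_info):
    """file_info maps a file name to {'spot': spot-value, 'ma': moving average}.
    Returns file names sorted by descending spot-value (the selection factor)."""
    return sorted(file_info, key=lambda name: file_info[name]["spot"], reverse=True)

def apply_user_preferences(file_vector, file_info, preferences):
    """Rearrange the file vector according to stored user preferences."""
    # Absolute condition: e.g. never utilize certain files in an interaction.
    vector = [f for f in file_vector if f not in preferences.get("never_use", set())]

    # Technical-analysis condition: rank by percentage of spot-value above the
    # moving average, in descending order.
    if preferences.get("rank_by_pct_above_moving_average"):
        def pct_above_ma(name):
            info = file_info[name]
            return (info["spot"] - info["ma"]) / info["ma"]
        vector.sort(key=pct_above_ma, reverse=True)

    # Relative condition: e.g. prioritize one file over another when both remain.
    for preferred, over in preferences.get("prioritize", []):
        if preferred in vector and over in vector and vector.index(preferred) > vector.index(over):
            vector.remove(preferred)
            vector.insert(vector.index(over), preferred)
    return vector

# Hypothetical usage with assumed prices:
info = {"Bitcoin": {"spot": 43000.0, "ma": 41000.0},
        "Ethereum": {"spot": 3200.0, "ma": 3000.0},
        "Solana": {"spot": 95.0, "ma": 100.0}}
prefs = {"never_use": {"Ethereum"}, "rank_by_pct_above_moving_average": True}
vector = apply_user_preferences(generate_file_vector(info), info, prefs)
first_file = vector[0]  # the file indicated for use in the interaction session
```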
The server106may access the database132storing the received file information128, perform a technical analysis determination, based on the user preference114, on the file information128for each of the one or more files116in the generated file vector126, and rearrange the generated file vector126based on the technical analysis determination. The server106may then send a signal134to the first entity device104requesting verification that the first file116ais acceptable during the interaction session. For example, the first file116amay be designated as Ethereum, but the first entity110may not accept Ethereum for an interaction with the first user108. The first entity device104may transmit a signal136that either verifies or does not verify that the first file116ais acceptable during the interaction session. If the signal136does not verify that the first file116ais acceptable during the interaction session, the server106may perform an internal conversion process. In embodiments, the conversion process may comprise of assigning a value to the first file116awith reference to an acceptable file and converting the first file116ato the acceptable file for the assigned value. An acceptable file may be any suitable file accepted by the first entity110. In an example, the acceptable file may be USD or another digital asset, such as Bitcoin. The server106may then designate the converted, acceptable file as the first file116aand transmit a signal138comprising the file vector126and an indication to utilize the first file116aof the file vector126in an interaction between the first user108and the first entity110during the interaction session. In embodiments wherein the signal136does verify that the first file116ais acceptable during the interaction session, the server106may transmit the signal138without performing the conversion process. The first user108may authorize the server106to use the first file116a, and the server106may conduct the interaction during the interaction session between the first user108and the first entity110in response to receiving the authorization from the first user108. The server106is generally a suitable server (e.g., including a physical server and/or virtual server) operable to store data in a memory140and/or provide access to application(s) or other services. The server106may be a backend server associated with a particular group that facilitates conducting interactions between entities and one or more users. Details of the operations of the server106are described in conjunction withFIG.3. Memory140includes software instructions that, when executed by a processor142, cause the server106to perform one or more functions described herein. Memory140may be volatile or non-volatile and may comprise a read-only memory (ROM), random-access memory (RAM), ternary content-addressable memory (TCAM), dynamic random-access memory (DRAM), and static random-access memory (SRAM). Memory140may be implemented using one or more disks, tape drives, solid-state drives, and/or the like. Memory140is operable to store software instructions, the digital folder118, user preferences114, and/or any other data or instructions. The software instructions may comprise any suitable set of instructions, logic, rules, or code operable to execute the processor142. In these examples, the processor142may be communicatively coupled to the memory140and may access the memory140for these determinations. Digital folder118comprises a database of one or more items associated with the first user108. 
For example, digital folder118may include one or more files116, each comprising a digital asset. Thus, in one embodiment, the digital folder118may be a digital wallet or the like. In embodiments, one or more of the files116may be used for an electronic interaction. Digital folder118may comprise a control component (e.g., control software) and a data component (e.g., database of items). The control component may provide security and encryption for the data component and for external communications with other systems, such as electronic interaction systems, or other devices, such as first user device102. Digital folder118may be stored in memory140of the server106. First user108may possess a user device, such as the first user device102, configured to access the digital folder118. Processor142comprises one or more processors operably coupled to the memory140. The processor142is any electronic circuitry including, but not limited to, state machines, one or more central processing unit (CPU) chips, logic units, cores (e.g. a multi-core processor), field-programmable gate array (FPGAs), application-specific integrated circuits (ASICs), or digital signal processors (DSPs). The processor142may be a programmable logic device, a microcontroller, a microprocessor, or any suitable combination of the preceding. The one or more processors are configured to process data and may be implemented in hardware or software. For example, the processor142may be 8-bit, 16-bit, 32-bit, 64-bit, or of any other suitable architecture. The processor142may include an arithmetic logic unit (ALU) for performing arithmetic and logic operations, processor registers that supply operands to the ALU and store the results of ALU operations, and a control unit that fetches instructions from memory and executes them by directing the coordinated operations of the ALU, registers and other components. The one or more processors are configured to implement various instructions. For example, the one or more processors are configured to execute software instructions. In this way, processor142may be a special-purpose computer designed to implement the functions disclosed herein. In an embodiment, the processor142is implemented using logic units, FPGAs, ASICs, DSPs, or any other suitable hardware. The processor142is configured to operate as described inFIGS.1and3. For example, the processor142may be configured to perform the steps of method300as described inFIG.3. As illustrated, the server106may further comprise a network interface144. Network interface144is configured to enable wired and/or wireless communications (e.g., via communication network112). The network interface144is configured to communicate data between the server106and other devices (e.g., first user device102), databases, systems, or domain(s). For example, the network interface144may comprise a WIFI interface, a local area network (LAN) interface, a wide area network (WAN) interface, a modem, a switch, or a router. The processor142is configured to send and receive data using the network interface144. The network interface144may be configured to use any suitable type of communication protocol as would be appreciated by one of skill in the art. The communication network112may facilitate communication within the system100. This disclosure contemplates the communication network112being any suitable network operable to facilitate communication between the first user device102, first entity device104, and the server106. 
Communication network112may include any interconnecting system capable of transmitting audio, video, signals, data, messages, or any combination of the preceding. Communication network112may include all or a portion of a public switched telephone network (PSTN), a public or private data network, a local area network (LAN), a metropolitan area network (MAN), a wide area network (WAN), a local, regional, or global communication or computer network, such as the Internet, a wireline or wireless network, an enterprise intranet, or any other suitable communication link, including combinations thereof, operable to facilitate communication between the components. In other embodiments, system100may not have all of the components listed and/or may have other elements instead of, or in addition to, those listed above. The first user device102may be any computing device configured to communicate with other devices, such as other user devices102, servers (e.g., server106), databases, etc. through the communication network112. The first user device102may be configured to perform specific functions described herein and interact with entities110, e.g., via its user interfaces. Examples of a first user device102include but are not limited to mobile phones, wearable devices, tablet computers, laptop computers, servers, etc. In one example, a particular first user device102(associated with a particular user108) may be a smartphone or wearable device that is operable to initiate an interaction session to conduct an interaction with the first entity110. Typically, the first user108, who is a client of an organization, may access first user's files on an interaction application (for example, interaction application206inFIG.2) from the first user device102. First user device102is described in more detail below inFIG.2. The first entity device104may be any suitable device for facilitating an interaction with the first user108. For example, first entity device104may be a register, a tablet, a phone, a laptop, a personal computer, a terminal, etc. The first entity device104may be operable to receive information from a user and/or card when an interaction is requested. The first entity device104then may proceed to process the requested interaction. The first entity device104may include any appropriate device for communicating with components of system100over the communication network112. As an example and not by way of limitation, first entity device104may include a computer, a laptop, a wireless or cellular telephone, an electronic notebook, a personal digital assistant, a tablet, or any other device capable of receiving, processing, storing, and/or communicating information with other components of system100. This disclosure contemplates first entity device104being any appropriate device for sending and receiving communications over communication network112. The first entity device104may also include a user interface, such as a display, a microphone, keypad, or other appropriate terminal equipment usable by a user and/or the first entity110. In some embodiments, an application executed by first entity device104may perform the functions described herein. As illustrated, the first entity device104may be associated with the first entity110. The first entity110may be an individual that provides items or services to the first user108in exchange for resources. Example User Device FIG.2is an example of the first user device102ofFIG.1. 
While the present example is described as the first user device102,FIG.2can be illustrative of any suitable user device102. The first user device102may include a processor200, a memory202, and a network interface204. The first user device102may be configured as shown or in any other suitable configuration. The processor200comprises one or more processors operably coupled to the memory202. The processor200is any electronic circuitry including, but not limited to, state machines, one or more central processing unit (CPU) chips, logic units, cores (e.g. a multi-core processor), field-programmable gate array (FPGAs), application specific integrated circuits (ASICs), or digital signal processors (DSPs). The processor200may be a programmable logic device, a microcontroller, a microprocessor, or any suitable combination of the preceding. The processor200is communicatively coupled to and in signal communication with the memory202and the network interface204. The one or more processors are configured to process data and may be implemented in hardware or software. For example, the processor200may be 8-bit, 16-bit, 32-bit, 64-bit or of any other suitable architecture. The processor200may include an arithmetic logic unit (ALU) for performing arithmetic and logic operations, processor registers that supply operands to the ALU and store the results of ALU operations, and a control unit that fetches instructions from memory and executes them by directing the coordinated operations of the ALU, registers and other components. The one or more processors are configured to implement various instructions. For example, the one or more processors are configured to execute instructions to implement the function disclosed herein, such as some or all of those described with respect toFIGS.1-3. In some embodiments, the function described herein is implemented using logic units, FPGAs, ASICs, DSPs, or any other suitable hardware or electronic circuitry. The memory202is operable to store any of the information described with respect toFIGS.1-3along with any other data, instructions, logic, rules, or code operable to implement the function(s) described herein when executed by processor200. For example, the memory202may store code for application(s) (for example, for an interaction application206), and/or software instructions208, which are described below with respect toFIG.3. The memory202comprises one or more disks, tape drives, or solid-state drives, and may be used as an over-flow data storage device, to store programs when such programs are selected for execution, and to store instructions and data that are read during program execution. The memory202may be volatile or non-volatile and may comprise read-only memory (ROM), random-access memory (RAM), ternary content-addressable memory (TCAM), dynamic random-access memory (DRAM), and static random-access memory (SRAM). Interaction application206may be a software application, a mobile application, a web application, and/or a software infrastructure associated with an organization. The interaction application206is configured to provide a user interface to enable the first user108(referring toFIG.1) to access first user's bank files, records, transfers to and from other users108, requests to the organization, etc. In one example, the interaction application206may be a web application on a website. 
In this example, the first user108may access first user's bank files (via the interaction application206) on the website once the first user108is authenticated, e.g., by entering first user's username and password. In another example, the interaction application206may be a mobile application that is installed on the first user device102, such as a smartphone or a wearable device. In this example, the first user108may access first user's files (via the interaction application206) when the first user108is authenticated, e.g., by entering first user's username and password on the interaction application206. In embodiments, the server106(referring toFIG.1) may be associated with the interaction application206. The first user device102may transmit authorization to the server106through the communication network112(referring toFIG.1) in order to conduct an interaction with the first entity device104(referring toFIG.1) with the determined first file116a(referring toFIG.1). The network interface204is configured to enable wired and/or wireless communications. The network interface204is configured to communicate data between the first user device102and other network devices, systems, or domain(s). For example, the network interface204may comprise a WIFI interface, a local area network (LAN) interface, a wide area network (WAN) interface, a modem, a switch, or a router. The processor200is configured to send and receive data using the network interface204. The network interface204may be configured to use any suitable type of communication protocol as would be appreciated by one of skill in the art. Example Operation of the System for Determining a Volatile File Based on a Selection Factor FIG.3is a flow diagram illustrating an example method300of the system100ofFIG.1. The method300may be implemented using the first user device102, the first entity device104, and the server106ofFIG.1. The method300may begin at step302where the first entity device104(referring toFIG.1) may transmit the signal124(referring to FIG.1) to the server106(referring toFIG.1) verifying an established interaction session with the first user108(referring toFIG.1). In embodiments, the first user108may have initiated an interaction session with the first entity110(referring toFIG.1) by providing the card120(referring toFIG.1) to the first entity110or by transmitting the signal122(referring toFIG.1) to the first entity device104via the first user device102(referring toFIG.1). The card120may be associated with the server106and operable to identify the first user108to the server106. In addition, the first user device102may be operable to identify the first user108to the server106. The processor142(referring toFIG.1) of the server106may be operable to receive the transmission from the first entity device104. At step304, the processor142of the server106may receive the file information128(referring toFIG.1) corresponding to the one or more files116(referring toFIG.1) stored in the digital folder118(referring toFIG.1). For example, the one or more files116may include various digital assets (for example, Bitcoin, Ethereum, Solana, Cardano, Ripple, and the like) associated with the first user108. Without limitations, the file information128may include a value of each of the one or more files116in view of an acceptable reference (i.e., USD) at the time of transmission to the server106(for example, the value of Bitcoin in USD at the time of the established interaction session). 
The file information128may be transmitted to the server106from the external exchange130(referring toFIG.1), which is operable to generate and maintain the file information128. Upon receiving the transmitted file information128, the processor142may store the file information128within the database132(referring toFIG.1). In embodiments, the processor142may receive and store file information128from the external exchange130in periodic time intervals, in real-time, when the signal124verifying an established interaction session is received, and any combination thereof. At step306, the processor142of the server106may generate the file vector126(referring toFIG.1) based on the received file information128. The processor142of the server106may sort the one or more files116within the file vector126based on applying a selection factor to the received file information128for each of the one or more files116. In embodiments, the selection factor may be a spot-value at approximately a time of the interaction session. For example, the file vector126may be sorted to display a listing of the one or more files116, wherein the first file116a(referring toFIG.1) comprises the highest spot-value and each subsequent file116comprises the next highest spot-value in a descending order, at a time of the server106receiving the signal124verifying an established interaction session. At step308, the processor142of the server106may rearrange the generated file vector126based on user preferences114(referring toFIG.1) stored in the memory140(referring toFIG.1) of the server106. Without limitations, the user preferences114may include absolute conditions, relative conditions between one or more files116, technical analysis conditions, and any combination thereof. At step310, the processor142of the server106may determine whether the user preferences114include technical analysis conditions. If there is a determination that the user preferences114do include technical analysis conditions, the method300proceeds to step312. Otherwise, the method300proceeds to step314. At step312, in response to a determination that the user preferences114do include technical analysis conditions, the processor142of the server106may be operable to perform a technical analysis determination. The processor142may access the database132storing the file information128and analyze the file information128for each of the one or more files116within the generated file vector126. For example, the user preferences114may include rearranging the generated file vector126in a descending order from the first file116acomprising the highest percentage value above its moving average (i.e., at a spot-value 15% above the 10-day moving average). The processor142may determine the moving average of each of the one or more files116of the file vector126and compare each spot-value to the respective moving average. The processor142may then rearrange the generated file vector126based on the technical analysis determination. The method300then proceeds to step314. At step314, the processor142of the server106may determine whether the first entity110would accept the determined first file116aduring the interaction within the interaction session with the first user108. If there is a determination that the first entity110would not accept the determined first file116a, the method300proceeds to step316. Otherwise, the method300proceeds to step318. 
At step316, in response to a determination that the first entity110would not accept the determined first file116a, the processor142of the server106may perform an internal conversion process. In embodiments, the conversion process may comprise of assigning a value to the first file116awith reference to an acceptable file and converting the first file116ato the acceptable file for the assigned value. An acceptable file may be any suitable file accepted by the first entity110. In an example, the acceptable file may be USD or another digital asset, such as Bitcoin. In this example, the processor142may determine that 1 unit of Ethereum has the same value as 0.06 Bitcoin or 3,200 USD (the assigned value) and convert the Ethereum to Bitcoin or USD to be used in the interaction. The processor142may then designate the converted, acceptable file as the first file116a. The method300then proceeds to step318. At step318, the processor142of the server106may transmit the file vector126and an indication to utilize the first file116ato the first user device102in the interaction between the first user108and the first entity110during the interaction session. In embodiments wherein there is a determination in step314that the first entity110would accept the determined first file116a, the processor142of the server106may transmit the file vector126and the indication of the first file116awithout performing the conversion process. The first user108may then authorize the server106to use the first file116avia the interaction application206(referring toFIG.2), and the processor142of the server106may conduct the interaction during the interaction session between the first user108and the first entity110with the determined first file116a. The method300then proceeds to end. While several embodiments have been provided in the present disclosure, it should be understood that the disclosed systems and methods might be embodied in many other specific forms without departing from the spirit or scope of the present disclosure. The present examples are to be considered as illustrative and not restrictive, and the intention is not to be limited to the details given herein. For example, the various elements or components may be combined or integrated in another system or certain features may be omitted, or not implemented. In addition, techniques, systems, subsystems, and methods described and illustrated in the various embodiments as discrete or separate may be combined or integrated with other systems, modules, techniques, or methods without departing from the scope of the present disclosure. Other items shown or discussed as coupled or directly coupled or communicating with each other may be indirectly coupled or communicating through some interface, device, or intermediate component whether electrically, mechanically, or otherwise. Other examples of changes, substitutions, and alterations are ascertainable by one skilled in the art and could be made without departing from the spirit and scope disclosed herein. To aid the Patent Office, and any readers of any patent issued on this application in interpreting the claims appended hereto, applicants note that they do not intend any of the appended claims to invoke 35 U.S.C. § 112(f) as it exists on the date of filing hereof unless the words “means for” or “step for” are explicitly used in the particular claim. | 31,100 |
11943235 | Like reference numbers and designations in the various drawings indicate like elements. DETAILED DESCRIPTION The following detailed description describes techniques for information technology (IT) cybersecurity monitoring, specifically detecting suspicious user logins in private networks, for example, using machine learning (ML). Various modifications, alterations, and permutations of the disclosed implementations can be made and will be readily apparent to those of ordinary skill in the art, and the general principles defined may be applied to other implementations and applications, without departing from the scope of the disclosure. In some instances, details unnecessary to obtain an understanding of the described subject matter may be omitted so as to not obscure one or more described implementations with unnecessary detail and inasmuch as such details are within the skill of one of ordinary skill in the art. The present disclosure is not intended to be limited to the described or illustrated implementations, but to be accorded the widest scope consistent with the described principles and features. A system for detecting suspicious login utilizing machine learning can be dynamic and can be automated to continuously analyze accounts for existing and new users. The system can adapt to changes in the environment by a systematic retraining approach. In addition, the system can consider many features of the account login to be able to correctly identify any suspicious behavior using machine learning with low false positive alerts. In some implementations, systems can include the use of unsupervised machine learning (deep learning) with uniquely extracted and engineered login features to detect suspicious logins. False positive alerts that are common in conventional systems often result from a user login to a novel system. False positive alerts can be reduced through different techniques. For example, if a user login to a novel system shares similar naming patterns to systems logged into before by the user, these logins can be identified as normal. Another aspect that improves (reduces) false positive rates includes techniques that are not susceptible to noise. For example, a login that follows the same historical pattern with small changes, such as a small change in login rates, will not result in the reporting (for example, by a model) of the login as an anomaly. In addition, techniques of the present disclosure can be completely automated and automatically adjust to changes in the network/user accounts, which can result in reduced numbers of false positives and false negatives. Moreover, techniques of the present disclosure can be implemented in large networks that support large numbers of users. In addition, the techniques of the present disclosure require low maintenance and resources compared to other approaches. The present disclosure describes a system (and methods) for detecting suspicious user login behavior in private networks. In some implementations, the system can be composed of several modules. A collection and filtering module can be used to collect event logs from monitored systems (for example, including workstations and servers) on a periodic basis (such as every 24 hours). The logs for each period can be filtered to include only user account successful logins. A data processing module can be used to group and process the logs for each user. For example, for each user, a summary record can be created for a particular period. 
The summary record can include the total number of successful logins, the number of destination systems accessed, the number of systems accessed from (source), the list of systems accessed, the list of systems accessed from, and the number of times different authentication protocols were utilized. Different authentication protocols that are tracked include, but are not limited to, New Technology (NT) local area network (LAN) Manager (NTLM) and Kerberos. A conversion module can be used to add additional numerical data to the record by converting the systems list to a list of integers based on a count of characters in each system name and a number of times that the system has been accessed or accessed from. A training module can use an anomaly detection machine learning algorithm, such as deep learning auto encoders, to train a model for each user account using the summarized user records collected from previous periods over some time as training samples. Whenever a new period has passed (for example, every 24 hours), the logs for that period can be processed for each user and evaluated by an evaluation module using the user's trained ML model to produce a deviation score. A reporting module can be used to report users for which the deviation score is higher than a threshold. Reports can be provided, for example, to security operation analysts for further investigation. The reporting module can also enrich the reported alerts with information from the user's previous records and perform correlation between reported records to prioritize alerts. FIG.1is a flow diagram of an example of a workflow100for detecting suspicious user logins in private networks, according to some implementations of the present disclosure. In some implementations, the workflow100includes the steps of data collection and filtering102, data processing104, prediction106, user login records maintenance108, training110, and enrichment, correlation, and reporting112. Data collection and filtering102produces login data114, which is used by data processing104. Data processing104produces a user's login records for a current period116, which are used by prediction106and user login records maintenance108. Prediction106creates a user machine learning (ML) model118. User login records maintenance108creates user periodic login records120. Training110occurs on the ML model118using the user periodic login records120. The ultimate output of the workflow100is the enrichment, correlation, and reporting112. FIG.2is a flow diagram of an example of a workflow200for data collection and filtering102, according to some implementations of the present disclosure. In a first step, collect event log202, data is collected from sources204(for example, monitored workstations and servers). The data is filtered (206) to collect only successful user account login attempts and to remove local user account successful logins. The filtered data is stored in login data208, which can serve, for example, as a centralized repository. FIG.3is a flow diagram of an example of a workflow300for data processing104, according to some implementations of the present disclosure. Data from a login data store302is collected (304) and processed whenever a certain configurable period has passed (306) (for example, every 24 hours). The collected data in the last period is grouped (308) by user account.
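As a rough illustration of the collection and filtering step, the Python sketch below keeps only successful, non-local user account logins from a batch of raw events. It is a minimal sketch, not the disclosed implementation: the event schema, the use of Windows event ID 4624 for a successful logon, and the markers used to recognize local or machine accounts are assumptions made for the example.

SUCCESSFUL_LOGON_EVENT = 4624              # assumed Windows event ID for a successful logon
LOCAL_ACCOUNT_MARKERS = ("$", "LOCAL")     # hypothetical suffixes marking local/machine accounts

def filter_login_events(raw_events):
    """Keep only successful domain user account logins for the current period."""
    filtered = []
    for event in raw_events:
        if event.get("event_id") != SUCCESSFUL_LOGON_EVENT:
            continue                       # not a successful login event
        user = event.get("user", "")
        if user.endswith(LOCAL_ACCOUNT_MARKERS):
            continue                       # drop local user account logins
        filtered.append(event)
    return filtered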
The login data for each user is aggregated and summarized (310) into a user record312that includes the following: 1) the number of logins performed; 2) the number of systems accessed; 3) the number of systems accessed from, if available (can be ignored if not available); 4) the list of unique accessed systems with login frequency for each system; 5) a list of unique accessed-from systems with login-from frequency (which can be optional); and 6) a count of each authentication protocol used (if there are different login protocols in the environment). User records312can be created even if some data is not available, such as the system accessed from (login source). After that, the list of systems accessed and access frequencies for each system are used to create (314) additional numerical features (316). This can be done using the following algorithm. First, the characters used in hostnames in the network are determined. Second, for each of the characters, two variables (columns) are created: one for accessed-to systems (destination) and one for accessed-from systems (source). Third, the number of times each character appears in the host name is counted and is multiplied by the number of times that the system was accessed. This can be done for each system that was logged in to, and values for each character can be summed. Fourth, the same counting and multiplying is performed for the accessed-from (source) systems. FIG.4is a flow diagram of an example of a workflow400for prediction106, according to some implementations of the present disclosure. Numerical features are selected402from each user's current period login records404for a current period. If an ML model already exists406for the user account, the selected feature vector is evaluated using the user ML model412, an anomaly score is predicted (410), and the anomaly score is output (414). The ML algorithm used can be a deep learning auto encoder, for example. If no model exists at406, then exit can occur at408. The anomaly score can be calculated by calculating the distance between the input and the output of the deep learning neural network. FIG.5is a flow diagram of an example of a workflow500for user login records maintenance108, according to some implementations of the present disclosure. User historical login records are updated with the user's new login record for a current period (502). For each user with a new login record in the current period (504), the following algorithm can be followed to add the new record to the user historical login record if it exists. First, the date and time stamp are checked for the last record added (510) to the historical list. A new historical record list is created (508) if one does not already exist. A number of periods is calculated (512) between the user's last login record and the current one. If the difference between the time of the new and last record is one period (514), then the new record is added (516) directly to the end of the list. If the time difference is x (not equal to 1) periods, then x−1 records are added (518) with a login count of 0, empty login-to and login-from lists, and zeroes in all remaining columns in the historical list. These empty records represent days in which the account has not performed any login activities. The new record for the current period is then added to the bottom of the user historical login list. Old records can be removed from the top if the number of records in the list exceeds a threshold after the addition is performed.
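A minimal sketch of the numerical feature creation algorithm described above is shown below, assuming the per-user record stores dictionaries that map hostnames to access counts; the function name and column prefixes are illustrative only and are not taken from the disclosure.

from collections import defaultdict

def hostname_character_features(dest_counts, src_counts, alphabet):
    """For each character used in network hostnames, sum the number of times the
    character appears in a hostname multiplied by how often that host was accessed,
    keeping separate columns for destination (accessed-to) and source (accessed-from)."""
    features = defaultdict(int)
    for prefix, counts in (("dst_", dest_counts), ("src_", src_counts)):
        for host, access_count in counts.items():
            for ch in alphabet:
                features[prefix + ch] += host.count(ch) * access_count
    return dict(features)

# Example: three logins to "web01" and one to "db01", no source data available.
vector = hostname_character_features({"web01": 3, "db01": 1}, {}, set("webd01"))

Because two hosts that share a naming pattern produce similar character counts, a login to a new but similarly named system yields a feature vector close to the user's history, which is one way the approach keeps false positives low.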
In case the user account does not have historical login records, a new historical user login record is created, and the current period record is added. If the number of records in the list exceeds the maximum allowed (520), then a number of records are removed (522) from the list. FIG.6is a flow diagram of an example of a workflow600for training110, according to some implementations of the present disclosure. Models for new users and existing users can be trained for each available user account file history record. For example, using users' periodic login list602, the following steps can be used for each historical login list604. The process associated with the workflow600can continue as long as the number of trained models during the current period does not exceed (606) the maximum number allowed (for example, controlled using a configurable parameter). First, if a model for the user account exists (608), the model creation date is compared with the history file last update date to determine how many periods have passed between the user model last modified date and the history list last modified date (to determine a delta period). If the difference is more than a set threshold (614) that specifies how old a model has to be before it is updated, the user history file can be used to train an auto encoder deep learning model. The number of trained models in this period is then incremented. Second, if the user history file belongs to a user that does not have a trained model, a check is made to determine whether there are enough records in the file. This is done by counting (612) the number of records in the user history list and comparing the number with a threshold (610). If the number of records in the file reaches (610) a threshold (minimum number of records required to train a model), then the feature vector is extracted and sent to an ML algorithm to generate a model and train (616) the user's ML model. The number of trained models is also incremented (618). Third, if the number of trained models in this period reaches the maximum number of trainings allowed in a period (at606), then the training stops for the current period. This is done as a way to manage resources and to distribute retraining. FIG.7is a flow diagram of an example of a workflow700for enrichment, correlation, and reporting112, according to some implementations of the present disclosure. User login records that generate an anomaly score higher than a threshold are identified in the workflow700. A user login record will already have information related to the login activities, such as the list of systems being accessed and the number of logins performed. This information can be enriched by using information in the user historical login records. This information can include, but is not limited to: 1) the average number of logins the user performs per day; 2) the average number of systems accessed per day; 3) the first date on which each system accessed during the current period was previously accessed, or an indication that it is a novel login; and 4) the number of periods during which the accessed systems have been accessed before. An anomaly score702is read for the user, and a determination is made as to whether the recorded anomaly score exceeds a threshold. If so, the user's information is further processed in the workflow700. The user's current record is enriched (706) with user history, using the user's current period login records708and the user periodic login list710.
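The gap-filling and trimming logic of the records maintenance workflow might look like the following sketch; the record layout, field names, and the maximum list size are assumptions for illustration rather than details from the disclosure.

def append_period_record(history, new_record, period_gap, max_records, num_feature_columns):
    """Pad missing periods with empty records, append the new period record,
    and trim the oldest records once the list exceeds the allowed maximum."""
    for _ in range(max(period_gap - 1, 0)):
        history.append({"logins": 0,                     # no login activity that period
                        "accessed_to": [],
                        "accessed_from": [],
                        "features": [0] * num_feature_columns})
    history.append(new_record)
    while len(history) > max_records:                    # remove old records from the top
        history.pop(0)
    return history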
The user login record is also enriched (712) with external information such as user account information (for example, user role, recent role change, and creation dates) and systems information (for example, system function and system owner). Some of this enrichment information can be used to filter results, for example, by not including results for a user that has recently changed his/her role. The enriched anomalous login records can be correlated between themselves or correlated with external events/alerts. Internal correlation can facilitate the determination that a single system or host is being logged to/from by multiple users. The correlation can create a relationship between anomalous logins, which can result in assigning a higher priority for the investigation of the logins. The results can finally be sent as a report or alert for security analysts to investigate or can be graphed in a user interface. FIG.8is a flowchart of an example of a method800for detecting and generating reports of suspicious user logins in private networks, according to some implementations of the present disclosure. For clarity of presentation, the description that follows generally describes method800in the context of the other figures in this description. However, it will be understood that method800can be performed, for example, by any suitable system, environment, software, and hardware, or a combination of systems, environments, software, and hardware, as appropriate. In some implementations, various steps of method800can be run in parallel, in combination, in loops, or in any order. In some implementations, modules identified in steps802-816can be implemented as modules represented by components ofFIG.1. At802, user login data for users is filtered, including monitoring workstations and servers accessed by users to obtain the user login data for the users. For example, filtering the user login data for the users can include collecting and filtering an event log documenting successful login events for users. From802, method800proceeds to804. At804, user login records are created for a current time period based, at least in part, on the user login data. As an example, creating user login records for a current time period can include: determining that a configurable period of time has passed; and grouping the user login data for the configurable period of time, including determining, for each of the users, a number of logins performed, a number of systems accessed, a number of source systems used to access a system, a list of unique access-to systems with a login frequency for each access-to system, a list of unique access-from systems with a login frequency for each access-from system, and a count of each authentication protocol used. From804, method800proceeds to806. At806, an anomaly score is determined for each user, where the anomaly score indicates a deviation by the user from historical login patterns of the user. For example, determining the anomaly score for each user can include: extracting numerical features from each user's current period login records for the current time period; evaluating the numerical features using user ML model; and predicting, using the evaluating, the anomaly score for the user. From806, method800proceeds to808. At808, a user machine learning (ML) model is updated based on the predicting.
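For the prediction step, a deviation (anomaly) score can be computed as the distance between a feature vector and its reconstruction by an auto encoder. The sketch below uses Keras as one possible library; the layer sizes, optimizer, and loss are assumptions rather than values taken from the disclosure.

import numpy as np
import tensorflow as tf

def build_autoencoder(n_features, encoding_dim=8):
    """Small dense auto encoder; the architecture shown is illustrative only."""
    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=(n_features,)),
        tf.keras.layers.Dense(32, activation="relu"),
        tf.keras.layers.Dense(encoding_dim, activation="relu"),
        tf.keras.layers.Dense(32, activation="relu"),
        tf.keras.layers.Dense(n_features, activation="linear"),
    ])
    model.compile(optimizer="adam", loss="mse")
    return model

def anomaly_score(model, feature_vector):
    """Deviation score = distance between the input and the network's output."""
    x = np.asarray(feature_vector, dtype="float32").reshape(1, -1)
    reconstruction = model.predict(x, verbose=0)
    return float(np.linalg.norm(x - reconstruction))

Training on a user's historical records would then be a call such as model.fit(history_matrix, history_matrix, epochs=50, verbose=0), since an auto encoder is trained to reproduce its own input.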
The system can use a feedback loop from a security analyst to retrain a user machine learning model if a false positive alert is reported before the threshold (that specifies how old a model has to be before it is updated) is met. This is especially useful if the threshold is set to a high value for resource conservation. In addition, the system can trigger an automatic user ML model update if the user had a recent role change and started to generate many alerts. From808, method800proceeds to810. At810, user period login records are maintained over time using user login data that are processed. For example, maintaining the user period login records over time can include updating historical login records using login data for current time period and maintaining the historical login records within a threshold list size. From810, method800proceeds to812. At812, the user ML model is trained using the user periodic login records. As an example, training the user ML model using the user periodic login records can include summarizing user records collected from previous time periods used as training samples. From812, method800proceeds to814. At814, enriched login statistics are generated using the user ML model and the user periodic login records. For example, generating the enriched login statistics can include: generating an average number of logins the user performs per day using information in user historical login records; generating an average number of systems accessed per day; identifying a first date each system has been accessed; and identifying a number of periods during which the accessed systems have been accessed. From814, method800proceeds to816. At816, a report that includes the enriched login statistics is generated in a graphical user interface. The report can include the following for each reported user account. A deviation score can include a priority rating (for example, high, medium, or low). Role information can indicate, for example, whether the user has a valid change request during the current period and can include user account information (for example, user role, recent role change, and creation dates). A login activity summary table can include, for example, a total number of logins the user has performed in a current alerted period, the average number of logins the user performs per previous periods, the number of accessed systems during the current period, the average number of accessed systems per previous periods, the number of source systems during the current period, and the average number of source systems per previous periods. An accessed systems information table can include the systems being accessed, the first time the system has been accessed, the number of periods the system has been accessed, the system function, and the system owner. A source systems information table can identify the systems being accessed from, the first time the system has been accessed from, the number of periods the system has been accessed from, the system function, and the system owner. Based on an alert that is generated, additional automatic actions can be taken besides sending a report, such as disabling the user account. For example, a user with a high deviation score (and with no valid change request, no recent role change, and no correlation between the user role and server role) can be automatically disabled. Automatic actions can be determined, for example, based on a rules set indicating which actions are to be performed in response to certain login conditions being met.
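One way to express such a rules set is sketched below; the field names and the deviation threshold are hypothetical and would need to be tuned to the environment rather than taken from the disclosure.

def decide_action(alert, deviation_threshold=0.9):
    """Map an alert's conditions to an automatic action or a report priority."""
    high_deviation = alert["deviation_score"] >= deviation_threshold
    if (high_deviation
            and not alert.get("valid_change_request", False)
            and not alert.get("recent_role_change", False)
            and not alert.get("role_matches_server_role", False)):
        return "disable_account"           # the clearest cases warrant the strongest response
    if high_deviation:
        return "report_high_priority"
    return "report_low_priority"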
After816, method800can stop. FIG.9is a block diagram of an example computer system900used to provide computational functionalities associated with described algorithms, methods, functions, processes, flows, and procedures described in the present disclosure, according to some implementations of the present disclosure. The illustrated computer902is intended to encompass any computing device such as a server, a desktop computer, a laptop/notebook computer, a wireless data port, a smart phone, a personal data assistant (PDA), a tablet computing device, or one or more processors within these devices, including physical instances, virtual instances, or both. The computer902can include input devices such as keypads, keyboards, and touch screens that can accept user information. Also, the computer902can include output devices that can convey information associated with the operation of the computer902. The information can include digital data, visual data, audio information, or a combination of information. The information can be presented in a graphical user interface (UI) (or GUI). The computer902can serve in a role as a client, a network component, a server, a database, a persistency, or components of a computer system for performing the subject matter described in the present disclosure. The illustrated computer902is communicably coupled with a network930. In some implementations, one or more components of the computer902can be configured to operate within different environments, including cloud-computing-based environments, local environments, global environments, and combinations of environments. At a top level, the computer902is an electronic computing device operable to receive, transmit, process, store, and manage data and information associated with the described subject matter. According to some implementations, the computer902can also include, or be communicably coupled with, an application server, an email server, a web server, a caching server, a streaming data server, or a combination of servers. The computer902can receive requests over network930from a client application (for example, executing on another computer902). The computer902can respond to the received requests by processing the received requests using software applications. Requests can also be sent to the computer902from internal users (for example, from a command console), external (or third) parties, automated applications, entities, individuals, systems, and computers. Each of the components of the computer902can communicate using a system bus903. In some implementations, any or all of the components of the computer902, including hardware or software components, can interface with each other or the interface904(or a combination of both) over the system bus903. Interfaces can use an application programming interface (API)912, a service layer913, or a combination of the API912and service layer913. The API912can include specifications for routines, data structures, and object classes. The API912can be either computer-language independent or dependent. The API912can refer to a complete interface, a single function, or a set of APIs. The service layer913can provide software services to the computer902and other components (whether illustrated or not) that are communicably coupled to the computer902. The functionality of the computer902can be accessible for all service consumers using this service layer. 
Software services, such as those provided by the service layer913, can provide reusable, defined functionalities through a defined interface. For example, the interface can be software written in JAVA, C++, or a language providing data in extensible markup language (XML) format. While illustrated as an integrated component of the computer902, in alternative implementations, the API912or the service layer913can be stand-alone components in relation to other components of the computer902and other components communicably coupled to the computer902. Moreover, any or all parts of the API912or the service layer913can be implemented as child or sub-modules of another software module, enterprise application, or hardware module without departing from the scope of the present disclosure. The computer902includes an interface904. Although illustrated as a single interface904inFIG.9, two or more interfaces904can be used according to particular needs, desires, or particular implementations of the computer902and the described functionality. The interface904can be used by the computer902for communicating with other systems that are connected to the network930(whether illustrated or not) in a distributed environment. Generally, the interface904can include, or be implemented using, logic encoded in software or hardware (or a combination of software and hardware) operable to communicate with the network930. More specifically, the interface904can include software supporting one or more communication protocols associated with communications. As such, the network930or the interface's hardware can be operable to communicate physical signals within and outside of the illustrated computer902. The computer902includes a processor905. Although illustrated as a single processor905inFIG.9, two or more processors905can be used according to particular needs, desires, or particular implementations of the computer902and the described functionality. Generally, the processor905can execute instructions and can manipulate data to perform the operations of the computer902, including operations using algorithms, methods, functions, processes, flows, and procedures as described in the present disclosure. The computer902also includes a database906that can hold data for the computer902and other components connected to the network930(whether illustrated or not). For example, database906can be an in-memory, conventional, or a database storing data consistent with the present disclosure. In some implementations, database906can be a combination of two or more different database types (for example, hybrid in-memory and conventional databases) according to particular needs, desires, or particular implementations of the computer902and the described functionality. Although illustrated as a single database906inFIG.9, two or more databases (of the same, different, or combination of types) can be used according to particular needs, desires, or particular implementations of the computer902and the described functionality. While database906is illustrated as an internal component of the computer902, in alternative implementations, database906can be external to the computer902. The computer902also includes a memory907that can hold data for the computer902or a combination of components connected to the network930(whether illustrated or not). Memory907can store any data consistent with the present disclosure. 
In some implementations, memory907can be a combination of two or more different types of memory (for example, a combination of semiconductor and magnetic storage) according to particular needs, desires, or particular implementations of the computer902and the described functionality. Although illustrated as a single memory907inFIG.9, two or more memories907(of the same, different, or combination of types) can be used according to particular needs, desires, or particular implementations of the computer902and the described functionality. While memory907is illustrated as an internal component of the computer902, in alternative implementations, memory907can be external to the computer902. The application908can be an algorithmic software engine providing functionality according to particular needs, desires, or particular implementations of the computer902and the described functionality. For example, application908can serve as one or more components, modules, or applications. Further, although illustrated as a single application908, the application908can be implemented as multiple applications908on the computer902. In addition, although illustrated as internal to the computer902, in alternative implementations, the application908can be external to the computer902. The computer902can also include a power supply914. The power supply914can include a rechargeable or non-rechargeable battery that can be configured to be either user- or non-user-replaceable. In some implementations, the power supply914can include power-conversion and management circuits, including recharging, standby, and power management functionalities. In some implementations, the power-supply914can include a power plug to allow the computer902to be plugged into a wall socket or a power source to, for example, power the computer902or recharge a rechargeable battery. There can be any number of computers902associated with, or external to, a computer system containing computer902, with each computer902communicating over network930. Further, the terms “client,” “user,” and other appropriate terminology can be used interchangeably, as appropriate, without departing from the scope of the present disclosure. Moreover, the present disclosure contemplates that many users can use one computer902and one user can use multiple computers902. Described implementations of the subject matter can include one or more features, alone or in combination. For example, in a first implementation, a computer-implemented method includes the following. User login data for users is filtered, including monitoring workstations and servers accessed by users to obtain the user login data for the users. User login records are created for a current time period based, at least in part, on the user login data. An anomaly score is determined for each user, where the anomaly score indicates a deviation by the user from historical login patterns of the user. A user machine learning (ML) model is updated based on the predicting. User period login records are maintained over time using processed user login data. The user ML model is trained using the user periodic login records. Enriched login statistics are generated using the user ML model and the user periodic login records. A report that includes the enriched login statistics is generated in a graphical user interface. 
The foregoing and other described implementations can each, optionally, include one or more of the following features: A first feature, combinable with any of the following features, where filtering the user login data for the users includes collecting and filtering an event log documenting successful login events for users. A second feature, combinable with any of the previous or following features, where creating user login records for a current time period includes: determining that a configurable period of time has passed; and grouping the user login data for the configurable period of time, including determining, for each of the users, a number of logins performed, a number of systems accessed, a number of source systems used to access a system, a list of unique access-to systems with a login frequency for each access-to system, a list of unique access-from systems with a login frequency for each access-from system, and a count of each authentication protocol used. A third feature, combinable with any of the previous or following features, where determining the anomaly score for each user includes: extracting numerical features from each user's current period login records for the current time period; evaluating the numerical features using user ML model; and predicting, using the evaluating, the anomaly score for the user. A fourth feature, combinable with any of the previous or following features, where maintaining the user period login records over time includes: updating historical login records using login data for current time period; and maintaining the historical login records within a threshold list size. A fifth feature, combinable with any of the previous or following features, where training the user ML model using the user periodic login records includes summarizing user records collected from previous time periods used as training samples. A sixth feature, combinable with any of the previous or following features, where generating the enriched login statistics includes: generating an average number of logins the user performs per day using information in user historical login records; generating an average number of systems accessed per day; identifying a first date each system has been accessed; and identifying a number of periods during which the accessed systems have been accessed. In a second implementation, a non-transitory, computer-readable medium stores one or more instructions executable by a computer system to perform operations including the following. User login data for users is filtered, including monitoring workstations and servers accessed by users to obtain the user login data for the users. User login records are created for a current time period based, at least in part, on the user login data. An anomaly score is determined for each user, where the anomaly score indicates a deviation by the user from historical login patterns of the user. A user machine learning (ML) model is updated based on the predicting. User period login records are maintained over time using processed user login data. The user ML model is trained using the user periodic login records. Enriched login statistics are generated using the user ML model and the user periodic login records. A report that includes the enriched login statistics is generated in a graphical user interface. 
The foregoing and other described implementations can each, optionally, include one or more of the following features: A first feature, combinable with any of the following features, where filtering the user login data for the users includes collecting and filtering an event log documenting successful login events for users. A second feature, combinable with any of the previous or following features, where creating user login records for a current time period includes: determining that a configurable period of time has passed; and grouping the user login data for the configurable period of time, including determining, for each of the users, a number of logins performed, a number of systems accessed, a number of source systems used to access a system, a list of unique access-to systems with a login frequency for each access-to system, a list of unique access-from systems with a login frequency for each access-from system, and a count of each authentication protocol used. A third feature, combinable with any of the previous or following features, where determining the anomaly score for each user includes: extracting numerical features from each user's current period login records for the current time period; evaluating the numerical features using user ML model; and predicting, using the evaluating, the anomaly score for the user. A fourth feature, combinable with any of the previous or following features, where maintaining the user period login records over time includes: updating historical login records using login data for current time period; and maintaining the historical login records within a threshold list size. A fifth feature, combinable with any of the previous or following features, where training the user ML model using the user periodic login records includes summarizing user records collected from previous time periods used as training samples. A sixth feature, combinable with any of the previous or following features, where generating the enriched login statistics includes: generating an average number of logins the user performs per day using information in user historical login records; generating an average number of systems accessed per day; identifying a first date each system has been accessed; and identifying a number of periods during which the accessed systems have been accessed. In a third implementation, a computer-implemented system includes one or more processors and a non-transitory computer-readable storage medium coupled to the one or more processors and storing programming instructions for execution by the one or more processors. The programming instructions instruct the one or more processors to perform operations including the following. User login data for users is filtered, including monitoring workstations and servers accessed by users to obtain the user login data for the users. User login records are created for a current time period based, at least in part, on the user login data. An anomaly score is determined for each user, where the anomaly score indicates a deviation by the user from historical login patterns of the user. A user machine learning (ML) model is updated based on the predicting. User period login records are maintained over time using processed user login data. The user ML model is trained using the user periodic login records. Enriched login statistics are generated using the user ML model and the user periodic login records. A report that includes the enriched login statistics is generated in a graphical user interface. 
The foregoing and other described implementations can each, optionally, include one or more of the following features: A first feature, combinable with any of the following features, where filtering the user login data for the users includes collecting and filtering an event log documenting successful login events for users. A second feature, combinable with any of the previous or following features, where creating user login records for a current time period includes: determining that a configurable period of time has passed; and grouping the user login data for the configurable period of time, including determining, for each of the users, a number of logins performed, a number of systems accessed, a number of source systems used to access a system, a list of unique access-to systems with a login frequency for each access-to system, a list of unique access-from systems with a login frequency for each access-from system, and a count of each authentication protocol used. A third feature, combinable with any of the previous or following features, where determining the anomaly score for each user includes: extracting numerical features from each user's current period login records for the current time period; evaluating the numerical features using user ML model; and predicting, using the evaluating, the anomaly score for the user. A fourth feature, combinable with any of the previous or following features, where maintaining the user period login records over time includes: updating historical login records using login data for current time period; and maintaining the historical login records within a threshold list size. A fifth feature, combinable with any of the previous or following features, where training the user ML model using the user periodic login records includes summarizing user records collected from previous time periods used as training samples. Implementations of the subject matter and the functional operations described in this specification can be implemented in digital electronic circuitry, in tangibly embodied computer software or firmware, in computer hardware, including the structures disclosed in this specification and their structural equivalents, or in combinations of one or more of them. Software implementations of the described subject matter can be implemented as one or more computer programs. Each computer program can include one or more modules of computer program instructions encoded on a tangible, non-transitory, computer-readable computer-storage medium for execution by, or to control the operation of, data processing apparatus. Alternatively, or additionally, the program instructions can be encoded in/on an artificially generated propagated signal. For example, the signal can be a machine-generated electrical, optical, or electromagnetic signal that is generated to encode information for transmission to a suitable receiver apparatus for execution by a data processing apparatus. The computer-storage medium can be a machine-readable storage device, a machine-readable storage substrate, a random or serial access memory device, or a combination of computer-storage mediums. The terms “data processing apparatus,” “computer,” and “electronic computer device” (or equivalent as understood by one of ordinary skill in the art) refer to data processing hardware. 
For example, a data processing apparatus can encompass all kinds of apparatuses, devices, and machines for processing data, including by way of example, a programmable processor, a computer, or multiple processors or computers. The apparatus can also include special purpose logic circuitry including, for example, a central processing unit (CPU), a field-programmable gate array (FPGA), or an application-specific integrated circuit (ASIC). In some implementations, the data processing apparatus or special purpose logic circuitry (or a combination of the data processing apparatus or special purpose logic circuitry) can be hardware- or software-based (or a combination of both hardware- and software-based). The apparatus can optionally include code that creates an execution environment for computer programs, for example, code that constitutes processor firmware, a protocol stack, a database management system, an operating system, or a combination of execution environments. The present disclosure contemplates the use of data processing apparatuses with or without conventional operating systems, such as LINUX, UNIX, WINDOWS, MAC OS, ANDROID, or IOS. A computer program, which can also be referred to or described as a program, software, a software application, a module, a software module, a script, or code, can be written in any form of programming language. Programming languages can include, for example, compiled languages, interpreted languages, declarative languages, or procedural languages. Programs can be deployed in any form, including as stand-alone programs, modules, components, subroutines, or units for use in a computing environment. A computer program can, but need not, correspond to a file in a file system. A program can be stored in a portion of a file that holds other programs or data, for example, one or more scripts stored in a markup language document, in a single file dedicated to the program in question, or in multiple coordinated files storing one or more modules, sub-programs, or portions of code. A computer program can be deployed for execution on one computer or on multiple computers that are located, for example, at one site or distributed across multiple sites that are interconnected by a communication network. While portions of the programs illustrated in the various figures may be shown as individual modules that implement the various features and functionality through various objects, methods, or processes, the programs can instead include a number of sub-modules, third-party services, components, and libraries. Conversely, the features and functionality of various components can be combined into single components as appropriate. Thresholds used to make computational determinations can be statically, dynamically, or both statically and dynamically determined. The methods, processes, or logic flows described in this specification can be performed by one or more programmable computers executing one or more computer programs to perform functions by operating on input data and generating output. The methods, processes, or logic flows can also be performed by, and apparatus can also be implemented as, special purpose logic circuitry, for example, a CPU, an FPGA, or an ASIC. Computers suitable for the execution of a computer program can be based on one or more of general and special purpose microprocessors and other kinds of CPUs. The elements of a computer are a CPU for performing or executing instructions and one or more memory devices for storing instructions and data. 
Generally, a CPU can receive instructions and data from (and write data to) a memory. Graphics processing units (GPUs) can also be used in combination with CPUs. The GPUs can provide specialized processing that occurs in parallel to processing performed by CPUs. The specialized processing can include artificial intelligence (AI) applications and processing, for example. GPUs can be used in GPU clusters or in multi-GPU computing. A computer can include, or be operatively coupled to, one or more mass storage devices for storing data. In some implementations, a computer can receive data from, and transfer data to, the mass storage devices including, for example, magnetic disks, magneto-optical disks, or optical disks. Moreover, a computer can be embedded in another device, for example, a mobile telephone, a personal digital assistant (PDA), a mobile audio or video player, a game console, a global positioning system (GPS) receiver, or a portable storage device such as a universal serial bus (USB) flash drive. Computer-readable media (transitory or non-transitory, as appropriate) suitable for storing computer program instructions and data can include all forms of permanent/non-permanent and volatile/non-volatile memory, media, and memory devices. Computer-readable media can include, for example, semiconductor memory devices such as random access memory (RAM), read-only memory (ROM), phase change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), erasable programmable read-only memory (EPROM), electrically erasable programmable read-only memory (EEPROM), and flash memory devices. Computer-readable media can also include, for example, magnetic devices such as tape, cartridges, cassettes, and internal/removable disks. Computer-readable media can also include magneto-optical disks and optical memory devices and technologies including, for example, digital video disc (DVD), CD-ROM, DVD+/−R, DVD-RAM, DVD-ROM, HD-DVD, and BLU-RAY. The memory can store various objects or data, including caches, classes, frameworks, applications, modules, backup data, jobs, web pages, web page templates, data structures, database tables, repositories, and dynamic information. Types of objects and data stored in memory can include parameters, variables, algorithms, instructions, rules, constraints, and references. Additionally, the memory can include logs, policies, security or access data, and reporting files. The processor and the memory can be supplemented by, or incorporated into, special purpose logic circuitry. Implementations of the subject matter described in the present disclosure can be implemented on a computer having a display device for providing interaction with a user, including displaying information to (and receiving input from) the user. Types of display devices can include, for example, a cathode ray tube (CRT), a liquid crystal display (LCD), a light-emitting diode (LED), and a plasma monitor. Input devices can include a keyboard and pointing devices including, for example, a mouse, a trackball, or a trackpad. User input can also be provided to the computer through the use of a touchscreen, such as a tablet computer surface with pressure sensitivity or a multi-touch screen using capacitive or electric sensing. Other kinds of devices can be used to provide for interaction with a user, including to receive user feedback including, for example, sensory feedback including visual feedback, auditory feedback, or tactile feedback.
Input from the user can be received in the form of acoustic, speech, or tactile input. In addition, a computer can interact with a user by sending documents to, and receiving documents from, a device that the user uses. For example, the computer can send web pages to a web browser on a user's client device in response to requests received from the web browser. The term “graphical user interface,” or “GUI,” can be used in the singular or the plural to describe one or more graphical user interfaces and each of the displays of a particular graphical user interface. Therefore, a GUI can represent any graphical user interface, including, but not limited to, a web browser, a touch-screen, or a command line interface (CLI) that processes information and efficiently presents the information results to the user. In general, a GUI can include a plurality of user interface (UI) elements, some or all associated with a web browser, such as interactive fields, pull-down lists, and buttons. These and other UI elements can be related to or represent the functions of the web browser. Implementations of the subject matter described in this specification can be implemented in a computing system that includes a back-end component, for example, as a data server, or that includes a middleware component, for example, an application server. Moreover, the computing system can include a front-end component, for example, a client computer having one or both of a graphical user interface or a Web browser through which a user can interact with the computer. The components of the system can be interconnected by any form or medium of wireline or wireless digital data communication (or a combination of data communication) in a communication network. Examples of communication networks include a local area network (LAN), a radio access network (RAN), a metropolitan area network (MAN), a wide area network (WAN), Worldwide Interoperability for Microwave Access (WIMAX), a wireless local area network (WLAN) (for example, using 802.11 a/b/g/n or 802.20 or a combination of protocols), all or a portion of the Internet, or any other communication system or systems at one or more locations (or a combination of communication networks). The network can communicate with, for example, Internet Protocol (IP) packets, frame relay frames, asynchronous transfer mode (ATM) cells, voice, video, data, or a combination of communication types between network addresses. The computing system can include clients and servers. A client and server can generally be remote from each other and can typically interact through a communication network. The relationship of client and server can arise by virtue of computer programs running on the respective computers and having a client-server relationship. Cluster file systems can be any file system type accessible from multiple servers for read and update. Locking or consistency tracking may not be necessary since the locking of exchange file system can be done at application layer. Furthermore, Unicode data files can be different from non-Unicode data files. While this specification contains many specific implementation details, these should not be construed as limitations on the scope of what may be claimed, but rather as descriptions of features that may be specific to particular implementations. Certain features that are described in this specification in the context of separate implementations can also be implemented, in combination, in a single implementation. 
Conversely, various features that are described in the context of a single implementation can also be implemented in multiple implementations, separately, or in any suitable sub-combination. Moreover, although previously described features may be described as acting in certain combinations and even initially claimed as such, one or more features from a claimed combination can, in some cases, be excised from the combination, and the claimed combination may be directed to a sub-combination or variation of a sub-combination. Particular implementations of the subject matter have been described. Other implementations, alterations, and permutations of the described implementations are within the scope of the following claims as will be apparent to those skilled in the art. While operations are depicted in the drawings or claims in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order, or that all illustrated operations be performed (some operations may be considered optional), to achieve desirable results. In certain circumstances, multitasking or parallel processing (or a combination of multitasking and parallel processing) may be advantageous and performed as deemed appropriate. Moreover, the separation or integration of various system modules and components in the previously described implementations should not be understood as requiring such separation or integration in all implementations. It should be understood that the described program components and systems can generally be integrated together in a single software product or packaged into multiple software products. Accordingly, the previously described example implementations do not define or constrain the present disclosure. Other changes, substitutions, and alterations are also possible without departing from the spirit and scope of the present disclosure. Furthermore, any claimed implementation is considered to be applicable to at least a computer-implemented method; a non-transitory, computer-readable medium storing computer-readable instructions to perform the computer-implemented method; and a computer system including a computer memory interoperably coupled with a hardware processor configured to perform the computer-implemented method or the instructions stored on the non-transitory, computer-readable medium. | 54,269 |
11943236 | DETAILED DESCRIPTION OF THE DRAWINGS While the concepts of the present disclosure are susceptible to various modifications and alternative forms, specific embodiments thereof have been shown by way of example in the drawings and will be described herein in detail. It should be understood, however, that there is no intent to limit the concepts of the present disclosure to the particular forms disclosed, but on the contrary, the intention is to cover all modifications, equivalents, and alternatives consistent with the present disclosure and the appended claims. References in the specification to “one embodiment,” “an embodiment,” “an illustrative embodiment,” etc., indicate that the embodiment described may include a particular feature, structure, or characteristic, but every embodiment may or may not necessarily include that particular feature, structure, or characteristic. Moreover, such phrases are not necessarily referring to the same embodiment. Further, when a particular feature, structure, or characteristic is described in connection with an embodiment, it is submitted that it is within the knowledge of one skilled in the art to effect such feature, structure, or characteristic in connection with other embodiments whether or not explicitly described. Additionally, it should be appreciated that items included in a list in the form of “at least one A, B, and C” can mean (A); (B); (C); (A and B); (A and C); (B and C); or (A, B, and C). Similarly, items listed in the form of “at least one of A, B, or C” can mean (A); (B); (C); (A and B); (A and C); (B and C); or (A, B, and C). The disclosed embodiments may be implemented, in some cases, in hardware, firmware, software, or any combination thereof. The disclosed embodiments may also be implemented as instructions carried by or stored on a transitory or non-transitory machine-readable (e.g., computer-readable) storage medium, which may be read and executed by one or more processors. A machine-readable storage medium may be embodied as any storage device, mechanism, or other physical structure for storing or transmitting information in a form readable by a machine (e.g., a volatile or non-volatile memory, a media disc, or other media device). In the drawings, some structural or method features may be shown in specific arrangements and/or orderings. However, it should be appreciated that such specific arrangements and/or orderings may not be required. Rather, in some embodiments, such features may be arranged in a different manner and/or order than shown in the illustrative figures. Additionally, the inclusion of a structural or method feature in a particular figure is not meant to imply that such feature is required in all embodiments and, in some embodiments, may not be included or may be combined with other features. Referring now toFIG.1, a system100for detecting cyber-attacks against electrical distribution devices includes a set of transformers (i.e., electrical distribution devices)110,112, each of which may provide electricity to one or more other devices (not shown), and a control system120(e.g., an industrial control system) communicatively coupled to the transformers110,112through a network130. As used herein, the term “electrical distribution” includes both electrical transmission devices and electrical distribution devices, and the corresponding power levels. Each transformer110,112may be embodied as any device or circuitry (e.g., a combination of power electronics and digital electronics, etc.) 
capable of selectively increasing or decreasing a voltage of an alternating current, such as by transferring electrical energy from a first coil to a second coil through electromagnetic induction using variable turn ratios selected in discrete steps (e.g., with a tap changer) or by other methods, such as by utilizing power electronics to convert alternating current to direct current and back to alternating current (e.g., in a solid state transformer). In the illustrative embodiment, each transformer includes sensors150,152for reporting operational parameters, which may be embodied as any data indicative of conditions (temperatures in one or more portions of the transformer110,112, an electrical current in a portion of the transformer, a voltage in a portion of the transformer, etc.) of the transformer110,112at any given time. Each transformer110,112may additionally include a coolant fluid tank160,162which may be embodied as a container of a fluid (e.g., oil) used to cool one or more components of the corresponding transformer110,112. In operation, a controller140, which may be included in each transformer110,112and/or located in the control system120obtains measurements, from the sensors150,152, of operational parameters of the corresponding transformer110,112, applies those measurements to a mathematical model of the relationship between operational parameters of the transformer to determine whether the reported measurements share the same relationship (e.g., by calculating an expected temperature from a reported electrical current), accounting for potential noise in the measurements, and if the relationship from the reported measurements diverges from the relationship indicated in the model by a threshold amount (e.g., the reported temperature exceeds the expected temperature by a predefined threshold amount), determining that the transformer may be subject to a cyber-attack and performing a responsive action, such as generating an alert. As such, the system100may provide improved security against cyber-attacks as compared to typical electrical distribution systems. Referring now toFIG.2, the controller140may be embodied as any type of device (e.g., a computer) capable of performing the functions described herein, including determining a first measured value of a first operational parameter of the transformer based upon one or more signals received from one or more sensors of the transformer, determining a second measured value of a second operational parameter of the transformer based upon one or more signals received from the one or more sensors of the transformer, calculating a first expected value of the first operational parameter based on the second measured value of the second operational parameter and a model of the transformer that relates the first and second operational parameters, comparing the first measured value of the first operational parameter to the first expected value of the first operational parameter, and identifying when a difference between the first measured value and the first expected value exceeds a first threshold. As shown inFIG.2, the illustrative controller140includes a compute engine210, an input/output (I/O) subsystem216, communication circuitry218, and one or more data storage devices224. Of course, in other embodiments, the controller140may include other or additional components, such as those commonly found in a computer (e.g., display, peripheral devices, etc.). 
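As a rough illustration of the model-based check described above, the sketch below computes an expected temperature from a measured current using a highly simplified thermal model and flags a possible attack when the measured temperature diverges from the expected value by more than a threshold. The model form, coefficient, and threshold are assumptions made for the example and are not taken from the disclosure.

def expected_temperature(load_current, ambient_temp, k_thermal=0.05):
    """Simplified thermal model: temperature rise proportional to the square of the load current."""
    return ambient_temp + k_thermal * load_current ** 2

def divergence_indicates_attack(measured_temp, measured_current, ambient_temp,
                                threshold=10.0, k_thermal=0.05):
    """Return True when the reported temperature and current no longer share the
    relationship predicted by the model, allowing for measurement noise via the threshold."""
    expected = expected_temperature(measured_current, ambient_temp, k_thermal)
    return abs(measured_temp - expected) > threshold

# Example: a reported temperature far above what the reported current explains.
if divergence_indicates_attack(measured_temp=95.0, measured_current=20.0, ambient_temp=25.0):
    print("ALERT: transformer sensor readings inconsistent with the model")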
Additionally, in some embodiments, one or more of the illustrative components may be incorporated in, or otherwise form a portion of, another component. The compute engine210may be embodied as any type of device or collection of devices capable of performing various compute functions described below. In some embodiments, the compute engine210may be embodied as a single device such as an integrated circuit, an embedded system, a field-programmable gate array (FPGA), a system-on-a-chip (SOC), or other integrated system or device. Additionally, in some embodiments, the compute engine210includes or is embodied as a processor212and a memory214. The processor212may be embodied as any type of processor capable of performing the functions described herein. For example, the processor212may be embodied as a microcontroller, a single or multi-core processor(s), or other processor or processing/controlling circuit. In some embodiments, the processor212may be embodied as, include, or be coupled to an FPGA, an application specific integrated circuit (ASIC), reconfigurable hardware or hardware circuitry, or other specialized hardware to facilitate performance of the functions described herein. The main memory214may be embodied as any type of volatile (e.g., dynamic random access memory (DRAM), etc.) or non-volatile memory or data storage capable of performing the functions described herein. Volatile memory may be a storage medium that requires power to maintain the state of data stored by the medium. In some embodiments, all or a portion of the main memory214may be integrated into the processor212. In operation, the main memory214may store various software and data used during operation such as operational parameters of a transformer110,112, thresholds, a mathematical model of the transformer110,112, applications, programs, libraries, and drivers. The compute engine210is communicatively coupled to other components of the controller140via the I/O subsystem216, which may be embodied as circuitry and/or components to facilitate input/output operations with the compute engine210(e.g., with the processor212and/or the main memory214) and other components of the controller140. For example, the I/O subsystem216may be embodied as, or otherwise include, memory controller hubs, input/output control hubs, integrated sensor hubs, firmware devices, communication links (e.g., point-to-point links, bus links, wires, cables, light guides, printed circuit board traces, etc.), and/or other components and subsystems to facilitate the input/output operations. In some embodiments, the I/O subsystem216may form a portion of a system-on-a-chip (SoC) and be incorporated, along with one or more of the processor212, the main memory214, and other components of the controller140, into the compute engine210. The communication circuitry218may be embodied as any communication circuit, device, or collection thereof, capable of enabling communications over the network130between the controller140and another compute device (e.g., one or more compute devices in the control system120, etc.). The communication circuitry218may be configured to use any one or more communication technologies (e.g., wired or wireless communications) and associated protocols (e.g., Ethernet, Bluetooth®, Wi-Fi®, WiMAX, etc.) to effect such communication. The illustrative communication circuitry218includes a network interface controller (NIC)220.
The NIC220may be embodied as one or more add-in-boards, daughter cards, network interface cards, controller chips, chipsets, or other devices that may be used by the controller140to connect with another compute device (e.g., one or more compute devices in the control system120, etc.). In some embodiments, the NIC220may be embodied as part of a system-on-a-chip (SoC) that includes one or more processors, or included on a multichip package that also contains one or more processors. In some embodiments, the NIC220may include a local processor (not shown) and/or a local memory (not shown) that are both local to the NIC220. In such embodiments, the local processor of the NIC220may be capable of performing one or more of the functions of the compute engine210described herein. Additionally or alternatively, in such embodiments, the local memory of the NIC220may be integrated into one or more components of the controller140at the board level, socket level, chip level, and/or other levels. The one or more sensors222may be embodied as any type of devices configured to measure and report operational parameters of a corresponding transformer110,112. As such, the sensors222may include one or more temperature sensors capable of measuring cooling fluid temperatures at one or more locations in the transformer110,112, a temperature sensor capable of measuring an ambient temperature, electrical current sensors, voltage sensors, and/or other sensors. The one or more illustrative data storage devices224may be embodied as any type of devices configured for short-term or long-term storage of data such as, for example, memory devices and circuits, memory cards, hard disk drives, solid-state drives, or other data storage devices. Each data storage device224may include a system partition that stores data and firmware code for the data storage device224. Each data storage device224may also include an operating system partition that stores data files and executables for an operating system. Additionally or alternatively, the controller140may include one or more peripheral devices226. Such peripheral devices226may include any type of peripheral device commonly found in a compute device such as a display or other output device and/or one or more input devices, such as a touchscreen or buttons, forming a human-machine interface (HMI). Additionally, the peripheral devices226may include other components, such as a tap changer, for controlling operations of the corresponding transformer110,112. As described above, the transformers110,112and the control system120are illustratively in communication via the network130, which may be embodied as any type of wired or wireless communication network, including global networks (e.g., the Internet), local area networks (LANs) or wide area networks (WANs), cellular networks (e.g., Global System for Mobile Communications (GSM), 3G, Long Term Evolution (LTE), Worldwide Interoperability for Microwave Access (WiMAX), etc.), digital subscriber line (DSL) networks, cable networks (e.g., coaxial networks, fiber networks, etc.), or any combination thereof. Referring now toFIG.3, the controller140, in operation, may perform a method300of detecting a cyber-attack against one or more electrical distribution devices (e.g., the transformers110,112). In the illustrative embodiment, the method300begins with block302in which the controller140determines whether to perform detection of cyber-attacks.
In the illustrative embodiment, the controller140may determine to proceed with performing detection of cyber-attacks if the controller140is in communication with one or more sensors150of the corresponding electrical distribution device to be monitored (e.g., the transformer110) and has access to a mathematical model indicative of a relationship between operational parameters of the electrical distribution device (e.g., in the memory214of the controller140). In other embodiments, the controller140may determine whether to detect cyber-attacks based on other factors. Regardless, in response to a determination to detect cyber-attacks, the method300advances to block304in which the controller140determines one or more measured operational parameters of an electrical distribution device with one or more corresponding sensors (e.g., the sensors150). In doing so, and as indicated in block306, the controller140determines measured operational parameters of a transformer (e.g., the transformer110). Further, and as indicated in block308, in determining the measured operational parameters, the controller140receives, from the corresponding sensors (e.g., the sensors150), a signal indicative of values of the corresponding operational parameters. As indicated in block310, the controller140may determine a measured electrical current of the electrical distribution device (e.g., the transformer110). Additionally or alternatively, and as indicated in block312, the controller140may determine a measured voltage of the electrical distribution device (e.g., the transformer110). In some embodiments, the controller140may determine a magnitude and phase angle of an operational parameter of the electrical distribution device, as indicated in block314. Additionally or alternatively, the controller140may determine a measured temperature of the electrical distribution device, as indicated in block316. For example, and as indicated in block318, the controller may determine an ambient temperature associated with the electrical distribution device. As indicated in block320, the controller140may also determine a coolant fluid temperature (e.g., a temperature of the coolant fluid in the coolant fluid tank160). In doing so, the controller140may determine a coolant fluid temperature associated with the top of the coolant fluid tank160, as indicated in block322. The controller140may additionally or alternatively determine a coolant fluid temperature associated with the bottom of the coolant fluid tank160, as indicated in block324. As indicated in block326, the controller140may determine a measured change or rate of change of operational parameters. In some embodiments, before proceeding to the next operations in the method300, the controller140may wait for one or more operational parameters to stabilize (e.g., for the rate of change to meet a predefined threshold, for a measured operational parameter to vary within a predefined range, etc.), as indicated in block328. For example, and as indicated in block330, the controller140may wait for the ambient temperature to stabilize. Subsequently, the method300advances to block332ofFIG.4, in which the controller140determines an expected value of one or more operational parameters of the electrical distribution device (e.g., the transformer110) from the measured operational parameters (e.g., from block304) and a model indicative of a relationship between operational parameters of the electrical distribution device. 
In doing so, the controller140may determine an expected electrical current value, as indicated in block334. Additionally or alternatively, the controller140may determine an expected voltage value, as indicated in block336. As indicated in block338, the controller140may determine an expected magnitude and phase angle of one or more operational parameters. Additionally or alternatively, the controller140may determine an expected temperature value, as indicated in block340. For example, and as indicated in block342, the controller140may determine an expected winding hot spot temperature. Additionally, the controller140may determine real time load losses as indicated in block344. In some embodiments, the controller140may determine an expected power balance from an input bus and an output bus of the electrical distribution device (e.g., the transformer110), as indicated in block346. Further, and as indicated in block348, the controller140may determine an expected change or rate of change of an operational parameter. In some embodiments, the controller140may determine a residual value, which may be embodied as a number indicative of an effect of noise on a measurement of an operational parameter of the electrical distribution device (e.g., the transformer110). Subsequently, the method300advances to block352ofFIG.5in which the controller determines whether one or more of the measured operational parameter(s) satisfy corresponding threshold value(s). Referring now toFIG.5, in determining whether one or more of the measured operational parameter(s) satisfy corresponding threshold value(s), the controller140may determine whether the operational parameter(s) satisfy a corresponding expected value (e.g., from block332), as indicated in block354. In some embodiments, the controller140may adjust the expected value (e.g., from block332) as a function of a harmonic loss contribution, as indicated in block356. In embodiments in which the controller140determines a residual value (e.g., in block350), the controller140determines whether the residual value is within a predefined upper bound and lower bound (e.g., within an expected range), as indicated in block358. The controller140may, in some embodiments, perform a comparison of the measured operational parameters to one or more threshold values over a predefined time interval, as indicated in block360. In some embodiments, the controller140may compare a measured proportionality of input and output power and real time load losses, as indicated in block362. Subsequently, in block364, the controller140determines whether the threshold is satisfied. If so (e.g., a measured value is within a predefined range of a corresponding expected value), the method300loops back to block302in which the controller140determines whether to continue detection of cyber-attacks. Otherwise, the method300advances to block366, in which the controller140performs a responsive action to a suspected cyber-attack. In performing a responsive action, the controller140may generate an alert indicative of a suspected cyber-attack, as indicated in block368. In doing so, the controller140may generate an alert that indicates the measured operational parameter does not satisfy the corresponding threshold, as indicated in block370. As indicated in block372, the controller140may generate an alert that indicates the amount by which the threshold is not satisfied (e.g., a measured temperature exceeds the expected temperature by 10 degrees, etc.). 
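The threshold comparison and alert generation of blocks352through372can be illustrated with a short sketch. The following Python fragment is a minimal illustration only, not the claimed implementation; the model function, the threshold value, and all names (estimate_expected_temperature, THRESHOLD_DEG_C, etc.) are assumptions introduced here for illustration, and a real model of the transformer would come from design calculations and/or test data.

    # Minimal sketch of the measured-vs-expected comparison described above.
    # All names and the simple linear thermal model are illustrative assumptions,
    # not the patented implementation.

    THRESHOLD_DEG_C = 10.0  # hypothetical threshold on temperature divergence

    def estimate_expected_temperature(measured_current_a, ambient_temp_c):
        # Hypothetical model relating load current to a temperature rise above
        # ambient; a real model would come from design and/or test data.
        return ambient_temp_c + 0.5 * measured_current_a

    def check_for_attack(measured_temp_c, measured_current_a, ambient_temp_c):
        expected_temp_c = estimate_expected_temperature(measured_current_a, ambient_temp_c)
        divergence = measured_temp_c - expected_temp_c
        if abs(divergence) > THRESHOLD_DEG_C:
            # Corresponds to the responsive action of block366: report which
            # parameter failed the check and by how much (blocks368-372).
            return {
                "suspected_attack": True,
                "parameter": "temperature",
                "divergence_deg_c": divergence,
            }
        return {"suspected_attack": False}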
The alert may also identify the electrical distribution device to which the alert pertains (e.g., by a serial number, a media access control address (MAC), an internet protocol (IP) address, a geographical location, etc.), as indicated in block374. The controller140may send the alert (e.g., data indicative of the alert) to a remote compute device (e.g., a compute device in the control system120), as indicated in block376. Additionally or alternatively, the controller140may display the alert in a user interface (e.g., with an HMI connected to the controller140), as indicated in block378. As indicated in block380, the controller may perform a corrective action (e.g., by adjusting a tap changer setting, by deactivating the electrical distribution device to await inspection by a technician, etc.) to mitigate one or more effects of the suspected cyber-attack. Subsequently, the method300loops back to block302to again determine whether to continue detection of cyber-attacks. Referring now toFIG.6, the controller140may utilize a static model600of a transformer (e.g., the transformer110) with a tap changer connected between buses b and k. Potential cyber-attacks against the transformer110may include an attacker issuing a malicious control command to raise or lower the tap setting akmor an attacker injecting fabricated sensor measurements (e.g., a temperature of the coolant fluid and a load ratio). The result of the first type of attack may create an over or under voltage whereas the second type of attack may disrupt the transformer operation or condition-based estimations of the transformer's usable lifetime. The model600can be converted to a “pi” model with constants A, B, and C using the following equations: A=akmykm(Equation 1) B=akm(akm−1)ykm(Equation 2) C=(1−akm)ykm(Equation 3) The resulting model700is shown inFIG.7. Further, the model may be integrated with the physical system around the transformer, such as a substation or larger system, as shown in the model800ofFIG.8. In some embodiments, as described above, the controller140may provide security against cyber-attacks based on transformer sensor measurements, such as electrical current and voltage. For example, a cyber attacker may inject malicious (i.e., false) current and voltage measurements into the sensors150on each side of the transformer110. More specifically, referring back to the model800ofFIG.8, an attacker could corrupt both the voltage and current measurements at bus m. To guard against such an attack, the controller140may test the consistency of the measurements with the performance equations of the transformer110. The relationship between the voltages and currents can be expressed as follows: Ikm=(A+B)Ek+(−A)Em(Equation 4) Imk=(−A)Ek+(A+C)Em(Equation 5) Provided that there is no fault within the transformer110, the algebraic sum of both the primary and secondary currents is theoretically equal to zero, as shown in the following equation: 0=Ikm+Imk(Equation 6) In practice, equations (4)-(6) will not be satisfied due to the presence of noise in the measurements and minor changes to the transformer parameters. 
As such, the controller140may calculate residual quantities representing the mismatches according to the following equations:

d_1 = I_{km} - (A+B)E_k + (A)E_m (Equation 7)

d_2 = I_{mk} + (A)E_k - (A+C)E_m (Equation 8)

d_3 = I_{km} + I_{mk} (Equation 9)

The relationships of equations (7)-(9) may be expressed in the following matrix relationship:

\begin{bmatrix} d_1 \\ d_2 \\ d_3 \end{bmatrix} = \begin{bmatrix} -(A+B) & A & 1 & 0 \\ A & -(A+C) & 0 & 1 \\ 0 & 0 & 1 & 1 \end{bmatrix} \begin{bmatrix} E_k \\ E_m \\ I_{km} \\ I_{mk} \end{bmatrix} (Equation 10)

Further, the controller140may set thresholds for the residual values that define the maximum acceptable values for the residuals (e.g., an upper bound and a lower bound), as follows:

\begin{bmatrix} d_1 & d_2 & d_3 \end{bmatrix}^{T} = \begin{bmatrix} d_1^{T} & d_2^{T} & d_3^{T} \end{bmatrix} (Equation 11)

An attack on one of the measurements would appear as a violation of the thresholds (e.g., the measurements would fall outside the upper and lower bounds). More specifically, one or more of the elements would exceed the corresponding threshold according to the following equation:

\begin{bmatrix} d_1 & d_2 & d_3 \end{bmatrix}_i > \begin{bmatrix} d_1^{T} & d_2^{T} & d_3^{T} \end{bmatrix}_i \text{ for some element } i,\ i = 1, 2, 3 (Equation 12)

The controller140, in such embodiments, identifies the violation of one of the elements of the residual matrix as a potential cyber-attack. The controller140may further determine which measurement is falsified. In doing so, the controller140may apply the following rules. First, a falsification of the primary current measurement I_{km} would cause the residuals d_1 and d_3 to exceed their corresponding thresholds. Second, a falsification of the secondary current measurement I_{mk} would cause the residuals d_2 and d_3 to exceed their thresholds. Third, a falsification of the primary voltage measurement E_k would cause the residuals d_1 and d_2 to exceed their thresholds. Fourth, a falsification of the secondary voltage measurement E_m would cause the residuals d_1 and d_2 to exceed their thresholds. In embodiments in which a transformer (e.g., the transformer110) is a phase shifting transformer, the controller140may provide security against cyber-attacks to the transformer110as described herein. Phase shifting transformers are typically used to control active power flows by changing the phase angle between the voltages across the transformer. Referring now toFIG.9, in a model900that may be utilized by the controller140to detect a cyber-attack against the transformer110, a turns ratio t_{km} is embodied as a ratio of the complex voltages E_p and E_k. The turns ratio is defined as a complex quantity with magnitude a_{km} and angle φ_{km}.
As such, the complex voltage at an internal point p is expressed in the following equation:

t_{km} = \frac{E_p}{E_k} = a_{km} e^{j\varphi_{km}} (Equation 13)

Accordingly, the angle of the voltage at point p is shifted by the phase angle of the transformer as follows:

\theta_p = \theta_k + \varphi_{km} (Equation 14)

V_p = a_{km} V_k (Equation 15)

The physical relationship of the primary and secondary currents may be expressed in terms of the complex turns ratio, as follows:

\frac{I_{km}}{I_{mk}} = -t_{km}^{*} = -a_{km} e^{-j\varphi_{km}} (Equation 16)

The currents may be expressed in terms of the voltages and the impedance of the transformer as follows:

I_{km} = -t_{km} y_{km}(E_m - E_p) (Equation 17)

I_{mk} = y_{mk}(E_m - E_p) (Equation 18)

The currents can also be expressed in terms of the terminal voltages of the transformer110, as follows:

I_{km} = a_{km}^2 y_{km} E_k - t_{km} y_{km} E_m (Equation 19)

I_{mk} = -t_{km} y_{km} E_k + y_{km} E_m (Equation 20)

As with equations (7)-(9), the controller140may calculate residual quantities representing the mismatches, as follows:

d_1 = I_{km} - a_{km}^2 y_{km} E_k + t_{km} y_{km} E_m (Equation 21)

d_2 = I_{mk} + t_{km} y_{km} E_k - y_{km} E_m (Equation 22)

d_3 = I_{km} + I_{mk} (Equation 23)

The equations (21)-(23) may be expressed in matrix form, as follows:

\begin{bmatrix} d_1 \\ d_2 \\ d_3 \end{bmatrix} = \begin{bmatrix} -a_{km}^2 y_{km} & t_{km} y_{km} & 1 & 0 \\ t_{km} y_{km} & -y_{km} & 0 & 1 \\ 0 & 0 & 1 & 1 \end{bmatrix} \begin{bmatrix} E_k \\ E_m \\ I_{km} \\ I_{mk} \end{bmatrix} (Equation 24)

The controller140may apply thresholds (e.g., upper and lower bounds) for the residual values as follows:

\begin{bmatrix} d_1 & d_2 & d_3 \end{bmatrix}^{T} = \begin{bmatrix} d_1^{T} & d_2^{T} & d_3^{T} \end{bmatrix} (Equation 25)

A falsification of the measurements would result in a violation of threshold values. More specifically, one or more of the elements d_1, d_2, or d_3 would exceed its corresponding threshold, as follows:

\begin{bmatrix} d_1 & d_2 & d_3 \end{bmatrix}_i > \begin{bmatrix} d_1^{T} & d_2^{T} & d_3^{T} \end{bmatrix}_i \text{ for some element } i,\ i = 1, 2, 3 (Equation 26)

Accordingly, the controller140may apply the four rules described above to determine which measurement or measurements (e.g., the primary current I_{km}, the secondary current I_{mk}, the primary voltage E_k, or the secondary voltage E_m) have been falsified in a cyber-attack. In some embodiments, the controller140may provide security against cyber-attacks on sensor measurements using local complementary sensor confirmation. For example, in the attack scenario described above, confirmation of a change in the electrical current can be obtained by comparing the measurements of complementary sensors150(e.g., a temperature at the top of the coolant fluid tank and a temperature at the bottom of the coolant fluid tank160). The controller140may also obtain one or more temperatures that are not directly measured, but instead are calculated from other temperatures. For example, the controller140may calculate a winding hot spot temperature in the transformer110as a function of temperature measurements from the top and/or bottom of the coolant fluid tank160. For fluid-cooled transformers, calculation of the winding hot spot temperature may depend on electrical current, ambient temperature, the type of cooling fluid, and physical parameters of the transformer provided from design calculations and/or test data. Other calculated temperature values may be provided from a combination of measured sensor data, subscribed information, and/or transformer physical properties. The controller140may provide an additional layer of security against cyber-attacks by verifying changes in related measured temperature values and calculated temperature values. Current-dependent calculated temperature values would be affected by falsified electrical current data while directly measured temperature values may not be falsified.
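Before turning to the temperature-based confirmation in more detail, the residual test of equations (7)-(12) can be illustrated with a short sketch. The Python fragment below is a hedged illustration only: the phasor values, the constants A, B, and C (which would come from the tap setting and admittance as in equations (1)-(3)), and the threshold values are placeholders, and the rule set mirrors the four rules stated above.

    # Minimal sketch of the residual test of equations (7)-(12); inputs are
    # complex phasors, and thresholds are placeholders, not disclosed values.

    def residuals(E_k, E_m, I_km, I_mk, A, B, C):
        d1 = I_km - (A + B) * E_k + A * E_m   # Equation 7
        d2 = I_mk + A * E_k - (A + C) * E_m   # Equation 8
        d3 = I_km + I_mk                      # Equation 9
        return d1, d2, d3

    def violated(residual_values, thresholds):
        # Equation 12: any residual magnitude exceeding its threshold is a violation.
        return [abs(d) > t for d, t in zip(residual_values, thresholds)]

    def localize(flags):
        # The four rules described above: the pattern of violated residuals
        # points to the measurement that was falsified.
        d1_bad, d2_bad, d3_bad = flags
        if d1_bad and d3_bad and not d2_bad:
            return "primary current I_km falsified"
        if d2_bad and d3_bad and not d1_bad:
            return "secondary current I_mk falsified"
        if d1_bad and d2_bad and not d3_bad:
            return "primary or secondary voltage (E_k or E_m) falsified"
        return "no single falsified measurement identified"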
Accordingly, in the complementary sensor confirmation described above, a calculated value such as the winding hot spot temperature may be compared to the temperature at the top of the coolant fluid tank160in terms of either absolute temperature rise over a given time interval or an instantaneous rate of change with respect to the measurement time interval for the measured and calculated values. The controller140may utilize the following equations in such a process:

T_{Rise,calc}\big|_{I_{pu},T_a} \propto T_{Rise,meas} (Equation 27)

In the above equation, T_{Rise,calc} and T_{Rise,meas} represent generic related calculated and measured temperatures. The T_{Rise,calc} term also specifies a given per unit current, I_{pu}, and ambient temperature, T_a, for comparison purposes. The calculation is then dependent upon per unit current, I_{pu}, and ambient temperature T_a. The confirmation process based on local complementary sensors may include performing the evaluation over a variable time span, depending on the polling interval of temperature and current sensors (e.g., the sensors150). Alternatively, a limit may be taken to express a set of temperature characteristics over an instantaneous timeframe, as follows:

\frac{dT_{Rise,calc}}{dt}\Big|_{I_{pu},T_a} \propto \frac{dT_{Rise,meas}}{dt} (Equation 28)

An example of the above relationship is given as follows for the proportional relationship between a calculated winding hotspot temperature, T_{Rise,calc}\big|_{I_{pu},T_a} = \Theta_H, and a measured temperature, T_{Rise,meas} = \Theta_{TO}, from the top of the coolant fluid tank160:

\Theta_H = \left[\left(\Delta\Theta_{H,R}\cdot I_{pu}^{2\cdot\Delta\Theta_{H,exp}} - \Theta_{H,i}\right)\cdot\left(1 - e^{-60/(2\cdot\tau_w)}\right) + \Theta_{H,i}\right] + \Theta_{TO} (Equation 29)

\Delta\Theta_{H,U} = \left[\left(\Delta\Theta_{H,R}\cdot I_{pu}^{2\cdot\Delta\Theta_{H,exp}} - \Theta_{H,i}\right)\cdot\left(1 - e^{-60/(2\cdot\tau_w)}\right) + \Theta_{H,i}\right] (Equation 30)

\Theta_H = \Delta\Theta_{H,U} + \Theta_{TO} (Equation 31)

In the above equations, \Theta_H represents a calculated winding hotspot temperature at the present time instant, \Theta_{H,i} represents a calculated winding hotspot temperature at a previous time instant, \Delta\Theta_{H,R} represents a hotspot differential temperature at rated load, I_{pu} represents per unit current, \Delta\Theta_{H,exp} represents a hotspot differential temperature exponent, and \tau_w represents a hotspot time constant in minutes. In some embodiments, the controller140additionally analyzes a harmonic content of the electrical current. The harmonic content is an additional component of the electrical current that has a frequency higher than that of the nominal current (e.g., 50 or 60 Hz) and that sums together with the nominal or fundamental frequency component in terms of superposition. Typically, the frequencies are at integer multiples of the nominal or fundamental component. However, non-linearity of the system may lead to frequencies between the integer multiples. The harmonic components can contribute additional generated heat as certain types of losses within the transformer, such as eddy current losses, which are proportional to the square of the frequency. In some embodiments, the controller140may apply a threshold of 5% total harmonic distortion, the ratio of the harmonic components to the fundamental current. That is, if harmonics are above the threshold, an additional factor for computation may be determined (e.g., pursuant to the Institute of Electrical and Electronics Engineers (IEEE) standard C57.110 or other related standards) such that the calculated temperature is scaled by the additional losses due to harmonics.
An example calculation for winding hotspot rise over the temperature of the top of the coolant fluid tank160is given below:

\Theta_g = \Theta_{g\text{-}R} \times \left(\frac{P_{LL}(pu)}{P_{LL\text{-}R}(pu)}\right)^{0.8} [°C] (Equation 32)

\Theta_g = \Theta_{g\text{-}R} \times \left(\frac{1 + F_{HL} \times P_{EC\text{-}R}(pu)}{1 + P_{EC\text{-}R}(pu)}\right)^{0.8} [°C] (Equation 33)

In the above equations, \Theta_g represents the adjusted winding hotspot rise over the temperature of the top of the coolant fluid tank160in degrees Celsius, \Theta_{g\text{-}R} is the rated winding hotspot rise over the temperature of the top of the coolant fluid tank160in degrees Celsius, P_{LL}(pu) is the per unit power loss under load calculated with harmonic loss contribution, and P_{LL\text{-}R}(pu) is the per unit loss under rated conditions. Additionally, F_{HL} is a defined harmonic loss factor for winding eddy current losses and P_{EC\text{-}R} is the per unit winding eddy current loss under rated conditions. Here, the conditions of the above equations could be taken as the rated winding hotspot rise over the temperature of the top of the coolant fluid tank160, \Theta_{g\text{-}R}. In other embodiments, the controller140may determine a lifetime estimate of the transformer110(e.g., an estimate of the usable lifetime of the transformer110). In such embodiments, the controller140performs a comparison of the calculated estimated lifetime over a given timespan to the measured temperature value for proportionality, instead of the generic calculated temperature. The controller140may also utilize threshold limits on estimated lifetime accrual within a given timespan, based on either future current and temperature rise limits or historical data. This alternative condition is expressed as follows in terms of either an instantaneous or absolute time interval approach:

Faa_{Rise,calc}\big|_{I_{pu},T_a,T_{hotspot}} \propto T_{Rise,meas} (Equation 34)

\frac{dFaa_{Rise,calc}}{dt}\Big|_{I_{pu},T_a,T_{hotspot}} \propto \frac{dT_{Rise,meas}}{dt} (Equation 35)

In the above equations, Faa expresses the output of an age acceleration factor or calculated lifetime estimate over a given time. It is important to note that the temperature measurement from the sensors150may be sampled less frequently than the current or voltage measurements (e.g., intervals of several seconds for temperature versus intervals of several milliseconds for current and/or voltage). However, as cyber-attacks grow increasingly complex, the incubation or infiltration period of an attack may be days or weeks long. Referring now toFIG.10, the controller140may perform a method1000for detecting cyber-attacks based on local complementary sensor confirmation, using the operations described above. In the illustrative embodiment, the method1000begins with block1002, in which the controller140enables the algorithm after a temperature associated with the transformer110has stabilized under load conditions (e.g., the temperature varies by 4 degrees or less). Subsequently, the method1000advances to block1004in which the controller140obtains the measured temperature Tmeas(e.g., by polling the sensors150) and corresponding calculated temperature(s), Tcalc. Afterwards, the controller140verifies a consistent ambient temperature (e.g., an average ambient temperature of less than 40 degrees Celsius), as indicated in block1006. In block1008, the controller140determines whether a relatively high amount of harmonic distortion is present (e.g., greater than 5%). If so, the method1000advances to block1010in which the controller140calculates the contribution of harmonics to temperature rise. Subsequently, or if relatively high harmonics are not present, the method advances to concurrently perform blocks1012and1016.
In block1012, the controller140calculates a rise of the measured temperature over a given time interval, or an instantaneous temperature rise. In block1016, the controller140calculates a rise of calculated temperature values over a given time interval, or an instantaneous rise in the calculated temperature values. Subsequently, the controller140determines, in block1014, whether the temperature rise is greater than zero and determines, in block1018, whether the calculated temperature rise is greater than zero. The controller140combines the results of blocks1014and1018in block1020with an AND operation. That is, the controller140determines whether there was a change in both the measured temperature and the calculated temperature. If so, the method1000advances to block1022in which the controller140verifies the temperature change direction for the measured and calculated temperatures. Subsequently, the method1000advances to blocks1024and1030ofFIG.11. Referring now toFIG.11, in block1024, the controller140determines whether both the measured temperature and the calculated temperature increased. If so, the method1000advances to block1026, in which the controller140determines that a load current actually did increase. If not, the method1000advances to block1028, in which the controller140generates an alarm indicative of an inconsistent load condition (e.g., a potential cyber-attack). In block1030, which may be performed concurrently with block1024, the controller140determines whether the measured temperature and calculated temperature both decreased. If so, then the controller140, in the illustrative embodiment, confirms that there was a load current decrease, as indicated in block1032. Otherwise, or if the controller140determined in block1020that the measured temperature and the calculated temperature did not both change, the method advances to block1034in which the controller generates an alarm of an inconsistent load condition (e.g., a potential cyber-attack). Referring now toFIGS.12and13, a normal operating condition and a condition where falsified current may be detected by the controller140are shown. The normal operating conditions are those in which load is applied to a transformer (e.g., the transformer110) and a subsequent temperature rise is identified due to rated current, followed by an increase in the current over the rated conditions. The temperature rise is reflected both in the temperature1340at the top of the coolant fluid tank160, T_{Rise,meas} = \Theta_{TO}, and the calculated winding hotspot temperature1320, T_{Rise,calc}\big|_{I_{pu},T_a} = \Theta_H. The relationship follows equations (29) and (31), in which a standard defined rise over the temperature at the top of the coolant fluid tank160is calculated by the controller140to provide an estimated temperature of the winding hotspot. As shown in the plot1200ofFIG.12, the current1210has risen over time, resulting in the increased measured and calculated temperatures in the plot1300ofFIG.13. Referring now toFIGS.14and15, an example attack scenario may involve an attacker causing falsified current measurements to be reported. The falsified current measurements1410in the plot1400indicate a current that would be expected under the rated load of the transformer. However, the transformer is actually in an overload condition, as indicated by the current values1420. As such, the purpose of the cyber-attack is to prevent the transformer from initiating a cooling process.
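The direction-consistency logic of blocks1012through1034can be illustrated with a short sketch. The Python fragment below is a minimal illustration under assumed inputs (the temperature-rise values are assumed to be computed elsewhere, e.g., per equations (29) and (31)); it is not the claimed implementation. Its alarm path corresponds to the inconsistent load condition raised in the attack scenario ofFIGS.14and15.

    # Minimal sketch of the direction-consistency check of blocks 1012-1034;
    # measured_rise and calculated_rise are assumed to be computed elsewhere.

    def consistent_load_change(measured_rise, calculated_rise):
        # Blocks 1014/1018/1020: require a change in both quantities.
        if measured_rise == 0 or calculated_rise == 0:
            return "alarm: inconsistent load condition"
        # Block 1022: verify the change direction of measured vs calculated values.
        if measured_rise > 0 and calculated_rise > 0:
            return "load current increase confirmed"   # block 1026
        if measured_rise < 0 and calculated_rise < 0:
            return "load current decrease confirmed"   # block 1032
        return "alarm: inconsistent load condition"    # blocks 1028/1034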
Example thermal limits1510,1530are given for both the measured temperature of the top of the coolant fluid tank160and the calculated winding hotspot temperature, based on standards for the given transformer insulation materials. After the temperature change has stabilized, the false injected current value1410is shown as an input to the calculated winding hotspot temperature1520and the relationship provided in equations (29) and (31) no longer remains true, due to the increase in the measured temperature1540while the calculated temperature1520remains unchanged. When evaluating the above logic with the provided attack scenario, the output of the calculated temperature value will not change, while the measured output will increase, leading to the generation of an alarm condition indicating inconsistent load conditions. In some embodiments, the controller140performs operations to secure a transformer against cyber-attacks in which temperature sensor measurements are faked. Normally, the output power of a transformer is less than its input power. Generally, the difference is the amount of power converted into heat by core loss, winding losses, and stray losses. A combination of radiation and convection dissipates the heat from the exposed surfaces of the transformer. As such, the controller140, in some embodiments, may check the consistency of the characteristic temperature measurements with the input and output power of the transformer (e.g., the transformer110), based on the following equations:

P_{total\ loss}^{m} \propto C_{hotspot}^{m}\big|_{T_{oil\ temp}} \propto C_{oil\ temp}^{m}\big|_{T_{ambient}} (Equation 36)

P_{in\text{-}out}^{m} = \sum_{t_1}^{t_2} E_k(t)\cdot I_k(t)\cdot dt - \sum_{t_1}^{t_2} E_m(t)\cdot I_m(t)\cdot dt (Equation 37)

P_{in\text{-}out}^{m} \cong P_{total\ loss}^{m} (Equation 38)

In the above equations, P_{in\text{-}out}^{m} represents the measured proportionality of input and output power of the mth transformer at different points of temperature or load current and C_{oil\ temp}^{m} and C_{hotspot}^{m} indicate an official characteristic of input and output power of the mth transformer given by a measured temperature (e.g., from the sensors150) or information provided by a weather service or other source. By using the above equations, the transformer110may detect an abnormal behavior (e.g., a cyber-attack) if the measured proportion of input and output power is not consistent with the official characteristic, when the measured temperature of the transformer is normal (e.g., the expected temperature, given the present load). As an example, the total losses of the system can be expressed as follows:

P_{total\ loss}^{m} = P_{LL} = P(C_{hotspot}^{m}) + F_{HL}(I_m)\times P_{EC} + F_{HL\text{-}STR}(I_m)\times P_{OSL} (Equation 39)

P_{total\ loss}^{m} = P_{LL} = I_m^2\cdot R(C_{hotspot}^{m}) + F_{HL}(I_m)\times P_{EC} + F_{HL\text{-}STR}(I_m)\times P_{OSL} (Equation 40)

In the above equations, P_{LL} represents real time load losses calculated from I^2R or load losses, eddy current losses (P_{EC}), and other stray losses (P_{OSL}). The load losses are a function of the temperature-dependent winding resistance value (e.g., values defined in IEEE C57.12.90) and measured current. Eddy current and other stray losses are given as a function of a harmonic loss coefficient, such as defined in IEEE C57.111, and are dependent on load current. The characteristic temperature used in estimation of the real time losses can be determined from IEEE C57.91, IEC 60076-7, or a similar standard, from either ambient temperature or a characteristic coolant fluid temperature, as shown in equations (31) and (33).
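The power-balance consistency test of equations (37)-(40) can be illustrated with a short sketch. The Python fragment below is a hedged illustration only; the discrete form of the sums, the tolerance value, and the loss-model inputs are assumptions introduced for illustration rather than values taken from the disclosure.

    # Minimal sketch of the power-balance consistency test; the tolerance and
    # the loss-model inputs below are illustrative assumptions.

    def measured_power_difference(samples, dt):
        # Discrete form of Equation 37: samples is a list of (E_k, I_k, E_m, I_m)
        # values at successive time steps spaced dt apart.
        p_in = sum(e_k * i_k for e_k, i_k, _, _ in samples) * dt
        p_out = sum(e_m * i_m for _, _, e_m, i_m in samples) * dt
        return p_in - p_out

    def total_losses(i_pu, r_at_hotspot, p_ec, p_osl, f_hl, f_hl_str):
        # Equation 40: I^2*R load losses plus harmonic-scaled eddy and stray losses.
        return (i_pu ** 2) * r_at_hotspot + f_hl * p_ec + f_hl_str * p_osl

    def temperatures_consistent(p_in_out, p_total_loss, tolerance=0.05):
        # Equation 38: the two quantities should agree within a tolerance.
        return abs(p_in_out - p_total_loss) <= tolerance * max(abs(p_total_loss), 1e-9)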
The following equations represent a set of example power-dependent temperature characteristics, according to IEEE C57.91, from ambient temperature:

K = I_m(\Delta t) / I_{m,rated} (Equation 41)

R = P_{NL,R} / P_{LL,R} (Equation 42)

\tau_{oil,rated}\,\frac{d\Theta_{oil}}{dt} = \left(\frac{1 + R K^2}{1 + R}\right)^{n}\Delta\Theta_{oil,rated} - (\Theta_{oil} - \Theta_{ambient}) (Equation 43)

C_{oil\ temp}^{m} = \tau_{winding,rated}\,\frac{d\Theta_{hotspot}}{dt} = \left(\frac{1 + R K^2}{1 + R}\right)^{m}\Delta\Theta_{hotspot,rated} - (\Theta_{hotspot} - \Theta_{oil}) (Equation 44)

In the above equations, I(\Delta t) is representative of the transformer load current for a given time interval, \Theta_{ambient} is representative of the ambient temperature, P_{NL,R} is representative of rated no load losses at a specific ambient temperature (e.g., 75 C), and P_{LL,R} represents rated load losses calculated from I^2R, eddy losses, and other stray losses. One output is T_{oil}(\Delta t), which is representative of the transformer oil temperature (i.e., the temperature of the coolant fluid) derived from measured top or bottom temperatures in the coolant fluid tank160, or a top/bottom relationship equation. Another output is T_{hotspot}(\Delta t), which is representative of the transformer hotspot temperature. The transformer110may perform a comparison of P_{in\text{-}out}^{m} and P_{total\ loss}^{m} within given tolerances to determine whether the power balance given from measured voltage and current values (e.g., from the sensors150) matches the total loss calculations from the measured temperature(s) and other dependencies. Referring now toFIG.16, a method1600utilizing the above equations for detecting a cyber-attack based on transformer temperature sensor measurements begins with block1602, in which the controller140enables the algorithm after the temperature has stabilized to normal load conditions (e.g., the temperature varies by 4 degrees or less). Subsequently, the controller140determines the power balance (Pin-out) and characteristic temperature (Ctemp) quantities, as indicated in block1604. Afterwards, the controller140verifies that the ambient temperature associated with the transformer (e.g., the transformer110) is consistent (e.g., that the ambient temperature has an average below 40 degrees Celsius), as indicated in block1606. In block1608, the controller determines whether a relatively high amount of harmonic distortion (e.g., greater than 5%) is present. If so, the method1600proceeds to block1610in which the controller140calculates the contribution of the harmonics to the temperature rise, similar to block1010ofFIG.10. Subsequently, or if the controller140determined in block1608that relatively high harmonic distortion is not present, the method1600advances to blocks1612and1614, which the controller140may perform concurrently. In block1612, the controller140calculates the expected power balance from the input and output buses. In block1614, the controller140calculates the expected characteristic temperature of the coolant fluid in the coolant fluid tank160from measured temperatures (e.g., Toiland Tambient). In block1616, the controller140calculates the expected hotspot characteristic temperature from the coolant fluid temperature (e.g., the oil temperature Toil). Further, in block1618, the controller140calculates power total loss conditions using the given temperature. In block1620, the controller140calculates the expected coolant fluid characteristic temperature from the ambient temperature, and in block1622, the controller140calculates the expected hotspot characteristic temperature using the calculated expected coolant fluid characteristic temperature. Additionally, the controller140calculates the power total loss conditions at the given temperature, in block1624.
In block1626, the controller140determines whether the power balance and the total power loss are equal. If so, the method1600advances to block1628, in which the controller140confirms that consistent temperature measurements are present. Otherwise, the method1600advances to block1630in which the controller140generates an alarm indicative of inconsistent temperature measurements (e.g., a potential cyber-attack). While certain illustrative embodiments have been described in detail in the drawings and the foregoing description, such an illustration and description is to be considered as exemplary and not restrictive in character, it being understood that only illustrative embodiments have been shown and described and that all changes and modifications that come within the spirit of the disclosure are desired to be protected. There exist a plurality of advantages of the present disclosure arising from the various features of the apparatus, systems, and methods described herein. It will be noted that alternative embodiments of the apparatus, systems, and methods of the present disclosure may not include all of the features described, yet still benefit from at least some of the advantages of such features. Those of ordinary skill in the art may readily devise their own implementations of the apparatus, systems, and methods that incorporate one or more of the features of the present disclosure.
11943237 | DETAILED DESCRIPTION It will be readily understood that the instant components, as generally described and illustrated in the figures herein, may be arranged and designed in a wide variety of different configurations. Thus, the following detailed description of the embodiments of at least one of a method, apparatus, non-transitory computer readable medium and system, as represented in the attached figures, is not intended to limit the scope of the application as claimed but is merely representative of selected embodiments. The instant features, structures, or characteristics as described throughout this specification may be combined or removed in any suitable manner in one or more embodiments. For example, the usage of the phrases “example embodiments”, “some embodiments”, or other similar language, throughout this specification refers to the fact that a particular feature, structure, or characteristic described in connection with the embodiment may be included in at least one embodiment. Thus, appearances of the phrases “example embodiments”, “in some embodiments”, “in other embodiments”, or other similar language, throughout this specification do not necessarily all refer to the same group of embodiments, and the described features, structures, or characteristics may be combined or removed in any suitable manner in one or more embodiments. Further, in the diagrams, any connection between elements can permit one-way and/or two-way communication even if the depicted connection is a one-way or two-way arrow. Also, any device depicted in the drawings can be a different device. For example, if a mobile device is shown sending information, a wired device could also be used to send the information. In addition, while the term “message” may have been used in the description of embodiments, the application may be applied to many types of networks and data. Furthermore, while certain types of connections, messages, and signaling may be depicted in exemplary embodiments, the application is not limited to a certain type of connection, message, and signaling. Example embodiments provide methods, systems, components, non-transitory computer readable media, devices, and/or networks, which provide malicious peer identification for database block sequences. In one embodiment the application utilizes a decentralized database (such as a blockchain) that is a distributed storage system, which includes multiple nodes that communicate with each other. The decentralized database includes an append-only immutable data structure resembling a distributed ledger capable of maintaining records between mutually untrusted parties. The untrusted parties are referred to herein as peers or peer nodes. Each peer maintains a copy of the database records and no single peer can modify the database records without a consensus being reached among the distributed peers. For example, the peers may execute a consensus protocol to validate blockchain storage transactions, group the storage transactions into blocks, and build a hash chain over the blocks. This process forms the ledger by ordering the storage transactions, as is necessary, for consistency. In various embodiments, a permissioned and/or a permissionless blockchain can be used. In a public or permission-less blockchain, anyone can participate without a specific identity. Public blockchains can involve native cryptocurrency and use consensus based on various protocols such as Proof of Work (PoW). 
On the other hand, a permissioned blockchain database provides secure interactions among a group of entities which share a common goal but which do not fully trust one another, such as businesses that exchange funds, goods, information, and the like. This application can utilize a blockchain that operates arbitrary, programmable logic, tailored to a decentralized storage scheme and referred to as “smart contracts” or “chaincodes.” In some cases, specialized chaincodes may exist for management functions and parameters which are referred to as system chaincode. The application can further utilize smart contracts that are trusted distributed applications which leverage tamper-proof properties of the blockchain database and an underlying agreement between nodes, which is referred to as an endorsement or endorsement policy. Blockchain transactions associated with this application can be “endorsed” before being committed to the blockchain while transactions, which are not endorsed, are disregarded. An endorsement policy allows chaincode to specify endorsers for a transaction in the form of a set of peer nodes that are necessary for endorsement. When a client sends the transaction to the peers specified in the endorsement policy, the transaction is executed to validate the transaction. After validation, the transactions enter an ordering phase in which a consensus protocol is used to produce an ordered sequence of endorsed transactions grouped into blocks. This application can utilize nodes that are the communication entities of the blockchain system. A “node” may perform a logical function in the sense that multiple nodes of different types can run on the same physical server. Nodes are grouped in trust domains and are associated with logical entities that control them in various ways. Nodes may include different types, such as a client or submitting-client node which submits a transaction-invocation to an endorser (e.g., peer), and broadcasts transaction-proposals to an ordering service (e.g., ordering node). Another type of node is a peer node which can receive client submitted transactions, commit the transactions and maintain a state and a copy of the ledger of blockchain transactions. Peers can also have the role of an endorser, although it is not a requirement. An ordering-service-node or orderer is a node running the communication service for all nodes, and which implements a delivery guarantee, such as a broadcast to each of the peer nodes in the system when committing transactions and modifying a world state of the blockchain, which is another name for the initial blockchain transaction which normally includes control and setup information. This application can utilize a ledger that is a sequenced, tamper-resistant record of all state transitions of a blockchain. State transitions may result from chaincode invocations (i.e., transactions) submitted by participating parties (e.g., client nodes, ordering nodes, endorser nodes, peer nodes, etc.). Each participating party (such as a peer node) can maintain a copy of the ledger. A transaction may result in a set of asset key-value pairs being committed to the ledger as one or more operands, such as creates, updates, deletes, and the like. The ledger includes a blockchain (also referred to as a chain) which is used to store an immutable, sequenced record in blocks. The ledger also includes a state database which maintains a current state of the blockchain. 
This application can utilize a chain that is a transaction log which is structured as hash-linked blocks, and each block contains a sequence of N transactions where N is equal to or greater than one. The block header includes a hash of the block's transactions, as well as a hash of the prior block's header. In this way, all transactions on the ledger may be sequenced and cryptographically linked together. Accordingly, it is not possible to tamper with the ledger data without breaking the hash links. A hash of a most recently added blockchain block represents every transaction on the chain that has come before it, making it possible to ensure that all peer nodes are in a consistent and trusted state. The chain may be stored on a peer node file system (i.e., local, attached storage, cloud, etc.), efficiently supporting the append-only nature of the blockchain workload. The current state of the immutable ledger represents the latest values for all keys that are included in the chain transaction log. Since the current state represents the latest key values known to a channel, it is sometimes referred to as a world state. Chaincode invocations execute transactions against the current state data of the ledger. To make these chaincode interactions efficient, the latest values of the keys may be stored in a state database. The state database may be simply an indexed view into the chain's transaction log; it can therefore be regenerated from the chain at any time. The state database may automatically be recovered (or generated if needed) upon peer node startup, and before transactions are accepted. Some benefits of the instant solutions described and depicted herein include a novel ability to detect a malicious orderer peer or peripheral peer in a permissioned blockchain network, and a batch process to detect malicious behavior over a group of sequentially consecutive blocks. It is not possible for the disclosed processes to be implemented on a traditional database instead of a blockchain, because the present processes utilize distributed consensus for new block commitment and shared ledgers to store committed blocks. Traditional databases lack these processes and structures. The present application creates a functional improvement to computer functionality utilizing the blockchain by detecting a peer that may be acting maliciously to corrupt data or store modified data within a blockchain network. Although permissioned blockchain networks are sometimes thought of as inherently secure, it is still possible for corrupted blocks to be stored to a blockchain during the commitment phase. By detecting corrupted or changed blocks, network security and integrity are preserved. No new data is stored in blocks of a blockchain. In a blockchain network, each peer includes a shared ledger, which includes an ordered sequence of transactions that are usually grouped into blocks for efficient dissemination. In some permissioned blockchain networks, there is a set of peers dedicated to order and batch the transactions into blocks (hereafter "ordering peers" or "orderers") and distribute them to the remaining peers (hereafter "peripheral peers") of the blockchain network.
The distribution is done by having the peripheral peers pull the blocks from the ordering peers, by having the ordering peers send the blocks to the peripheral peers, by an approach that involves one or both of the previous approaches, or by an approach that involves a previous approach but also has the peripheral peers send each other the blocks rather than receive them directly from the ordering peers. In most blockchain networks, the transactions sent by the blockchain network participants are signed with asymmetric cryptography, such as (but not limited to) RSA or ECDSA signature schemes. Since the ordering peers lack private keys, they cannot mutate the transactions sent to be ordered while preserving the validity of the signature, because that requires significant computational power. A threat model to permissioned blockchain networks may consider either malicious ordering peers or malicious peripheral peers. Malicious ordering peers may collude with other ordering peers, present different blocks to peripheral peers, and even collude with peripheral peers (depending on their threat model) on altering their protocol. There is not an upper bound on the percentage of malicious ordering peers. Peripheral peers may be considered honest if they cooperate fully with other peripheral peers, and adhere to the protocols. Peripheral peers may be considered malicious if they run a modified version of the protocol and attempt to prevent honest peers from discovering that they do not conform to the protocol the honest peers execute. In Hyperledger Fabric v0.6, the architecture was built around a small core of validator peers connected in a full mesh topology which executed a byzantine consensus protocol (i.e., PBFT), and all peers periodically exchanged block hashes via a checkpoint mechanism. The validator peers could use these hashes if they collected f+1 identical hashes of a given sequence number to safely synchronize their own ledger from other peers that published the corresponding hashes after doing backward hash chain validation to ensure the integrity of the block sequence fetched from remote validator peers. The assumption in Hyperledger Fabric v0.6 was that f is a complete upper bound on the number of byzantine validator peers, so if a validator node received f+1 attestations of the same hash for a given block sequence, it would not reach a forked world state. However, Hyperledger Fabric's v0.6 architecture is fundamentally different from the architecture the present application describes, namely, it is a homogeneous one and as such it doesn't fit use cases in which there are many peers deployed around a central core of peers which cuts the blocks. In such a layout, the peripheral peers cannot use the mechanism implemented in Hyperledger Fabric v0.6 as a black box, since even if there is an upper bound on the percentage of the malicious peripheral peers, collecting f+1 attestations of the same block hash H(i) means nothing because, since the blocks aren't cut by peripheral peers, there could be a different group of f+1 peripheral peers which collected attestations on a different block hash H(j)≠H(i), and thus a chain fork would occur without detection. Additionally, even if the Hyperledger Fabric v0.6 mechanism had been used by the peripheral peers, a fork could still occur without early detection. The Hyperledger Fabric v0.6 protocol and implementation did not relay information transitively between peers but had a point-to-point dissemination only.
Thus, it is not practical in large scale deployments, whereas the present application suggests the use of a gossip protocol combined with signed messages. This allows propagating data in an efficient and scalable manner and adds an ability to detect and expose peripheral peers which voted for two different hashes, and thus disincentivizes such behavior. The goal of the present application is to detect forking attacks, even if all ordering peers are malicious, as long as they don't coerce the peripheral peers into adopting different blocks. While the ordering peers cannot mutate the transactions that would appear in the blocks (since they lack the private keys of the transaction submitters), they can still harm the integrity of the blockchain by causing chain forks by sending different blocks to different peripheral peers. This may sabotage client applications that read the transactions in the shared ledger by making different clients see a divergent view of the blockchain data. With a crash fault tolerant (CFT) ordering service, the ability to detect state divergence is a critical and important property. It may allow publishing an alerting event for an external monitoring system. Moreover, byzantine fault tolerant (BFT) ordering services may mitigate the risk of a chain fork only as long as the number of adversarial ordering service peers does not grow beyond a threshold of one third of the total ordering service network size. The disclosure herein introduces methods for detecting attempts of a chain fork by peers of the blockchain network100, under several assumptions. First, the blocks created by the non-malicious (honest) ordering peers each have a monotonically increasing and successive sequence. The first block is sequenced 0, the next one created is sequenced 1, and so forth. By not creating a block of a specific sequence, a malicious ordering node cannot create a chain fork, because it is equivalent to an honest ordering node that has a crash failure and then recovers. Second, the ordering peers cryptographically sign the blocks they output to the network and each block carries with it an identifier of the node such as (but not limited to) an x509 certificate. The signature may be verified using the identifier, and the identifier itself can be verified by some identity infrastructure such as (but not limited to) a certificate authority validation chain starting at the identifier of the node and ending at a certificate authority that the verifier node trusts. Third, the peripheral peers may all use the same cryptographic hash function H. FIG.1Aillustrates a block diagram of a system for processing a new block, according to example embodiments. Referring toFIG.1A, the network100includes an orderer peer104, a peripheral peer108, other peripheral peers112, and a shared ledger132. The network100is a permissioned blockchain network, such as a Hyperledger Fabric blockchain network. The protocol herein uses thresholds of a total node count of the blockchain network100, which implies that the peers are countable and that the number of peers is known. In a public blockchain network such as bitcoin, the number of nodes is never known, and a threshold on the percentage of byzantine nodes (denoted herein as f) doesn't exist because a malicious party can join as many nodes as it wants into the blockchain network by generating identities for its nodes, and run more than f nodes on its own.
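The per-block checks implied by the assumptions above (a monotonically increasing, successive sequence; an orderer signature; and a common hash function H) can be illustrated with a short sketch. The Python fragment below is an assumption-laden illustration: SHA-256 stands in for the common hash function H, and signature verification is left as a stub because the disclosure does not fix a particular signature scheme (RSA, ECDSA, etc.).

    import hashlib

    # Minimal sketch of the per-block checks implied by the stated assumptions.

    def block_hash(block_bytes):
        # SHA-256 is assumed here as the common hash function H shared by all
        # peripheral peers; the disclosure only requires that H be common.
        return hashlib.sha256(block_bytes).hexdigest()

    def accept_block(block_bytes, sequence, last_sequence, signature_valid):
        # Honest ordering peers emit successive sequence numbers starting from 0.
        if sequence != last_sequence + 1:
            return False, "unexpected sequence number"
        # signature_valid is a stub standing in for verification against the
        # orderer identifier (e.g., an x509 certificate chain).
        if not signature_valid:
            return False, "orderer signature failed verification"
        return True, block_hash(block_bytes)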
The orderer peer104receives endorsed blockchain transactions from other peers within the blockchain network100and gathers groups of transactions into new blocks116. Although one orderer peer104is illustrated, it should be understood that there may be any number of orderer peers104in the blockchain network100. One orderer peer104is shown in order to simplify the description of the network100and aid in understanding of the disclosed processes. When the orderer peer104creates a new block116, it transfers the new block116to peripheral peers108,112of the blockchain network100. Peripheral peers108,112are all other peers of the blockchain network100that are not orderer peers104, and may be generally considered to be blockchain peers. In the disclosed processes, all peripheral peers108,112of the network100operate the disclosed processes in parallel. However, the description and processes focus on what happens within an individual peripheral peer (peripheral peer108) and how it cooperates with the other peripheral peers112in the network. It should therefore be understood that the operations discussed with respect to peripheral peer108are at the same time occurring in each of the peripheral peers112. There may be any number of peripheral peers108,112in the blockchain network100. In response to receiving the new block116, the peripheral peer108calculates a hash120of the new block116. At the same time, the peripheral peer108requests hashes of the new block116from a majority of the peripheral peers112in the blockchain network100. Each of the other peripheral peers112does the same. The peripheral peer108receives the requested hashes128from the majority of other peripheral peers112, and compares the requested hashes128to the calculated hash120. If all of the requested hashes128are the same as the calculated hash120, then the new block116is valid and the peripheral peer108includes the new block as a committed block136stored to a shared ledger132of the blockchain network100. If one or more of the requested hashes128are not the same as the calculated hash120, then the orderer peer104that created the new block116is possibly malicious. The peripheral peer108next verifies whether the orderer peer104is malicious by requesting the new block from each of the peripheral peers112that provided a requested hash128that miscompared with the calculated hash120. Those peripheral peers112provide the requested block124to the peripheral peer108, which then compares the requested block124to the new block116the peripheral peer108received. If the requested blocks124are not the same as the new block116, then the orderer peer104that created the new block116is a malicious orderer peer104. Honest peripheral peers108,112(that do not participate in the ordering of the transactions) have no incentive to lie to each other about the blocks they receive from the ordering peers104, because it is of interest to the peripheral peers108,112to have the same shared ledger132contents since they do not collude with the ordering peers104. Also, each peer could be asked to prove its assertion about a block by sending the block itself, thus exposing its lie, since the block is signed by the ordering node(s)104. Therefore, in any environment in which the peripheral peers108,112can lie about the blocks they receive from the ordering peers104, a separate protocol that requests a proof by receiving these blocks from peripheral peers108,112may be run alongside the protocol outlined herein.
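A minimal sketch of the per-block comparison just described follows, assuming SHA-256 as the common hash function H and a hypothetical PeerClient interface standing in for however a peripheral peer queries other peripheral peers (gossip, RPC, etc.); it is an illustration under those assumptions, not the protocol's actual implementation.

package forkdetect

import (
	"bytes"
	"crypto/sha256"
	"errors"
)

// PeerClient abstracts how one peripheral peer asks another for its hash of a
// block with a given sequence number, or for the block itself. The interface
// and its method names are illustrative stand-ins for the real transport.
type PeerClient interface {
	HashOfBlock(seq uint64) ([32]byte, error)
	Block(seq uint64) ([]byte, error)
}

var ErrForkDetected = errors.New("chain fork detected: orderer sent divergent blocks")

// CheckNewBlock mirrors the single-block flow: the locally computed hash is
// compared against the hashes reported by a majority of peripheral peers. If a
// reported hash differs, the corresponding block is fetched and compared to
// decide whether the ordering peer actually equivocated.
func CheckNewBlock(seq uint64, block []byte, majority []PeerClient) error {
	local := sha256.Sum256(block)
	for _, peer := range majority {
		remote, err := peer.HashOfBlock(seq)
		if err != nil {
			return err
		}
		if remote == local {
			continue
		}
		// The hashes differ: obtain the remote block to confirm divergence.
		other, err := peer.Block(seq)
		if err != nil {
			return err
		}
		if !bytes.Equal(other, block) {
			return ErrForkDetected // do not commit; cease committing new blocks
		}
	}
	return nil // all consulted hashes matched; the block may be committed to the shared ledger
}

In this sketch a single mismatching hash triggers retrieval of that peer's block, and only if the retrieved block itself differs is the orderer treated as having equivocated, mirroring the verification step above.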
In the case of malicious peripheral peers108,112, they may lie to honest peers and possibly collude with the ordering peers104or other peripheral peers108,112to cause different honest peers to adopt blocks that contain different transactions, or blocks with the same transactions but in a different order. It is considered acceptable that malicious peers adopt different blocks than honest peers, but all honest peers must adopt the same blocks. The number of malicious peripheral peers108,112in the blockchain network cannot exceed a certain (publicly known) percentage of the total peripheral peers. Specifically, an upper bound on the number of possible malicious peripheral peers108,112in the network is known and is denoted herein as f. The peripheral peers108,112all share a common number B that is known to everyone in the blockchain network100. Peers arrange blocks received from the ordering peers104in batches of size B. If a peer has not received a block bi but did receive a block bj for some j>i, it is guaranteed to receive bi eventually from either an ordering peer104or a peripheral peer108,112. In addition, the peripheral peers108,112all share a common duration of time Tout that is known to all entities within the blockchain network100. Data structures of the present application may reside in memory, on a disk, on any type of storage, or on a mix of storage devices. The peripheral peers108,112all maintain the blocks in a successive and continuous ordered list of blocks starting from 0 up to the latest block received. The hashes of the blocks of the list are used as leaves in a merkle tree154, and each time a block is received and validated, the merkle tree154is reconstructed and recomputed at each peripheral peer108,112. The peripheral peers108,112continually notify each other of their latest validated and non-validated block sequences. A validated block is considered a block that has been stored to the shared ledger132and is a leaf in the merkle tree154, and a non-validated block is one that is not. Since blocks enter the shared ledger132in order, if bj is a validated block, then ∀i∈{0 . . . j}: bi is also validated. Peers consult with a threshold count (hereafter t) of other peripheral peers108,112in order to withstand scenarios in which either an ordering peer104publishes different blocks to different peers, or malicious peripheral peers108,112(if applicable) do not conform to their protocol in order to cause honest peers to adopt divergent sets of blocks. The threshold count t that a peer needs to consult with depends on the threat model. For honest peripheral peers108,112, the threshold count t may include at least 50% of the total peripheral peers108,112out of all peripheral peers108,112, which means that, including the peer itself, the count is at least 50%+1. For malicious peripheral peers108,112(reminder: the upper bound on the malicious peer count is denoted as f), the threshold count t may include at least 50%+f of the total peripheral peer108,112count out of all peripheral peers108,112, which means that, including the peer itself, the count is at least 50%+f+1. Peers do not need to directly communicate with each other to collect a threshold count of hashes, but can also sign the data they want to publish and propagate it among the peripheral peers108,112in a dissemination protocol such as (but not limited to) gossip, broadcast, etc. As described previously, peers arrange blocks received from the ordering service in batches of size B.
If a peer has not received a block bi but did receive a block bj for some j>i, it is guaranteed to receive bi eventually from either an ordering peer104or a peripheral peer108,112. It follows that if a peer received all blocks in a batch of blocks bi·B, bi·B+1, . . . b(i+1)·B−1 except for a set of blocks bj, . . . bj+k s.t. j+k<(i+1)·B−1, it will obtain the missing blocks eventually. However, if there is a consecutive set of missing indices the last of which is (i+1)·B−1, there is no guarantee that the peer would receive them at all, since it may be that this is the last batch being received and no new blocks have been created by any ordering peer104. In this case, Tout denotes the time limit a peer waits until it decides to perform step (2) of the protocol. The protocol itself is described algorithmically as follows:
1. On reception of B blocks bi, bi+1, . . . bi+B−1 by peripheral peer p: consult t peripheral peers (denote this set of peers Q) about their non-validated block sequences and validated block sequences.
a. If t peers with a validated sequence of blocks b′i, b′i+1, . . . b′i+B−1 were not found, retrieve from t peers the hashes H(bi), H(bi+1), . . . H(bi+B−1) that they have received, if applicable. If ∃j, i≤j≤i+B−1, s.t. H(bj)≠H(b′j), then a fork has been detected. If such t hashes have not been collected, go back to step (1).
b. As explained, if there is a node q∈Q with a sequence of blocks bi, bi+1, . . . bi+B−1, it means it has constructed a merkle tree that includes these blocks in its lowest (leaf) level. Obtain from q the root hash RBq of a merkle tree whose leaves are the hashes H(bi), H(bi+1), . . . H(bi+B−1) (as explained, q maintains such a merkle tree, and since RBq depends solely on bi, bi+1, . . . bi+B−1, it will stay the same no matter q's ledger height).
c. Construct a temporary merkle tree having leaves that are bi, bi+1, . . . bi+B−1 and whose root is rB, and compare rB with RBq ∀q∈Q.
i. If ∃q∈Q s.t. rB≠RBq, it means there is some block bj, i≤j≤i+B−1, in p that is different from the corresponding block in q, and thus a chain fork has been successfully detected.
ii. Else, consider bi, bi+1, . . . bi+B−1 validated, commit them to the shared ledger, and update the merkle tree.
2. If B blocks are not received from the blockchain network100 within a timely manner Tout:
a. Let bi, bi+1, . . . bi+k, s.t. k<B−1, be the last blocks received from the blockchain network100.
b. Define blocks zi+k+1, . . . zi+B−1 as the blocks that are missing from the last batch, and denote their hashes as 0.
c. Perform the protocol of step (1), consulting t peripheral peers, with the blocks bi, bi+1, . . . bi+k, zi+k+1, . . . zi+B−1, but when contacting the other peers, specify the indices i+k, . . . i+B−1 in case the contacted peer q has received new blocks at the time of the query.
Note: Step 1 may also be performed by sending a merkle tree root whose leaves are the hashes of the blocks b′i, b′i+1, . . . b′i+B−1, but for simplicity and easy distinguishability between validated blocks, which are committed and have a merkle tree built for them, and non-validated blocks, the protocol does not use a merkle tree for step 1a. Also, the merkle tree method may be replaced with a cumulative hashing that is defined in the following way: H(H(bi)∥H(bi+1)∥ . . . ∥H(bi+B−1)), or by a similar method. As peripheral peers consult each other about their block hashes, they can either adopt their block hashes, or detect a chain forking attempt.
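As one illustration of steps (1) and (2), the following sketch uses the cumulative-hashing alternative mentioned in the note above rather than a merkle tree, assumes SHA-256 as H, and represents missing tail blocks by all-zero hashes as in step (2b); the threshold helper reflects the 50%+f rule discussed earlier. The names, types, and the assumption that the caller has already collected digests from t peers are illustrative, not the claimed implementation.

package forkdetect

import (
	"crypto/sha256"
	"errors"
)

// Threshold returns the number of other peripheral peers t that a peer
// consults under the malicious-peer threat model (roughly 50%+f of the n
// peripheral peers); with f set to 0 it reduces to the honest-peer case.
func Threshold(n, f int) int {
	return n/2 + f
}

// BatchDigest computes the cumulative hash H(H(b_i) || H(b_{i+1}) || ... ||
// H(b_{i+B-1})) over a batch. A nil entry stands for a block missing from the
// tail of the batch (the timeout case) and contributes an all-zero hash,
// matching the "denote their hashes as 0" rule of step (2b).
func BatchDigest(batch [][]byte) [32]byte {
	h := sha256.New()
	for _, blk := range batch {
		var leaf [32]byte // zero hash for a missing block
		if blk != nil {
			leaf = sha256.Sum256(blk)
		}
		h.Write(leaf[:])
	}
	var out [32]byte
	copy(out[:], h.Sum(nil))
	return out
}

// CheckBatch compares the locally computed batch digest against the digests
// collected from the t consulted peers; any mismatch is reported as a fork.
func CheckBatch(batch [][]byte, consulted [][32]byte) error {
	local := BatchDigest(batch)
	for _, remote := range consulted {
		if remote != local {
			return errors.New("batch digest mismatch: possible chain fork")
		}
	}
	return nil
}

A mismatch here corresponds to step (1c)(i), and a clean pass corresponds to step (1c)(ii), after which the batch would be committed and the peer's merkle tree (or running digest) updated.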
The blocks may be modeled as values that peripheral peers input into the protocol, and the output is either the blocks they propose or an event of a forking attempt. The disclosed protocol fulfills the properties of abortable consensus. Uniform validity teaches: "If a process decides v then some process previously proposed v". Therefore, if a peripheral peer adopts a certain block, it means that there are t other peripheral peers that also proposed that block either in the past (it was validated) or in the current round (the block was not validated before by any peripheral peer). Agreement teaches: "Correct processes do not decide different values". Therefore, assume by contradiction that there are two peripheral peers p and q such that p adopted block bi with hash Hp and q adopted block bi with hash Hq, where Hp≠Hq. Since every peripheral peer consults with at least 50%+f other peripheral peers, there is an honest peripheral peer r that both p and q consulted with. From the assumption and the protocol, it follows that r communicated to q that its hash of bi is Hq and communicated to p that its hash of bi is Hp. Therefore, r does not conform to the protocol and hence is malicious, which is a contradiction. Termination teaches: "Eventually all correct processes either decide or abort". It may be easily derived from the protocol that, for each block a peripheral peer consults with other peers about, it is either informed about only the hash it computed for the block itself, or it is informed about a different hash and then aborts the protocol because a chain split attempt was detected. α-Abortability teaches: "There exists an α<1 such that for any failure pattern in which most processes are correct, the probability that there exists some process that aborts in a run with the failure pattern is at most α". Denote the number of peripheral peers as n and the number of malicious peripheral peers as f, such that 2f<n. Let p be a peripheral peer. For simplicity of calculation, assume that it consults with exactly (instead of at least) n/2+f other peripheral peers about the blocks it possesses. The probability of selecting only honest peripheral peers is phonest=C(n−f−1, n/2+f−1)/C(n−1, n/2+f−1), where C(a, b) denotes the binomial coefficient "a choose b". Since every peripheral peer selects independently, the probability that none of the honest peripheral peers selects any malicious peripheral peer for a given block batch is (phonest)^(n−f), which means the probability α that some process aborts in a run is at most α=1−(phonest)^(n−f). FIG.1Billustrates a block diagram of a system for processing a new block sequence, according to example embodiments. Referring toFIG.1B, the network150includes the same peers104,108,112and shared ledger132described with respect toFIG.1A, but represents a system150where a sequence of blocks170is processed instead of a single block116. A sequence of blocks170is a group of two or more consecutive blocks. To make the protocol more efficient, network bandwidth should be conserved and the amount of data transferred should preferably be minimized. To that end, the blocks are arranged into batches of some globally known batch size B, and for every such batch of blocks or sequence of blocks170, a merkle tree154is created, where the leaf nodes of the merkle tree154are the block hashes of the batch. In order for two peripheral peers108,112to compare all hashes of the blocks in the batch, it is only necessary to compare the root node hash158of the merkle tree154.
Assuming the hash function of the merkle tree is collision resistant, all block hashes of the batch are the same if and only if the merkle tree root hashes158in possession of the two peripheral peers108,112are equal. This saves network bandwidth and is more efficient than comparing one block at a time as inFIG.1A, because the same merkle tree root node hash158may be sent to all the peripheral peers108,112in the blockchain network150. However, there is a corner case that needs to be addressed: since a blockchain increments blocks one by one, and not in batches, it may be that too much time (denoted as Tout herein) has passed, yet the blockchain network150may not have enough blocks to fill in a batch of B blocks. In such a case, the peripheral peers108,112simply fill in the remaining vacant places in the batch with hashes of zeros, compute a merkle tree root node hash158, and send it to the peers along with the indices that were zeros instead of actual block hashes. Each peripheral peer108,112includes a merkle tree154. The merkle tree154includes leaf nodes that each store a hash for a block. The merkle tree154has a root node, which stores a root node hash158for the entire merkle tree154. Thus, by comparing root node hashes158, a peripheral peer108,112may determine that two or more merkle trees154are identical. If the merkle trees154are constructed on hashes of blocks that have been validated (committed), then when receiving blocks that have not been validated, their hashes may be sent and consulted with peripheral peers108,112in the same manner as before. Otherwise, if the merkle trees154are constructed on hashes of blocks regardless of their validation, then when receiving blocks that have not been validated, the merkle tree154needs to be constantly updated, along with its root node hash158. The peer also sends the range of the consecutive prefix of blocks that are not zeros (in order for the other peripheral peer108,112to understand which leaves correspond to blocks that are not included in the merkle tree154construction). In response to receiving the sequence of blocks170, the peripheral peer108adds the sequence of blocks170to its own merkle tree154by calculating hashes for each block of the sequence of blocks170and storing the calculated hashes to leaf nodes of its own merkle tree154. At the same time, the peripheral peer108requests, from a majority of the peripheral peers112in the blockchain network100, the root node hashes of their own merkle trees154. Recall that each peripheral peer108,112is executing the steps in parallel, including updating its own merkle tree154with hashes from the sequence of blocks170. The peripheral peer108receives the requested root node hashes162from the majority of other peripheral peers112, and compares the requested root node hashes162to its own root node hash158. If all of the requested root node hashes162are the same as the root node hash158of the peripheral peer108, then the sequence of blocks170is valid and the peripheral peer108includes the sequence of blocks170as committed blocks136stored to the shared ledger132of the blockchain network100. If one or more of the requested root node hashes162are not the same as the root node hash158of the peripheral peer108, then the orderer peer104that created the sequence of blocks170is possibly malicious.
The peripheral peer108next verifies whether the orderer peer104is malicious by requesting the sequence of blocks from each of the peripheral peers112that provided a requested root node hash162that miscompared with the root node hash158of the peripheral peer108. Those peripheral peers112provide the requested sequence of blocks166to the peripheral peer108, who then compares the requested sequence of blocks166to the sequence of blocks170the peripheral peer108received. If the requested sequence of blocks166are not the same as the sequence of blocks170, then the orderer peer104that created the sequence of blocks170is a malicious orderer peer104. FIG.2Aillustrates a blockchain architecture configuration200, according to example embodiments. Referring toFIG.2A, the blockchain architecture200may include certain blockchain elements, for example, a group of blockchain nodes202. The blockchain nodes202may include one or more nodes204-210(these four nodes are depicted by example only). These nodes participate in a number of activities, such as blockchain transaction addition and validation process (consensus). One or more of the blockchain nodes204-210may endorse transactions based on endorsement policy and may provide an ordering service for all blockchain nodes in the architecture200. A blockchain node may initiate a blockchain authentication and seek to write to a blockchain immutable ledger stored in blockchain layer216, a copy of which may also be stored on the underpinning physical infrastructure214. The blockchain configuration may include one or more applications224which are linked to application programming interfaces (APIs)222to access and execute stored program/application code220(e.g., chaincode, smart contracts, etc.) which can be created according to a customized configuration sought by participants and can maintain their own state, control their own assets, and receive external information. This can be deployed as a transaction and installed, via appending to the distributed ledger, on all blockchain nodes204-210. The blockchain base or platform212may include various layers of blockchain data, services (e.g., cryptographic trust services, virtual execution environment, etc.), and underpinning physical computer infrastructure that may be used to receive and store new transactions and provide access to auditors which are seeking to access data entries. The blockchain layer216may expose an interface that provides access to the virtual execution environment necessary to process the program code and engage the physical infrastructure214. Cryptographic trust services218may be used to verify transactions such as asset exchange transactions and keep information private. The blockchain architecture configuration ofFIG.2Amay process and execute program/application code220via one or more interfaces exposed, and services provided, by blockchain platform212. The code220may control blockchain assets. For example, the code220can store and transfer data, and may be executed by nodes204-210in the form of a smart contract and associated chaincode with conditions or other code elements subject to its execution. As a non-limiting example, smart contracts may be created to execute reminders, updates, and/or other notifications subject to the changes, updates, etc. The smart contracts can themselves be used to identify rules associated with authorization and access requirements and usage of the ledger. 
For example, the information226may include a new block or a new block sequence from an orderer peer, and may be processed by one or more processing entities (e.g., virtual machines) included in the blockchain layer216. The result228may include a request to other peripheral peers to provide hashes and blocks in order to make comparisons to detect malicious behavior. The physical infrastructure214may be utilized to retrieve any of the data or information described herein. A smart contract may be created via a high-level application and programming language, and then written to a block in the blockchain. The smart contract may include executable code which is registered, stored, and/or replicated with a blockchain (e.g., distributed network of blockchain peers). A transaction is an execution of the smart contract code which can be performed in response to conditions associated with the smart contract being satisfied. The executing of the smart contract may trigger a trusted modification(s) to a state of a digital blockchain ledger. The modification(s) to the blockchain ledger caused by the smart contract execution may be automatically replicated throughout the distributed network of blockchain peers through one or more consensus protocols. The smart contract may write data to the blockchain in the format of key-value pairs. Furthermore, the smart contract code can read the values stored in a blockchain and use them in application operations. The smart contract code can write the output of various logic operations into the blockchain. The code may be used to create a temporary data structure in a virtual machine or other computing platform. Data written to the blockchain can be public and/or can be encrypted and maintained as private. The temporary data that is used/generated by the smart contract is held in memory by the supplied execution environment, then deleted once the data needed for the blockchain is identified. A chaincode may include the code interpretation of a smart contract, with additional features. As described herein, the chaincode may be program code deployed on a computing network, where it is executed and validated by chain validators together during a consensus process. The chaincode receives a hash and retrieves from the blockchain a hash associated with the data template created by use of a previously stored feature extractor. If the hashes of the hash identifier and the hash created from the stored identifier template data match, then the chaincode sends an authorization key to the requested service. The chaincode may write to the blockchain data associated with the cryptographic details. FIG.2Billustrates an example of a blockchain transactional flow250between nodes of the blockchain in accordance with an example embodiment. Referring toFIG.2B, the transaction flow may include a transaction proposal291sent by an application client node260to an endorsing peer node281. The endorsing peer281may verify the client signature and execute a chaincode function to initiate the transaction. The output may include the chaincode results, a set of key/value versions that were read in the chaincode (read set), and the set of keys/values that were written in chaincode (write set). The proposal response292is sent back to the client260along with an endorsement signature, if approved. The client260assembles the endorsements into a transaction payload293and broadcasts it to an ordering service node284. 
The ordering service node284then delivers ordered transactions as blocks to all peers281-283on a channel. Before committal to the blockchain, each peer281-283may validate the transaction. For example, the peers may check the endorsement policy to ensure that the correct allotment of the specified peers have signed the results and authenticated the signatures against the transaction payload293. Referring again toFIG.2B, the client node260initiates the transaction291by constructing and sending a request to the peer node281, which is an endorser. The client260may include an application leveraging a supported software development kit (SDK), which utilizes an available API to generate a transaction proposal. The proposal is a request to invoke a chaincode function so that data can be read and/or written to the ledger (i.e., write new key value pairs for the assets). The SDK may serve as a shim to package the transaction proposal into a properly architected format (e.g., protocol buffer over a remote procedure call (RPC)) and take the client's cryptographic credentials to produce a unique signature for the transaction proposal. In response, the endorsing peer node281may verify (a) that the transaction proposal is well formed, (b) the transaction has not been submitted already in the past (replay-attack protection), (c) the signature is valid, and (d) that the submitter (client260, in the example) is properly authorized to perform the proposed operation on that channel. The endorsing peer node281may take the transaction proposal inputs as arguments to the invoked chaincode function. The chaincode is then executed against a current state database to produce transaction results including a response value, read set, and write set. However, no updates are made to the ledger at this point. In292, the set of values, along with the endorsing peer node's281signature is passed back as a proposal response292to the SDK of the client260which parses the payload for the application to consume. In response, the application of the client260inspects/verifies the endorsing peers signatures and compares the proposal responses to determine if the proposal response is the same. If the chaincode only queried the ledger, the application would inspect the query response and would typically not submit the transaction to the ordering node service284. If the client application intends to submit the transaction to the ordering node service284to update the ledger, the application determines if the specified endorsement policy has been fulfilled before submitting (i.e., did all peer nodes necessary for the transaction endorse the transaction). Here, the client may include only one of multiple parties to the transaction. In this case, each client may have their own endorsing node, and each endorsing node will need to endorse the transaction. The architecture is such that even if an application selects not to inspect responses or otherwise forwards an unendorsed transaction, the endorsement policy will still be enforced by peers and upheld at the commit validation phase. After successful inspection, in step293the client260assembles endorsements into a transaction and broadcasts the transaction proposal and response within a transaction message to the ordering node284. The transaction may contain the read/write sets, the endorsing peers signatures and a channel ID. 
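By way of illustration only, the endorsement checks (a)-(d) described above might be sketched as follows; the Proposal type, field names, and helper parameters are hypothetical stand-ins and do not correspond to the actual Hyperledger Fabric message formats or SDK APIs.

package endorsement

import (
	"crypto/ecdsa"
	"crypto/sha256"
	"errors"
)

// Proposal is a simplified stand-in for a transaction proposal, used only to
// illustrate the four checks an endorsing peer performs.
type Proposal struct {
	TxID      string
	ChannelID string
	Payload   []byte
	Signature []byte
	Submitter *ecdsa.PublicKey
}

// CheckProposal mirrors checks (a)-(d): well formed, not a replay, correctly
// signed, and submitted by an authorized party. The seenTxs set and the
// authorized callback are illustrative placeholders for the peer's real state
// and policy evaluation.
func CheckProposal(p *Proposal, seenTxs map[string]bool, authorized func(*Proposal) bool) error {
	// (a) the proposal is well formed
	if p == nil || p.TxID == "" || len(p.Payload) == 0 || p.Submitter == nil {
		return errors.New("malformed proposal")
	}
	// (b) replay-attack protection: the transaction has not been submitted before
	if seenTxs[p.TxID] {
		return errors.New("transaction already submitted")
	}
	// (c) the signature over the payload is valid
	digest := sha256.Sum256(p.Payload)
	if !ecdsa.VerifyASN1(p.Submitter, digest[:], p.Signature) {
		return errors.New("invalid submitter signature")
	}
	// (d) the submitter is authorized to perform the operation on this channel
	if !authorized(p) {
		return errors.New("submitter not authorized for channel " + p.ChannelID)
	}
	seenTxs[p.TxID] = true
	return nil
}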
The ordering node284does not need to inspect the entire content of a transaction in order to perform its operation, instead the ordering node284may simply receive transactions from all channels in the network, order them chronologically by channel, and create blocks of transactions per channel. The blocks of the transaction are delivered from the ordering node284to all peer nodes281-283on the channel. The transactions294within the block are validated to ensure any endorsement policy is fulfilled and to ensure that there have been no changes to ledger state for read set variables since the read set was generated by the transaction execution. Transactions in the block are tagged as being valid or invalid. Furthermore, in step295each peer node281-283appends the block to the channel's chain, and for each valid transaction the write sets are committed to current state database. An event is emitted, to notify the client application that the transaction (invocation) has been immutably appended to the chain, as well as to notify whether the transaction was validated or invalidated. FIG.3Aillustrates an example of a permissioned blockchain network300, which features a distributed, decentralized peer-to-peer architecture. In this example, a blockchain user302may initiate a transaction to the permissioned blockchain304. In this example, the transaction can be a deploy, invoke, or query, and may be issued through a client-side application leveraging an SDK, directly through an API, etc. Networks may provide access to a regulator306, such as an auditor. A blockchain network operator308manages member permissions, such as enrolling the regulator306as an “auditor” and the blockchain user302as a “client”. An auditor could be restricted only to querying the ledger whereas a client could be authorized to deploy, invoke, and query certain types of chaincode. A blockchain developer310can write chaincode and client-side applications. The blockchain developer310can deploy chaincode directly to the network through an interface. To include credentials from a traditional data source312in chaincode, the developer310could use an out-of-band connection to access the data. In this example, the blockchain user302connects to the permissioned blockchain304through a peer node314. Before proceeding with any transactions, the peer node314retrieves the user's enrollment and transaction certificates from a certificate authority316, which manages user roles and permissions. In some cases, blockchain users must possess these digital certificates in order to transact on the permissioned blockchain304. Meanwhile, a user attempting to utilize chaincode may be required to verify their credentials on the traditional data source312. To confirm the user's authorization, chaincode can use an out-of-band connection to this data through a traditional processing platform318. FIG.3Billustrates another example of a permissioned blockchain network320, which features a distributed, decentralized peer-to-peer architecture. In this example, a blockchain user322may submit a transaction to the permissioned blockchain324. In this example, the transaction can be a deploy, invoke, or query, and may be issued through a client-side application leveraging an SDK, directly through an API, etc. Networks may provide access to a regulator326, such as an auditor. A blockchain network operator328manages member permissions, such as enrolling the regulator326as an “auditor” and the blockchain user322as a “client”. 
An auditor could be restricted only to querying the ledger whereas a client could be authorized to deploy, invoke, and query certain types of chaincode. A blockchain developer330writes chaincode and client-side applications. The blockchain developer330can deploy chaincode directly to the network through an interface. To include credentials from a traditional data source332in chaincode, the developer330could use an out-of-band connection to access the data. In this example, the blockchain user322connects to the network through a peer node334. Before proceeding with any transactions, the peer node334retrieves the user's enrollment and transaction certificates from the certificate authority336. In some cases, blockchain users must possess these digital certificates in order to transact on the permissioned blockchain324. Meanwhile, a user attempting to utilize chaincode may be required to verify their credentials on the traditional data source332. To confirm the user's authorization, chaincode can use an out-of-band connection to this data through a traditional processing platform338. In some embodiments, the blockchain herein may be a permissionless blockchain. In contrast with permissioned blockchains which require permission to join, anyone can join a permissionless blockchain. For example, to join a permissionless blockchain a user may create a personal address and begin interacting with the network, by submitting transactions, and hence adding entries to the ledger. Additionally, all parties have the choice of running a node on the system and employing the mining protocols to help verify transactions. FIG.3Cillustrates a process350of a transaction being processed by a permissionless blockchain352including a plurality of nodes354. A sender356desires to send payment or some other form of value (e.g., a deed, medical records, a contract, a good, a service, or any other asset that can be encapsulated in a digital record) to a recipient358via the permissionless blockchain352. In one embodiment, each of the sender device356and the recipient device358may have digital wallets (associated with the blockchain352) that provide user interface controls and a display of transaction parameters. In response, the transaction is broadcast throughout the blockchain352to the nodes354. Depending on the blockchain's352network parameters the nodes verify360the transaction based on rules (which may be pre-defined or dynamically allocated) established by the permissionless blockchain352creators. For example, this may include verifying identities of the parties involved, etc. The transaction may be verified immediately or it may be placed in a queue with other transactions and the nodes354determine if the transactions are valid based on a set of network rules. In structure362, valid transactions are formed into a block and sealed with a lock (hash). This process may be performed by mining nodes among the nodes354. Mining nodes may utilize additional software specifically for mining and creating blocks for the permissionless blockchain352. Each block may be identified by a hash (e.g., 256 bit number, etc.) created using an algorithm agreed upon by the network. Each block may include a header, a pointer or reference to a hash of a previous block's header in the chain, and a group of valid transactions. The reference to the previous block's hash is associated with the creation of the secure independent chain of blocks. Before blocks can be added to the blockchain, the blocks must be validated. 
Validation for the permissionless blockchain352may include a proof-of-work (PoW) which is a solution to a puzzle derived from the block's header. Although not shown in the example ofFIG.3C, another process for validating a block is proof-of-stake. Unlike the proof-of-work, where the algorithm rewards miners who solve mathematical problems, with the proof of stake, a creator of a new block is chosen in a deterministic way, depending on its wealth, also defined as “stake.” Then, a similar proof is performed by the selected/chosen node. With mining364, nodes try to solve the block by making incremental changes to one variable until the solution satisfies a network-wide target. This creates the PoW thereby ensuring correct answers. In other words, a potential solution must prove that computing resources were drained in solving the problem. In some types of permissionless blockchains, miners may be rewarded with value (e.g., coins, etc.) for correctly mining a block. Here, the PoW process, alongside the chaining of blocks, makes modifications of the blockchain extremely difficult, as an attacker must modify all subsequent blocks in order for the modifications of one block to be accepted. Furthermore, as new blocks are mined, the difficulty of modifying a block increases, and the number of subsequent blocks increases. With distribution366, the successfully validated block is distributed through the permissionless blockchain352and all nodes354add the block to a majority chain which is the permissionless blockchain's352auditable ledger. Furthermore, the value in the transaction submitted by the sender356is deposited or otherwise transferred to the digital wallet of the recipient device358. FIG.4illustrates a system messaging diagram400for performing block integrity checking, according to example embodiments. Referring toFIG.4, the system diagram400includes an orderer peer410, a peripheral peer420and other peripheral peers430. The orderer peer410creates a new block415, and distributes the new block416to the peripheral peer420and other peripheral peers430. The peripheral peer420calculates a hash of the new block425A in parallel with the other peripheral peers430calculating a hash of the new block425N. The peripheral peer420transfers a request426to the other peripheral peers430to provide hashes of the new block416. As described with respect toFIG.1A, the other peripheral peers430conduct the same processes in parallel. Thus, the same processes performed by the other peripheral peers430are represented by an “N” suffix, while those processes performed by the peripheral peer420are represented by an “A” suffix. The request hashes426and requested hashes427are both represented with bidirectional arrows to additionally represent this parallelism. However, only the blocks executed by the peripheral peer420are specifically discussed herein. The other peripheral peers430constitute a majority of the peripheral peers in the network. In response to receiving the request426, each of the other peripheral peers430provides the requested hashes427to the peripheral peer420. The peripheral peer420compares the calculated hash to the requested hashes435A, and determines if all of the requested hashes are identical to the calculated hash440A. If all of the requested hashes are identical to the calculated hash440A, the peripheral peer420commits the new block to the blockchain network445A. Next, the peripheral peer420determines that all of the requested hashes are not identical to the calculated hash440A (i.e. 
a miscompare has occurred)450A. In this case, the peripheral peer420ceases committing new blocks to the blockchain network455A. FIG.5Aillustrates a flow diagram500of an example method of verifying a new block in a blockchain, according to example embodiments. Referring toFIG.5A, the method500may include one or more of the following steps. At block504, an orderer peer creates a new block, and distributes the new block to peers of the blockchain network. At block508, peripheral peers of the blockchain network calculate a hash of the new block. At block512, each of the peripheral peers obtains new block hashes from a majority of peripheral peers of the same blockchain network. At block516, each peripheral peer compares the calculated new block hash with the obtained hashes from the majority of peripheral peers. At block520, if any of the hashes miscompared, each of the peripheral peers that observed a miscompare obtains blocks corresponding to the miscompared hashes from those peripheral peers that supplied the miscompared hashes. At block524, the peripheral peer ceases committing blocks if any of the obtained blocks miscompared to the new block. This signifies that a malicious orderer peer created a bad block. FIG.5Billustrates a flow diagram530of an example method of verifying a sequence of blocks in a blockchain, according to example embodiments. Referring toFIG.5B, the method530may include one or more of the following steps. At block534, an orderer peer creates a sequence of blocks, and distributes the new sequence of blocks to peers of the blockchain network. At block538, peripheral peers of the blockchain network calculate hashes of the new sequence of blocks. At block542, each of the peripheral peers adds the calculated hashes to its own merkle tree, where each leaf node of the merkle tree stores a hash of a different block. At block546, each peripheral peer requests merkle tree root node hashes from a majority of other peripheral peers of the blockchain network. Each peripheral peer compares its own merkle tree root node hash to the merkle tree root node hashes it receives from the other peripheral peers it sent the request to. At block550, if any of the root node hashes miscompared, each of the peripheral peers that observed a miscompare obtains sequences of blocks corresponding to the miscompared root node hashes from those peripheral peers that supplied the miscompared root node hashes. At block554, the peripheral peer ceases committing blocks if any of the obtained sequences of blocks miscompared to the new sequence of blocks. This signifies that a malicious orderer peer created a bad sequence of blocks. FIG.5Cillustrates a flow diagram560of an example method of preventing vulnerabilities in a blockchain, according to example embodiments. Referring toFIG.5C, the method560may include one or more of the following steps. At block564, crosslink transactions are submitted. A first crosslink transaction is submitted for addition to a first blockchain and a second corresponding crosslink transaction is submitted for addition to a second blockchain, e.g., by a computing device associated with a party or user of the first and second blockchains. For example, the first crosslink transaction may be submitted to nodes associated with blockchain A for addition to blockchain A and the second crosslink transaction may be submitted to nodes associated with blockchain B for addition to blockchain B.
In some aspects, for example, the user may submit the corresponding crosslink transactions when the user anticipates that one of blockchains A and B may become quiescent in the future. In some aspects, for example, the corresponding crosslink transactions may be submitted at the same time. In some aspects, for example, the first and second crosslink transactions may be submitted to the first and second blockchains at about the same time, e.g., within a few seconds, minutes, or hours of each other. In some aspects, one of the crosslink transactions may first be submitted to the blockchain that has a higher rate of block additions followed by the submission of the other crosslink transaction to the blockchain that has a lower rate of block additions. For example, the busiest blockchain may receive the submission of the crosslink transaction first followed by the less busy blockchain. In some aspects, for example, once the crosslink transaction has been confirmed as present in a new block on the busiest blockchain, the corresponding crosslink transaction may be submitted to the less busy blockchain. In some aspects, for example, this may be reversed where the crosslink transaction may be submitted to the less busy blockchain first followed by the busiest blockchain second. At block568, 1stand 2ndblockchains are queried. A computing device of a user, e.g., the same user or another user of one or both of blockchains A and B, may query the first blockchain, e.g., blockchain A for the first crosslink transaction. This user may query blockchain A, for example, in response to blockchain B entering a period of quiescence. For example, the user may initially determine that blockchain B has entered a period of quiescence and may know or identify that blockchains A and B have been crosslinked through crosslink transactions. For example, the user may determine that blockchains A and B have been crosslinked by querying blockchain B to see if there were any past crosslink transactions and identifying blockchain A as a blockchain having had a corresponding crosslink transaction to one found in blockchain B. The computing device of the user may identify the second blockchain, e.g., blockchain B, based on the queried first crosslink transaction, for example, by viewing the ID to blockchain B (FIG.4). The computing device of the user may query the second blockchain for the corresponding second crosslink transaction based on the identification of the second blockchain based on the first crosslink transaction. At block572, the crosslink transactions are compared. If the second crosslink transaction is present in the second blockchain, the second crosslink transaction may be compared to the first crosslink transaction to determine whether the second crosslink transaction corresponds to the first crosslink transaction. For example, transaction digests may be accessed or decoded using a public key of the user that submitted the crosslink transactions. If the transaction digests are decoded using the same public key, the crosslink transactions may be validated since it has been confirmed that the same user submitted both crosslink transactions. At block576, a check is made to determine if the 2ndblockchain should be invalidated. The computing device of the user may determine based on a result of the query whether the corresponding second crosslink transaction is or is not present. 
If the second crosslink transaction is not present in the second blockchain, at least a portion of the second blockchain may be invalidated. For example, the lack of the second crosslink transaction may be an indication that at least part of the second blockchain has been modified or tampered with. In some aspects, the user may utilize prior corresponding crosslink transactions of the first and second blockchains to validate at least a portion of the blockchain. For example, if a crosslink transaction is missing for blockchain B, the user may instead try to validate at least the portion of blockchain B ending at the corresponding crosslink transaction. At block580, the 2ndblockchain is validated. If the second crosslink transaction is determined to correspond to the first crosslink transaction based on the comparison, the second blockchain may be validated. Alternatively, if the second crosslink transaction is determined to not correspond to the first crosslink transaction, the second blockchain may be invalidated. For example, if the public key does not decode one or both of the transaction digests, the user will know that at least one of the crosslink transactions was not submitted by the same user and therefore that the second crosslink transaction is invalid as evidence of security and integrity on the second blockchain and therefore at least a portion of the second blockchain is invalidated. FIG.6Aillustrates an example system600that includes a physical infrastructure610configured to perform various operations according to example embodiments. Referring toFIG.6A, the physical infrastructure610includes a module612and a module614. The module614includes a blockchain620and a smart contract630(which may reside on the blockchain620), that may execute any of the operational steps608(in module612) included in any of the example embodiments. The steps/operations608may include one or more of the embodiments described or depicted and may represent output or written information that is written or read from one or more smart contracts630and/or blockchains620. The physical infrastructure610, the module612, and the module614may include one or more computers, servers, processors, memories, and/or wireless communication devices. Further, the module612and the module614may be a same module. FIG.6Billustrates another example system640configured to perform various operations according to example embodiments. Referring toFIG.6B, the system640includes a module612and a module614. The module614includes a blockchain620and a smart contract630(which may reside on the blockchain620), that may execute any of the operational steps608(in module612) included in any of the example embodiments. The steps/operations608may include one or more of the embodiments described or depicted and may represent output or written information that is written or read from one or more smart contracts630and/or blockchains620. The physical infrastructure610, the module612, and the module614may include one or more computers, servers, processors, memories, and/or wireless communication devices. Further, the module612and the module614may be a same module. FIG.6Cillustrates an example system configured to utilize a smart contract configuration among contracting parties and a mediating server configured to enforce the smart contract terms on the blockchain according to example embodiments. 
Referring toFIG.6C, the configuration650may represent a communication session, an asset transfer session or a process or procedure that is driven by a smart contract630which explicitly identifies one or more user devices652and/or656. The execution, operations and results of the smart contract execution may be managed by a server654. Content of the smart contract630may require digital signatures by one or more of the entities652and656which are parties to the smart contract transaction. The results of the smart contract execution may be written to a blockchain620as a blockchain transaction. The smart contract630resides on the blockchain620which may reside on one or more computers, servers, processors, memories, and/or wireless communication devices. FIG.6Dillustrates a system660including a blockchain, according to example embodiments. Referring to the example ofFIG.6D, an application programming interface (API) gateway662provides a common interface for accessing blockchain logic (e.g., smart contract630or other chaincode) and data (e.g., distributed ledger, etc.). In this example, the API gateway662is a common interface for performing transactions (invoke, queries, etc.) on the blockchain by connecting one or more entities652and656to a blockchain peer (i.e., server654). Here, the server654is a blockchain network peer component that holds a copy of the world state and a distributed ledger allowing clients652and656to query data on the world state as well as submit transactions into the blockchain network where, depending on the smart contract630and endorsement policy, endorsing peers will run the smart contracts630. The above embodiments may be implemented in hardware, in a computer program executed by a processor, in firmware, or in a combination of the above. A computer program may be embodied on a computer readable medium, such as a storage medium. For example, a computer program may reside in random access memory (“RAM”), flash memory, read-only memory (“ROM”), erasable programmable read-only memory (“EPROM”), electrically erasable programmable read-only memory (“EEPROM”), registers, hard disk, a removable disk, a compact disk read-only memory (“CD-ROM”), or any other form of storage medium known in the art. An exemplary storage medium may be coupled to the processor such that the processor may read information from, and write information to, the storage medium. In the alternative, the storage medium may be integral to the processor. The processor and the storage medium may reside in an application specific integrated circuit (“ASIC”). In the alternative, the processor and the storage medium may reside as discrete components. FIG.7Aillustrates a process700of a new block being added to a distributed ledger720, according to example embodiments, andFIG.7Billustrates contents of a new data block structure730for blockchain, according to example embodiments. Referring toFIG.7A, clients (not shown) may submit transactions to blockchain nodes711,712, and/or713. Clients may be instructions received from any source to enact activity on the blockchain720. As an example, clients may be applications that act on behalf of a requester, such as a device, person or entity to propose transactions for the blockchain. The plurality of blockchain peers (e.g., blockchain nodes711,712, and713) may maintain a state of the blockchain network and a copy of the distributed ledger720. 
Different types of blockchain nodes/peers may be present in the blockchain network, including endorsing peers which simulate and endorse transactions proposed by clients and committing peers which verify endorsements, validate transactions, and commit transactions to the distributed ledger720. In this example, the blockchain nodes711,712, and713may perform the role of endorser node, committer node, or both. The distributed ledger720includes a blockchain which stores immutable, sequenced records in blocks, and a state database724(current world state) maintaining a current state of the blockchain722. One distributed ledger720may exist per channel and each peer maintains its own copy of the distributed ledger720for each channel of which it is a member. The blockchain722is a transaction log, structured as hash-linked blocks where each block contains a sequence of N transactions. Blocks may include various components such as shown inFIG.7B. The linking of the blocks (shown by arrows inFIG.7A) may be generated by adding a hash of a prior block's header within a block header of a current block. In this way, all transactions on the blockchain722are sequenced and cryptographically linked together, preventing tampering with blockchain data without breaking the hash links. Furthermore, because of the links, the latest block in the blockchain722represents every transaction that has come before it. The blockchain722may be stored on a peer file system (local or attached storage), which supports an append-only blockchain workload. The current state of the blockchain722and the distributed ledger720may be stored in the state database724. Here, the current state data represents the latest values for all keys ever included in the chain transaction log of the blockchain722. Chaincode invocations execute transactions against the current state in the state database724. To make these chaincode interactions extremely efficient, the latest values of all keys are stored in the state database724. The state database724may include an indexed view into the transaction log of the blockchain722; it can therefore be regenerated from the chain at any time. The state database724may automatically get recovered (or generated if needed) upon peer startup, before transactions are accepted. Endorsing nodes receive transactions from clients and endorse the transaction based on simulated results. Endorsing nodes hold smart contracts which simulate the transaction proposals. When an endorsing node endorses a transaction, the endorsing node creates a transaction endorsement, which is a signed response from the endorsing node to the client application indicating the endorsement of the simulated transaction. The method of endorsing a transaction depends on an endorsement policy which may be specified within chaincode. An example of an endorsement policy is "the majority of endorsing peers must endorse the transaction". Different channels may have different endorsement policies. Endorsed transactions are forwarded by the client application to the ordering service710. The ordering service710accepts endorsed transactions, orders them into a block, and delivers the blocks to the committing peers. For example, the ordering service710may initiate a new block when a threshold of transactions has been reached, a timer times out, or another condition. In the example ofFIG.7A, blockchain node712is a committing peer that has received a new data block730for storage on blockchain720.
The first block in the blockchain may be referred to as a genesis block which includes information about the blockchain, its members, the data stored therein, etc. The ordering service710may be made up of a cluster of orderers. The ordering service710does not process transactions, smart contracts, or maintain the shared ledger. Rather, the ordering service710may accept the endorsed transactions and specify the order in which those transactions are committed to the distributed ledger720. The architecture of the blockchain network may be designed such that the specific implementation of 'ordering' (e.g., Solo, Kafka, BFT, etc.) becomes a pluggable component. Transactions are written to the distributed ledger720in a consistent order. The order of transactions is established to ensure that the updates to the state database724are valid when they are committed to the network. Unlike a cryptocurrency blockchain system (e.g., Bitcoin, etc.) where ordering occurs through the solving of a cryptographic puzzle, or mining, in this example the parties of the distributed ledger720may choose the ordering mechanism that best suits that network. When the ordering service710initializes a new data block730, the new data block730may be broadcast to committing peers (e.g., blockchain nodes711,712, and713). In response, each committing peer validates the transaction within the new data block730by checking to make sure that the read set and the write set still match the current world state in the state database724. Specifically, the committing peer can determine whether the read data that existed when the endorsers simulated the transaction is identical to the current world state in the state database724. When the committing peer validates the transaction, the transaction is written to the blockchain722on the distributed ledger720, and the state database724is updated with the write data from the read-write set. If a transaction fails, that is, if the committing peer finds that the read-write set does not match the current world state in the state database724, the transaction ordered into a block will still be included in that block, but it will be marked as invalid, and the state database724will not be updated. Referring toFIG.7B, a new data block730(also referred to as a data block) that is stored on the blockchain722of the distributed ledger720may include multiple data segments such as a block header740, block data750, and block metadata760. It should be appreciated that the various depicted blocks and their contents, such as new data block730and its contents shown inFIG.7B, are merely examples and are not meant to limit the scope of the example embodiments. The new data block730may store transactional information of N transaction(s) (e.g., 1, 10, 100, 500, 1000, 2000, 3000, etc.) within the block data750. The new data block730may also include a link to a previous block (e.g., on the blockchain722inFIG.7A) within the block header740. In particular, the block header740may include a hash of a previous block's header. The block header740may also include a unique block number, a hash of the block data750of the new data block730, and the like. The block number of the new data block730may be unique and assigned in various orders, such as an incremental/sequential order starting from zero. The block data750may store transactional information of each transaction that is recorded within the new data block730.
For example, the transaction data may include one or more of a type of the transaction, a version, a timestamp, a channel ID of the distributed ledger720, a transaction ID, an epoch, a payload visibility, a chaincode path (deploy tx), a chaincode name, a chaincode version, input (chaincode and functions), a client (creator) identity such as a public key and certificate, a signature of the client, identities of endorsers, endorser signatures, a proposal hash, chaincode events, response status, namespace, a read set (list of key and version read by the transaction, etc.), a write set (list of key and value, etc.), a start key, an end key, a list of keys, a Merkle tree query summary, and the like. The transaction data may be stored for each of the N transactions. In some embodiments, the block data750may also store new data762which adds additional information to the hash-linked chain of blocks in the blockchain722. The additional information includes one or more of the steps, features, processes and/or actions described or depicted herein. Accordingly, the new data762can be stored in an immutable log of blocks on the distributed ledger720. Some of the benefits of storing such new data762are reflected in the various embodiments disclosed and depicted herein. Although inFIG.7Bthe new data762is depicted in the block data750, it could also be located in the block header740or the block metadata760. The block metadata760may store multiple fields of metadata (e.g., as a byte array, etc.). Metadata fields may include a signature on block creation, a reference to a last configuration block, a transaction filter identifying valid and invalid transactions within the block, a last offset persisted of an ordering service that ordered the block, and the like. The signature, the last configuration block, and the orderer metadata may be added by the ordering service710. Meanwhile, a committer of the block (such as blockchain node712) may add validity/invalidity information based on an endorsement policy, verification of read/write sets, and the like. The transaction filter may include a byte array of a size equal to the number of transactions in the block data750and a validation code identifying whether a transaction was valid/invalid. FIG.7Cillustrates an embodiment of a blockchain770for digital content in accordance with the embodiments described herein. The digital content may include one or more files and associated information. The files may include media, images, video, audio, text, links, graphics, animations, web pages, documents, or other forms of digital content. The immutable, append-only aspects of the blockchain serve as a safeguard to protect the integrity, validity, and authenticity of the digital content, making it suitable for use in legal proceedings where admissibility rules apply or other settings where evidence is taken into consideration or where the presentation and use of digital information is otherwise of interest. In this case, the digital content may be referred to as digital evidence. The blockchain may be formed in various ways. In one embodiment, the digital content may be included in and accessed from the blockchain itself. For example, each block of the blockchain may store a hash value of reference information (e.g., header, value, etc.) along with the associated digital content. The hash value and associated digital content may then be encrypted together.
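Before continuing with the digital content blockchain ofFIG.7C, the three block segments ofFIG.7Bcan be summarized with an illustrative sketch. The Go types and field names below are assumptions for illustration only and do not represent an actual serialized block format:

package blocks

// Block groups the three segments described above: block header740,
// block data750, and block metadata760.
type Block struct {
	Header   Header
	Data     Data
	Metadata Metadata
}

type Header struct {
	Number       uint64 // unique block number
	PreviousHash []byte // hash of the previous block's header
	DataHash     []byte // hash of this block's Data segment
}

type Data struct {
	Transactions [][]byte // N serialized transactions, plus any new data762
}

type Metadata struct {
	CreatorSignature []byte
	LastConfigBlock  uint64
	OrdererInfo      []byte
	// TxFilter holds one validation code per entry in Data.Transactions;
	// a committing peer marks invalid transactions here.
	TxFilter []byte
}

The discussion now returns to the digital content blockchain ofFIG.7C.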
Thus, the digital content of each block may be accessed by decrypting each block in the blockchain, and the hash value of each block may be used as a basis to reference a previous block. This may be illustrated as follows:

Block 1                Block 2                . . .    Block N
Hash Value 1           Hash Value 2           . . .    Hash Value N
Digital Content 1      Digital Content 2      . . .    Digital Content N

In one embodiment, the digital content may not be included in the blockchain. For example, the blockchain may store the encrypted hashes of the content of each block without any of the digital content. The digital content may be stored in another storage area or memory address in association with the hash value of the original file. The other storage area may be the same storage device used to store the blockchain or may be a different storage area or even a separate relational database. The digital content of each block may be referenced or accessed by obtaining or querying the hash value of a block of interest and then looking up that hash value in the storage area, which is stored in correspondence with the actual digital content. This operation may be performed, for example, by a database gatekeeper. This may be illustrated as follows:

Blockchain                     Storage Area
Block 1 Hash Value             Block 1 Hash Value . . . Content
. . .                          . . .
Block N Hash Value             Block N Hash Value . . . Content

In the example embodiment ofFIG.7C, the blockchain770includes a number of blocks7781,7782, . . .778Ncryptographically linked in an ordered sequence, where N≥1. The encryption used to link the blocks7781,7782, . . .778Nmay be any of a number of keyed or un-keyed Hash functions. In one embodiment, the blocks7781,7782, . . .778Nare subject to a hash function which produces n-bit alphanumeric outputs (where n is 256 or another number) from inputs that are based on information in the blocks. Examples of such a hash function include, but are not limited to, a SHA-type (SHA stands for Secure Hash Algorithm) algorithm, Merkle-Damgard algorithm, HAIFA algorithm, Merkle-tree algorithm, nonce-based algorithm, and a non-collision-resistant PRF algorithm. In another embodiment, the blocks7781,7782, . . . ,778Nmay be cryptographically linked by a function that is different from a hash function. For purposes of illustration, the following description is made with reference to a hash function, e.g., SHA-2. Each of the blocks7781,7782, . . . ,778Nin the blockchain includes a header, a version of the file, and a value. The header and the value are different for each block as a result of hashing in the blockchain. In one embodiment, the value may be included in the header. As described in greater detail below, the version of the file may be the original file or a different version of the original file. The first block7781in the blockchain is referred to as the genesis block and includes the header7721, original file7741, and an initial value7761. The hashing scheme used for the genesis block, and indeed in all subsequent blocks, may vary. For example, all the information in the first block7781may be hashed together and at one time, or each or a portion of the information in the first block7781may be separately hashed and then a hash of the separately hashed portions may be performed. The header7721may include one or more initial parameters, which, for example, may include a version number, timestamp, nonce, root information, difficulty level, consensus protocol, duration, media format, source, descriptive keywords, and/or other information associated with original file7741and/or the blockchain.
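The off-chain storage arrangement just described, in which the chain holds only hash values and a database gatekeeper resolves a hash value to content held in a separate storage area, can be sketched as follows. The map-backed store and method names below are illustrative assumptions:

package gatekeeper

// contentStore associates block hash values with digital content held
// off-chain; a database gatekeeper could expose a similar lookup.
type contentStore struct {
	byHash map[string][]byte // block hash value -> digital content
}

// Put records content in correspondence with a block hash value.
func (s *contentStore) Put(blockHash string, content []byte) {
	if s.byHash == nil {
		s.byHash = make(map[string][]byte)
	}
	s.byHash[blockHash] = content
}

// Lookup returns the digital content stored for the given block hash value, if any.
func (s *contentStore) Lookup(blockHash string) ([]byte, bool) {
	content, ok := s.byHash[blockHash]
	return content, ok
}

The discussion now returns to the contents of the genesis block ofFIG.7C.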
The header7721may be generated automatically (e.g., by blockchain network managing software) or manually by a blockchain participant. Unlike the header in other blocks7782to778Nin the blockchain, the header7721in the genesis block does not reference a previous block, simply because there is no previous block. The original file7741in the genesis block may be, for example, data as captured by a device with or without processing prior to its inclusion in the blockchain. The original file7741is received through the interface of the system from the device, media source, or node. The original file7741is associated with metadata, which, for example, may be generated by a user, the device, and/or the system processor, either manually or automatically. The metadata may be included in the first block7781in association with the original file7741. The value7761in the genesis block is an initial value generated based on one or more unique attributes of the original file7741. In one embodiment, the one or more unique attributes may include the hash value for the original file7741, metadata for the original file7741, and other information associated with the file. In one implementation, the initial value7761may be based on the following unique attributes:
1) SHA-2 computed hash value for the original file
2) originating device ID
3) starting timestamp for the original file
4) initial storage location of the original file
5) blockchain network member ID for software to currently control the original file and associated metadata
The other blocks7782to778Nin the blockchain also have headers, files, and values. However, unlike the header7721of the first block, each of the headers7722to772Nin the other blocks includes the hash value of an immediately preceding block. The hash value of the immediately preceding block may be just the hash of the header of the previous block or may be the hash value of the entire previous block. By including the hash value of a preceding block in each of the remaining blocks, a trace can be performed from the Nth block back to the genesis block (and the associated original file) on a block-by-block basis, as indicated by arrows780, to establish an auditable and immutable chain-of-custody. Each of the headers7722to772Nin the other blocks may also include other information, e.g., version number, timestamp, nonce, root information, difficulty level, consensus protocol, and/or other parameters or information associated with the corresponding files and/or the blockchain in general. The files7742to774Nin the other blocks may be equal to the original file or may be a modified version of the original file in the genesis block depending, for example, on the type of processing performed. The type of processing performed may vary from block to block. The processing may involve, for example, any modification of a file in a preceding block, such as redacting information or otherwise changing the content of, taking information away from, or adding or appending information to the files. Additionally, or alternatively, the processing may involve merely copying the file from a preceding block, changing a storage location of the file, analyzing the file from one or more preceding blocks, moving the file from one storage or memory location to another, or performing an action relative to the file of the blockchain and/or its associated metadata.
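A minimal sketch of deriving the initial value7761from the five attributes listed above follows; the use of SHA-256 and the particular concatenation order are assumptions for illustration rather than a prescribed encoding:

package genesis

import (
	"crypto/sha256"
	"encoding/hex"
)

// initialValue derives a genesis value from the unique attributes of the
// original file; SHA-256 stands in here for a SHA-2 family hash.
func initialValue(originalFile []byte, deviceID, startTimestamp, storageLocation, memberID string) string {
	fileHash := sha256.Sum256(originalFile) // SHA-2 computed hash value for the original file
	h := sha256.New()
	h.Write(fileHash[:])
	h.Write([]byte(deviceID))        // originating device ID
	h.Write([]byte(startTimestamp))  // starting timestamp for the original file
	h.Write([]byte(storageLocation)) // initial storage location of the original file
	h.Write([]byte(memberID))        // blockchain network member ID
	return hex.EncodeToString(h.Sum(nil))
}

The discussion now returns to the processing that may be performed on the files7742to774Nin the other blocks.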
Processing which involves analyzing a file may include, for example, appending, including, or otherwise associating various analytics, statistics, or other information associated with the file. The values7762to776Nin each of the other blocks are unique values and are all different as a result of the processing performed. For example, the value in any one block corresponds to an updated version of the value in the previous block. The update is reflected in the hash of the block to which the value is assigned. The values of the blocks therefore provide an indication of what processing was performed in the blocks and also permit a tracing through the blockchain back to the original file. This tracking confirms the chain-of-custody of the file throughout the entire blockchain. For example, consider the case where portions of the file in a previous block are redacted, blocked out, or pixelated in order to protect the identity of a person shown in the file. In this case, the block including the redacted file will include metadata associated with the redacted file, e.g., how the redaction was performed, who performed the redaction, timestamps where the redaction(s) occurred, etc. The metadata may be hashed to form the value. Because the metadata for the block is different from the information that was hashed to form the value in the previous block, the values are different from one another and may be recovered when decrypted. In one embodiment, the value of a previous block may be updated (e.g., a new hash value computed) to form the value of a current block when any one or more of the following occurs. The new hash value may be computed by hashing all or a portion of the information noted below, in this example embodiment:
a) new SHA-2 computed hash value if the file has been processed in any way (e.g., if the file was redacted, copied, altered, accessed, or some other action was taken)
b) new storage location for the file
c) new metadata identified as associated with the file
d) transfer of access or control of the file from one blockchain participant to another blockchain participant
FIG.7Dillustrates an embodiment of a block which may represent the structure of the blocks in the blockchain790in accordance with one embodiment. The block, Blocki, includes a header772i, a file774i, and a value776i. The header772iincludes a hash value of a previous block Blocki−1and additional reference information, which, for example, may be any of the types of information (e.g., header information including references, characteristics, parameters, etc.) discussed herein. All blocks reference the hash of a previous block except, of course, the genesis block. The hash value of the previous block may be just a hash of the header in the previous block or a hash of all or a portion of the information in the previous block, including the file and metadata. The file774iincludes a plurality of data, such as Data 1, Data 2, . . . , Data N in sequence. The data are tagged with metadata Metadata 1, Metadata 2, . . . , Metadata N which describe the content and/or characteristics associated with the data.
For example, the metadata for each data may include information to indicate a timestamp for the data, process the data, keywords indicating the persons or other content depicted in the data, and/or other features that may be helpful to establish the validity and content of the file as a whole, and particularly its use a digital evidence, for example, as described in connection with an embodiment discussed below. In addition to the metadata, each data may be tagged with reference REF1, REF2, . . . , REFNto a previous data to prevent tampering, gaps in the file, and sequential reference through the file. Once the metadata is assigned to the data (e.g., through a smart contract), the metadata cannot be altered without the hash changing, which can easily be identified for invalidation. The metadata, thus, creates a data log of information that may be accessed for use by participants in the blockchain. The value776iis a hash value or other value computed based on any of the types of information previously discussed. For example, for any given block Blocki, the value for that block may be updated to reflect the processing that was performed for that block, e.g., new hash value, new storage location, new metadata for the associated file, transfer of control or access, identifier, or other action or information to be added. Although the value in each block is shown to be separate from the metadata for the data of the file and header, the value may be based, in part or whole, on this metadata in another embodiment. Once the blockchain770is formed, at any point in time, the immutable chain-of-custody for the file may be obtained by querying the blockchain for the transaction history of the values across the blocks. This query, or tracking procedure, may begin with decrypting the value of the block that is most currently included (e.g., the last (Nth) block), and then continuing to decrypt the value of the other blocks until the genesis block is reached and the original file is recovered. The decryption may involve decrypting the headers and files and associated metadata at each block, as well. Decryption is performed based on the type of encryption that took place in each block. This may involve the use of private keys, public keys, or a public key-private key pair. For example, when asymmetric encryption is used, blockchain participants or a processor in the network may generate a public key and private key pair using a predetermined algorithm. The public key and private key are associated with each other through some mathematical relationship. The public key may be distributed publicly to serve as an address to receive messages from other users, e.g., an IP address or home address. The private key is kept secret and used to digitally sign messages sent to other blockchain participants. The signature is included in the message so that the recipient can verify using the public key of the sender. This way, the recipient can be sure that only the sender could have sent this message. Generating a key pair may be analogous to creating an account on the blockchain, but without having to actually register anywhere. Also, every transaction that is executed on the blockchain is digitally signed by the sender using their private key. This signature ensures that only the owner of the account can track and process (if within the scope of permission determined by a smart contract) the file of the blockchain. 
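The sign-and-verify flow just described can be illustrated with Go's standard ed25519 package; this is a generic sketch of asymmetric signing and does not imply that any particular embodiment uses this specific algorithm:

package main

import (
	"crypto/ed25519"
	"crypto/rand"
	"fmt"
)

func main() {
	// Generating a key pair is analogous to creating an account.
	pub, priv, err := ed25519.GenerateKey(rand.Reader)
	if err != nil {
		panic(err)
	}
	msg := []byte("transfer control of the file to another participant")
	sig := ed25519.Sign(priv, msg) // the sender signs with the private key
	// The recipient verifies with the sender's public key, so only the
	// holder of the private key could have produced the message.
	fmt.Println(ed25519.Verify(pub, msg, sig)) // prints: true
}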
FIGS.8A and8Billustrate additional examples of use cases for blockchain which may be incorporated and used herein. In particular,FIG.8Aillustrates an example800of a blockchain810which stores machine learning (artificial intelligence) data. Machine learning relies on vast quantities of historical data (or training data) to build predictive models for accurate prediction on new data. Machine learning software (e.g., neural networks, etc.) can often sift through millions of records to unearth non-intuitive patterns. In the example ofFIG.8A, a host platform820builds and deploys a machine learning model for predictive monitoring of assets830. Here, the host platform820may be a cloud platform, an industrial server, a web server, a personal computer, a user device, and the like. Assets830can be any type of asset (e.g., machine or equipment, etc.) such as an aircraft, locomotive, turbine, medical machinery and equipment, oil and gas equipment, boats, ships, vehicles, and the like. As another example, assets830may be non-tangible assets such as stocks, currency, digital coins, insurance, or the like. The blockchain810can be used to significantly improve both a training process802of the machine learning model and a predictive process804based on a trained machine learning model. For example, in802, rather than requiring a data scientist/engineer or other user to collect the data, historical data may be stored by the assets830themselves (or through an intermediary, not shown) on the blockchain810. This can significantly reduce the collection time needed by the host platform820when performing predictive model training. For example, using smart contracts, data can be directly and reliably transferred straight from its place of origin to the blockchain810. By using the blockchain810to ensure the security and ownership of the collected data, smart contracts may directly send the data from the assets to the individuals that use the data for building a machine learning model. This allows for sharing of data among the assets830. The collected data may be stored in the blockchain810based on a consensus mechanism. The consensus mechanism pulls in (permissioned nodes) to ensure that the data being recorded is verified and accurate. The data recorded is time-stamped, cryptographically signed, and immutable. It is therefore auditable, transparent, and secure. Adding IoT devices which write directly to the blockchain can, in certain cases (i.e. supply chain, healthcare, logistics, etc.), increase both the frequency and accuracy of the data being recorded. Furthermore, training of the machine learning model on the collected data may take rounds of refinement and testing by the host platform820. Each round may be based on additional data or data that was not previously considered to help expand the knowledge of the machine learning model. In802, the different training and testing steps (and the data associated therewith) may be stored on the blockchain810by the host platform820. Each refinement of the machine learning model (e.g., changes in variables, weights, etc.) may be stored on the blockchain810. This provides verifiable proof of how the model was trained and what data was used to train the model. Furthermore, when the host platform820has achieved a finally trained model, the resulting model may be stored on the blockchain810. After the model has been trained, it may be deployed to a live environment where it can make predictions/decisions based on the execution of the final trained machine learning model. 
For example, in804, the machine learning model may be used for condition-based maintenance (CBM) for an asset such as an aircraft, a wind turbine, a healthcare machine, and the like. In this example, data fed back from the asset830may be input to the machine learning model and used to make event predictions such as failure events, error codes, and the like. Determinations made by the execution of the machine learning model at the host platform820may be stored on the blockchain810to provide auditable/verifiable proof. As one non-limiting example, the machine learning model may predict a future breakdown/failure of a part of the asset830and create an alert or a notification to replace the part. The data behind this decision may be stored by the host platform820on the blockchain810. In one embodiment, the features and/or the actions described and/or depicted herein can occur on or with respect to the blockchain810. New transactions for a blockchain can be gathered together into a new block and added to an existing hash value. This is then encrypted to create a new hash for the new block. This is added to the next list of transactions when they are encrypted, and so on. The result is a chain of blocks that each contain the hash values of all preceding blocks. Computers that store these blocks regularly compare their hash values to ensure that they are all in agreement. Any computer that does not agree discards the records that are causing the problem. This approach is good for ensuring tamper-resistance of the blockchain, but it is not perfect. One way to game this system is for a dishonest user to change the list of transactions in their favor, but in a way that leaves the hash unchanged. This can be done by brute force, in other words by changing a record, encrypting the result, and seeing whether the hash value is the same, and if not, trying again and again until a hash that matches is found. The security of blockchains is based on the belief that ordinary computers can only perform this kind of brute force attack over time scales that are entirely impractical, such as the age of the universe. By contrast, quantum computers are much faster (1000s of times faster) and consequently pose a much greater threat. FIG.8Billustrates an example850of a quantum-secure blockchain852which implements quantum key distribution (QKD) to protect against a quantum computing attack. In this example, blockchain users can verify each other's identities using QKD. This sends information using quantum particles such as photons, which cannot be copied by an eavesdropper without destroying them. In this way, a sender and a receiver through the blockchain can be sure of each other's identity. In the example ofFIG.8B, four users854,856,858, and860are present. Each pair of users may share a secret key862(i.e., a QKD) between themselves. Since there are four nodes in this example, six pairs of nodes exist, and therefore six different secret keys862are used, including QKDAB, QKDAC, QKDAD, QKDBC, QKDBD, and QKDCD. Each pair can create a QKD by sending information using quantum particles such as photons, which cannot be copied by an eavesdropper without destroying them. In this way, a pair of users can be sure of each other's identity. The operation of the blockchain852is based on two procedures: (i) creation of transactions, and (ii) construction of blocks that aggregate the new transactions. New transactions may be created similarly to a traditional blockchain network.
Each transaction may contain information about a sender, a receiver, a time of creation, an amount (or value) to be transferred, a list of reference transactions that justifies the sender has funds for the operation, and the like. This transaction record is then sent to all other nodes where it is entered into a pool of unconfirmed transactions. Here, two parties (i.e., a pair of users from among854-860) authenticate the transaction by providing their shared secret key862(QKD). This quantum signature can be attached to every transaction making it exceedingly difficult to tamper with. Each node checks their entries with respect to a local copy of the blockchain852to verify that each transaction has sufficient funds. However, the transactions are not yet confirmed. Rather than perform a traditional mining process on the blocks, the blocks may be created in a decentralized manner using a broadcast protocol. At a predetermined period of time (e.g., seconds, minutes, hours, etc.) the network may apply the broadcast protocol to any unconfirmed transaction thereby to achieve a Byzantine agreement (consensus) regarding a correct version of the transaction. For example, each node may possess a private value (transaction data of that particular node). In a first round, nodes transmit their private values to each other. In subsequent rounds, nodes communicate the information they received in the previous round from other nodes. Here, honest nodes are able to create a complete set of transactions within a new block. This new block can be added to the blockchain852. In one embodiment the features and/or the actions described and/or depicted herein can occur on or with respect to the blockchain852. FIG.9illustrates an example system900that supports one or more of the example embodiments described and/or depicted herein. The system900comprises a computer system/server902, which is operational with numerous other general purpose or special purpose computing system environments or configurations. Examples of well-known computing systems, environments, and/or configurations that may be suitable for use with computer system/server902include, but are not limited to, personal computer systems, server computer systems, thin clients, thick clients, hand-held or laptop devices, multiprocessor systems, microprocessor-based systems, set top boxes, programmable consumer electronics, network PCs, minicomputer systems, mainframe computer systems, and distributed cloud computing environments that include any of the above systems or devices, and the like. Computer system/server902may be described in the general context of computer system-executable instructions, such as program modules, being executed by a computer system. Generally, program modules may include routines, programs, objects, components, logic, data structures, and so on that perform particular tasks or implement particular abstract data types. Computer system/server902may be practiced in distributed cloud computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed cloud computing environment, program modules may be located in both local and remote computer system storage media including memory storage devices. As shown inFIG.9, computer system/server902in cloud computing node900is shown in the form of a general-purpose computing device. 
The components of computer system/server902may include, but are not limited to, one or more processors or processing units904, a system memory906, and a bus that couples various system components including system memory906to processor904. The bus represents one or more of any of several types of bus structures, including a memory bus or memory controller, a peripheral bus, an accelerated graphics port, and a processor or local bus using any of a variety of bus architectures. By way of example, and not limitation, such architectures include Industry Standard Architecture (ISA) bus, Micro Channel Architecture (MCA) bus, Enhanced ISA (EISA) bus, Video Electronics Standards Association (VESA) local bus, and Peripheral Component Interconnects (PCI) bus. Computer system/server902typically includes a variety of computer system readable media. Such media may be any available media that is accessible by computer system/server902, and it includes both volatile and non-volatile media, removable and non-removable media. System memory906, in one embodiment, implements the flow diagrams of the other figures. The system memory906can include computer system readable media in the form of volatile memory, such as random-access memory (RAM)910and/or cache memory912. Computer system/server902may further include other removable/non-removable, volatile/non-volatile computer system storage media. By way of example only, storage system914can be provided for reading from and writing to a non-removable, non-volatile magnetic media (not shown and typically called a “hard drive”). Although not shown, a magnetic disk drive for reading from and writing to a removable, non-volatile magnetic disk (e.g., a “floppy disk”), and an optical disk drive for reading from or writing to a removable, non-volatile optical disk such as a CD-ROM, DVD-ROM or other optical media can be provided. In such instances, each can be connected to the bus by one or more data media interfaces. As will be further depicted and described below, memory906may include at least one program product having a set (e.g., at least one) of program modules that are configured to carry out the functions of various embodiments of the application. Program/utility916, having a set (at least one) of program modules918, may be stored in memory906by way of example, and not limitation, as well as an operating system, one or more application programs, other program modules, and program data. Each of the operating system, one or more application programs, other program modules, and program data or some combination thereof, may include an implementation of a networking environment. Program modules918generally carry out the functions and/or methodologies of various embodiments of the application as described herein. As will be appreciated by one skilled in the art, aspects of the present application may be embodied as a system, method, or computer program product. Accordingly, aspects of the present application may take the form of an entirely hardware embodiment, an entirely software embodiment (including firmware, resident software, micro-code, etc.) or an embodiment combining software and hardware aspects that may all generally be referred to herein as a “circuit,” “module” or “system.” Furthermore, aspects of the present application may take the form of a computer program product embodied in one or more computer readable medium(s) having computer readable program code embodied thereon. 
Computer system/server902may also communicate with one or more external devices920such as a keyboard, a pointing device, a display922, etc.; one or more devices that enable a user to interact with computer system/server902; and/or any devices (e.g., network card, modem, etc.) that enable computer system/server902to communicate with one or more other computing devices. Such communication can occur via I/O interfaces924. Still yet, computer system/server902can communicate with one or more networks such as a local area network (LAN), a general wide area network (WAN), and/or a public network (e.g., the Internet) via network adapter926. As depicted, network adapter926communicates with the other components of computer system/server902via a bus. It should be understood that although not shown, other hardware and/or software components could be used in conjunction with computer system/server902. Examples, include, but are not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, and data archival storage systems, etc. Although an exemplary embodiment of at least one of a system, method, and non-transitory computer readable medium has been illustrated in the accompanied drawings and described in the foregoing detailed description, it will be understood that the application is not limited to the embodiments disclosed, but is capable of numerous rearrangements, modifications, and substitutions as set forth and defined by the following claims. For example, the capabilities of the system of the various figures can be performed by one or more of the modules or components described herein or in a distributed architecture and may include a transmitter, receiver or pair of both. For example, all or part of the functionality performed by the individual modules, may be performed by one or more of these modules. Further, the functionality described herein may be performed at various times and in relation to various events, internal or external to the modules or components. Also, the information sent between various modules can be sent between the modules via at least one of: a data network, the Internet, a voice network, an Internet Protocol network, a wireless device, a wired device and/or via plurality of protocols. Also, the messages sent or received by any of the modules may be sent or received directly and/or via one or more of the other modules. One skilled in the art will appreciate that a “system” could be embodied as a personal computer, a server, a console, a personal digital assistant (PDA), a cell phone, a tablet computing device, a smartphone or any other suitable computing device, or combination of devices. Presenting the above-described functions as being performed by a “system” is not intended to limit the scope of the present application in any way but is intended to provide one example of many embodiments. Indeed, methods, systems and apparatuses disclosed herein may be implemented in localized and distributed forms consistent with computing technology. It should be noted that some of the system features described in this specification have been presented as modules, in order to more particularly emphasize their implementation independence. For example, a module may be implemented as a hardware circuit comprising custom very large-scale integration (VLSI) circuits or gate arrays, off-the-shelf semiconductors such as logic chips, transistors, or other discrete components. 
A module may also be implemented in programmable hardware devices such as field programmable gate arrays, programmable array logic, programmable logic devices, graphics processing units, or the like. A module may also be at least partially implemented in software for execution by various types of processors. An identified unit of executable code may, for instance, comprise one or more physical or logical blocks of computer instructions that may, for instance, be organized as an object, procedure, or function. Nevertheless, the executables of an identified module need not be physically located together but may comprise disparate instructions stored in different locations which, when joined logically together, comprise the module and achieve the stated purpose for the module. Further, modules may be stored on a computer-readable medium, which may be, for instance, a hard disk drive, flash device, random access memory (RAM), tape, or any other such medium used to store data. Indeed, a module of executable code could be a single instruction, or many instructions, and may even be distributed over several different code segments, among different programs, and across several memory devices. Similarly, operational data may be identified and illustrated herein within modules and may be embodied in any suitable form and organized within any suitable type of data structure. The operational data may be collected as a single data set or may be distributed over different locations including over different storage devices, and may exist, at least partially, merely as electronic signals on a system or network. It will be readily understood that the components of the application, as generally described and illustrated in the figures herein, may be arranged and designed in a wide variety of different configurations. Thus, the detailed description of the embodiments is not intended to limit the scope of the application as claimed but is merely representative of selected embodiments of the application. One having ordinary skill in the art will readily understand that the above may be practiced with steps in a different order, and/or with hardware elements in configurations that are different than those which are disclosed. Therefore, although the application has been described based upon these preferred embodiments, it would be apparent to those of skill in the art that certain modifications, variations, and alternative constructions would be apparent. While preferred embodiments of the present application have been described, it is to be understood that the embodiments described are illustrative only and the scope of the application is to be defined solely by the appended claims when considered with a full range of equivalents and modifications (e.g., protocols, hardware devices, software platforms etc.) thereto. | 113,276 |
11943238 | DETAILED DESCRIPTION The invention can be implemented in numerous ways, including as a process; an apparatus; a system; a composition of matter; a computer program product embodied on a computer readable storage medium; and/or a processor, such as a processor configured to execute instructions stored on and/or provided by a memory coupled to the processor. In this specification, these implementations, or any other form that the invention may take, may be referred to as techniques. In general, the order of the steps of disclosed processes may be altered within the scope of the invention. Unless stated otherwise, a component such as a processor or a memory described as being configured to perform a task may be implemented as a general component that is temporarily configured to perform the task at a given time or a specific component that is manufactured to perform the task. As used herein, the term ‘processor’ refers to one or more devices, circuits, and/or processing cores configured to process data, such as computer program instructions. A detailed description of one or more embodiments of the invention is provided below along with accompanying figures that illustrate the principles of the invention. The invention is described in connection with such embodiments, but the invention is not limited to any embodiment. The scope of the invention is limited only by the claims and the invention encompasses numerous alternatives, modifications and equivalents. Numerous specific details are set forth in the following description in order to provide a thorough understanding of the invention. These details are provided for the purpose of example and the invention may be practiced according to the claims without some or all of these specific details. For the purpose of clarity, technical material that is known in the technical fields related to the invention has not been described in detail so that the invention is not unnecessarily obscured. I. Introduction and Architecture Overview FIG.1Aillustrates an example of a computing environment in which security and other services are provided. Using techniques described herein, a typical flood of alarms and false positives can be reduced to a trickle of high value, high context Alerts of real time attacks. Included inFIG.1Aare logical components of an example corporate network (e.g., belonging to a fictitious company hereinafter referred to as “ACME”). The corporate network comprises both self-hosted and third-party hosted resources. Using techniques described herein, zero-day and other attacks can be detected, in real-time, and disrupted/otherwise mitigated. Further, the techniques described herein can scale detection to tens of thousands of nodes without impacting performance, and without deploying kernel modules. Environment101includes a set of workload instances102-104hosted by a third party (e.g., cloud) provider. Example providers of such cloud-based infrastructure include Amazon (e.g., as Amazon Web Services), Google (e.g., as Google Cloud), and Microsoft (e.g., as Microsoft Azure). Environment101also includes a set of legacy computing systems106-108(e.g., a legacy database server, a legacy web server, etc. executing on hardware owned by ACME). Factors such as regulatory, performance, and cost considerations will impact how various embodiments of techniques described herein are deployed in various environments. 
The techniques described herein can also be used in other types of environments (e.g., purely self-hosted, purely third-party hosted, containerized, etc.). Each of systems102-108has an associated sensor/analytics component (e.g., Sensor112)—one per kernel. An example way of implementing Sensor112is using Go. The Sensor is configured to employ a variety of techniques to capture telemetry data. The analytics component can be collocated with the Sensor and can also be located remotely (e.g., on a different node). Telemetry and other data (e.g., as collected by the Sensor) are analyzed to generate events. Events (and combinations of events, as applicable) that match a Strategy pattern can be used to trigger real-time alerts and take other actions as applicable, such as strategically and automatically killing attacker connections and restarting workloads. Further, analytics can also be optionally performed cluster wide (i.e., across multiple workload instances/servers) by using an optional security server110configured to harvest alerts and perform analytics on information received from each of systems102-108. One example of a cross-node Strategy is a segfault Strategy, where, e.g., Apache is seen to crash five times. In a modern environment, e.g., with load balancers, connections may come in and Apache may crash in a variety of different locations. Attackers may intentionally try to spread out an attack across multiple nodes, hoping to hide among the occasional segfaults that happen in a production environment (and are sophisticated enough to avoid trying to segfault a single node 50 times). With a cross node strategy, Alerts can be accumulated until, e.g., a triggering threshold is reached (e.g., more than five crashes in an hour indicates the cluster is under attack). In this example, a local (to a single node) segfault strategy could be used to emit segfault events, and the cross node strategy could consume those events and generate its own (e.g., as thresholds are met), applying Tags to hosts indicating number of segfaults reported, etc. Other cross node strategies can also be used, e.g., with individual nodes providing information to security server110, and security server110including Strategies that leverage the alerts produced by multiple nodes. A second example of a cross node strategy is a lateral movement strategy (e.g., where a report of compromise associated with a first node and a connection reported from a second internal node can be matched to indicate that the compromised node is communicating with a different node). Server110also can perform other tasks, such as providing an administrative console/dashboard, deploying configuration, etc. An example dashboard is shown inFIG.30. An example Alert interface is shown inFIG.31. An example query interface is shown inFIG.32. In various embodiments, multiple security servers are deployed, e.g., for redundancy, performance, scaling, or segmentation reasons. Other infrastructure can also be optionally included in environment101, such as an Elasticsearch, Logstash, Kibana (“ELK”) Stack, third party logging service (e.g., Splunk), etc. Results generated using techniques described herein can be shared with such additional infrastructure instead of/in addition to sharing with server110, as applicable. Similarly, information can be shared from such additional infrastructure with infrastructure described herein as implementing the techniques described herein.
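As an illustration of the cross-node segfault Strategy described above, the following Go sketch accumulates per-node segfault events and emits an Alert once a threshold is crossed within a time window. The type names, the window, and the five-crash threshold are assumptions for illustration, not the Platform's actual Strategy interface:

package strategies

import (
	"fmt"
	"time"
)

// segfaultEvent is a per-node crash report consumed by the cross-node Strategy.
type segfaultEvent struct {
	Node string
	At   time.Time
}

// crossNodeSegfaults accumulates segfault events cluster-wide and fires an
// Alert once the threshold is crossed within the window.
type crossNodeSegfaults struct {
	window    time.Duration // e.g., time.Hour
	threshold int           // e.g., 5
	events    []segfaultEvent
}

// Consume records an event, drops events outside the window, and returns an
// Alert message when the threshold is reached.
func (s *crossNodeSegfaults) Consume(e segfaultEvent) (string, bool) {
	cutoff := e.At.Add(-s.window)
	recent := s.events[:0]
	for _, old := range s.events {
		if old.At.After(cutoff) {
			recent = append(recent, old)
		}
	}
	s.events = append(recent, e)
	if len(s.events) >= s.threshold {
		return fmt.Sprintf("possible attack: %d segfaults across the cluster within %s", len(s.events), s.window), true
	}
	return "", false
}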
Results generated using techniques described herein can also be placed in a durable storage such as Amazon S3. Sensors102-108and optional Security Server110are collectively referred to herein as a Platform. In various embodiments, the Platform comprises middleware using go-micro and is built in a monorepo structure that simplifies deployment and optimizes infrastructure to support automation (including integration testing and load/performance testing). An example representation of a Platform (100) is shown inFIG.1B. As shown inFIG.1B, sensors (e.g., sensor136which is an embodiment of sensor112) collect security relevant telemetry from workload instances/nodes (e.g., node138which is an embodiment of workload instance102). Server140(an embodiment of server110) provides an API server/gateway142and embedded analytics144(an embodiment of analytics framework400) to consume telemetry and produce Alerts. One example type of API client that can communicate with server140is a set of Responders126, each of which enacts a response action in response to an Alert. Other example types of API clients include a command line interface (124) and web application based console (122) for configuring various components of Platform100, displaying Alerts, Alert responses, and/or other contextual information. Connections to durable or other storage can be made through one or more data export APIs (128) as applicable. As shown inFIG.1B, backplane148includes a real-time messaging bus that connects Sensors (wherever they are deployed) to stream requested real-time events and, as applicable, historical events from an optional recorder146(also referred to herein as a “flight recorder”) configured to locally store Event and other information. Responders126, Sensors (e.g., Sensor136), and Recorders (e.g., Recorder146) are examples of backend services. In an example embodiment of Platform100, communications/connections122-128are made using GRPC/Websocket, and communications/connections130-134are made using a Pub/Sub Message Broker. Other techniques can also be used, as applicable. FIG.1Cillustrates an embodiment of a single binary API Server. Multiple API servers can be meshed with one another using an appropriate embedded or other communications tool (e.g., NATS or Pub/Sub), transmitting change logs to each other (resulting in eventual consistency with fast convergence). As illustrated inFIG.1C, in various embodiments, components of Platform100, such as Analytics144, can be embedded in the API server (e.g., as Embedded Analytics152). Other functionality can also be embedded into an API server as applicable, such as Embedded NATS (154). Other components of the single binary API Server include a set of GRPC APIs that provide a frontend for external client facing APIs, and a Session Manager158that provides a façade over go-micro over NATS and raw NATS. An illustration of various logical roles provided by Platform100is shown inFIG.1D. CLI124and console122communicate, respectively, with API servers (e.g., API server150) using GRPC. API servers150and160communicate with one another using NATS clustering. Backend services162communicate with one another and API servers150and160using NATS or Pub/Sub as applicable. One way of implementing backend services162is as microservices defined using go-micro. A generic example of a go-micro microservice is illustrated inFIG.1E. Other frameworks can also be used in conjunction with techniques described herein as applicable.
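In the spirit of the generic go-micro microservice ofFIG.1E, a minimal service definition might look like the following sketch; the import path, service name, and metadata shown are assumptions for illustration and do not reproduce the Platform's actual options:

package main

import (
	"log"

	"github.com/micro/go-micro"
)

func main() {
	// Create a go-micro service with a hypothetical name and some metadata
	// that will be registered with the service registry.
	service := micro.NewService(
		micro.Name("capsule8.example"),
		micro.Metadata(map[string]string{"role": "example"}),
	)
	service.Init() // parses command line flags and applies options
	if err := service.Run(); err != nil {
		log.Fatal(err)
	}
}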
Registry164provides a pluggable service discovery library to find running services, keeping track of service instances, metadata, and version. Examples of information collected (e.g., on boot of a service) include: whether or not it is running in a container, network interfaces, underlying node kernel version and node hostname, container host, and any user defined metadata (e.g., from config or environment). Selector166provides a load balancing mechanism via service/server metadata. When a client makes a request to a service it will first query Registry164for the service, receiving a list of running nodes representing the service. Selector166will select one of the nodes to be used for querying. Multiple calls to Selector166allow balancing techniques to be used. Broker168is a pluggable message interface (e.g., for Pub/Sub) and can be used to provide command and control. Transport170is a pluggable interface for point to point transfer of messages. Client172provides a way to make RPC queries. It combines Registry164, Selector166, Broker168, and Transport170. It also provides retries, timeouts, use of context, etc. Server174is an interface to build a running microservice. It provides a way of serving RPC requests. In various embodiments, go-micro RPCs with selectors (relying on metadata) are used for individually calling RPCs on services and groups of services. Services that are configurable provide a Config method that takes their configuration as an argument and returns a response with a status code and any errors. Configuration of multiple components can be accomplished using go-micro selectors (e.g., selector166) to direct queries to those components. An example of a configuration that can be made is, “cause these four Responders to ignore remote interactive shell alerts.” Group commands can be sent using subscriptions to topics. A telemetry subscription example is “cause all sensors attached to this API server to start publishing data related to this subscription.” In various embodiments, a reserved namespace (e.g., capsule8.*) is used for topics. Four example topics include:
capsule8.<service>.commands: asynchronous commands that a service takes (e.g., capsule8.sensors.commands)
capsule8.alerts: all Alerts are published here
capsule8.alerts.responses: all Alert responses are published here
capsule8.events: all status and error notifications are published here
FIG.1Fillustrates an example data plane used by backend services. As shown, the data plane uses go-micro to negotiate NATS/PubSub for a topic and have services publish directly to it, avoiding framing costs. An API Client (e.g., CLI124, console122, etc.) initiates a request for Events from an API server (e.g., API server150), e.g., using GRPC API156(indicated by line176). Session manager158picks a NATS topic and sends it to one or more sensors, e.g., using go-micro (indicated by line178). Session manager158begins listening to the NATS topic, e.g., using direct NATS, e.g., provided by embedded NATS154(indicated by line180). The sensor sends a sub response, e.g., using go-micro (indicated by line182). The sensor also creates a subscription and begins sending telemetry to the NATS topic, e.g., using direct NATS (indicated by line184). The API server receives events as NATS messages (indicated by line186) and forwards them to the client, e.g., using GRPC (indicated by line188). FIG.1Gillustrates how various components of Platform100react as messages propagate.
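Before walking through the message propagation example ofFIG.1G, the use of the reserved topics above can be illustrated with the NATS Go client; the broker address and payload shown are assumptions, and a backend service could equally publish through the go-micro Broker interface:

package main

import (
	"log"

	"github.com/nats-io/nats.go"
)

func main() {
	nc, err := nats.Connect("nats://localhost:4222") // assumed broker address
	if err != nil {
		log.Fatal(err)
	}
	defer nc.Close()

	// Listen for Alert responses broadcast by Responders.
	if _, err := nc.Subscribe("capsule8.alerts.responses", func(m *nats.Msg) {
		log.Printf("alert response: %s", m.Data)
	}); err != nil {
		log.Fatal(err)
	}

	// Broadcast an Alert to every subscriber of capsule8.alerts.
	if err := nc.Publish("capsule8.alerts", []byte(`{"name":"example alert"}`)); err != nil {
		log.Fatal(err)
	}
	nc.Flush()
}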
In the example shown, a Sensor, two Servers, a Responder, and an API client are shown. The Sensor sends telemetry on a NATS topic (190). The first server produces an Alert and broadcasts it on capsule8.alerts (192), resulting in the second server and responder receiving a copy of the Alert. The API client is subscribed to Alerts via the second server, and gets a copy of the Alert via GRPC (194). When the Responder receives the Alert, it takes action and broadcasts an Alert Response on capsule8.alerts.responses (196), resulting in the second server and first server receiving a copy of the Alert Response. The API Client is also subscribed to Alert.Responses via the second server and will also get a copy of the Alert Response via GRPC (198). II. Sensors In the following discussion, suppose Alice is a computer administrator of infrastructure100or portions thereof (e.g., in a security operations role). One way that Alice can deploy a Sensor (e.g., Sensor112) to a node (e.g., workload instance102) is by obtaining a package (e.g., a statically compiled binary) and installing it using whatever manner is appropriate to the existing deployment management infrastructure used by environment100(e.g., using “rpm-i”). The installer for Sensor112does not modify the kernel of workload instance102(i.e., Sensor112is deployed in userland) and places configuration/binary files in an appropriate location, such as/etc/capsule8 (for configuration files), /var/lib/capsule8 (for library files), and /user/local/bin (for binary files). Capabilities such as CAP_SYS_ADMIN and CAP_DAC_OVERRIDE are used (without root) to set kernel probes. As applicable, Sensor112can also be deployed as a container. A. Initialization Sensor112will be started at boot time and loads an appropriate configuration. If a Sensor is deployed in a container environment, it can be run as a daemon set. An example way that Sensor112can initialize itself is in accordance with the process shown inFIG.2. In particular,FIG.2illustrates actions taken from when the Sensor is started through when it begins handling subscription requests. As illustrated inFIG.2, the actions begin with a common path and then diverge depending on whether the Sensor was configured to work with security server110. 1. Common Steps a. Creating the Logger and Config After initial execution (202), a logger object is created (204). One example way to implement the logger is using a logrus instance. An optional local configuration is read (206), and if valid (208), a configuration object is created to store the configuration. b. Determining Metadata The Sensor then enumerates its environment and configuration for metadata (210). This metadata is later fed to the go-micro service and/or embedded analytics instance described in more detail below. Examples of metadata that can be collected include:Cloud metadata such as AZ and region the Sensor is running in,The container runtime that the Sensor is running in, if any,Reading files from the file system to determine OS,OS Version,Host networking interfaces,Underlying node hostname, andKernel version. Additionally, these properties can be defined as a set of key value pairs via the configuration or CAPSULE8_LABELS environment variable. Additional information on Environmental Variables is provided below. These properties are then stored internally in a go map of type map[string]string in which a normalized key value stores the value (e.g., CAPSULE8_SENSOR_ID) to store the Sensor ID. c. 
Go-Micro Service Initialization and Command Arg Processing The next major step is the configuration of the go-micro Service options. This can include NATS options and other PubSub options, as applicable, used by go-micro's Broker, Transport and Service Registry. This can be abstracted away using the micro_options package to facilitate using other PubSub or other go-micro supported transport protocols, such as consul or Google PubSub. A go-micro Service instance is initialized with the metadata (214) and the command line options are parsed (216), e.g., using go-micro's Flag module. In embodiments where the Sensor-Core is using glog, go-micro's command line processing is used to set go's flag packages options. The config can also be checked to see if an OpenTracing tracer should be created, and if so (212) added to the service so that it may be used for debugging. At this point, command line arguments are parsed using the go-micro Services' Init method. d. Sensor-Core Initialization & Unix Socket API After creating the service and parsing the command line options, the Sensor-Core is initialized (218). The Sensor-Core instance instruments the kernel, and provides telemetry. This is started with an ID that is shared with the go-micro service. Additionally the Sensor's API server is started on a Unix socket at/var/run/capsule8/sensor.sock. This server handles the API presented by the Open Source Sensor. After initializing the Sensor-core the Sensor checks its configuration to see if it is configured to run with a security server (e.g., security server110) (220). In various embodiments, this step is replaced by one that does instrumentation initialization based on the metadata values enumerated earlier. 2. Running without the Server By default Sensor112does not attempt to connect to server110. If the config option in use_analytics is set to true in/etc/capsule8/capsule8-sensor.yaml or the environment variable CAPSULE8_USE_ANALYTICS=true (222), then the embedded analytics is started (224). Otherwise, initialization is complete (226). a. Initializing Analytics If the embedded Analytics was enabled via configuration, the function startEmbeddedProtect is called. This function first reads the Analytics configuration file /etc/capsule8/capsule8-analytics.yaml (228). If it encounters an error parsing the file the error is returned and it is treated as a fatal error. After parsing the Analytics config, a standalone in a Sensor Analytics CommsClient instance is created, with a reference to the Sensor-Core instance, and the Analytics varz handler. The CommsClient is responsible for connecting the embedded Analytics to the Sensor and ensuring that Alerts and MetaEvents are emitted somewhere that they can be retrieved or published on the backend. Alerts are a special kind of Event created from one or more Events that entities may wish to receive notifications of. MetaEvents are Events of interest that are published back to the platform via the API. The CommsClient's constructor takes an argument for a NATS client in the event that this function was called with the server option (248). In that case the CommsClient would use that to publish the Alerts and MetaEvents to server110. Upon successful creation of the CommsClient, an instance of the Analytics Gateway404is created with a reference to the CommsClient and the Analytics config. The Gateway is responsible for creating and configuring Strategies used by the embedded Analytics. 
It provides an internal API for Strategies and also acts as an intermediary with a Platform API and Arbiter (e.g., Arbiter408). In the event that the Gateway failed to be created an error is returned by the function and treated as a fatal error. After the Gateway is created, its Run method is called to start the Analytics instance. An example way to do this is as a go routine with a recover so that any unexpected panics do not crash the whole Sensor. Instead, such errors will merely cancel the subscription of events used by the Analytics at which point it will be restarted, and resubscribe in the new instance, as all previous state will have been lost. These errors are logged to stderr. After starting the Analytics instance (232), the monitoring server is started and the Sensor waits for a SIGINT, or a SIGTERM to signal shutdown. At this point initialization without the server is finished (226). b. Starting the Monitoring Server The Sensor runs an embedded webserver to serve version (/version), healthcheck (/healthz), and metrics information (/varz). This server is started (230,242) on a port specified by the monitor_port config option in/etc/capsule8/capsule8-sensor.yaml or the environment variable CAPSULE8_MONITOR_PORT. If the Sensor was configured to run with a server, then the/healthz endpoint reports on its connected status returning a 200 HTTP status code if it is successfully connected to the Capsule8-Server or 500 otherwise. If the Sensor was configured to run without the server this returns a 200 status code as its health is essentially represented by whether or not the process is running. The/varz endpoint contains memory allocation statistics and other metrics specific to the Analytics. If the Sensor was configured to run without the embedded Analytics then these metrics will be blank. 3. Running with the Server The following section describes how the Sensor is initialized if it was configured to run with the server after the common initialization steps (202-220). Two ways to connect the Sensor (e.g., Sensor112) to the Security Server (e.g., Security Server110) are to set either the config option run_without_server or the environment variable CAPSULE8_RUN_WITHOUT_SERVER to the string true. a. Connecting to the Server Backend If the Sensor is configured to run with the server after completing the common initialization steps, it then creates an embedded NATS client (248). This is a wrapper around the official NATS golang client library that reads in the configuration object and updates the settings accordingly, for things like TLS certificates. The constructor for the embedded_nats client immediately attempts to connect to the Server specified by the nats.url config option (also ‘CAPSULE8NATS_SERVER’ env var) and returns an error if it is unable to connect. The Sensor by default attempts to connect three times to the specified Server, waiting 10 seconds in between attempts. If the Sensor is unable to connect to its Server, then this is considered a fatal error and the Sensor exits, logging that it could not connect (234). If the connection to the NATS server was successful then the embedded Analytics is started (238), if configured to do so (236). b. Starting the Handler and the Watchdog A request handler is created that contains all of the logic for handling telemetry API subscriptions from the Server (240). This provides the session handling and Telemetry serialization and bundling logic and tracks subscriptions. 
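Returning briefly to the connection step described above, the default retry behavior (three attempts, waiting 10 seconds in between) might look roughly like the following sketch. It assumes the official NATS Go client (github.com/nats-io/nats.go); the connectWithRetry helper, its signature, and its error handling are illustrative and are not the Sensor's actual embedded_nats wrapper, which also applies TLS certificates and other settings from the configuration object.

```go
package sensor

import (
	"fmt"
	"time"

	"github.com/nats-io/nats.go"
)

// connectWithRetry attempts to connect to the given NATS URL a fixed number of
// times, sleeping between attempts. A final failure is returned to the caller,
// which (per the text) treats it as fatal, logs it, and exits.
func connectWithRetry(url string, attempts int, wait time.Duration) (*nats.Conn, error) {
	var lastErr error
	for i := 0; i < attempts; i++ {
		nc, err := nats.Connect(url)
		if err == nil {
			return nc, nil
		}
		lastErr = err
		time.Sleep(wait)
	}
	return nil, fmt.Errorf("could not connect to %s after %d attempts: %w", url, attempts, lastErr)
}

// Example using the defaults from the text: three attempts, ten seconds apart.
//   nc, err := connectWithRetry("nats://localhost:4222", 3, 10*time.Second)
```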
The request handler is then passed to a WatchDog instance, which supervises these subscriptions and logs any errors they produce. The WatchDog contains one method and a reference to the Sensor-Core instance. It is used primarily to connect go-micro events on the topic capsule8.sensor.commands to the request handler, which starts the subscriptions via the Sensor-Core instance and tracks them in the specified session. After the WatchDog is created, it is registered to listen to the capsule8.sensor.commands topic (244).

c. Starting the go-Micro Service

The go-micro service (the capsule8.sensor service) is then started by calling its .Run( ) method, and it executes until an error is encountered, which should not happen unless it is actively terminated. The Sensor is now started (246).

B. Environmental Variables

This section describes example environment variables and configuration file values used by embodiments of Sensor 112. By default, the Sensor looks in /etc/capsule8 for a capsule8-sensor.yaml file, an example of which is shown in FIG. 3. Values from this file are read first, and values from environment variables then override them. Configuration file values are written as object.subobject. As one example, the following YAML entry:

nats:
  url: nats://localhost:4222

is written as nats.url. Various example environment variables and default values used by a Sensor (e.g., Sensor 112) are as follows (each entry lists the variable name, its configuration file value, its type, its meaning, its default, and an example where applicable):

CAPSULE8_CONFIG (config file value: N/A; type: string). Alternate location and name of the capsule8-sensor.yaml file. Default: /etc/capsule8/capsule8-sensor.yaml. Example: CAPSULE8_CONFIG=/var/run/myconfig.yaml

CAPSULE8_LABELS (config file value: service.labels; type: string). A string of key value pairs separated by = providing metadata about the sensor host. Default: "". Example: CAPSULE8_LABELS="mtahost=true"

CAPSULE8_DEBUG (config file value: debug; type: boolean). Whether or not to enable debugging/profiling features and logging. Default: false. Example: CAPSULE8_DEBUG=true

CAPSULE8NATSURL (config file value: nats.url; type: string). The address of the Capsule8 Server's NATS instance. Default: nats://localhost:4222

CAPSULE8NATSMAX_RECONNECTS (config file value: nats.max_reconnects; type: integer). Number of times the client should attempt to reconnect after it has already been connected. Default: 10000. Example: CAPSULE8NATSMAX_RECONNECTS=3

CAPSULE8NATSRECONNECTBUFSIZEINMB (config file value: nats.reconnectbufsizeinmb; type: integer). Amount of data, in megabytes, to buffer in the event of a disconnection. Default: 10. Example: CAPSULE8NATSRECONNECTBUFSIZEINMB=1

CAPSULE8NATSRECONNECT_WAIT (config file value: nats.reconnect_wait; type: integer). Number of seconds the NATS client should wait between connection attempts. Default: 10. Example: CAPSULE8NATSRECONNECT_WAIT=3

CAPSULE8NATSCLIENTCERTFILE (config file value: nats.clientcertfile; type: string path to x509 certificate). A TLS client certificate to present to the Capsule8 Server NATS instance (must be used with CAPSULE8NATSCLIENTCERTKEY_FILE). Default: "". Example: CAPSULE8NATSCLIENTCERTFILE=/home/user/client.crt

CAPSULE8NATSCLIENTCERTKEY_FILE (config file value: nats.clientcertkey_file; type: string path to x509 certificate key). The path to the key for the certificate in CAPSULE8NATSCLIENTCERTFILE (must be used with CAPSULE8NATSCLIENTCERTFILE). Default: "". Example: CAPSULE8NATSCLIENTCERTKEY_FILE=/home/user/client.crt

CAPSULE8NATSCLIENTCACERT (config file value: nats.clientcacert; type: string path to x509 CA certificate). An additional TLS CA certificate to use to verify the client; by default the system CAs are used. Default: "". Example: CAPSULE8NATSCLIENTCACERT=/usr/local/cas/myCA.crt

CAPSULE8INITIALRECONNECT_ATTEMPTS (config file value: initialreconnectattempts; type: integer). The number of times the sensor attempts to connect to the server before giving up at startup. Default: 3. Example: CAPSULE8INITIALRECONNECT_ATTEMPTS=8

CAPSULE8MONITORPORT (config file value: monitor_port; type: integer). TCP port to serve health checks, version, varz, and profiling endpoints. Default: 9010. Example: CAPSULE8MONITORPORT=9999

CAPSULE8LISTENADDR (config file value: listen_addr; type: string). Socket address for the sensor telemetry service to listen on (can be a unix socket). Default: unix://var/run/capsule8/sensor.sock. Example: CAPSULE8LISTENADDR=localhost:8443

CAPSULE8EVENTSPER_MESSAGE (config file value: bundler.eventspermessage; type: integer). Number of telemetry events to send to the server at a time; useful for microbatching/controlling the network impact of the sensor. Default: 1. Example: CAPSULE8EVENTSPER_MESSAGE=250

CAPSULE8EVENTSFLUSH_TIMEOUT (config file value: bundler.flush_timeout; type: duration string). Maximum amount of time Telemetry Events can stay buffered in the sensor before being sent to the Capsule8 Server. Default: "100 ms". Example: CAPSULE8EVENTSFLUSH_TIMEOUT="250 ms"

CAPSULE8OPENTRACINGTRACER_TYPE (config file value: opentracing.tracer_type; type: string). A supported open tracing implementation; right now only jaeger is supported. Default: "". Example: CAPSULE8OPENTRACINGTRACER_TYPE=jaeger

CAPSULE8OPENTRACINGTRACER_LOG (config file value: opentracing.tracer_log; type: boolean). Log opentracing information to standard out. Default: false. Example: CAPSULE8OPENTRACINGTRACER_LOG=true

CAPSULE8USEANALYTICS (config file value: use_analytics; type: boolean). Activate the embedded analytics package (activates further configuration for analytics). Default: true. Example: CAPSULE8USEANALYTICS=false

CAPSULE8TRIGGERON (config file value: trigger_on; type: boolean). Enable the event trigger. Default: true. Example: CAPSULE8TRIGGERON=false

CAPSULE8TRIGGERINTERVAL (config file value: trigger_interval; type: time.Duration). Set the event trigger interval. Default: 10s. Example: CAPSULE8TRIGGERINTERVAL=1s

CAPSULE8TRIGGERSYSCALL (config file value: trigger_syscall; type: enum string). Set the event trigger syscall. Default: "setxattr". Example: CAPSULE8TRIGGERSYSCALL=setxattr
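The file-first, environment-variable-override precedence described in this section can be implemented with a configuration library such as Viper (which the analytics configuration registry described later is built on). The sketch below is an assumption about how such loading could be wired up, not a statement of how Sensor 112 actually does it, and the key-to-variable mapping shown handles only the simple cases (e.g., nats.url to CAPSULE8NATSURL), not every naming variation in the list above.

```go
package sensor

import (
	"strings"

	"github.com/spf13/viper"
)

// loadSensorConfig reads the YAML configuration file first and lets environment
// variables override individual values, matching the precedence described above.
func loadSensorConfig(path string) (*viper.Viper, error) {
	v := viper.New()
	v.SetConfigFile(path) // e.g. /etc/capsule8/capsule8-sensor.yaml

	// Environment variables take precedence over file values. With this
	// replacer, the key "nats.url" is looked up as CAPSULE8NATSURL.
	v.SetEnvPrefix("capsule8")
	v.SetEnvKeyReplacer(strings.NewReplacer(".", "", "_", ""))
	v.AutomaticEnv()

	if err := v.ReadInConfig(); err != nil {
		return nil, err
	}
	return v, nil
}

// Usage:
//   cfg, err := loadSensorConfig("/etc/capsule8/capsule8-sensor.yaml")
//   url := cfg.GetString("nats.url") // nats://localhost:4222 unless overridden
```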
C. Hard Resource Limits

As applicable, Sensor 112 can be configured to stay under CPU/RAM/other resource thresholds. A hard stop can be used (e.g., at a certain amount of CPU or RAM usage) at which point throttling/dropping of data can be performed so that performance of the node being monitored is not adversely impacted. This section describes the design, implementation, and usage of the Sensor's hard resource limiting capabilities. One example way to enforce such limitations is by using Linux cgroups under the CPU and Memory subsystems. The cgroup the Sensor uses is called capsule8-sensor. The implementation uses a supervisor process which executes and monitors the actual sensor. This accomplishes multiple desired behaviors. First, it forces all routines of the Sensor process to reside in the cgroups. Since the supervisor process must be run as the root user, this design also allows for dropping privileges of the Sensor by executing the child process as a separate user. It also allows the supervisor process to restart the child sensor process when it exits and to monitor the sensor process for performance and violations.

1. Usage

The resource configurations are read in from the sensor's configuration file, which is by default at /etc/capsule8/capsule8-sensor.yaml. The path to the configuration file can be changed with the CAPSULE8_CONFIG environment variable. The following section describes the hard resource limit configuration fields.

a. Configuration

The following are fields that can and should be set in the Sensor configuration file.
They are also bound to environment variables.use_supervisor—A Boolean value specifying whether or not to use the supervisor, and therefore the hard resource limits.Environment Variable: CAPSULE8_USE_SUPERVISORExample: true, falseDefault: falseuse_resource_limits—A Boolean value specifying whether or not to use the hard resource limiter functionality of the supervisor.Environment Variable: CAPSULE8_USE_RESOURCE_LIMITSExample: true, falseDefault: falsememory_limit—The exact amount of memory that the Sensor process is allowed to consume. This is a string ending in G (gigabyte) or M (megabyte). A special value of “0” indicates no limit.Environment Variable: CAPSULE8_MEMORY_LIMITExample: 512M, 1G, 0Default: 256Mcpu_limit—The percentage of total CPU time that the Sensor will be allowed to be scheduled for. This is a float value with no suffix. A special value of 0 indicates no limit.Environment Variable: CAPSULE8_CPU_LIMITExample: 10.0, 15, 20.5, 0Default: 10.0sensor_user—The user that the Sensor process will run as. This is a string of the user name.Environment Variable: CAPSULE8_SENSOR_USERExample: myuser, root, grantDefault: capsule8log_cgroup_metrics—A Boolean value specifying whether or not to log cgroup metrics to stderr. This is on two minute intervals.Environment Variable: CAPSULE8_LOG_CGROUP_METRICSExample: true, falseDefault: false b. Verification One way to determine that the cgroup configuration is properly working is by using the “top” utility. When running, the memory and CPU usage of the Sensor process should be shown in the form of percentages of total resources. For CPU, the Sensor should never go above the configured CPU limit multiplied by the amount of cores on the machine (the shell utility nproc will print number of cores). For memory the percentage of the machine's total memory can be calculated, which is displayed in top in KiB by default. 2. Violations and Monitoring The cgroups for memory and CPU handle violations differently. When the sensor process runs out of memory it will be killed by the kernel and restarted by the supervisor process. The CPU cgroup uses a concept of periods and quotas. The period is a configured amount of time and the quota refers to a number of microseconds per period. The Sensor uses a period of one second and the quota is based on the configured percentage. When the Sensor process has used up its quota of CPU time it will be throttled, meaning it will not be scheduled on the CPU until the end of the period. Both of these will have effects on the Sensor's coverage of telemetry events. The cgroup exposes statistics about CPU throttling which are then exposed by the supervisor process via logs to stderr. This can be turned on via the log_cgroup_metrics configuration option. 3. Restarts When the Sensor child process exits for cgroup violations, or otherwise, the supervisor process will restart it. This event is logged to stderr. D. Analytics Framework FIG.4illustrates an embodiment of an analytics framework. The framework can be used by a Sensor (e.g., Sensor112) when analytics support has been configured (e.g., at222or236inFIG.2) and can also be used by other components in other embodiments (e.g., a standalone Analytics module working in conjunction with a telemetry source). CommsClient402contains logic for retrieving telemetry subscriptions and for publishing Alerts and Events of interest. It interfaces with an API server or other mechanism configured to generate Events. 
It gathers external data that a Factory can translate into/produce Events for consumption by other components of Framework400(and/or components of Platform100, as applicable). One way of implementing CommsClient402is as a callback paradigm, to allow Gateway404to register callbacks for when Events are received on different topics. CommsClient402is also responsible for discovering the API server, which it does via configuration406. Config406is a configuration package that provides a configuration registry built (in an example embodiment) on top of the Go configuration tool, Viper. Strategies or other components register their configuration within a section of a configuration instance from the configuration package. Gateway404is used to initiate each of the components of the framework and their respective configurations. It is at this initiation time that each component registers its default configuration values with the Config instance. After initialization is complete, Gateway404collects all of the Event types needed by each of the Strategies and components and creates a subscription with Platform100via CommsClient402. It is then used to route Events from the Comms instance to each of the Strategies and to relay any subsequent Alerts or Events of interest to the Arbiter. It then passes any Alerts from Arbiter408to the CommsClient instance to publish them on the given alerting topic. Gateway404also provides services for Strategies410to use so as to consolidate logic that would be used across multiple Strategies, such as Process Tree414, which consumes Events and maintains a map of process relationships using the Events. Event Utilities412is a set of utilities used by Gateway404that consume and enrich data that will be used by Strategies410, producing higher level Events and/or updates to durable storage. Event Utilities can take actions such as generating additional Events, Generating MetaEvents, and augmenting Process Tree414. Each Utility included in the set of Event Utilities412provides a single-source for state tracking that would otherwise need to be repeated across multiple Strategies, significantly reducing overhead. An example prototype for HandleEvent for each Utility is: HandleEvent(event *ProtectEvent, utilities *UtilityInterface) ([ ]ProtectEvent.Event, [ ]metaevent.Event, error). Examples of different Utilities included in Event Utilities412are as follows:EventCombinator: Combines call (enter) Events and return (exit) Events from telemetry for a variety of syscall and network events, in order to match caller arguments with the returned value, and to know that the call completed and how long it took. Every received XXXEnter Event is stored in ProcessTree414using its ProcessUUID, ThreadID, and XXXEventTag. When an XXXExit Event is received, the corresponding Enter event is retrieved using ProcessUUID, ThreadID and XXXEventTag, and then XXX Event (combined of XXXEnter and XXXExit) is returned. If two consecutive Enter events occur, the first is ignored. If an Exit occurs before an Enter the Event is also ignored. Examples of Events combined by EventCombinator include: DUP, DUP2, DUP3, Mprotect, Mmap, Brk, Connect, Accept, Bind, Sendto, and Recvfrom. 
An example state diagram for the EventCombinator Utility is shown inFIG.5.CurrentWorkingDirectory: Collects directory related events and manages the current-working-directory tags for processes in Process Tree414.Interactivity: Monitors for behaviors indicating that a process is TTY-aware or otherwise interactive, and applies a tag appropriately.Shell: Monitors executed programs to determine if they are shells, and tags them appropriately.Network Bound I/O: Uses tags to track the socket descriptors and use of descriptor mapping functions to determine if a process has its Standard Input/Output descriptors mapped to sockets. In particular, the Network Bound I/O Utility consumes FORK, DUP, DUP2, DUP3, Accept, Connect, and Close Events. It correlates I/O file descriptors to network file descriptors, and generates Compound Events and MetaEvents. An example state diagram for the Network Bound I/O Utility is shown inFIG.6.Network Event: Tracks network connections by consuming network related Event data and emitting higher-level network Events. The following is an example list of Events consumed (e.g., scalar sys call hooks), and can vary based on the amount of Sensor information available: sys_connect, sys_connect return, sys_accept, sys_accept return, sys_bind, sys_bind return, sys_listen, sys_listen return, and sys_close. Example logic for handling network information is as follows. From the sys_connect, the socket descriptor (and in the future sockaddr struct) is recorded. On return of sys_connect, one of the following would be generated: (1) if the connect was successful, emit a NETWORK_EVENT_CONNECT describing all information recorded; (2) if the connect was not successful, emit a NETWORK_EVENT_CONNECT_ATTEMPT describing information recorded and reason for failure. Both of these can also be a MetaEvent. From the sys_bind, the socket descriptor (and, as applicable, sockaddr struct) is recorded. On return of sys_bind: (1) if successful, the information would be stored for future tracking across sys_listen and sys_accept; (2) if not successful, emit a MetaEvent indicating a failed attempt to bind. From the sys_listen: on sys_listen success, emit a NETWORK_EVENT_LISTEN event, and matching MetaEvent. From the sys_accept record the socket descriptor (and, as applicable, record client sockaddr): on sys_accept emit a NETWORK_EVENT_ACCEPT describing the socket descriptor. From sys_close, retrieve the argument: if the argument is a socket descriptor emit a NETWORK_EVENT_CLOSE.Network Service: Observes calls to listen on a port, and tags the process as a network service.Privileges/UID: Tracks the user ID/group ID and related IDs for a process, along with Events to change those, and thus tracks if a process has gained privileges legitimately.Process Tree: A mechanism for keeping track of processes and their tags. Process Tree414is a special case Event Utility. It is the first component to consume Events, so that it can be aware of processes prior to any other component and be prepared for queries on those processes. Process Tree414is exposed to the other Event Utilities412and Strategies410through interface(s) that allow the other Event Utilities and Strategies so that they can query the tree and add information (e.g., tags) to the processes in the tree. Tags are used to associate information to processes. There are two types of tags: “Tags,” and “Private Tags.” Tags are used by almost all components to associate and query information about processes. 
Tags can be associated to a process in three ways: (1) Process only: this associates data to a process which is not inheritable by descendants (e.g., “has touched filesystem”); (2) Inheritable: an attribute which begins with this process and is inherited by its descendants during process creation (e.g., “Network Service” or “Interactive Shell”); and (3) Inherited: attributes which were inheritable at some point in the process's lineage. Private Tags are only exposed to Utilities themselves, so that they can store additional (potentially incomplete) state information without exposing it to other components. Private Tags are all process-only (no automatic propagation is performed by the Process Tree). Process Tree414includes a Gateway Interface416and a Strategy Interface418. Gateway Interface416is used by Gateway404and other components, such as Event Utilities412to perform special operations which are not exposed to Strategies. These operations include: private tagging, process lineage, and the ability to manipulate core Process Tree components. Strategy Interface418allows Strategies to query process tags and associate new tags.Stack Bounds: Tracks the recorded start/stop of the stack, updating it as the kernel may grow the stack, to determine acceptable bounds for the stack pointer (to detect exploitation). Strategies410represent the functional logic used to convert a stream of Events into a set of Alerts. Strategies follow a strategy pattern, and implement an interface that abstracts how a given Strategy handles Events via a HandleEvents method which can return a slice of Alerts, a slice of MetaEvents, or an error. Gateway404will call the HandleEvents method when it has received an Event from CommsClient402. Additionally, the interface defines the Events needed by the Strategy, and its configuration options. A Strategy registers its configuration options with Config406's registry when it is created. By default, all Strategies have at least one configuration option, which indicates whether the Strategy is enabled or not. A SetConfig method of the Strategy interface is called once at startup and then subsequently when a configuration message is received from Gateway404. Arbiter408provides logic for Alert filtering and is ultimately what determines whether an Alert should be emitted. It is a rule system and uses an Alert filter to discard Alerts generated by Strategies. An instance of Arbiter408is created when the analytics framework starts and a reference to it is held by Gateway404. During this startup phase, Arbiter408gets its configuration from the Config406. Arbiter408uses its own filter language which is configured via the Arbiter's configuration filters value. It expects one filter per string entry in the filters configuration value. Additional detail regarding various components of the analytics framework are provided in various sections below. III. Process Tree and Tags A. Process Tree As mentioned above, Process Tree414is both a core data structure and a utility in Event Utilities412. The Process Tree is used by other utilities in Event Utilities412, and by Strategies410to assign and retrieve process and program information, and to track program state via Tags. All Event Utilities and Strategies implement a HandleEvent callback function, and the Process Tree is special in that it is the first of any to have its HandleEvent callback called for all telemetry and Events. 
This is to ensure that it has pre-populated process and program structures before any other Event Utilities or Strategies attempt to query the Process Tree or set/retrieve Tags. Basic Process Tree structures and their relationships are depicted at a high level inFIG.7. The Process Tree tracks host information, container information, and process information in ProcessInfo structures. The Process element of the tree (702) is used to resolve a pointer to a corresponding ProcessInfo structure (704) from the process' unique ID/UUID and is stored in a timeout-driven hash map. Example ways of implementing a timeout-driven hash map include using a data structure such as a Van Emde Boas tree or radix tree in conjunction with timestamp management (referred to herein as a “timetree”). Example pseudo code for implementing a timeout-driven hash map is shown inFIG.8. The ProcessInfo structure tracks information about the process, which is treated separately from the program currently running in that process. The ProcessInfo entries in the hash map are keyed by each respective process' unique identifier string (e.g., process UUID). Event Utilities412and Strategies410access information about processes in the tree by specifying the respective process UUID when calling Process Tree functions. The timeout-driven hash map is used to expire entries if they have not been accessed within a given time window. Additional detail about the design and operation of these timeout-driven hash maps is provided below. Each ProcessInfo structure tracks information relevant to the process, such as PID, time of creation, and (among other things) a structure member Program (706) which is a pointer to a ProgramInfo structure (708) representing the currently running program. The ProgramInfo structure contains information about programs currently running in processes. This structure is where Tags (710) are stored, which are used for tracking the state of the program and the process in whole. The ProgramInfo structure is separate from the ProcessInfo structure because a process is not bound to host one program during its existence—calling the exec system call invokes a new program into the calling process (and processes may exec an arbitrary amount of times). B. Tags Event Utilities412and Strategies410can store and access state information related to processes and programs using Tags. Tags are accessed via a data structure called the TagMap, a pointer to which is stored in the ProgramInfo structure member named Tags (710). External operation and access to the TagMap is similar to a hash map, in that each is Tag in the TagMap keyed by a unique string, which is used to store and retrieve the tag. Tags are designed to support state transitions and propagations between processes and programs. For a structure to be a Tag it must conform to the Tag Interface, which defines that Tags must implement a Fork( ) and Exec( ) callback, each of which also returns a Tag (or nil), and are called by the Process Tree during Fork and Exec process events respectively. These events may trigger state transitions or propagations: a program Tag may want to create and a new or different Tag to a subprogram upon Exec, or create a copy of itself, or any number of possibilities. 
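Since the Tag interface and its propagation callbacks are central to how state moves through the Process Tree, the following is a minimal Go sketch of that contract as described above. The interface shape (Fork and Exec callbacks returning a Tag or nil, with an optional Exit callback) is taken from the text; the FileDescriptorTag type and its fields are illustrative assumptions that anticipate the POSIX file-descriptor example discussed immediately below.

```go
package sensor

// Tag is anything that can decide how (or whether) it propagates when the
// process it is attached to forks a child or execs a new program.
type Tag interface {
	Fork() Tag // Tag to attach to the newly forked process, or nil
	Exec() Tag // Tag to attach to the newly executed program, or nil
}

// ExitTag is the optional extension for Tags that also care about process exit,
// e.g. Tags that share state across multiple processes.
type ExitTag interface {
	Tag
	Exit()
}

// FileDescriptorTag anticipates the POSIX file-descriptor example discussed
// next: descriptors are inherited across fork and preserved across exec.
type FileDescriptorTag struct {
	OpenFDs map[int]string // fd -> description (illustrative)
}

// Fork returns a copy of the tag, since the child starts with its own snapshot
// of the parent's descriptor table.
func (t *FileDescriptorTag) Fork() Tag {
	cp := &FileDescriptorTag{OpenFDs: make(map[int]string, len(t.OpenFDs))}
	for fd, desc := range t.OpenFDs {
		cp.OpenFDs[fd] = desc
	}
	return cp
}

// Exec returns the same pointer, since descriptors survive execve unchanged
// (close-on-exec flags are ignored for the purposes of this sketch).
func (t *FileDescriptorTag) Exec() Tag { return t }
```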
For example, since the default POSIX behavior is to propagate file descriptors on to subprocesses and subprograms, Tags can be used to track the state of file descriptors: on Fork( ), the file descriptor tag returns a copy of itself, and on Exec( ) it returns itself (the same pointer already referring to the tag). For another example, consider that a program tagged with an Interactive Shell Tag can label subprograms as Commands by returning a new Command Tag upon Exec( ). Tags can optionally also implement a callback for Exit( ), which is useful if multiple Tags share state across multiple processes. An abstraction of the internal structure of the TagMap and its relation to Tags is depicted inFIG.9. InFIG.9, TagMap902is shown as a data structure, while Tag904is shown as a code component. This is because a Tag is anything that matches the Interface requirement of implementing the Fork & Exec callback, and can have any other arbitrary structure members. TagMap902is an abstraction over a set of arrays and maps used for organizing and retrieving pointers to Tag structures. Most of this organization internally is for optimization, to only call handlers on Tags that exhibit non-default/specialized behavior on Fork and Exec. Tags are retrieved by name from the TagMap using a Lookup( ) function, which retrieves the tag from its own internal map (labeled inFIG.9as AllTags (906)). The TagMap itself also has handler functions for Exec( ) and Fork( ), which result in the TagMap returning a new TagMap—one which contains the resulting Tags returned from the calls to Exec( ) or Fork( ) on each of the Tags in the TagMap. This propagation does not necessarily call the Exec/Fork handler on each Tag. For optimization, some Tags can be declared as always returning NIL for Exec( ), Fork( ), or both, and by declaring them in this way, the TagMap knows to skip these callbacks for these Tags. One way to achieve this is by using Go's Interface typesystem which will enforce that Tags have callbacks for Exec( ) and Fork( ) implemented, but in a way that allows the TagMap to identify the implementation as being one which returns NIL and stores those Tags differently. The Tag structure is an interface Type which implements a callback for Fork( ) and Exec( ) callbacks, both of which themselves return a Tag interface (or NIL). Logic here determines if/how a Tag propagates between new processes (forks) or new programs (execs). If a Tag implements an Exit( ) callback, that callback is called by the Process Tree on Exit process events. C. Event Driven and Context-Aware Tag Propagation Examples 1. Interactive Shell Detecting that a shell program (e.g., /bin/bash) is being used interactively, as opposed to being used to execute a subprogram or script, is a capability that Platform100can provide. Additional detail on approaches to detecting such a shell is provided in more detail in Section IV below. By detecting interactive shells, users are able to write rules around the conditions in which an interactive shell is permitted by their security policies. To make the user experience less cumbersome, built in logic permits subsequent interactive shells to be executed as long as they descend from an alive instance of a permitted interactive shell. In various embodiments, this is tracked using a tag with specific propagation logic. In the following scenario, suppose that policy declares that/usr/sbin/sshd can execute/bin/bash. 
As shown inFIG.10A, when a user logs in, sshd (1002) forks off a child process (1004), which executes/bin/bash and becomes the user's shell session. If the user now chooses to run a different shell, e.g., via su, sudo, or otherwise, the authorized shell tag will determine if it should propagate based on the life status of the original authorized shell. This works by the shell tag being informed of certain process events. Whenever the Process Tree sees a process event, which is one of a Fork, Exec, or Exit, it determines if there are any tags associated with the process responsible for that event, and then determines if the tag expects to have a callback called for that event (tags might care about one, some, all, or no events). FIG.10Billustrates an example of tag propagation. In the case of the authorized shell tag, the following is an example of logic that can be used: On Fork & Exec Events:If the Tag is the original authorized shell Tag (1006), it returns a new authorized shell heir Tag (1008).If the Tag is an authorized shell heir Tag (1008), it will check its pointer back to the authorized shell Tag (1006), to determine if the original authorized shell process is still alive, and if so, it returns itself (literally its own pointer) for the Process Tree to propagate to the newly forked process (1010). If the original authorized shell is dead, it returns NIL. On Exit Event:The original authorized shell Tag updates its state to reflect that the original authorized shell is now dead, so that heir Tags can see this state should they receive fork/exec events. The logic is that the permission to execute subsequent shells is transitive so long as the original source of the permission exists. This behavior permits scenarios like a user transitioning to another subshell during their session, without making themselves accidentally vulnerable to allowing programs they started during their session to spawn interactive shells after they log out. As an example, suppose the user logged in to launch Apache, then logged out. The user would not want Apache to then be allowed to spawn interactive shells. In various embodiments, re-parenting is also provided for, and the same event-driven context-awareness remains present. 2. Original User Tag Process Tree414tracks UID (GID, etc.) using tags, which it updates based on specific events, in order to understand the state of the process and how/why its UID (or GID, etc.) might have changed. Such information can be particularly interesting when processes become root. By tracking the events related to legitimate privilege transitions, privilege-escalation by way of exploitation can be detected. For example, if a tag indicates that a process was UID1000, and was not currently engaged in any calls to setuid to change its UID, a Strategy (e.g., Strategy420) can alert that there has been an illegitimate transition to root. Additionally, the UID (GID, etc.) tags allow Process Tree414(and thus Strategies410) to track who was originally the user involved. If someone logs in as a UID associated with Mallory, sudo's to root, then su's to Alice and performs some other nefarious action, the fact that it was actually Mallory performing all of the actions (and not Alice) can be surfaced. 3. Alert Grouping Tags facilitate grouping of Alerts once there has been an Alert where the scope-of-damage is process-wide. 
Once there has been a security event where the whole process is deemed to be malicious (e.g., a blacklisted program, exploited service, or non-permitted shell), then any other Alert by that process or its children will inherit the same Alert grouping. This is desirable because these subprocesses and their events are necessarily associated with the initial malice. Even if there is an alert which is not necessarily process-wide (e.g., a connection being made to a non-whitelisted IP, which could be misconfiguration etc.), it can be grouped with others so long as a process-wide alert has previously been established. This allows analytics framework400to not only group Alerts, but also group Events (which themselves didn't necessarily trigger Alerts) with the Alert. D. Example Process FIG.11illustrates an embodiment of a process for generating an alert. In various embodiments, process1100is performed by analytics framework400. Process1100begins at1102when information about a process is received. As one example, telemetry information indicating that a process has forked is received from a Sensor at1002. At1104, at least a portion of the received information is used to modify a Process Tree. Examples of modifying the Process Tree include: adding nodes, adding tags, etc. At1106, a determination is made that a Strategy has been matched, and an Alert is generated. As one example of processing that can be performed at1106, a determination can be made that a user Mallory escalated privilege to a user Alice (e.g., matching a Privilege Escalation Strategy) and an Alert can be generated in response. Alerts can be sent to standard out, to a file that does log rotation, to a server, to S3, etc., based on configuration. Similarly, if configured, MTLS can be used, and certificates set up in conjunction with sensor installation/deployment. As mentioned above, use of a separate server (e.g., for analytics) is optional. Analytics can be performed locally to the sensor and/or remotely based on deployment/configuration. A variety of additional actions can be taken in response to Alert generation. As one example, a bot can be used to interrogate interactive shell users when Alerts are raised. Suppose that a determination is made (i.e., a Strategy is matched) that an interactive shell does not match a whitelist entry. An Alert is generated (1202), and the implicated user is queried (1204), e.g., by a Slackbot (as depicted inFIG.12) or other appropriate mechanism, to determine whether the user performed the action (or, e.g., the action was performed without the user's knowledge due to compromise of the user's account). If the user responds with “yes,” the user can be challenged to respond to a multi-factor authentication challenge (e.g., on their phone) at1206. If the user does not respond to the Slackbot, the user indicates that the activity was not performed by them, and/or the user fails the multi-factor authentication, challenge, a deadman switch approach can be used, e.g., terminating specific processes, all of the user's processes, shutting down any associated containers, terminating network access to the node the user is logged into, notifying an administrator, etc., based on severity/scope of the problem. IV. Interactive Shell Event Detection A. Introduction This section provides a detailed discussion of analytics framework400's ability to detect and tag interactive shells and commands. 
This capability is the basis for the Interactive Shell policy type, and for the Shell Command MetaEvents used by other mechanisms, such as the flight recorder. The following discussion is framed from the perspective of Gateway404, which is the component of framework400responsible for instantiating all Strategies410and Event Utilities412, including Process Tree414. Upon receiving telemetry events (or abstractions of telemetry events emitted by various Event Utilities412), Gateway404calls the HandleEvent callback function of the specific Event Utilities and Strategies which subscribe to that Event type. Process Tree414is unique in that it has its HandleEvent function called on all Events, to ensure it is informed of the processes and programs, and is prepared for queries on those processes and programs by other Event Utilities and Strategies. This section makes use of two types of diagrams. The first type are code component diagrams, which show abstractions of which function is called at which point in the process. The second type are visualizations of data structures affected by/used by the code to complete the operation. B. Detection of Interactivity 1. Pass Event to the Process Tree This walkthrough starts from the point of the Gateway having received telemetry for an Exec( ) Event, which denotes a program being invoked by a process via the execve system call.FIG.13is a code component diagram that depicts the Gateway's first action, which is to call into the ProcessTree's HandleEvent function. As shown at1302, a program exec telemetric record enters Event Utilities412and is passed to Process Tree414which creates a new ProcessInfo structure if necessary, and creates a new ProgramInfo for the respective process to represent the current state of the program executing. For this example, the program is “/bin/bash” and the path and arguments members of the newly created ProgramInfo structure reflect this information. As illustrated at1304, only the HandleEvent function is being called in this operation. Other Process Tree component functions depicted inFIG.13are included for context, and are called in later operations by other components. The HandleEvent function in Process Tree414begins by looking up the process' unique ID (e.g., “ABCD”), and if no ProcessInfo structure is present, a new one is created and populated in the hash map. After ensuring that a ProcessInfo object exists, Process Tree414then calls its own internal HandleExec( ) method, which then calls the ProcessInfo object's Exec( ) function, as illustrated inFIG.14(1402). Next, the ProcessInfo object accesses its *Program member (a ProgramInfo object instance), and calls its Exec( ) function. The ProgramInfo object's Exec( ) function returns a pointer to a new ProgramInfo object instance populated with information from the Exec( ) event. The ProgramInfo object then replaces its existing ProgramInfo pointer with the pointer to the new instance. FIG.15illustrates data structures involved in the ProcessTree from the call to HandleEvent down to the ProgramInfo structure. Note that the Tags section of the ProgramInfo structure reflects that there are currently no Tags (1502). This portion of the ProgramInfo structure will update as Tags are applied in later operations. 2. Pass Event to Other Event Utilities Once the Process Tree HandleEvent function returns back to Gateway404, it moves on to other components, as depicted by the component diagrams ofFIGS.16and17. 
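Before turning to those other components, the ProcessTree-side cascade just walked through (HandleEvent creating a ProcessInfo if needed, then HandleExec, then ProcessInfo.Exec, then ProgramInfo.Exec replacing the running program) can be summarized in a rough Go sketch. Field names, parameter lists, and the plain maps standing in for the timeout-driven hash map and the TagMap are illustrative assumptions, not the platform's actual definitions.

```go
package sensor

// ProgramInfo describes the program currently running in a process; its Tags
// map stands in for the TagMap described earlier.
type ProgramInfo struct {
	Path string
	Args []string
	Tags map[string]any
}

// Exec returns a new ProgramInfo for the program replacing the current one.
func (p *ProgramInfo) Exec(path string, args []string) *ProgramInfo {
	return &ProgramInfo{Path: path, Args: args, Tags: map[string]any{}}
}

// ProcessInfo tracks the process itself, separately from its current program.
type ProcessInfo struct {
	PID     int
	Program *ProgramInfo
}

// Exec swaps the Program pointer to point at the newly executed program.
func (p *ProcessInfo) Exec(path string, args []string) {
	p.Program = p.Program.Exec(path, args)
}

// ProcessTree resolves process UUIDs to ProcessInfo structures; a plain map is
// used here in place of the timeout-driven hash map.
type ProcessTree struct {
	procs map[string]*ProcessInfo
}

// handleExec mirrors the cascade above: ensure a ProcessInfo exists for the
// UUID, then let it (and its ProgramInfo) handle the Exec event.
func (t *ProcessTree) handleExec(uuid string, pid int, path string, args []string) {
	pi, ok := t.procs[uuid]
	if !ok {
		pi = &ProcessInfo{PID: pid, Program: &ProgramInfo{Tags: map[string]any{}}}
		t.procs[uuid] = pi
	}
	pi.Exec(path, args) // e.g. UUID "ABCD" is now running /bin/bash
}
```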
As shown in FIG. 16, the ProgramExec telemetry is passed to other Event Utilities which subscribe to that event type. The Shell Utility subscribes to the Exec Process Event, so its HandleEvent callback is called (1602). The Shell Utility has logic for identifying when telemetry indicates a program is a shell. For simplicity, the logic in this example matches on the program path of “/bin/bash” as being a known shell program. Upon detecting a shell, it calls the Process Tree function SetProcessTag (1702) to set a ShellTag on the process. FIG. 18 is a data structure diagram that visualizes the effect the SetProcessTag( ) call has on the structures in the ProcessTree. Calling SetProcessTag (e.g., via: SetProcessTag(“ABCD”, ShellTag)) sets the Shell Tag (1802) on the program actively running in the process specified by the UUID (which in this example is the UUID “ABCD”).

3. Context-Aware, Event-Driven Tag State Change

FIG. 19 is a code component diagram. Suppose that at some point later (though nearly instantly as perceived by a human), a specific telemetric record from a finely tuned kprobe/tracepoint (described in more detail below) is received (1902). This telemetry indicates that the calling program is attempting to query information about its TTY (or determine if any is present). The subscription for this type of telemetry is made by the Shell Utility (1904), which is the only Event Utility to request and consume this telemetry in various embodiments (separate from the Process Tree itself, which consumes all telemetry). In an alternate embodiment, the timing of keystrokes (e.g., the submission of commands) is evaluated. Commands submitted automatically in response to a script will be received significantly faster than commands submitted by a human user. If keystroke timing indicates a human is entering commands, this is indicative of an interactive shell. FIG. 20 is a code component diagram. The Shell Utility now calls the Process Tree function GetProcessTag to determine if the process has the Shell Tag (2002). If the Shell Tag is found, the Shell Utility then calls the Shell Tag-specific function IsShell( ), which is used to determine if the currently running program is a shell (as contrasted with being the descendant of some other shell program). If the program is indeed a shell, the Shell Utility calls the Shell Tag-specific function SetInteractive( ), which modifies the state of the Shell Tag to reflect that it is interactive. The change in this Tag state is visually represented in the data structure diagram depicted in FIG. 21 (2102).

4. Tag Propagation & Shell Tag

At the beginning of this section, a cascading sequence of calls was shown taking place when handling an Exec event. The sequence was (1) Gateway 404 calls Process Tree 414's HandleEvent( ); (2) HandleEvent( ) calls HandleExec( ); (3) HandleExec calls the Exec( ) function of the corresponding ProcessInfo object; and (4) the ProcessInfo Exec( ) calls Exec( ) on the corresponding ProgramInfo object. This cascade goes further: the TagMap in the ProgramInfo also has an Exec( ) and Fork( ) callback, which is called on each Event respectively, and which returns a new TagMap containing Tags which exhibit Exec( ) or Fork( ) propagation behavior. This propagation could be to propagate the same Tag, a copy of the Tag, a completely different Tag, or nothing at all (NIL). As illustrated in FIG. 22, ShellTag.Fork( ) returns a copy of itself, reflecting the same state.
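As a small illustration of that Fork behavior (and as a lead-in to the Exec behavior described next), a ShellTag might be sketched as follows. The struct's field and the Tag interface repeated here are illustrative, consistent with the earlier Tag sketch rather than the platform's actual definitions.

```go
package sensor

// Tag is repeated here from the earlier sketch for self-containment.
type Tag interface {
	Fork() Tag
	Exec() Tag
}

// ShellTag marks a program as a shell; Interactive is set once TTY-related
// telemetry (or keystroke timing) indicates interactive use.
type ShellTag struct {
	Interactive bool
}

// Fork returns a copy of itself, reflecting the same state, so the child
// process carries the same shell/interactivity information.
func (t *ShellTag) Fork() Tag {
	cp := *t
	return &cp
}

// Exec is discussed in the text that follows; it returns a different Tag
// depending on whether the shell is interactive.
func (t *ShellTag) Exec() Tag { return nil } // placeholder; see the description below
```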
ShellTag.Exec( ) has more logic-based behavior: if the Shell Tag reflects the state of being interactive, it returns a ShellCommandTag, indicating that this newly executed program is the result of a command being issued at the shell. If the Shell Tag is not interactive, it returns a Shell Automation Tag indicating that the newly executed program is involved in some form of non-human-driven automation, such as a shell script. FIG.23illustrates how an existing ProgramInfo Exec( ) callback returns a new ProgramInfo structure on Exec( ) Events, containing Exec-propagated Tags (and as stated previously in this section, on this event the ProcessInfo structure member “Program” is updated to point to the new ProgramInfo). In this example, “ls” is the program being executed, and because it descends from the interactive shell, a new Tag is propagated to indicate it is a Shell Command (2302). FIG.24illustrates an embodiment of a process for detecting an interactive shell. In various embodiments, process2400is performed by analytics framework400. Process2400begins at2402when telemetry associated with an Exec( ) Event denotes a program being invoked via a process. At2404, a determination is made that the program is a shell. As explained above, one approach to determining whether the program is a shell is to use a Shell Utility (e.g., matching the program path of “/bin/bash” against a list of known shell programs). At2406, additional information associated with the process is received, such as by a particular record being received from a kprobe or tracepoint, or timing information associated with commands. This additional information is used (e.g., at2408) to determine that the shell is interactive. A variety of actions can be taken in response to determining that an interactive shell is executing. As one example, commands entered into the interactive shell can be tagged (e.g., as described above) as interactive shell commands. Such commands can then be surfaced (e.g., via Alerts), made queryable, etc., so that a real-time view of which commands are being executed in interactive shells can be made available. V. Events Events are platform100's internal data representation of data supplied to it from a source. This representation is aimed at simplifying analysis and to facilitate Strategy writing. A. Creating New Types of Events One example way to create new Events is as follows:1. Create a new .go file in the pkg/protectevents directory.2. Add a new Event type in the consts of pkg/protectevents/event_types.go, making sure that BaseEvent (e.g., *BaseEvent) is embedded in the new type.3. Write unit tests for any new methods provided by the new type (e.g., syscall name to number translation).4. Update the factories in pkg/protectevents/factory to create Events if necessary. B. Specific Event Types Events contain a BaseEvent defined in pkg/protectevent/events.go which defines most of the protectevent.Event interface. 
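As a rough illustration of the embedding pattern described in the steps above, a hypothetical new Event type might look like the following. The FileRenameEvent type, its fields, the constant value, and the trimmed-down BaseEvent stand-in are all illustrative assumptions; only the pattern of embedding *BaseEvent and registering a new type constant is taken from the text.

```go
package protectevents

// EventType stands in for the event-type enumeration in
// pkg/protectevents/event_types.go.
type EventType int

const (
	// A hypothetical new constant, added alongside the existing ones.
	FILE_EVENT_TYPE_RENAME EventType = iota + 1000
)

// BaseEvent is a trimmed-down stand-in for the struct defined in
// pkg/protectevent/events.go; the full field list follows below.
type BaseEvent struct {
	Id          string
	ProcessUUID string
	ProcessPID  int
	ContainerID string
	SensorID    string
}

// FileRenameEvent is a hypothetical new Event type: it embeds *BaseEvent so it
// inherits the common fields (and most of the protectevent.Event interface)
// and adds only its own fields.
type FileRenameEvent struct {
	*BaseEvent

	OldPath string
	NewPath string
}
```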
The BaseEvent contains common fields such as the following (each entry lists the field, its type, whether it is always filled in, and its meaning):

Id (string; always filled in: Y). Unique identifier for this specific Event.
ProcessUUID (string; always filled in: N). UUID for the process this event occurred in.
ProcessPID (int; always filled in: Y). The PID/TGID of the process this event occurred in.
ThreadID (int32; always filled in: Y). The thread ID of the task.
Uid (uint32; always filled in: N). The user ID of the task/thread.
Gid (uint32; always filled in: N). The group ID of the task/thread.
Euid (uint32; always filled in: N). The effective user ID of the task/thread.
Egid (uint32; always filled in: N). The effective group ID of the task/thread.
Suid (uint32; always filled in: N). The saved user ID of the task/thread.
Sgid (uint32; always filled in: N). The saved group ID of the task/thread.
FsUid (uint32; always filled in: N). The file system user ID of the task/thread.
FsGid (uint32; always filled in: N). The file system group ID of the task/thread.
ContainerID (string; always filled in: Y). The UUID of the container where this Event occurred (often a hex-encoded SHA256).
ContainerName (string; always filled in: N). The string name of the container where this Event occurred.
SensorID (string; always filled in: Y). The UUID of the Sensor where this event occurred (meaning the container was running on the host monitored by the sensor with the given ID).
ImageID (string; always filled in: N). The UUID of the Image used to build the container where this event occurred (often a hex-encoded SHA256).
ImageName (string; always filled in: N). The string name of the Image used to build the container where the Event occurred.
SequenceNum (uint64; always filled in: Y). The sequence number for events emitted as part of a subscription.
MonotimeNanos (always filled in: Y). The monotime from the Sensor's host's clock at the time of the Event.

Specific events assume these fields as well, where appropriate. The following are various example Events, as well as corresponding example JSON.

1. Container Created

This Event represents when a Container is created but not yet started, on a host monitored by platform 100.

Event Type constant: protectevent.CONTAINER_EVENT_TYPE_CREATED

Additional Fields:
DockerConfigJSON (string; always present: N). The Docker Config JSON from the Docker Socket; contains additional information.
OCIConfigJSON (string; always present: N). The Container Config JSON from the OCI compliant container engine.

The JSON Factory checks the type field for a string value of "cont-create" for container creation events. They include all of the BaseEvent fields and can optionally include the DockerConfigJSON or OCIConfigJSON fields as strings:

{
  "type": "cont-create",
  "container_id": "4cb5b14f2f6b8e02a3e57188e230d140f2a8880d236a5f21face723678a2c50a",
  "container_name": "test-container",
  "image_id": "7328f6f8b41890597575cbaadc884e7386ae0acc53b747401ebce5cf0d624560",
  "image_name": "alpine:3.6",
  "monotime_nanos": 58800000000,
  "sensor_id": "0d76a2a9ede1bc3df805d26e90501af54b11eabe180e963c56d27f065d9243f4"
}

2. Container Started

This Event represents when a previously created Container is started on a host monitored by Capsule8's platform.

Event Type constant: protectevent.CONTAINER_EVENT_TYPE_RUNNING

Additional Fields:
InitHostPid (int). The PID of the init process in the Container, in the host namespace.

The JSON Factory checks the type field for a string value of "cont-start" for container start events. They include all of the BaseEvent fields as well as the init_host_pid field:

{
  "type": "cont-start",
  "container_id": "4cb5b14f2f6b8e02a3e57188e230d140f2a8880d236a5f21face723678a2c50a",
  "container_name": "test-container",
  "image_id": "7328f6f8b41890597575cbaadc884e7386ae0acc53b747401ebce5cf0d624560",
  "image_name": "alpine:3.6",
  "monotime_nanos": 58800000000,
  "sensor_id": "0d76a2a9ede1bc3df805d26e90501af54b11eabe180e963c56d27f065d9243f4",
  "init_host_pid": 2222
}

3.
Container Exited This Event represents when a previously started Container's Init PID has exited but the container's resources have not yet been reclaimed. Event Type constant: protectevent.CONTAINER_EVENT_TYPE_EXITED Additional Fields Always PresentFieldsTypeMeaning(Y/N)ExitCodeintThe ExitCode thatYthe Init ProcessExited with The JSON Factory checks the type field for a string value of “cont-create” for container exit events. They include all of the BaseEvent fields and include the field exit_code which indicates the integer exit code of the init process in the container: {“type”: “cont-exit”,“container_id”: “4cb5b14f2f6b8e02a3e57188e230d140f2a8880d236a5f21face723678a2c50a”,“container_name”: “test-container”,“image_id”: “7328f6f8b41890597575cbaadc884e7386ae0acc53b747401ebce5cf0d624560”,“image_name”: “alpine:3.6”,“monotime_nanos”: 58800000000,“sensor_id”: “0d76a2a9ede1bc3df805d26e90501af54b11eabe180e963c56d27f065d9243f4”,“exit_code”: 0} 4. Container Destroyed This Event represents when a previously exited Container's resources have been reclaimed and thus no longer exist. Event Type constant: protectevent.CONTAINER_EVENT_TYPE_DESTROYED Additional Fields Always PresentFieldsTypeMeaning(Y/N)None The JSON Factory checks the type field for a string value of “cont-destroy” for container reaped events. They include only the BaseEvent fields: {“type”: “cont-destroy”,“container_id”: “4cb5b14f2f6b8e02a3e57188e230d140f2a8880d236a5f21face723678a2c50a”,“container_name”: “test-container”,“image_id”: “7328f6f8b41890597575cbaadc884e7386ae0acc53b747401ebce5cf0d624560”,“image_name”: “alpine:3.6”,“monotime_nanos”: 58800000000,“sensor_id”: “0d76a2a9ede1bc3df805d26e90501af54b11eabe180e963c56d27f065d9243f4” } 5. Process Fork This Event represents when a process in a monitored Container forks a new process. Event Type constant: protectevent.PROCESS_EVENT_TYPE_FORK Additional Fields Always PresentFieldsTypeMeaning(Y/N)ProcessUUIDstringProcessUUID inYbase event is filledinProcessPIDintthe PID of theYprocess that calledthe forkChildPIDintThe PID of theYChild process fromthe forkUpdateCWDstringThe CWD at theYtime of the fork The JSON Factory checks the type field for a string value of proc-fork. It contains the BaseEvent fields and all of the fields in the table above: {“type”: “proc-fork”,“container_id”: “4cb5b14f2f6b8e02a3e57188e230d140f2a8880d236a5f21face723678a2c50a”,“container_name”: “test-container”,“image_id”: “7328f6f8b41890597575cbaadc884e7386ae0acc53b747401ebce5cf0d624560”,“image_name”: “alpine:3.6”,“sensor_id”: “0d76a2a9ede1bc3df805d26e90501af54b11eabe180e963c56d27f065d9243f4”,“process_uuid”: “394bd04468b541bdbe132a71de3671cb”,“monotime_nanos”: 58800000000,“process_pid”: 2222,“child_pid”: 3333} 6. Process Exec This Event represents when a process in a monitored Container calls the execve family of syscalls to start a new program. Event Type constant: protectevent.PROCESS_EVENT_TYPE_EXEC Additional Fields Always PresentFieldsTypeMeaning(Y/N)ProcessUUIDstringProcessUUID inYbase event is filledinProcessPIDintthe PID of theYprocess that calledthe forkProgramNamestringthe path of theYprogram beingexecuted The JSON Factory checks the type field for a string value of proc-exec. 
It contains the BaseEvent fields and all of the fields in the table above: {“type”: “proc-exec”,“container_id”: “4cb5b14f2f6b8e02a3e57188e230d140f2a8880d236a5f21face723678a2c50a”,“container_name”: “test-container”,“image_id”: “7328f6f8b41890597575cbaadc884e7386ae0acc53b747401ebce5cf0d624560”,“image_name”: “alpine:3.6”,“sensor_id”: “0d76a2a9ede1bc3df805d26e90501af54b11eabe180e963c56d27f065d9243f4”,“monotime_nanos”: 58800000000,“process_uuid”: “a856880a77274d238a5a9d1057831dec”,“process_pid”: 3333,“filename”: “‘exit 1’”} 7. Process Exit This Event represents when a process in a monitored Container exits. Event Type constant: protectevent.PROCESS_EVENT_TYPE_EXIT Additional Fields Always PresentFieldsTypeMeaning(Y/N)ProcessUUIDstringProcessUUID inYbase event is filledinProcessPIDintthe PID of theYprocess that calledthe forkExitCodeintthe Return codeYreturned by theprocess that isterminating The JSON Factory checks the type field for a string value of proc-exit. It contains the BaseEvent fields and all of the fields in the table above: {“type”: “proc-exit”,“container_id”: “4cb5b14f2f6b8e02a3e57188e230d140f2a8880d236a5f21face723678a2c50a”,“container_name”: “test-container”,“image_id”: “7328f6f8b41890597575cbaadc884e7386ae0acc53b747401ebce5cf0d624560”,“image_name”: “alpine:3.6”,“sensor_id”: “0d76a2a9ede1bc3df805d26e90501af54b11eabe180e963c56d27f065d9243f4”,“monotime_nanos”: 58800000000,“process_uuid”: “a856880a77274d238a5a9d1057831dec”,“process_pid”: 3333,“exit_code”: 1} 8. Syscall Enter This represents that a given system call is about to be executed in a monitored Container. This is used to get scalar arguments and other values but does not tell if the system call was successful. Event Type constant: protectevent.PROCESS_EVENT_TYPE_ENTER Additional Fields Always PresentFieldsTypeMeaning(Y/N)ProcessUUIDstringProcessUUID inYbase event is filledinProcessPIDintthe PID of theYprocess that calledthe forkNumberintthe syscall numberYNamestringthe name of theYsyscallArgs[ ]uint64the scalarYarguments to thesyscall The JSON Factory checks the type field for a string value of syscall. It contains the BaseEvent fields and all of the fields in the table above. One of the field's number or name may be omitted but not both. At the time of creation the JSON Event Factory will fill in both the name and number provided that one of the fields is present: {“type”: “syscall”,“container_id”: “4cb5b14f2f6b8e02a3e57188e230d140f2a8880d236a5f21face723678a2c50a”,“container_name”: “test-container”,“image_id”: “7328f6f8b41890597575cbaadc884e7386ae0acc53b747401ebce5cf0d624560”,“image_name”: “alpine:3.6”,“sensor_id”: “0d76a2a9ede1bc3df805d26e90501af54b11eabe180e963c56d27f065d9243f4”,“monotime_nanos”: 58800000000,“process_uuid”: “394bd04468b541bdbe132a71de3671cb”,“process_pid”: 3333,“number”: 0,“name”: “sys_read”,“args”: [0, 1203740476025595838784, 20]} 9. Syscall Exit This represents that a given syscall is about to return in a monitored Container. This is used to get the return values of syscalls which can indicate if the syscall was successful. Event Type constant: protectevent.PROCESS_EVENT_TYPE_EXIT Additional Fields Always PresentFieldsTypeMeaning(Y/N)ProcessUUIDstringProcessUUID inYbase event is filledinProcessPIDintthe PID of theYprocess that calledthe forkNumberintthe syscall numberYNamestringthe name of theYsyscallRetint64the scalar returnYvalue for thesyscall The JSON Factory checks the type field for a string value of syscall-exit. It contains the BaseEvent fields and all of the fields in the table above. 
One of the field's number or name may be omitted but not both. At the time of creation the JSON Event Factory will fill in both the name and number provided that one of the fields is present: {“type”: “syscall-exit”,“container_id”: “4cb5b14f2f6b8e02a3e57188e230d140f2a8880d236a5f21face723678a2c50a”,“container_name”: “test-container”,“image_id”: “7328f6f8b41890597575cbaadc884e7386ae0acc53b747401ebce5cf0d624560”,“image_name”: “alpine:3.6”,“sensor_id”: “0d76a2a9ede1bc3df805d26e90501af54b11eabe180e963c56d27f065d9243f4”,“monotime_nanos”: 58800000000,“process_uuid”: “394bd04468b541bdbe132a71de3671cb”,“process_pid”: 3333,“number”: 0,“name”: “sys_read”,“ret”: 23} 10. File Open This Event represents when a File is opened inside of a Container/Monitored Host. Event Type constant: protectevent.FILE_EVENT_TYPE_OPEN Additional Fields Always PresentFieldsTypeMeaning(Y/N)FilenamestringThe path to the fileYthat was openedOpenFlagsint32The flags passed toYthe open syscallOpenModeint32The mode the fileYwas opened with The JSON Factory checks the type field for a string value of file-open. It contains the BaseEvent fields and all of the fields in the table above: {“type”: “file-open”,“container_id”: “4cb5b14f2f6b8e02a3e57188e230d140f2a8880d236a5f21face723678a2c50a”,“container_name”: “test-container”,“image_id”: “7328f6f8b41890597575cbaadc884e7386ae0acc53b747401ebce5cf0d624560”,“image_name”: “alpine:3.6”,“sensor_id”: “0d76a2a9ede1bc3df805d26e90501af54b11eabe180e963c56d27f065d9243f4”,“monotime_nanos”: 58800000000,“process_uuid”: “394bd04468b541bdbe132a71de3671cb”,“process_pid”: 2222,“filename”: “/tmp/foo.txt”,“flags”: 0,“mode”: 700} 11. File Close This Event represents the combined event of ENTER and EXIT for syscall close inside of a Container. Event Type constant: protectevent.FTLE_EVENT_CLOSE Additional Fields Always PresentFieldsTypeMeaning(Y/N)FDint32passed fileYDescriptor to beclosedRetint32Return value of theYsyscall close The JSON Factory checks the type field for a string value of syscall-close. It contains the BaseEvent fields and all of the fields in the table above: {“type”: “syscall-close”,“container_id”: “4cb5b14f2f6b8e02a3e57188e230d140f2a8880d236a5f21face723678a2c50a”,“container_name”: “test-container”,“image_id”: “7328f6f8b41890597575cbaadc884e7386ae0acc53b747401ebce5cf0d624560”,“image_name”: “alpine:3.6”,“sensor_id”: “0d76a2a9ede1bc3df805d26e90501af54b11eabe180e963c56d27f065d9243f4”,“monotime_nanos”: 58800000000,“process_uuid”: “394bd04468b541bdbe132a71de3671cb”,“Fd”: 23,“Ret”: 0} 12. Syscall Dup This event represents the combined event of ENTER and EXIT for syscall DUP event inside of a Container. Event Type constant: protectevent.SYSCALL_EVENT_DUP Additional Fields Always PresentFieldsTypeMeaning(Y/N)OldFDint32passed fileYDescriptor to beYcopiedRetint32Return value of thesyscall dup/represents duplicatefile descriptor ofOldFD The JSON Factory checks the type field for a string value of syscall-dup. It contains the BaseEvent fields and all of the fields in the table above: {“type”: “syscall-dup”,“container_id”: “4cb5b14f2f6b8e02a3e57188e230d140f2a8880d236a5f21face723678a2c50a”,“container_name”: “test-container”,“image_id”: “7328f6f8b41890597575cbaadc884e7386ae0acc53b747401ebce5cf0d624560”,“image_name”: “alpine:3.6”,“sensor_id”: “0d76a2a9ede1bc3df805d26e90501af54b11eabe180e963c56d27f065d9243f4”,“monotime_nanos”: 58800000000,“process_uuid”: “394bd04468b541bdbe132a71de3671cb”,“OldFd”: 23,“Ret”: 32} 13. 
Syscall DUP2 This Event represents the combined event of ENTER and EXIT for syscall DUP2 event inside of a Container. Event Type constant: protectevent.SYSCALL_EVENT_DUP2 Additional Fields Always PresentFieldsTypeMeaning(Y/N)OldFDint32passed fileYDescriptor to becopiedNewFDint32passed fileYDescriptor to beYcopied toRetint32Return value of thesyscall dup2/represents newdescriptor forOldFD The JSON Factory checks the type field for a string value of syscall-dup2. It contains the BaseEvent fields and all of the fields in the table above: {“type”: “syscall-dup2”,“container_id”: “4cb5b14f2f6b8e02a3e57188e230d140f2a8880d236a5f21face723678a2c50a”,“container_name”: “test-container”,“image_id”: “7328f6f8b41890597575cbaadc884e7386ae0acc53b747401ebce5cf0d624560”,“image_name”: “alpine:3.6”,“sensor_id”: “0d76a2a9ede1bc3df805d26e90501af54b11eabe180e963c56d27f065d9243f4”,“monotime_nanos”: 58800000000,“process_uuid”: “394bd04468b541bdbe132a71de3671cb”,“OldFd”: 23,“NewFd”: 32,“Ret”: 0} 14. Syscall DUP3 This Event represents the combined event of ENTER and EXIT for syscall DUP3 event inside of a Container. Event Type constant: protectevent.SYSCALL_EVENT_DUP3 Additional Fields Always PresentFieldsTypeMeaning(Y/N)OldFDint32passed fileYdescriptor to becopiedNewFDint32passed fileYdescriptor to becopied toFlagsint32passed flags passedYfor new filedescriptorRetint32Return value of theYsyscall dup3/represents duplicatedescriptor forOldFD The JSON Factory checks the type field for a string value of syscall-dup3. It contains the BaseEvent fields and all of the fields in the table above: {“type”: “syscall-dup3”,“container_id”: “4cb5b14f2f6b8e02a3e57188e230d140f2a8880d236a5f21face723678a2c50a”,“container_name”: “test-container”,“image_id”: “7328f6f8b41890597575cbaadc884e7386ae0acc53b747401ebce5cf0d624560”,“image_name”: “alpine:3.6”,“sensor_id”: “0d76a2a9ede1bc3df805d26e90501af54b11eabe180e963c56d27f065d9243f4”,“monotime_nanos”: 58800000000,“process_uuid”: “394bd04468b541bdbe132a71de3671cb”,“OldFd”: 23,“NewFd”: 32,“Flags”: 444,“Ret”: 0} 15. Type NetworkAddress This struct defines fields for NetworkAddress. Always PresentFieldsTypeMeaning(Y/N)FamilyNetworkAddressFaRepresents familyYmilyone ofUNKNOWN,IPV4, IPV6,LOCALAddressstringnetwork address ofYtype CIDR 16. Syscall Connect This Event represents combined event of ENTER and EXIT for syscall connect event inside of a Container. Event Type constant: protectevent.SYSCALL_EVENT_CONNECT Additional Fields AlwaysPresentFieldsTypeMeaning(Y/N)MonotimeNanosEnint64Timestamp ofYtersyscall connectenter eventFdint32passed socket fileYdescriptorNetworkAddrNetworkAddresspassed networkYaddressPortint32Passed port numberYRetint32Return value of theYsyscall connect The JSON Factory checks the type field for a string value of syscall-connect. It contains the BaseEvent fields and all of the fields in the table above: {“type”: “syscall-connect”,“container_id”: “4cb5b14f2f6b8e02a3e57188e230d140f2a8880d236a5f21face723678a2c50a”,“container_name”: “test-container”,“image_id”: “7328f6f8b41890597575cbaadc884e7386ae0acc53b747401ebce5cf0d624560”,“image_name”: “alpine:3.6”,“sensor_id”: “0d76a2a9ede1bc3df805d26e90501af54b11eabe180e963c56d27f065d9243f4”,“monotime_nanos”: 58800000000,“process_uuid”: “394bd04468b541bdbe132a71de3671cb”,“Fd”: 23,“SockAddrPtr”: 40404040,“AddrLen”: 4,“Ret”: 0} 17. Syscall Accept This Event represents the combined event of ENTER and EXIT for syscall accept event inside of a Container. 
Event Type constant: protectevent.SYSCALL_EVENT_ACCEPT Additional Fields AlwaysPresentFieldsTypeMeaning(Y/N)MonotimeNanosEnint64Timestamp ofYtersyscall connectenter eventFdint32passed socket fileYdescriptorNetworkAddrNetworkAddresspassed networkYaddressPortint32Passed port numberYRetint32Return value of theYsyscall accept/newsocket descriptorfor connection The JSON Factory checks the type field for a string value of syscall-accept. It contains the BaseEvent fields and all of the fields in the table above: {“type”: “syscall-accept”,“container_id”: “4cb5b14f2f6b8e02a3e57188e230d140f2a8880d236a5f21face723678a2c50a”,“container_name”: “test-container”,“image_id”: “7328f6f8b41890597575cbaadc884e7386ae0acc53b747401ebce5cf0d624560”,“image_name”: “alpine:3.6”,“sensor_id”: “0d76a2a9ede1bc3df805d26e90501af54b11eabe180e963c56d27f065d9243f4”,“monotime_nanos”: 58800000000,“process_uuid”: “394bd04468b541bdbe132a71de3671cb”,“Fd”: 23,“networkaddress”: {“family”: 0,“address”: “192.168.0.1./24” },“Ret”: 32} 18. Syscall Bind This Event represents the combined event of ENTER and EXIT for syscall bind event inside of a Container. Event Type constant: protectevent.SYSCALL_EVENT_BIND Additional Fields AlwaysPresentFieldsTypeMeaning(Y/N)MonotimeNanosEnint64Timestamp ofYtersyscall connectenter eventFdint32passed socket fileYdescriptorNetworkAddrNetworkAddresspassed networkYaddressPortint32Passed port numberYRetint32Return value of theYsyscall bind event inside of a Container. Event Type constant: protectevent.SYSCALL_EVENT_LISTEN Additional Fields AlwaysPresentFieldsTypeMeaning(Y/N)MonotimeNanosEnterint64Timestamp ofYsyscall connectenter eventFdint32passed socket fileYdescriptorBacklogint32passed maximumYnumber of pendingconnectionsRetint32Return value of theYsyscall listen/newsocket descriptorfor connection Example JSON: {“type”: “syscall-accept”,“container_id”: “4cb5b14f2f6b8e02a3e57188e230d140f2a8880d236a5f21face723678a2c50a”,“container_name”: “test-container”,“image_id”: “7328f6f8b41890597575cbaadc884e7386ae0acc53b747401ebce5cf0d624560”,“image_name”: “alpine:3.6”,“sensor_id”: “0d76a2a9ede1bc3df805d26e90501af54b11eabe180e963c56d27f065d9243f4”,“monotime_nanos”: 58800000000,“process_uuid”: “394bd04468b541bdbe132a71de3671cb”,“Fd”: 23,“SockAddrPtr”: 40404040,“AddrLenPtr”: 41414141,“Ret”: 32} 20. KProbe SMEP SMAP This Event represents when the function native_write_cr4 was called in the kernel, by using a kprobe. It contains the new CR4 value (the first argument to that function) and is used to determine if the new value disables SMEP/SMAP in the CR4 register of a given processor. Event Type constant: protectevent.KPROBE_EVENT_SMEP_SMAP_TYPE Additional Fields Always PresentFieldsTypeMeaning(Y/N)NewCR4Valueuint64The Value of CR4Ythat will be set This type also defines helper functions to determine if the NewCR4Value disables SMEP and SMAP. They are:DisablesSMEP( ) boolDisablesSMAP( ) bool The JSON Factory checks the type field for a string value of smep-smap for container reaped events. They include only the BaseEvent fields and a cr4 field which is the integer value that the CR4 register would be set to: {“type”: “smep-smap”,“container_id”: “4cb5b14f2f6b8e02a3e57188e230d140f2a8880d236a5f21face723678a2c50a”,“container_name”: “test-container”,“image_id”: “7328f6f8b41890597575cbaadc884e7386ae0acc53b747401ebce5cf0d624560”,“image_name”: “alpine:3.6”,“sensor_id”: “0d76a2a9ede1bc3df805d26e90501af54b11eabe180e963c56d27f065d9243f4”,“monotime_nanos”: 58800000000,“cr4”: 4226827} 21. 
KProbe AppArmor This Event represents when a KProbe has been used to scan Kernel Memory to check if AppArmor is enabled and enforcing its policies. It returns the value of the memory used for the configuration variable. Event Type constant: protectevent.KPROBE_EVENT_APP_ARMOR_TYPE Additional Fields Always PresentFieldsTypeMeaning(Y/N)None 22. KProbe SELinux This Event represents when a KProbe has been used to scan Kernel Memory to check if SELinux is enabled and enforcing its policies. It returns the values of the variables that control SELinux. Event Type constant: Additional Fields Always PresentFieldsTypeMeaning(Y/N)None 23. KProbe Stack Create This Event represents when the KProbe on arch_align_stack has fired, which means a program's stack has been created. Event Type constant: protectevent.KPROBE_EVENT_STACK_CREATE Additional Fields Always PresentFieldsTypeMeaning(Y/N)stackhighaddruint64The high boundYof the stack 24. KProbe Stack Expand This Event represents when the KProbe on expand_stack has fired, which means a program's stack has been expanded down. Event Type constant: protectevent.KPROBE_EVENT_STACK_EXPAND Additional Fields Always PresentFieldsTypeMeaning(Y/N)Stack_low_addruint64The new low boundYof the stack 25. KProbe Load Kernel Module This Event represents when the KProbe on do_init_module has fired, which means a new kernel module is being loaded. Event Type constant:protectevent.KPROBE_EVENT_LOAD_KERNEL_MODULE_TYPE Additional Fields Always PresentFieldsTypeMeaning(Y/N)Do_init_modulestringThe name of theYmodule_namemodule beingloaded 26. KProbe Permissions Modification This Event represents when the KProbe on sys_fchmodat has fired, which means a permissions change has been requested. Event Type constant: protectevent.KPROBE_EVENT_CHMOD_TYPE Additional Fields Always PresentFieldsTypeMeaning(Y/N)Sys_fchmodatf_stringThe name of theYnamefile/directorywhose permissionsare changedsysfchmodatmodeuint64The newYpermissions mask 27. Ticker This Event represents the state of the host clock on a given sensor. Event Type constant: protectevent.TICKEREVENTTYPE Additional Fields Always PresentFieldsTypeMeaning(Y/N)nanosecondsint64The number ofYnanoseconds sincethe UNIX epochaccording to thesensor's clocksecondsint64The number ofYseconds since theUNIX epochaccording to thesensor's clock The JSON Factory checks the type field for a string value “ticker” for container creation events. They include all of the BaseFields and the fields mentioned above. optionally includes the DockerConfigJSON or OCIConfigJSON fields as strings: {“type”: “ticker”,“container_id”: “4cb5b14f2f6b8e02a3e57188e230d140f2a8880d236a5f21face723678a2c50a”,“container_name”: “test-container”,“image_id”: “7328f6f8b41890597575cbaadc884e7386ae0acc53b747401ebce5cf0d624560”,“image_name”: “alpine:3.6”,“sensor_id”: “0d76a2a9ede1bc3df805d26e90501af54b11eabe180e963c56d27f065d9243f4”,“monotime_nanos”: 58800000000,“nanoseconds”: 1257894000000000000,“seconds”: 1257894000} 28. Configuration Event This Event is used by the JSON Factory to simulate a Platform API message indicating a configuration change, since the platform provides configuration files for each strategy/component. Event Type constant: protectevent.CONFIG_EVENT Additional Fields Always PresentFieldsTypeMeaning(Y/N)topicstringThe string topicYname of theconfigurations, it iseffectively asection prefix. 
forall valuesvaluesstringa list of JavascriptYobjects with keys“name”, and“value”: as strings Example JSON: {“type”: “config”,“topic”: “strategy.dummy”,“values”: [{“name”: “enabled”, “value”: “true” }]} A more complicated example follows. Arbiter408requires that filters be applied. The below JSON example shows how to apply two filters: one that says only alerts from the container where the ID is 98e73918fad6, and one that says any Alerts that are have a priority lower than HIGH should be provided:{“type”: “config”,“topic”: “arbiter”,“values”: [{“name”: “enabled”, “value”: “true” },{“filters”: “container_id==98e73918fad6, priority <HIGH” }]} VI. Strategies Analytics framework400provides security monitoring capabilities through modular components called Strategies. As discussed above, a Strategy is a piece of detection logic that consumes Events and produces either Alerts or Events of Interest, in various embodiments. A. Overview Strategies are designed to provide diverse, overlapping layers of system security monitoring to cover the many facets of an attack. Attack categories and entire vulnerability classes are covered by detecting the low-level behaviors required to carry out an exploit or other security violation. Therefore, Strategies are geared towards low-level system monitoring, providing a lightweight mechanism for the observation and detection of behavioral events which may be indications of unauthorized or malicious behavior within an organization's environment. Individual Strategies are mechanisms for defining policies or sets of rules for a specific class of security-related Events which may occur on a system. A Strategy is defined by a policy type and a set of rules; each Strategy is responsible for monitoring a certain set of behaviors defined by its policy type and for raising an alert if any of these behaviors violate its rules. For instance, Strategies having a policy of type “connect” monitor network connections. If the rules specified in a given Strategy configuration match the behavior observed (e.g., if a connection to a blacklisted domain is performed), then an Alert is raised. Each Strategy will generate a single Alert when it detects a violation. Users may deploy multiple Strategies of the same policy concurrently, so as to have granular control of how many Alerts are to be generated and how these Alerts shall be generated for different contexts. For example, an unexpected outbound connection to an internal subnet could raise a lower-priority alert than an outbound connection to an unknown host on the Internet. The types of Events that Strategies monitor vary, ranging from common system operations (such as network connections and program execution) to Events which are explicitly impactful on security (such as unauthorized privilege escalation or the disabling of Linux Security Modules). Detection of low-level system Events is carried out by a Sensor; this Event stream is then sent to Analytics framework400, which processes the Events and extracts data to be used by the Strategies. Each Strategy receives Event data relevant to its policy type, determines if the operation observed matches any of its rules, and raises an Alert if necessary. B. Strategy Configuration The rules that are set for a Strategy define whether an observed behavior is authorized or unauthorized. For example, a strategy may be built upon the program policy, which monitors program execution, and have a set of rules that authorize commands issued only from a certain list of whitelisted programs. 
This Strategy will raise Alerts whenever a command is executed from a program not on this list. The nature of the rules that can be defined in various Strategies depends on the policy of the respective strategy. Thus, IP addresses present in rules might only be relevant for Strategies based on connect or sendto policies, whereas program/parent program names might only be relevant for policies involving process interactions. Below is an example of a configuration file that defines a single Strategy which monitors program execution: Program Execution Whitelist Strategy: policy: programalertMessage: Unauthorized Program Executioncomments: This strategy detects when an unauthorized program has issued a command.priority: Highenabled: truerules:ignore programName in $ProgramWhitelistdefault match ProgramWhitelist:type: pathsdescription: whitelist of authorized programslist:/bin/Is/bin/sh In the example above, a Strategy named “Program Execution Whitelist Strategy” is defined, which is of policy “program.” The message to appear in the generated Alert should this Strategy fire in the alertMessage field is also specified, as well as the priority of this Alert, priority. The Strategy rules denoting when and how this Strategy will fire alerts are listed under rules. The rules defined above are denoting that an Alert should always be fired unless the program name of the executing program is in the “ProgramWhitelist” list. The defined list “ProgramWhitelist” is a list of paths (as denoted by its type) of programs in the system that are allowed to execute commands. The aforementioned example is a simple use case of the supported policies. Different types of lists and policies are supported, and a configuration file may define any number of Strategies and lists. More information on list and Strategy configurations is defined later in this section. Strategies may refer to lists when constructing rules, and multiple Strategies may refer to the same list; any list referenced by a Strategy must be defined within the same configuration file. A Strategy rule refers to a list by prepending the list name with the special character $. A strategy is uniquely identified, in various embodiments, as a YAML key containing a policy sub-key. The top-level key, which essentially denotes the start of a Strategy configuration, also serves as the name of the Strategy for the given configuration (see “Program Execution Whitelist Strategy” in the previous example). 1. List Definitions Lists are defined to be used in Strategy rules, controlling when and how Alerts should be fired. a. List Types Configuration lists can have the following types, in various embodiments: i. names Name lists store strings. The following is an example of a valid name list: ContainerWhitelist:type: “names”description: “ignore all activity from these containers”list:“/test-container-001”: “this is a test container”“/test-container-002”: “this is a second test container” ii. hosts Host lists store IP addresses as CIDR blocks. Additionally, host lists have a required lookupDNS variable, which will resolve a domain name to an IP address if set to true. Users may add domain names to their host list only if lookupDNS is true; a host list with lookupDNS set to false and a domain name in its list will not be a valid list and return a configuration error. 
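By way of illustration only, the following minimal Go sketch expresses that validation rule; the hostList type and its fields are assumptions rather than the actual configuration loader.

package main

import (
	"fmt"
	"net"
)

// hostList mirrors the host-list options described above; the struct shape
// is an illustrative assumption.
type hostList struct {
	LookupDNS bool
	Entries   []string
}

// validate rejects any entry that is not a CIDR block unless lookupDNS is
// enabled, in which case domain names are also permitted.
func (h hostList) validate() error {
	for _, e := range h.Entries {
		if _, _, err := net.ParseCIDR(e); err == nil {
			continue // valid CIDR block
		}
		if !h.LookupDNS {
			return fmt.Errorf("entry %q is not a CIDR block and lookupDNS is false", e)
		}
	}
	return nil
}

func main() {
	bad := hostList{LookupDNS: false, Entries: []string{"1.0.0.0/8", "www.capsule8.com"}}
	fmt.Println(bad.validate()) // configuration error, as described above
}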
The following are examples of valid host lists: SampleHostlist_1:type: “hosts”lookupDNS: falselist:“1.0.0.0/8”“3.3.3.0/16” SampleHostlist_2:type: “hosts”lookupDNS: truelist:“www.capsule8.com”: “our website!” iii. paths Path lists store program or file paths. Paths may be written with the wildcard operator *, which will match on all paths that fit the pattern. The following is an example of a valid path list: AuthorizedProgramstype: “paths”description: whitelist of authorized programslist:“/bin/bash”“/bin/sh”“/usr/sbin/*”: “this will match all programs beginning with/usr/sbin/” iv. Numbers Number lists store integers. The following is an example of a valid number list: PortListtype: “numbers”list:80443 2. Strategy Definitions In addition to default fields, Strategies may have extra configuration options that are specific to the operation of their policy. These are documented in each policy's respective documentation file. a. Strategy Rules Each policy exposes a set of valid fields that can be used for the construction of higher-level rules in strategies. Thus, in each strategy's definition, the rules option determines how an alert will be generated upon the receipt of an event. I. MATCH AND IGNORE RULES Each rule begins with either the keyword match or ignore. If the predicate following a match rule is satisfied, that will result in an alert being fired by the respective strategy, whereas a predicate that satisfies an ignore rule will not. The only cases of rules that do not start with either a match or ignore directive are default rules, which are always defined at the end of any given ruleset, and specify what action should be taken if none of the predicates up to that point were satisfied. A ruleset may have any number of match or ignore rules, but must have at least a default match or a default ignore. A default match operates as a whitelist: the Strategy will alert on all cases, unless the behavior event satisfied a previous ignore rule. A default ignore operates as a blacklist: the Strategy will not alert on any cases, unless the behavior event satisfied a previous match rule. If a Strategy has multiple rules, all rules are evaluated in order beginning from the top of the list. If the rule predicate evaluates to true on a certain behavior event, further evaluation for that event stops and the Strategy either raises an alert (if it was a match rule) or ignores (if it was an ignore rule). Otherwise the Strategy proceeds to evaluation of the following rule, until it reaches the default match or the default ignore at the end of the ruleset. II. PREDICATES A predicate may be constructed from operations on valid event fields of each policy, or from operations on other predicates. Predicate operations on event fields take two operands: the field name and a value to be checked against. The value may be a literal (e.g., string, int) or a list. III. VALID EVENT FIELDS Each policy type has valid Event fields on which to filter behavioral Events. IV. EXAMPLES The following ruleset will alert on every event: rules:default match The following ruleset will not alert on any event: rules:default ignore The following two rulesets are equivalent: rules:ignore containerName==“test-container”default match rules:match containerName “test-container”default ignore These rules will raise an alert on every event except those where containerName is test-container (equivalent to whitelisting test-container). 
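The evaluation order described above (rules checked top to bottom, with the default applying only when no predicate is satisfied) can be illustrated with the following minimal Go sketch; the rule and event types are hypothetical stand-ins for the policy-specific types.

package main

import "fmt"

// event is an illustrative stand-in for the policy-specific event fields.
type event map[string]string

type rule struct {
	match bool // true for a match rule, false for an ignore rule
	pred  func(event) bool
}

// evaluate walks the ruleset in order; the first rule whose predicate is
// satisfied decides whether to alert, otherwise the default applies.
func evaluate(rules []rule, defaultMatch bool, e event) bool {
	for _, r := range rules {
		if r.pred(e) {
			return r.match
		}
	}
	return defaultMatch
}

func main() {
	// Equivalent of:
	//   ignore containerName == "test-container"
	//   default match
	rules := []rule{{match: false, pred: func(e event) bool { return e["containerName"] == "test-container" }}}
	fmt.Println(evaluate(rules, true, event{"containerName": "test-container"})) // false: ignored
	fmt.Println(evaluate(rules, true, event{"containerName": "prod"}))           // true: alert
}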
The following is an example of nested rules: rules:ignore programName==“/foo/bar/*”match programName “/foo/*”default ignore These rules effectively blacklist the directory/foo/, except for the subdirectory /foo/bar/within it. To match all containers whose names start with “gcp” and end with “_Europe”: rules:match containerName like{circumflex over ( )}gcp. *_Europe$default ignore To match all containers except those whose names start with “gcp” and end with “_Europe”, rules:match containerName not_like{circumflex over ( )}gcp.*_Europe$default ignore 3. Alerts The following is a sample Alert, generated by a strategy which alerts on execution of newly created files:{“timestamp”: “2018-10-03T15:31:41.849692582Z”,“scope”: “PROCESS”,“priority”: “HIGH”,“confidence”: 1,“notifications”: [}“timestamp”: “2018-10-03T15:31:41.849692582Z”,“name”: “NewFilesShouldNeverBeExecdByNginx”,“uuid”: “806d2e1b-ba46-453e-9458-d441a685d9e6”,“message”: “The program \“/usr/bin/nginx\” with PID 850174 in container \“/k8s_capsule8-server_capsule8-server-85b5dc8568-86c88_default_d8ee4a59-c35e-11e8-abbe-42010a800037_0 \“executed newly created file\”./privesc.sh\”. This action matched the “match parentProgramName==/usr/bin/nginx\” policy rule (where parentProgramName (/usr/bin/nginx)==/usr/bin/nginx).”}],“matched_rule”: “match parentProgramName==/usr/bin/nginx”,“matched_objects”: [{“matched_field”: “parentProgramName”,“matched_value”: “/usr/bin/nginx”,“matched_pattern”: “/usr/bin/nginx”,“matched_description”: “ ”}]“alert_group_id”: “ ”,“description”: “New File executed by web server”,“uuid”: “NewFilesShouldNeverBeExecdByNginx-c2c481cbe8370168041cc7ebf2dd5864fe25aa4abf6e4d4d66dce0bc03ed016d”,“location”: {“node_name”: “capsule8-sensor-4vfk7”,“container_id”:“f6e3e1da2878c27b35df874d486e701dab5b3f4776c8b578cde89dcfc90e4760”,“container_name”: “/k8s_capsule8-server_capsule8-server-85b5dc8568-86c88_default_d8ee4a59-c35e-11e8-abbe-42010a800037_0”,“image_id”: “14dbf0b16f71ae7736dbae04a023f8212c912178c318511cb96f9c603b501478”,“image_name”: “us.gcr.io/testing-playground-214818/capsule8-server@sha256:b521ea40d7cf311cc9f7bdfc01dd44f2b8542c88ef445ef9fec7487ed9caec12”,“sensor_id”: “7f61568de0df2240e20e7932781d782ae72d9db128f3ce826fd7bb59c1e25db4”},“process_info”: {“pid”: 873136,“ppid”: 850174,“pid_uuid”: “5900db11-aab6-481c-b8cc-fc6fa5a3b76a-873136-3005012116940388”,“name”: “./privesc.sh”,“args”: [“./privesc.sh”],“children”: null,“parent”: null,“cwd”: “ ”,“uid”: 0,“gid”: 0,“euid”: 100,“egid”: 65533,“suid”: 0,“sgid”: 1131636068,“fsuid”: 100,“fsgid”: 65533},“strategy_name”: “NewFilesShouldNeverBeExecdByNginx”,“policy_type”: “newFileExec”,“metadata”: {“arch”: “x86_64”,“container_runtime”: “not-found”,“in_container”: “false”,“kernel_release”: “4.4.0-134-generic”,“kernel_version”: “#160-Ubuntu SMP Wed Aug 15 14:58:00 UTC 2018”,“network_interface_br-aa43deac176c_flags”: “up|broadcast|multicast”,“network_interface_br-aa43deac176c_hardware_addr”: “02:42:26:50:f3:47”,“network_interface_br-aa43deac176c_index”: “4”,“network_interface_br-aa43deac176c_mtu”: “1500”,“network_interface_docker0_flags”: “up|broadcast|multicast”,“network_interface_docker0_hardware_addr”: “02:42:1b:7a:8e:6e”,“network_interface_docker0_index”: “3”,“network_interface_docker0_mtu”: “1500”,“network_interface_enp0s3_flags”: “up broadcast|multicast”,“network_interface_enp0s3_hardware_addr”: “02:fe:a1:ea:7d:d1”,“network_interface_enp0s3_index”: “2”,“network_interface_enp0s3_mtu”: “1500”,“network_interface_lo_flags”: “up|loopback”,“network_interface_lo_index”: 
“1”,“network_interface_lo_mtu”: “65536”,“network_interface_vethc468d59_flags”: “up|broadcast|multicast”,“network_interface_vethc468d59_hardware_addr”: “ca:de:c2:f7:27:79”,“network_interface_vethc468d59_index”: “17576”,“network_interface_vethc468d59 mtu”: “1500”,“node hostname”: “ubuntu-xenial”,“starttime”: “2018-10-18T19:22:13.620487107Z”,“uname_hostname”: “ubuntu-xenial”,“uname_os”: “Linux”}} The Strategy deployed to create an Alert on the above activity is shown below:NewFilesShouldNeverBeExecdByNginx:alertMessage: New File executed by web servercomments: New files should never be executed by a web server, this indicated potential compromise through a web shellenabled: truefileTimeout: 30policy: newFileExecpriority: Highrules:match parentProgramName==/usr/bin/nginxdefault ignore Upon detecting an attack or policy violation, Strategies generate output in the form of Alerts. Alerts can be grouped together by Strategies based on properties of interest (e.g., in case they belong to the same node or Process Tree). a. Scope The scope is intended to identify the “blast radius” of an attack, which aims to act as guidance for the amount of remediation or forensic response required. For example, if the Alert scope is “container,” it indicates that other containers on the same host are not affected, and so a response action could be as simple as destroying the offending container. Another example is if the scope is “process,” the impact is scoped to a single process, and killing that process would mitigate the attack or policy violation. In the case of a Strategy detecting kernel-level exploitation, the scope will be “node,” indicating that the entire node should be distrusted (where an appropriate response might be to bring the node offline). b. Location The Alert location describes the most specific entity that produced the Alert. Note that locations are container-aware and as such may have empty values for container/image when not in a containerized environment. For example, not all of the location fields will be present for alerts relating to kernel attacks, as those attacks apply to the node (existing outside of the container) and thus do not have corresponding container information. FieldDescriptionsensor_idThe ID of the Sensor running on the noderesponsible for the events described in theAlertcontainer_idThe ID of the container responsible forthe events described in the alertcontainer_nameThe name of the container described bycontainer_idimage_idThe ID of the container image for thecontainer described by container_idimage_nameThe name of the image described by theimage_id c. Process Information The process_info field describes the process and any parent processes of the process that generated the Alert. It is used as a further refinement of the Alert location and allows for further context to be included as part of the Alert. As with Alert location, note that the process_info fields will not be present for Alerts for which there is no associated process. 
FieldDescriptionpidThe process IDppidThe parent process IDpid_uuidUnique process identifier; as PIDs can bereused, this identifier uniquely describes aprocess instancenameThe program running in the process attime of AlertargsProgram arguments of the process thisAlert occurred inchildrenChild processes of the processparentParent process of this processcwdCurrent working directory of this processuidUser ID of the task/threadgidGroup ID of the task/threadeuidEffective user ID of the task/threadegidEffective group ID of the task/threadsuidSet user ID of the task/threadsgidSet group ID of the task/threadfsuidFile system user ID of the task/threadfsgidFile system group ID of the task/thread d. Notifications In Analytics framework400, multiple pieces of the system may modify or take actions related to a specific Alert. To accommodate this, the Alert format contains a notifications field to allow for updates to a specific Alert. At a minimum there is always one notification from the strategy that created the Alert. The information in the notifications field contains more detailed information about what action was taken and when. In the case of Strategies this is the initial reason that an Alert was created. Example notifications subfields include: FieldDescriptiontimestampnotification-specific Unix timestamp innanoseconds, generated by protect at timeof notification creationnamename of the notification (commonly thename of the strategy)uuidnotification-specific unique identifiermessagethe message text of the notification, mostcommonly for describing the specificdetails of an alert 4. Caveats a. YAML Special Characters Since configuration for Strategies follows the YAML specification, an assumption can be made that any characters with a special YAML functionality will be escaped with quotes whenever they are to be used in list or Strategy definitions. For instance, the following path list will result in an error, since, according to the YAML specification, *x is used as a reference, and the YAML parser will look for the appropriate anchor. FailingList:type: pathsdescription: this fails since x is treated as a reference to an anchorlist:*x Thus, if *x is to act as a wildcard for any path ending in x, the respective entry in the list should be “*x”. The same principle applies to the following:Any of the characters: , {, }, [,], , &, *, #, ?, |, −, <, >, =, !, %, @,Any of the control characters \0, \01, \02, \03, \04, \05, \06, \a, \b, \t, \n, \v, \f, \r, \0e, \0f, \10, \11, \12, \13, \14, \15, \16, \17, \18, \19, \x1a, \e, \x1c, \x1d, \x1e, \x1f, \N, _, \L, \PThe strings true and falseNull and ˜ In the same spirit, attention should be given to strings that could be parsed as numbers (e.g., 12e7, 3.4) and vice-versa, or strings that could be parsed as dates (2018-01-08). b. Regular Expression Matching Path comparisons follow the glob format. However, advanced cases requiring regular expression matching can be used using the like and not_like operators. Regular expressions used in the filter rules match the POSIX ERE (egrep) syntax and the match semantics follow the leftmost-longest convention. That is, when matching against text, the regexp returns a match that begins as early as possible in the input (leftmost), and among those it chooses a match that is as long as possible. 
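In Go, this matching behavior corresponds to compiling the pattern with regexp.CompilePOSIX. The following minimal sketch shows how a like comparison could be evaluated under that assumption; the helper name is illustrative.

package main

import (
	"fmt"
	"regexp"
)

// like reports whether value matches the POSIX ERE pattern, using Go's
// POSIX (leftmost-longest) matching semantics described above.
func like(value, pattern string) (bool, error) {
	re, err := regexp.CompilePOSIX(pattern)
	if err != nil {
		return false, err
	}
	return re.MatchString(value), nil
}

func main() {
	// Mirrors the earlier ruleset example: containerName like ^gcp.*_Europe$
	ok, err := like("gcp-frontend_Europe", "^gcp.*_Europe$")
	fmt.Println(ok, err) // true <nil>
}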
There can be multiple leftmost-longest matches, with different submatch choices: among the possible leftmost-longest matches, in various embodiments, the one that a backtracking search would have found first is selected, per Golang's POSIX-compliant regular expression matching. C. Strategy Telemetry Collection Map 1. Introduction This section describes example data collected by Strategies and can be used to identify which Strategies may result in a higher rate of data collection (and thus also additional processing time/resource requirements) based on different workload types. The Sensor employs multiple methods of telemetry collection from different data sources based on data-source availability, which is usually dependent on kernel version and build options. The primary mechanisms used for telemetry collection are:Tracepoints: data-collection taps built into various kernel subsystems.Kprobes: on-demand collection “hooks” capable of being placed on almost any of the exported kernel symbols, and can be set to collect data on function entry or on function return, which are called Kretprobes. Kprobes allow for basic filters to be set to limit collection to occur only when the conditions of the filter match.Perf Counters: these are the hardware-enabled performance counters, accessed through the perf subsystem. Specifically, the performance counters for cache fetch and cache miss are used for detecting side-channel attacks. In various embodiments, the Sensor attempts to use the highest performing and modern collection mechanisms whenever possible, falling back on using older or less well performing data sources in the presence of less equipped kernels or unsupported mechanisms. For example, Tracepoints are used instead of Kretprobes to collect return-values from syscalls; however, Kretprobes must often be used for collecting other (non-syscall) return-values where Tracepoint support is not available. One approach attackers often use when attempting to compromise a server (e.g., workload instance102or legacy system106) is to exploit the server's kernel. One such technique is to disable Supervisor Mode Execution Prevention (SMEP) and/or Supervisor Mode Access Prevention (SMAP) by modifying the corresponding bit in the CR4 control register. Another such technique is, on servers protected with SELinux and/or AppArmor, for attackers to attempt to disable those protection modules. Yet another technique attackers use is to call sensitive kernel functions with a return-address in userland. Another technique includes accessing/reaching internal kernel functions or states without the requisite predecessor functions or states having taken place. An example is the behavior of calling a sensitive internal kernel function used to modify a process's privileges, which was not first preceded by calling the functions which determine if such authorization is granted. An analogy is that under normal circumstances, one should never observe a bank vault door opening if no one is signed into the bank/if no one has used the front door to first get inside the bank (as is the normal route). Each of these types of attacks, and similar types of attacks, can be detected by embodiments of analytics framework400. FIG.25illustrates an embodiment of a process for detecting use of a kernel exploit. In various embodiments, process2500is performed by analytics framework400. Process2500begins at2502when a set of Kprobes is monitored for occurrence (and, as applicable, particular arguments/data associated with those Kprobes). 
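For instance, the KProbe SMEP SMAP event described earlier carries the value about to be written to CR4, and its DisablesSMEP( )/DisablesSMAP( ) helpers decide whether that write would turn the protections off. A minimal sketch of those helpers is shown below, assuming the standard x86-64 bit positions (SMEP is bit 20 and SMAP is bit 21 of CR4) and an illustrative struct shape rather than the actual protectevent type.

package main

import "fmt"

// x86-64 CR4 bit positions for SMEP and SMAP.
const (
	cr4SMEP = 1 << 20
	cr4SMAP = 1 << 21
)

type smepSmapEvent struct {
	NewCR4Value uint64 `json:"cr4"`
}

// DisablesSMEP reports whether the pending CR4 write clears the SMEP bit.
func (e smepSmapEvent) DisablesSMEP() bool { return e.NewCR4Value&cr4SMEP == 0 }

// DisablesSMAP reports whether the pending CR4 write clears the SMAP bit.
func (e smepSmapEvent) DisablesSMAP() bool { return e.NewCR4Value&cr4SMAP == 0 }

func main() {
	e := smepSmapEvent{NewCR4Value: 4226827} // sample cr4 value from the smep-smap event shown earlier
	fmt.Println(e.DisablesSMEP(), e.DisablesSMAP()) // true true
}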
As one example, a function used to set the CR4 control register (e.g., native_write_cr4) is monitored for execution. Other examples of Kprobes are discussed throughout this Specification. At2504, a determination is made that a Strategy involving at least one of the Kprobes in the set has been matched. As one example, the SMEP/SMAP Strategy monitors for use of the function used to set the CR4 register. If the function is used, the Strategy will be met. At2506, a remedial action is taken in response to the determination made at2504. A variety of actions can be taken in response to determining that a kernel is being exploited. As one example, the node (e.g., workload instance102) can be segregated from network functionality. 2. Internal Sensor Telemetry Collection By default, the Sensor collects a set of limited telemetry to establish a core capability of system monitoring focused on processes and containers; this collection is independent of any externally requested or strategy-defined subscriptions. The sources for this data vary depending on kernel version and availability.FIG.26outlines example and fallback tracepoints and kprobes the Sensor uses for its core facilities, grouped by the purpose the telemetry serves. 3. Strategy Telemetry Subscriptions This section details example telemetry collected by various Strategies. a. Ptrace Strategy The Ptrace Strategy produces an alert when ptrace (or ptrace-related) functions are used in a non-whitelisted process. Ptrace is intended to provide debugging facilities on Linux, but can be used as a means of stealthy lateral movement between processes, such as in injecting malicious code into services such as SSH. This use of ptrace functions also serves as a signal that someone is using debugging functionality, which may violate production-environment policies.Kprobes: sys_ptrace, sys_process_vm_writevTracepoint (return): sys_ptrace b. Memory Protection Strategy The Memory Protection Strategy provides monitoring for attempts to exploit memory-mismanagement software vulnerabilities. Attempts to exploit these vulnerabilities to execute arbitrary code (also known as shellcode) commonly involve a step to modify the permissions on memory containing attacker-controlled data, so that the attacker-controlled data can be executed as program code.Kprobes: sys_mprotect, sys_brk, sys_sigaltstack, expand_stackKretprobes: arch_align_stackTracepoint (return): sys_mprotect, sys_brk c. Stack Pivot Strategy The stack pivot detection Strategy examines the stack pointer on certain syscalls and ensures that it is within normal stack bounds. Having a stack pointer reference an address outside the bounds of the stack is normally indicative of a stack pivot as part of an exploit chain.Kprobes: sys_execve, sys_mprotect, sys_sigaltstack, expand_stackKretprobes: arch_align_stack d. New File Exec The new file exec Strategy watches for execution of newly created files by non-whitelisted programs and if such created files are executed within the configured timeout.Kprobes: sys_execve, do_sys_open with a filter for O_CREAT e. Privilege Escalation The Privilege Escalation Strategy monitors for privilege escalation attacks that overwrite process privileges without going through a setuid or setgid call.Kprobes: sys_setuid, sys_setgid, sys_setreuid, sys_setregid, sys_setresuid, sys_setresgid, commit_creds, install_exec_credsKretprobes: install_exec_credsTracepoint (return): sys_setuid, sys_setgid, sys_setreuid, sys_setregid, sys_setresuid, sys_setresgid f. 
Sendto Blacklist/Whitelist The connection blacklist and whitelist strategies monitor calls to sendto( ) and sendmsg( ) (most commonly UDP) and compare the destination host to its configured blacklists or whitelists.Kprobes: sys_sendto, sys_sendmsgTracepoint (return): sys_sendto, sys_sendmsg g. Connect Blacklist/Whitelist The connection blacklist and whitelist Strategies monitor calls to connect( ) (most commonly TCP) and compare the destination host to its configured blacklists or whitelists.Kprobes: sys_connectTracepoint (return): sys_connect h. Program Blacklist/Whitelist The program execution blacklist and whitelist Strategies monitor program execution and compare the program name to its configured blacklists or whitelists.Kprobes: sys_execve i. Interactive Shell The Interactive Shell Strategy observes the execution of shell programs (such as /bin/sh, /bin/bash, etc.) and monitors for activity indicative of shell interactivity (vs being used to run a shell script, for example).Kprobes: sys_execve, sys_ioctl with FD=2 or FD=10 j. Remote Interactive Shell The Remote Interactive Shell Strategy is similar to the functionality of the Interactive Shell Strategy, but specifically monitors for interactive shells processing input/output from a network connection, such as the behavior exhibited by exploit payloads using mechanisms like bash's/dev/tcp to connect back to an attacker's machine.Kprobes: sys_execve, sys_dup, sys_dup2, sys_dup3, sys_bind, sys_connect, sys_ioctl with FD=2 or FD=10Tracepoint: sys_accept, sys_accept4Tracepoint (return): sys_dup, sys_dup2, sys_dup3, sys_accept, sys_accept4, sys_connect, sys_bind k. Kernel Payload The kernel payload Strategy observes sensitive kernel functions, to determine if a function is being called with a return-address in userland.Kprobes: prepare_creds, prepare_kernel_cred l. SMEP/SMAP The SMEP/SMAP Strategy monitors a function used to set the CR4 register, which is inlined in its legitimate uses, but still exported as a symbol, and which has become a popular target for disabling SMEP/SMAP.Kprobes: native_write_cr4 m. SELinux & AppArmor These two Strategies scan kernel memory to determine if these mechanisms have been disabled (if the strategies were configured to expect that these security mechanisms were on). The scanning of kernel memory for these strategies is done by a kprobe which is triggered by a specific syscall, which the sensor triggers periodically.Kprobes: sys_uname with filter for a magic cookie value as the function argument n. Kernel Module Loading The kernel module load Strategy allows whitelisting of which kernel modules can be loaded.Kprobes: do_init_module o. Spectre/Meltdown The Spectre/Meltdown strategy employs performance counters to detect side-channel attacks by examining cache fetch and cache miss ratios.PerfCounters: PerfEvent CacheReferences, PerfEvent CacheMiss, PerfEvent BranchMisses D. Strategy Examples 1. General Policies a. File Policy (Policy Identifier: File) Valid filter rule fields for this policy are listed below: TypeDescriptioncontainerIdnamescontainerNamenamessensorIdnamesimageIdnamesimageNamenamesfilePathpathsprogramNamepaths Description: This Strategy monitors calls to create files and generates Alerts for creation of file names in disallowed locations. 
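Path-typed rule fields such as filePath are compared against patterns that may contain the * wildcard (for example, /*/trustme in the configuration that follows). A minimal Go sketch of one way such a comparison could be evaluated is shown below; the helper names are illustrative and the real matcher may differ.

package main

import (
	"fmt"
	"regexp"
	"strings"
)

// globToRegexp converts a path pattern such as "/*/trustme" into an anchored
// regular expression in which "*" matches any sequence of characters,
// mirroring the wildcard behavior shown in the File Policy example.
func globToRegexp(pattern string) (*regexp.Regexp, error) {
	parts := strings.Split(pattern, "*")
	for i, p := range parts {
		parts[i] = regexp.QuoteMeta(p)
	}
	return regexp.Compile("^" + strings.Join(parts, ".*") + "$")
}

func pathMatches(pattern, filePath string) bool {
	re, err := globToRegexp(pattern)
	if err != nil {
		return false
	}
	return re.MatchString(filePath)
}

func main() {
	fmt.Println(pathMatches("/*/trustme", "/usr/bin/trustme"))   // true
	fmt.Println(pathMatches("/usr/sbin/*", "/usr/sbin/tcpdump")) // true
	fmt.Println(pathMatches("/bin/ls", "/bin/sh"))               // false
}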
An example configuration is presented below: File Policy Example:policy: fileenabled: truealertMessage: Blacklisted File Createdcomments: Example strategy using the file policypriority: Highrules:match filePath in $filepathlistdefault ignoretimeout: 10FilePathList:type: pathslist:/*/trustme: “example” A sample generated Alert for the above configuration (dummy values shown where normally real pids/uuids/timestamps etc. would be present) is presented below:{“alert_group_id”: “ ”,“confidence”: “Max”,“description”: “Blacklisted File Created”,“location”: {“container_id”: “98e73918fad6ce45d2f84f76b0e61d2bf789fe6cda74b24184918133c3a32863”,“container_name”: “/test-container”,“image_id”: “7328f6f8b41890597575cbaadc884e7386ae0acc53b747401ebce5cf0d624560”,“image_name”: “TEST_IMAGE”,“node_name”: “ ”,“sensor_id”: “0d76a2a9ede1bc3df805d26e90501af54b11eabe180e963c56d27f065d9243f4”},“matched_objects”: null,“matched_rule”: “ ”,“metadata”: null,“notifications”: [{“message”: “The program \“/usr/bin/trustme\” with PID 1001 in container \“/test-container\” created the file \“/usr/bin/trustme\”. This action matched the \“match filePath in $filepathlist\” policy rule (where filePath (/usr/bin/trustme) in /*/trustme (pattern description: example)).”,“name”: “File Policy Example”,“timestamp”: 1509474507990963973,“uuid”: “ZZZ”}],“policy_type”: “ ”,“priority”: “High”,“process_info”: {“args”: [ ],“children”: null,“cwd”: “ ”,“egid”: 15,“euid”: 12,“fsgid”: 15,“fsuid”: 12,“gid”: 15,“name”: “/usr/bin/trustme”,“parent”: null,“pid”: 1001,“pid_uuid”: “/usr/bin/trustme-YYY”,“ppid”: 0,“sgid”: 15,“suid”: 12,“uid”: 12},“scope”: “Process”,“strategy_name”: “File Policy Example”,“timestamp”: 1509474507990963973,“uuid”: “File Policy Example-XXX”} b. PTrace Policy (Policy Identifier: Ptrace) Valid filter rule fields for this policy are listed below: TypeDescriptioncontainerIdnamescontainerNamenamessensorIdnamesimageIdnamesimageNamenamesprogramNamepathsparentProgramNamepaths Description: This Strategy triggers an Alert if ptrace policy is violated by a non-whitelisted program. An example configuration is presented below: Ptrace Policy Example:policy: ptraceenabled: truealertMessage: Ptrace Invokedcomments: Example strategy using the ptrace policypriority: Highrules:ignore programName==/tmp/safe/*default match A sample generated Alert for the above configuration (dummy values shown where normally real pids/uuids/timestamps etc. would be present) is presented below:{“alert_group_id”: “ ”,“confidence”: “Max”,“description”: “Ptrace Invoked”,“location”: {“container_id”: “98e73918fad6ce45d2f84f76b0e61d2bf789fe6cda74b24184918133c3a32863”,“container_name”: “/test-container”,“image_id”: “7328f6f8b41890597575cbaadc884e7386ae0acc53b747401ebce5cf0d624560”,“image_name”: “TEST_IMAGE”,“node_name”: “ ”,“sensor_id”: “0d76a2a9ede1bc3df805d26e90501af54b11eabe180e963c56d27f065d9243f4”},“matched_objects”: null,“matched_rule”: “ ”,“metadata”: null,“notifications”: [{“message”: “The program \“/usr/bin/trustme\” with PID 1001 in container \“/test-container\” attempted to write memory in the process with PID 10032. 
This action matched the \“default match\” policy rule.”,“name”: “testPtracePolicy1”,“timestamp”: 1509474507990963973,“uuid”: “ZZZ”}],“policy_type”: “ ”,“priority”: “High”,“process_info”: {“args”: [ ],“children”: null,“cwd”: “ ”,“egid”: 15,“euid”: 12,“fsgid”: 15,“fsuid”: 12,“gid”: 15,“name”: “/usr/bin/trustme”,“parent”: null,“pid”: 1001,“pid_uuid”: “YYY”,“ppid”: 0,“sgid”: 15,“suid”: 12,“uid”: 12},“scope”: “Process”,“strategy_name”: “Ptrace Policy Example”,“timestamp”: 1509474507990963973,“uuid”: “XXX”} c. Permissions Modification Policy (Policy Identifier: Chmod) Valid filter rule fields for this policy are listed below: TypeDescriptioncontainerIdnamescontainerNamenamessensorIdnamesimageIdnamesimageNamenamesfilePathpathsfileMode—programNamepaths Additional configuration options for this policy are listed below: ConfigurationOptionTypeDefaultDescriptionsuidboolTrueAlert if set-user-idsetsgidboolFalseAlert if set-group-id setsvtxboolFalseAlert if sticky bitsetrusrboolFalseAlert if read byowner setwusrboolFalseAlert if write byowner setxusrboolFalseAlert ifexecute/search byowner (search fordirectories) setrgrpboolFalseAlert if read bygroup setwgrpboolFalseAlert if write bygroup setxgrpboolFalseAlert ifexecute/search bygroup setrothboolFalseAlert if read byothers setwothboolFalseAlert if write byothers setxothboolFalseAlert ifexecute/search byothers set Description: This Strategy produces an Alert if a permission change matching the rules set occurs. An example configuration is presented below: Permissions Modification Policy Example:policy: chmodenabled: truealertMessage: Permissions Modification Strategy Firedcomments: Example strategy using the chmod policypriority: Highrules:ignore programName==/tmp/safe/*default matchsuid: truesgid: falsesvtx: falserusr: falsewusr: falsexusr: falsergrp: falsewgrp: falsexgrp: falseroth: falsewoth: falsexoth: false A sample generated Alert for the above configuration (dummy values shown where normally real pids/uuids/timestamps etc. would be present) is presented below:{“confidence”: “Max”,“description”: “Permissions Modification Strategy Fired”,“location”: {“container_id”: “4cb5b14f2f6b8e02a3e57188e230d140f2a8880d236a5f21face723678a2c50a”,“container_name”: “test-container”,“image_id”: “7328f6f8b41890597575cbaadc884e7386ae0acc53b747401ebce5cf0d624560”,“image_name”: “alpine:3.6”,“sensor_id”: “0d76a2a9ede1bc3df805d26e90501af54b11eabe180e963c56d27f065d9243f4”},“notifications”: [{“actor_uuid”: “3ad2bfe3-8665-4d6b-a2d6-60238b05d02e”,“message”: “Permissions Modification Strategy Fired for \“testfile\”. New permissions: 4000 (suid). This action matched the \“default match\” policy rule.”,“name”: “Permissions Modification Policy Example”,“timestamp”: 12434343435}],“priority”: “High”,“process_info”: {“args”: [ ],“name”: “ ”“pid”: 22059,“pid_uuid”: “080a6767-9f37-4d70-b00d-015a9edf9099”,“ppid”: 0},“scope”: “Process”,“strategy_name”: “Permissions Modification Policy Example”,“timestamp”: 134334343,“uuid”: “Permissions-Modification-Policy-Example-”} d. Program Policy (Policy Identifier: Program) Valid filter rule fields for this policy are listed below: TypeDescriptioncontainerIdnamescontainerNamenamessensorIdnamesimageIdnamesimageNamenamesprogramNamepathsparentProgramNamepaths Description: This Strategy monitors program execution and compares the program name to its configured filters. It generates Alerts when a program matches an entry in one of the configured filters. 
An example configuration is presented below: Program Policy Example:policy: programenabled: truealertMessage: Unauthorized Program Executedcomments: Example strategy using the program policypriority: Highrules:default match A sample generated Alert for the above configuration (dummy values shown where normally real pids/uuids/timestamps etc. would be present) is presented below: {“alert_group_id”: “ ”,“confidence”: “Max”,“description”: “Unauthorized Program Executed”,“location”: {“container_id”: “N/A”,“container_name”: “N/A”,“image_id”: “N/A”,“image_name”: “N/A”,“node_name”: “ ”,“sensor_id”: “0d76a2a9ede1bc3df805d26e90501af54b11eabe180e963c56d27f065d9243f4”},“matched_objects”: null,“matched_rule”: “ ”,“metadata”: null,“notifications”: [{“message”: “The program (name unknown) with PID 0 executed the program \“/usr/bin/bash\”. This action matched the \“default match\” policy rule.”,“name”: “testProgramPolicy1”,“timestamp”: 1509474507990963973,“uuid”: “ZZZ”}],“policy_type”: “ ”“priority”: “High”,“process_info”: {“args”: [ ],“children”: null,“cwd”: “ ”,“egid”: 15,“euid”: 12,“fsgid”: 15,“fsuid”: 12,“gid”: 15,“name”: “/usr/bin/bash”,“parent”: null,“pid”: 1001,“pid_uuid”: “YYY/usr/bin/bash”,“ppid”: 0,“sgid”: 15,“suid”: 12,“uid”: 12},“scope”: “Process”,“strategy_name”: “Program Policy Example”,“timestamp”: 1509474507990963973,“uuid”: “XXX”} e. Sendto Policy (Policy Identifier: Sendto) Valid filter rule fields for this policy are listed below: TypeDescriptioncontainerIdnamescontainerNamenamessensorIdnamesimageIdnamesimageNamenamesremoteHosthostoutboundPortnumbersprogramNamepaths Description: This Strategy provides network-level IP-based policy monitoring for TCP connections, comparing the destination IP of outbound TCP connections against its configured filters. An example configuration is presented below: SendTo Example Policy:policy: sendtoenabled: truealertMessage: Sendto Blacklist Alertcomments: Example strategy using the sendto policypriority: Highrules:default match A sample generated Alert for the above configuration (dummy values shown where normally real pids/uuids/timestamps etc. would be present) is presented below:{“alert_group_id”: “ ”,“confidence”: “Max”,“description”: “Sendto Blacklist Alert”,“location”: {“container_id”: “98e73918fad6ce45d2f84f76b0e61d2bf789fe6cda74b24184918133c3a32863”,“container_name”: “/test-container”,“image_id”: “7328f6f8b41890597575cbaadc884e7386ae0acc53b747401ebce5cf0d624560”,“image_name”: “TEST_IMAGE”,“node_name”: “ ”,“sensor_id”: “0d76a2a9ede1bc3df805d26e90501af54b11eabe180e963c56d27f065d9243f4”},“matched_objects”: null,“matched_rule”: “ ”,“metadata”: null,“notifications”: [{“message”: “The program (name unknown) with PID 1001 in container \“/test-container\” communicated with 192.168.1.2 on UDP port 30030. This attempt was not successful. This action matched the \“default match\” policy rule.”,“name”: “SendTo Example Policy”,“timestamp”: 1509474507990963973,“uuid”: “ZZZ”}],“policy_type”: “ ”,“priority”: “High”,“process_info”: {“args”: [ ],“children”: null,“cwd”: “ ”,“egid”: 15,“euid”: 12,“fsgid”: 15,“fsuid”: 12,“gid”: 15,“name”: “ ”“parent”: null,“pid”: 1001,“pid_uuid”: “YYY”,“ppid”: 0,“sgid”: 15,“suid”: 12,“uid”: 12},“scope”: “Process”,“strategy_name”: “SendTo Example Policy”,“timestamp”: 1509474507990963973,“uuid”: “XXX”} f. 
Sensor Timeout Policy (Policy Identifier: sensorTimeout) Valid filter rule fields for this policy are listed below: TypeDescriptioncontainerIdnamescontainerNamenamessensorIdnamesimageIdnamesimageNamenames Configuration options for this policy are listed below: ConfigurationOptionTypeDefaultDescriptiontimeoutint1Minutes since lastevent from thesensor was received Description: This Strategy sends an Alert on sensor timeout. An example configuration is presented below: Sensor Timeout Example Policy:policy: sensorTimeoutenabled: truealertMessage: Sensor Timeout Alertcomments: Example strategy using the sensorTimeout policypriority: Highrules:ignore sensorId==aabbccddeeffdefault matchTimeout: 20 A sample generated alert for the above configuration (dummy values shown where normally real pids/uuids/timestamps etc. would be present) is presented below:{“alert_group_id”: “ ”,“confidence”: “Max”,“description”: “Sensor Timeout Alert”,“location”: {“container_id”: “test-container”,“container_name”: “test-container-name”,“image_id”: “test-image”,“image_name”: “test-image-name”,“node_name”: “ ”,“sensor_id”: “0d76a2a9ede1bc3df805d26e90501af54b11eabe180e963c56d27f065d9243f4”“matched_objects”: null,“matched_rule”: “default match”,“metadata”: null,“notifications”: [{“message”: “The sensor has not received any process telemetry in the past \0014 minutes. This action matched the \“default match\” policy rule.”,“name”: “Sensor Timeout Example Policy”,“timestamp”: 1539804600351876590,“uuid”: “2a34c683-83e3-4653-97bc-4224b0baa757”}],“policy_type”: “sensorTimeout”,“priority”: “High”,“process_info”: {“args”: null,“children”: null,“cwd”: “ ”,“egid”: 0,“euid”: 0,“fsgid”: 0,“fsuid”: 0,“gid”: 0,“name”: “ ”,“parent”: null,“pid”: 0,“pid_uuid”: “ ”,“ppid”: 0,“sgid”: 0,“suid”: 0,“uid”: 0},“scope”: “Node”,“strategy_name”: “Sensor Timeout Example Policy”,“timestamp”: 1539727643594264759,“uuid”: “Sensor-Timeout-Example-Policy-XXX”} 2. Local Exploitation Policies a. BPF Protection Policy (Policy Identifier: Bpfexec) Valid filter rule fields for this policy are listed below: TypeDescriptioncontainerIdnamescontainerNamenamessensorIdnamesimageIdnamesimageNamenamesprogramNamepaths Description: This Strategy provides monitoring for attempts to call the BPF subsystem. An example configuration is presented below: BPF Example Policy:policy: bpfexecenabled: truecomments: Example strategy using the bpf policypriority: Mediumrules:ignore programName in $exampleWhitelistdefault matchalertMessage: BPF was calledexampleWhitelist:type: pathslist:/usr/share/bcc/tools/*: “bcc tools” A sample generated Alert for the above configuration (dummy values shown where normally real pids/uuids/timestamps etc. would be present) is presented below:{“alert_group_id”: “ ”,“confidence”: “Max”,“description”: “BPF was called”,“location”: {“container_id”: “N/A”,“container_name”: “N/A”,“image_id”: “N/A”,“image_name”: “N/A”,“node_name”: “ ”,“sensor_id”: “0d76a2a9ede1bc3df805d26e90501af54b11eabe180e963c56d27f065d9243f4”},“matched_objects”: null,“matched_rule”: “ ”,“metadata”: null,“notifications”: [{“message”: “BPF was invoked! 
This action matched the \“default match\” policy rule.”,“name”: “BPF”,“timestamp”: 1509474507990963973,“uuid”: “ZZZ”}],“policy_type”: “ ”“priority”: “Medium”,“process_info”: {“args”: [ ],“children”: null,“cwd”: “/tmp/non-whitelisted_program”,“egid”: 0,“euid”: 0,“fsgid”: 0,“fsuid”: 0,“gid”: 0,“name”: “ ”“parent”: null,“pid”: 0,“pid_uuid”: “394bd04468b541bdbe132a71de3671cb”,“ppid”: 0,“sgid”: 0,“suid”: 0,“uid”: 0},“scope”: “Process”,“strategy_name”: “BPF Example Policy”,“timestamp”: 1509474507990963973,“uuid”: “XXX”} 3. Local and Post-Exploitation Policies a. AppArmor Policy (Policy Identifier appArmor) Valid filter rule fields for this policy are listed below: TypeDescriptioncontainerIdnamescontainerNamenamessensorIdnamesimageIdnamesimageNamenames Additional configuration options for this policy are listed below: ConfigurationOptionTypeDefaultDescriptiondefaultAppArmorStboolFalseIf true, any eventatethat either disablesAppArmor atstartup or disablesAppArmorenforcement willtrigger an alert. Iffalse, will onlytrigger alerts ifwhatever settingexisted at startup ismodified. Description: AppArmor is a Linux Security Module implementation and confines individual programs to a set of listed files and run-time capabilities. This Strategy will generate an alert if AppArmor settings are illegally modified. An example configuration is presented below: AppArmor Example Policy:policy: apparmorenabled: truealertMessage: AppArmor settings were modified!defaultAppArmorState: falsecomments: Example strategy using the apparmor policypriority: Highrules:default match A sample generated Alert for the above configuration (dummy values shown where normally real pids/uuids/timestamps etc. would be present) is presented below:{“alert_group_id”: “ ”,“confidence”: “Max”,“description”: “AppArmor settings were modified!”,“location”: {“container_id”: “4cb5b14f2f6b8e02a3e57188e230d140f2a8880d236a5f21face723678a2c50a”,“container_name”: “test-container”,“image_id”: “7328f6f8b41890597575cbaadc884e7386ae0acc53b747401ebce5cf0d624560”,“image_name”: “alpine:3.6”,“sensor_id”: “0d76a2a9ede1bc3df805d26e90501af54b11eabe180e963c56d27f065d9243f4”},“notifications”: [{“message”: “The AppArmor security mechanism, which was previously enabled, has been disabled. This action matched the \“default match\” policy rule.”,“name”: “AppArmor Example Policy”,“timestamp”: 1536090770072590607,“uuid”: “7512f16f-9b3c-4b50-b53d-75d90d0f8468”}],“priority”: “High”,“process_info”: {“args”: null,“children”: null,“cwd”: “ ”,“egid”: 0,“euid”: 0,“fsgid”: 0,“fsuid”: 0,“gid”: 0,“name”: “ ”“parent”: null,“pid”: 0,“pid_uuid”: “ ”,“ppid”: 0,“sgid”: 0,“suid”: 0,“uid”: 0},“scope”: “Node”,“strategy_name”: “AppArmor Example Policy”,“timestamp”: 1536090770072588478,“uuid”: “Apparmor-Policy-Enabled-Sample-Config-cf30c7fd-c138-4957-bf87-6722afe5cd4a”} b. Kernel Module Loading Policy (Policy Identifier: Loadkernelmodule) Valid filter rule fields for this policy are listed below: TypeDescriptioncontainerIdnamescontainerNamenamessensorIdnamesimageIdnamesimageNamenamesprogramNamepathsuidnumbersgidnumberskernelModuleNamenames Description: This Strategy produces an Alert whenever a kernel module is loaded. An example configuration is presented below: Kernel Module Example Policy:policy: loadKernelModuleenabled: truealertMessage: A kernel module was loadedcomments: Example strategy using the loadKernelModule policypriority: Mediumrules:default match A sample generated Alert for the above configuration (dummy values shown where normally real pids/uuids/timestamps etc. 
would be present) is presented below:{“confidence”: “Max”,“description”: “A kernel module was loaded”,“location”: {“container_id”: “4cb5b14f2f6b8e02a3e57188e230d140f2a8880d236a5f21face723678a2c50a”,“container_name”: “test-container”,“image_id”: “7328f6f8b41890597575cbaadc884e7386ae0acc53b747401ebce5cf0d624560”,“image_name”: “alpine:3.6”,“sensor_id”: “0d76a2a9ede1bc3df805d26e90501af54b11eabe180e963c56d27f065d9243f4”},“notifications”: [{“actor_uuid”: “3ad2bfe3-8665-4d6b-a2d6-60238b05d02e”,“message”: “Kernel Module \“sampleModule\” was loaded. This action matched the \“default match\” policy rule.”,“name”: “Kernel Module Example Policy”,“timestamp”: 12434343435}],“priority”: “Medium”,“process_info”: {“args”: [ ],“name”: “ ”“pid”: 22059,“pid_uuid”: “080a6767-9f37-4d70-b00d-015a9edf9099”,“ppid”: 0},“scope”: “Node”,“strategy_name”: “Kernel Module Example Policy”,“timestamp”: 134334343,“uuid”: “4ba42670-4790-460e-b3cf-9f40ab3f197a”} c. Kernel Payload Policy (Policy Identifier kernelPayload) Valid filter rule fields for this policy are listed below: TypeDescriptioncontainerIdnamescontainerNamenamessensorIdnamesimageIdnamesimageNamenamesprogramNamepaths Additional configuration options for this policy are listed below: ConfigurationOptionTypeDefaultDescriptionextraLargeMemoryboolFalseWhether or notSystemshosts are using 5-level page tables Description: This Strategy detects when kernel functions commonly used by kernel-based exploits are called in unusual ways, in patterns that are unique to kernel exploitation. An example configuration is presented below: Kernel Payload Example Strategy:policy: kernelPayloadenabled: truealertMessage: Kernel Exploitationcomments: test strategy for the kernelPayload policypriority: Highrules:default matchextraLargeMemorySystems: false A sample generated alert for the above configuration (dummy values shown where normally real pids/uuids/timestamps etc. would be present) is presented below:{“alert_group_id”: “ ”,“confidence”: “Max”,“description”: “Kernel Exploitation”,“location”: {“container_id”: “N/A”,“container_name”: “N/A”,“image_id”: “N/A”,“image_name”: “N/A”,“node_name”: “ ”,“sensor_id”: “0d76a2a9ede1bc3df805d26e90501af54b11eabe180e963c56d27f065d9243f4”},“matched_objects”: null,“matched_rule”: “ ”,“metadata”: null,“notifications”: [{“message”: “The kernel function prepare_kernel_cred with a return address in userland was invoked during the execution of the program \“/sbin/tcpping\” with PID 1001. This action matched the \“default match\” policy rule.”,“name”: “Kernel Payload Example Strategy”,“timestamp”: 1509474507990963973,“uuid”: “ZZZ”}],“policy_type”: “ ”,“priority”: “High”,“process_info”: {“args”: [ ],“children”: null,“cwd”: “ ”,“egid”: 15,“euid”: 12,“fsgid”: 15,“fsuid”: 12,“gid”: 15,“name”: “/sbin/tcpping”,“parent”: null,“pid”: 1001,“pid_uuid”: “YYY”,“ppid”: 0,“sgid”: 15,“suid”: 12,“uid”: 12},“scope”: “Node”,“strategy_name”: “Kernel Payload Example Strategy”,“timestamp”: 1509474507990963973,“uuid”: “XXX”} d. Privilege Escalation Policy (Policy Identifier: privilegeEscalation) Valid filter rule fields for this policy are listed below: TypeDescriptioncontainerIdnamescontainerNamenamessensorIdnamesimageIdnamesimageNamenames Description: This Strategy monitors for privilege escalation attacks that overwrite process privileges without going through a setuid or setgid call. If there is an attempt to set a privilege to root without a matching open privilege-related syscall, the strategy raises an alert. 
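By way of illustration only, a minimal Go sketch of the credential-change check described above is presented below (Go is used to match the platform code referenced later in this document). The event structure and field names here (e.g., CredChangeEvent, OpenPrivSyscall) are assumptions made for this sketch and are not the platform's actual types; a production implementation would additionally track which privilege-related syscalls are currently in flight for each process, bookkeeping that is omitted here.

package main

import "fmt"

// CredChangeEvent is an illustrative representation of a process credential
// change as a sensor might report it; the field names are assumptions.
type CredChangeEvent struct {
	PID    int
	NewUID uint32
	NewGID uint32
	// OpenPrivSyscall is true when the credential change was observed inside a
	// setuid/setgid-family syscall that is currently open for this process.
	OpenPrivSyscall bool
}

// SuspiciousEscalation applies the rule described above: a change of uid or
// gid to root (0) with no matching open privilege-related syscall.
func SuspiciousEscalation(e CredChangeEvent) bool {
	becameRoot := e.NewUID == 0 || e.NewGID == 0
	return becameRoot && !e.OpenPrivSyscall
}

func main() {
	// Credentials overwritten to root outside of any setuid/setgid call: alert.
	fmt.Println(SuspiciousEscalation(CredChangeEvent{PID: 12, NewUID: 0, OpenPrivSyscall: false})) // true
	// Root obtained through an ordinary setuid call: no alert from this policy.
	fmt.Println(SuspiciousEscalation(CredChangeEvent{PID: 13, NewUID: 0, OpenPrivSyscall: true})) // false
}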
An example configuration is presented below: Privilege Escalation Strategy:policy: privilegeEscalationenabled: truealertMessage: Privilege Escalation Attemptcomments: Example strategy using the privilegeEscalation policypriority: Highrules:default match A sample generated Alert for the above configuration (dummy values shown where normally real pids/uuids/timestamps etc. would be present) is presented below:{“alert_group_id”: “ ”,“confidence”: “Max”,“description”: “Privilege Escalation Alert”,“location”: {“container_id”: “4cb5b14f2f6b8e02a3e57188e230d140f2a8880d236a5f21face723678a2c50a”,“container_name”: “test-container”,“image_id”: “7328f6f8b41890597575cbaadc884e7386ae0acc53b747401ebce5cf0d624560”,“image_name”: “alpine:3.6”,“node_name”: “ ”,“sensor_id”: “0d76a2a9ede1bc3df805d26e90501af54b11eabe180e963c56d27f065d9243f4”},“notifications”: [{“message”: “A privilege escalation exploit was detected in unknown program with PID 12 in container \“test-container\”. This action matched the \“default match\” policy rule.”,“name”: “Privilege Escalation Strategy”,“timestamp”: 1536602201654990745,“uuid”: “3ebf360f-f915-4422-afc1-3e7561779b5c”}],“priority”: “High”,“process_info”: {“args”: [“no open syscall for attempted set of uid, no open syscall for attempted set of gid,”],“children”: null,“cwd”: “ ”,“egid”: 0,“euid”: 1000,“fsgid”: 0,“fsuid”: 0,“gid”: 1000,“name”: “ ”,“parent”: null,“pid”: 12,“pid_uuid”: “394bd04468b541bdbe132a71de3671cb”,“ppid”: 0,“sgid”: 1000,“suid”: 1000,“uid”: 0},“scope”: “Process”,“strategy_name”: “Privilege Escalation Strategy”,“timestamp”: 1536602201654985695,“uuid”: “Privilege-Escalation-Strategy-”} e. Resource Limit Policy (Policy Identifier: Setrlimit) Valid filter rule fields for this policy are listed below: TypeDescriptioncontainerIdnamescontainerNamenamessensorIdnamesimageIdnamesimageNamenamesparentProgramNamepathsprogramNamepaths Description: This Strategy alerts when a process's resource limits are set to an unusually high value (for example, an unlimited stack size). This operation is performed by some exploits and may indicate an attempted privilege escalation exploit. This Strategy can be triggered by the “make” program. As such, it is recommended to include “ignore programName==/usr/*make” in the rules for this strategy to reduce false positives, or to disable this strategy on hosts that regularly perform software builds. An example configuration is presented below: SetRlimit Example Policy:policy: setrlimitenabled: truealertMessage: Resource Limit Policycomments: Example strategy using the setrlimit policypriority: Lowrules:default match A sample generated Alert for the above configuration (dummy values shown where normally real pids/uuids/timestamps etc. would be present) is presented below: {“alert_group_id”: “ ”,“confidence”: “Max”,“description”: “Resource Limit Policy”,“location”: {“container_id”: “98e73918fad6ce45d2f84f76b0e61d2bf789fe6cda74b24184918133c3a32863”,“container_name”: “/test-container”,“image_id”: “7328f6f8b41890597575cbaadc884e7386ae0acc53b747401ebce5cf0d624560”,“image_name”: “TEST_IMAGE”,“node_name”: “ ”,“sensor_id”: “0d76a2a9ede1bc3df805d26e90501af54b11eabe180e963c56d27f065d9243f4”},“matched_objects”: null,“matched_rule”: “ ”,“metadata”: null,“notifications”: [{“message”: “The program \“/tmp/badprog\” with PID 1001 in container \“/test-container\” has increased its resource limits. This may be performed as part of an exploitation attempt. 
This action matched the \“default match\” policy rule.”,“name”: “Resource Limits”,“timestamp”: 1509474507990963973,“uuid”: “ZZZ”}],“policy_type”: “ ”“priority”: “Low”,“process_info”: {“args”: [ ],“children”: null,“cwd”: “ ”,“egid”: 15,“euid”: 12,“fsgid”: 15,“fsuid”: 12,“gid”: 15,“name”: “/tmp/badprog”,“parent”: null,“pid”: 1001,“pid_uuid”: “YYY”,“ppid”: 0,“sgid”: 15,“suid”: 12,“uid”: 12},“scope”: “Process”,“strategy_name”: “SetRlimit Example Policy”,“timestamp”: 1509474507990963973,“uuid”: “XXX”} f. SELinux Policy (Policy Identifier: Selinux) Valid filter rule fields for this policy are listed below: TypeDescriptioncontainerIdnamescontainerNamenamessensorIdnamesimageIdnamesimageNamenames Additional configuration options for this policy are listed below: ConfigurationOptionTypeDefaultDescriptiondefaultboolFalseIf true, any eventthat either disablesSELinux at startupor disablesSELinuxenforcement willtrigger an alert. Iffalse, will onlytrigger alerts ifwhatever settingexisted at startup ismodified. Description: Security-Enhanced Linux (SELinux) is a Linux kernel security module that provides a mechanism for supporting access control security policies. This Strategy will generate an Alert if a kernel exploit illegally modified the SELinux settings. An example configuration is presented below: SELinux Example Strategy:policy: selinuxenabled: truealertMessage: SELinux Disabledcomments: Example strategy using the selinux policypriority: Highrules:ignore sensorId==aabbccddeeffdefault matchdefault: false A sample generated Alert for the above configuration (dummy values shown where normally real pids/uuids/timestamps etc. would be present) is presented below:{“confidence”: “Max”,“description”: “SELinux Disabled”,“location”: {“container_id”: “4cb5b14f2f6b8e02a3e57188e230d140f2a8880d236a5f21face723678a2c50a”,“container_name”: “test-container”,“image_id”: “7328f6f8b41890597575cbaadc884e7386ae0acc53b747401ebce5cf0d624560”,“image_name”: “alpine:3.6”,“sensor_id”: “0d76a2a9ede1bc3df805d26e90501af54b11eabe180e963c56d27f065d9243f4”},“notifications”: [{“actor_uuid”: “6f4d23d6-686b-4bf6-9401-86d339485f6a”,“message”: “The SELinux security mechanism, which was previously enabled, has been disabled. This action matched the \“default match\” policy rule.”,“name”: “SELinux Disabled”,“timestamp”: 1535569121667980508}],“priority”: “High”,“scope”: “Node”,“strategy_name”: “SELinux Example Strategy”,“timestamp”: 1535569121667976757,“uuid”: “Example-SELinux”} g. SMEP/SMAP Policy (Policy Identifier: smepSmap) Valid filter rule fields for this policy are listed below: TypeDescriptionsensorIdnames Description: The SMEP SMAP Strategy monitors for kernel exploitation attempts which involve disabling specific kernel memory protection mechanisms (as is common in kernel-based local-privilege-escalation exploits). The Supervisor-Mode-Execution-Prevention (SMEP) and Supervisor-Mode-Access-Prevention (SMAP) are mechanisms on modern CPUs to protect the kernel from exploitation techniques involving userland memory. This Strategy alerts on detection of kernel behavior disabling these protection mechanisms. An example configuration is presented below: SMEP SMAP Policy:policy: smepSmapenabled: truealertMessage: SMEP/SMAP was disabledcomments: Example strategy using the smepSmap policypriority: Highrules:ignore sensorId==aabbccddeeffdefault match A sample generated Alert for the above configuration (dummy values shown where normally real pids/uuids/timestamps etc. 
would be present) is presented below:{“confidence”: “Max”,“description”: “SMEP/SMAP was disabled”,“location”: {“container_id”: “4cb5b14f2f6b8e02a3e57188e230d140f2a8880d236a5f21face723678a2c50a”,“container_name”: “test-container”,“image_id”: “7328f6f8b41890597575cbaadc884e7386ae0acc53b747401ebce5cf0d624560”,“image_name”: “alpine:3.6”,“sensor_id”: “0d76a2a9ede1bc3df805d26e90501af54b11eabe180e963c56d27f065d9243f4”},“notifications”: [{“message”: “The SMEP/SMAP security mechanism, which was previously enabled, has been disabled. This action matched the \“default match\” policy rule.”,“name”: “SMEP/SMAP was disabled”,“timestamp”: 1535569121667980508,“uuid”: “6f4d23d6-686b-4bf6-9401-86d339485f6a”}],“priority”: “High”,“scope”: “Node”,“strategy_name”: “SMEP SMAP Policy”,“timestamp”: 1535569121667976758,“uuid”: “Default-SmepSmap-Config-Example”} h. Set Privilege Policy (Policy Identifier: setPrivilege) Valid filter rule fields for this policy are listed below: TypeDescriptioncontainerIdnamescontainerNamenamessensorIdnamesimageIdnamesimageNamenamesprogramNamepathsparentProgramNamepathsuidnumberstargetuid—targeteuid—targetsuid—targetfsuid—gidnumberstargetgid—targetegid—targetsgid—targetfsgid— Additional configuration options for this policy are listed below: ConfigurationOptionTypeDefaultDescriptionsetuidboolTrueAlert if setuid iscalledsetreuidboolTrueAlert if setreuid iscalledsetresuidboolTrueAlert if setresuid iscalledsetfsuidboolTrueAlert if setfsuid iscalledsetgidboolTrueAlert if setgid iscalledsetregidboolTruealert if setregid iscalledsetresgidboolTruealert if setresgid iscalledsetfsgidboolTrueAlert if setfsgid iscalled Description: This Strategy monitors calls to the setuid and setgid family of system calls used by processes to run with the privileges of a specific user or group. This can be used to alert on unusual usage of these system calls (e.g., usage as part of an exploit) as well as to monitor usage of privilege-altering commands such as “sudo”. An example configuration is presented below: SetPrivilegeTest:policy: setPrivilegeenabled: truealertMessage: set privilege alertcomments: testStratDescriptionpriority: Mediumrules:default matchsetuid: truesetreuid: truesetresuid: truesetfsuid: truesetgid: truesetregid: truesetresgid: truesetfsgid: true A sample generated Alert for the above configuration (dummy values shown where normally real pids/uuids/timestamps etc. would be present) is presented below:{“alert_group_id”: “ ”,“confidence”: “Max”,“description”: “set privilege alert”,“location”: {“container_id”: “98e73918fad6ce45d2f84f76b0e61d2bf789fe6cda74b24184918133c3a32863”,“container_name”: “/test-container”,“image_id”: “7328f6f8b41890597575cbaadc884e7386ae0acc53b747401ebce5cf0d624560”,“image_name”: “TEST_IMAGE”,“node_name”: “ ”, “sensor_id”: “0d76a2a9ede1bc3df805d26e90501af54b11eabe180e963c56d27f065d9243f4”},“matched_objects”: null,“matched_rule”: “ ”,“metadata”: null,“notifications”: [{“message”: “The program (name unknown) with PID 1001 in container \“/test-container\” made an unauthorized call to setuid to set uid: 1000. 
This action matched the “default match” policy rule.”,“name”: “SetPrivilegeTest”,“timestamp”: 1509474507990963973,“uuid”: “ZZZ”}],“policy_type”: “ ”,“priority”: “Medium”,“process_info”: {“args”: [ ],“children”: null,“cwd”: “ ”,“egid”: 10,“euid”: 10,“fsgid”: 10,“fsuid”: 10,“gid”: 10,“name”: “ ”,“parent”: null,“pid”: 1001,“pid_uuid”: “YYY/bin/bash”,“ppid”: 0,“sgid”: 10,“suid”: 10,“uid”: 10},“scope”: “Process”,“strategy_name”: “SetPrivilegeTest”,“timestamp”: 1509474507990963973,“uuid”: “XXX”} i. Spectre Meltdown Policy (Policy Identifier: spectreMeltdown) Valid filter rule fields for this policy are listed below: TypeDescriptionsensorIdnames Additional configuration options for this policy are listed below: ConfigurationOptionTypeDefaultDescriptioncacheMissRatiofloat640.97Maximum allowedThresholdratio of cache readsto cache misses Description: This Strategy monitors for spectre or meltdown attacks by monitoring hardware performance counters. If the cachemiss ratio and cachemiss-branchmiss ratio fall under a certain threshold derived through stocastic modeling (SVM), the Strategy raises an alert. An example configuration is presented below: Spectre Meltdown Policy:policy: spectreMeltdownenabled: truealertMessage: Spectre/Meltdown Exploit Detectedcomments: Example strategy using the spectreMeltdown policypriority: Highrules:ignore sensorId==aabbccddeeffdefault matchcacheMissRatioThreshold: 0.97 A sample generated alert for the above configuration (dummy values shown where normally real pids/uuids/timestamps etc. would be present) is presented below:{“confidence”: “Max”,“description”: “Spectre/Meltdown Exploit Detected”,“location”: {“container_id”: “4cb5b14f2f6b8e02a3e57188e230d140f2a8880d236a5f21face723678a2c50a”,“container_name”: “test-container”,“image_id”: “7328f6f8b41890597575cbaadc884e7386ae0acc53b747401ebce5cf0d624560”,“image_name”: “alpine:3.6”,“sensor_id”: “0d76a2a9ede1bc3df805d26e90501af54b11eabe180e963c56d27f065d9243f4”},“notifications”: [{“actor_uuid”: “6f4d23d6-686b-4bf6-9401-86d339485f6a”,“message”: “Spectre Meltdown Attack noticed on sensor ID 0d76a2a9ede1bc3df805d26e90501af54b11eabe180e963c56d27f065d9243f4. This action matched the \“default match\” policy rule.”,“name”: “Spectre/Meltdown Exploit Detected”,“timestamp”: 1535569121667980508}],“priority”: “High”,“scope”: “Node”,“strategy_name”: “Spectre Meltdown Policy”,“timestamp”: 1535569121667976755,“uuid”: “Default-Spectre-Config-1”} 4. Remote Exploitation Policies a. Connect Policy (Policy Identifier: Connect) Valid filter rule fields for this policy are listed below: TypeDescriptioncontainerIdnamescontainerNamenamessensorIdnamesimageIdnamesimageNamenamesremoteHosthostoutboundPortnumbersprogramNamepaths Description: This Strategy provides network-level IP-based policy monitoring for TCP connections. An example configuration is presented below: Connect Policy Example:policy: connectenabled: truealertMessage: Illegal Connection Attemptedcomments: Example strategy using the connect policypriority: Highrules:match remoteHost in $connecthostsdefault ignore CONNECTHOSTS:type: hostsdescription: Connectable Hostslist:192.168.1.0/24 A sample generated alert for the above configuration (dummy values shown where normally real pids/uuids/timestamps etc. 
would be present) is presented below:{“alert_group_id”: “ ”,“confidence”: “Max”,“description”: “Illegal Connection Attempted”,“location”: {“container_id”: “98e73918fad6ce45d2f84f76b0e61d2bf789fe6cda74b24184918133c3a32863”,“container_name”: “/test-container”,“image_id”: “7328f6f8b41890597575cbaadc884e7386ae0acc53b747401ebce5cf0d624560”,“image_name”: “TEST_IMAGE”,“node_name”: “ ”,“sensor_id”: “0d76a2a9ede1bc3df805d26e90501af54b11eabe180e963c56d27f065d9243f4”},“matched_objects”: null,“matched_rule”: “ ”,“metadata”: null,“notifications”: [{“message”: “The program \“/sbin/tcpping\” with PID 1001 in container \“/test-container\” communicated with 192.168.1.2 on TCP port 8080. This attempt was not successful. This action matched the “match remoteHost in $connecthosts” policy rule (where remoteHost (192.168.1.2) in 192.168.1.0/24).”,“name”: “testconnectpolicy1”,“timestamp”: 1509474507990963973,“uuid”: “ZZZ”}],“policy_type”: “ ”,“priority”: “High”,“process_info”: {“args”: [ ],“children”: null,“cwd”: “ ”,“egid”: 15,“euid”: 12,“fsgid”: 15,“fsuid”: 12,“gid”: 15,“name”: “/sbin/tcpping”,“parent”: null,“pid”: 1001,“pid_uuid”: “YYY”,“ppid”: 0,“sgid”: 15,“suid”: 12,“uid”: 12},“scope”: “Process”,“strategy_name”: “Connect Policy Example”,“timestamp”: 1509474507990963973,“uuid”: “testconnectpolicy1-XXX”} b. Interactive Shell Policy (Policy Identifier: interactiveShell) Valid filter rule fields for this policy are listed below: TypeDescriptioncontainerIdnamescontainerNamenamessensorIdnamesimageIdnamesimageNamenamesprogramNamepathsparentProgramNamepaths Additional configuration options for this policy are listed below: ConfigurationOptionTypeDefaultDescriptionalertOnIncompleteboolTrueGenerate an alertDataeven if some of thealert info is absent Description: This Strategy provides policy monitoring of interactive shell sessions (like/bin/bash). The premise for this Strategy is that security and operational best-practices generally discourage direct system shell interaction with containers running in production, such as logging-in over SSH to a production container. In addition to being generally discouraged, the presence of an interactive shell can also be an indicator of an attack, such as one delivering a payload that “pops” a shell for the attacker. This Strategy employs a whitelist of programs which are permitted to spawn interactive shells, and generates an alert if an interactive shell is executed by a non-whitelisted parent process. One caveat is that a whitelisted interactive-shell can spawn other interactive-shells without triggering an alert. The identification of permitted sub-shells is made by their relationship to a whitelist-permitted parent process. An example configuration is presented below: Interactive Shell Policy Example:policy: interactiveShellenabled: truealertMessage: An interactive shell was spawned!comments: Example of interactive shell policy with a whitelistpriority: Highrules:ignore parentProgramName in $authorizedprogramsdefault matchalertOnIncompleteData: true AUTHORIZEDPROGRAMS:type: pathslist:“/usr/sbin/sshd”: “ssh” A sample generated Alert for the above configuration (dummy values shown where normally real pids/uuids/timestamps etc. 
would be present) is presented below:{“confidence”: “MediumHigh”,“description”: “An interactive shell was spawned!”,“location”: {“container_id”: “b56b34be-aba8-439e-b488-827cdd869446”,“container_name”: “container 1507908229.024124”,“image_id”: “4ba42670-4790-460e-b3cf-9f40ab3f197a”,“image_name”: “unit_test:1507908229.024146”,“sensor_id”: “2862d402-9814-4d14-9996-f4d97c675cd5”},“notifications”: [{“message”: “The interactive shell \“/bin/bash\” with PID 3 was executed by the program (name unknown). The current configuration of Capsule8 is to alert on interactive shells even if the parent program is unknown. This action matched the \“default match\” policy rule.”,“name”: “Interactive Shell Policy Example”,“timestamp”: 12434343435,“uuid”: “3ad2bfe3-8665-4d6b-a2d6-60238b05d02e”}],“priority”: “High”,“process_info”: {“args”: [“/bin/bash”],“name”: “/bin/bash”,“pid”: 3,“pid_uuid”: “cccc”,“ppid”: 2},“scope”: “Process”,“strategy_name”: “Interactive Shell Policy Example”,“timestamp”: 134334343,“uuid”: “3ad2bfe3-8665-4d6b-a2d6-60238b05d10b”} c. Memory Protection Policy (Policy Identifier: memoryProtection) Valid filter rule fields for this policy are listed below: TypeDescriptioncontainerIdnamescontainerNamenamessensorIdnamesimageIdnamesimageNamenamesprogramNamepaths Description: This Strategy provides monitoring for attempts to exploit memory-mismanagement software vulnerabilities. Attempts to exploit these vulnerabilities to execute arbitrary code (also known as shellcode) commonly involve a step to modify the permissions on memory containing attacker-controlled data, so that the attacker-controlled data can be executed as program code. This Strategy specifically monitors for attempts to modify heap memory to be executable, and if that behavior is observed, an Alert is generated informing which process is under attack. Alerts include related container information. An example configuration is presented below: MemProtect Example Policy:policy: memoryprotectionenabled: truecomments: Example strategy using the memoryprotection policypriority: Highrules:ignore programName in $exampleWhitelistdefault matchalertMessage: Memory Protection AlertexampleWhitelist:type: pathslist:“/tmp/whitelisted_program”: “example of whitelisted program” A sample generated Alert for the above configuration (dummy values shown where normally real pids/uuids/timestamps etc. would be present) is presented below:{“alert_group_id”: “ ”,“confidence”: “Max”,“description”: “Memory Protection Alert”,“location”: {“container_id”: “4cb5b14f2f6b8e02a3e57188e230d140f2a8880d236a5f21face723678a2c50a”,“container_name”: “test-container”,“image_id”: “7328f6f8b41890597575cbaadc884e7386ae0acc53b747401ebce5cf0d624560”,“image_name”: “alpine:3.6”,“node_name”: “ ”,“sensor_id”: “0d76a2a9ede1bc3df805d26e90501af54b11eabe180e963c56d27f065d9243f4”},“matched_objects”: null,“matched_rule”: “ ”,“metadata”: null,“notifications”: [{“message”: “A memory corruption exploit was detected in the program \“/tmp/non-whitelisted_program\” with PID 3333 in container \“test-container\”. 
This action matched the \“default match\” policy rule.”,“name”: “MemProtect Example Policy”,“timestamp”: 1509474507990963973,“uuid”: “ZZZ”}],“policy_type”: “ ”“priority”: “High”,“process_info”: {“args”: [ ],“children”: null,“cwd”: “ ”,“egid”: 0,“euid”: 0,“fsgid”: 0,“fsuid”: 0,“gid”: 0,“name”: “/tmp/non-whitelisted_program”,“parent”: null,“pid”: 3333,“pid_uuid”: “a856880a77274d238a5a9d1057831dec”,“ppid”: 0,“sgid”: 0,“suid”: 0,“uid”: 0},“scope”: “Process”,“strategy_name”: “MemProtect Example Policy”,“timestamp”: 1509474507990963973,“uuid”: “XXX”} d. New File Exec Policy (Policy Identifier: newFileExec) Valid filter rule fields for this policy are listed below: TypeDescriptioncontainerIdnamescontainerNamenamessensorIdnamesimageIdnamesimageNamenamesparentProgramNamepathsprogramNamepaths Additional configuration options for this policy are listed below: ConfigurationOptionTypeDefaultDescriptionfileTimeoutint30Minutes afterwhich newlycreated files can beexecuted withouttriggering an alert Description: This Strategy watches for execution of newly-created files by non-whitelisted programs. If such created files are executed within the configured timeout, the Strategy produces Alerts. This behavior is often associated with webshells. An example configuration is presented below: New File Exec Policy Example:policy: newFileExecenabled: truealertMessage: A file not previously present in the system was executedcomments: Example strategy using the newFileExec policypriority: Highrules:ignore programName==/tmp/safe/*default matchfileTimeout: 30 A sample generated alert for the above configuration (dummy values shown where normally real pids/uuids/timestamps etc. would be present) is presented below:{“confidence”: “Max”,“description”: “A kernel module was loaded”,“location”: {“container_id”: “4cb5b14f2f6b8e02a3e57188e230d140f2a8880d236a5f21face723678a2c50a”,“container_name”: “test-container”,“image_id”: “7328f6f8b41890597575cbaadc884e7386ae0acc53b747401ebce5cf0d624560”,“image_name”: “alpine:3.6”,“sensor_id”: “0d76a2a9ede1bc3df805d26e90501af54b11eabe180e963c56d27f065d9243f4”},“notifications”: [{“actor_uuid”: “3ad2bfe3-8665-4d6b-a2d6-60238b05d02e”,“message”: “Kernel Module \“sampleModule\” was loaded. This action matched the following policy rule: \“default match\””,“name”: “Kernel Module Example Policy”,“timestamp”: 12434343435}],“priority”: “Medium”,“process_info”: {“args”: [ ],“name”: “ ”“pid”: 22059,“pid_uuid”: “080a6767-9f37-4d70-b00d-015a9edf9099”,“ppid”: 0},“scope”: “Node”,“strategy_name”: “Kernel Module Example Policy”,“timestamp”: 134334343,“uuid”: “4ba42670-4790-460e-b3cf-9f40ab3f197a”} e. Remote Interactive Shell Policy (Policy Identifier: remoteInteractiveShell) Valid filter rule fields for this policy are listed below: TypeDescriptioncontainerIdnamescontainerNamenamessensorIdnamesimageIdnamesimageNamenamesremoteHosthostoutboundPortnumbersinboundPortnumbersuidnumbersgidnumbersprogramNamepathsparentProgramNamepaths Description: This Strategy is similar to the functionality of the Interactive Shell strategy, but specifically monitors for interactive shells processing input/output from a network connection, such as the behavior exhibited by exploit payloads using mechanisms like the bash shell's/dev/tcp to connect back to an attacker's machine. 
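Purely as an illustration of the parent-program whitelist check shared by the Interactive Shell and Remote Interactive Shell strategies, a minimal Go sketch is presented below; the example configuration for the Remote Interactive Shell Strategy then follows. The shell list, the use of filepath.Match pattern semantics, and all function names here are assumptions of the sketch rather than the product's implementation. Note also that the real Strategy permits sub-shells of an authorized shell by tracking process lineage; only the direct-parent check is shown here.

package main

import (
	"fmt"
	"path/filepath"
)

// interactiveShells lists programs treated as interactive shells in this
// sketch; the real product's list and matching rules may differ.
var interactiveShells = map[string]bool{"/bin/bash": true, "/bin/sh": true, "/bin/zsh": true}

// ShouldAlert returns true when an interactive shell is executed by a parent
// that does not match any whitelisted path pattern.
func ShouldAlert(programName, parentProgramName string, whitelist []string, alertOnIncompleteData bool) bool {
	if !interactiveShells[programName] {
		return false
	}
	if parentProgramName == "" {
		// Parent unknown: behavior is governed by alertOnIncompleteData.
		return alertOnIncompleteData
	}
	for _, pattern := range whitelist {
		if ok, _ := filepath.Match(pattern, parentProgramName); ok {
			return false // spawned by an authorized parent
		}
	}
	return true
}

func main() {
	wl := []string{"/usr/sbin/sshd"}
	fmt.Println(ShouldAlert("/bin/bash", "/usr/bin/nginx", wl, true)) // true -> alert
	fmt.Println(ShouldAlert("/bin/bash", "/usr/sbin/sshd", wl, true)) // false -> whitelisted parent
	fmt.Println(ShouldAlert("/bin/bash", "", wl, true))               // true -> unknown parent, alert on incomplete data
}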
An example configuration is presented below: Remote Interactive Shell Strategy Example:policy: remoteInteractiveShellenabled: truealertMessage: Remote Interactive Shell Executedcomments: Example strategy using the remoteInteractiveShell policypriority: Highrules:ignore parentProgramName in $programlistdefault matchalertOnIncompleteData: true PROGRAMLIST:type: pathslist:/bin/baz A sample generated alert for the above configuration (dummy values shown where normally real pids/uuids/timestamps etc. would be present) is presented below:{“confidence”: “Max”,“description”: “Remote Interactive Shell Executed”,“location”: {“container_id”: “b56b34be-aba8-439e-b488-827cdd869446”,“container_name”: “container_1507908229.024124”,“image_id”: “4ba42670-4790-460e-b3cf-9f40ab3f197a”,“image_name”: “unit_test:1507908229.024146”,“sensor_id”: “2862d402-9814-4d14-9996-f4d97c675cd5”},“notifications”: [{“actor_uuid”: “3ad2bfe3-8665-4d6b-a2d6-60238b05d02e”,“message”: “The interactive shell \“/bin/bash\” with PID 2 in container “container_1507908229.024124” was spawned with remote-control operation through an outbound connection to 192.168.0.1. This action matched the \“default match\” policy rule.”,“name”: “Remote Interactive Shell Strategy Example”,“timestamp”: 12434343435}],“priority”: “High”,“process_info”: {“args”: [“/bin/bash”],“name”: “/bin/bash”,“pid”: 2,“pid_uuid”: “080a6767-9f37-4d70-b00d-015a9edf9099”,“ppid”: 0},“scope”: “Process”,“strategy_name”: “Remote Interactive Shell Strategy Example”,“timestamp”: 134334343,“uuid”: “XXXX”} f. Stack Pivot Detection Policy (Policy Identifier: stackPivotDetection) Valid filter rule fields for this policy are listed below: TypeDescriptioncontainerIdnamescontainerNamenamessensorIdnamesimageIdnamesimageNamenamesprogramNamepaths Description: This Strategy examines the stack pointer on certain syscalls and ensures that it is within normal stack bounds. If it is not, it raises an Alert. The stack pointer being outside the bounds of the stack is normally indicative of a stack pivot as part of an exploit chain. An example configuration is presented below: Example Stack Pivot Policy:policy: stackPivotDetectionenabled: truealertMessage: Stack Pivot Detectedcomments: Example strategy using the stackPivotDetection policypriority: Mediumrules:default match A sample generated alert for the above configuration (dummy values shown where normally real pids/uuids/timestamps etc. would be present) is presented below:{“confidence”: “High”,“description”: “Stack Pivot Detected”,“location”: {“container_id”: “4cb5b14f2f6b8e02a3e57188e230d140f2a8880d236a5f21face723678a2c50a”,“container_name”: “test-container”,“image_id”: “7328f6f8b41890597575cbaadc884e7386ae0acc53b747401ebce5cf0d624560”,“image_name”: “alpine:3.6”,“sensor_id”: “0d76a2a9ede1bc3df805d26e90501af54b11eabe180e963c56d27f065d9243f4”},“notifications”: [{“actor_uuid”: “6f4d23d6-686b-4bf6-9401-86d339485f6a”,“message”: “A stack pivot was detected in the program with PID 3333 in container \“test-container\”. This action matched the \“default match\” policy rule.”,“name”: “Stack Pivot Detected”,“timestamp”: 1535569121667980508}],“priority”: “Medium”,“process_info”: {“args”: [ ],“name”: “ ”“pid”: 3333,“pid_uuid”: “a856880a77274d238a5a9d1057831dec”,“ppid”: 0},“scope”: “Process”,“strategy_name”: “Example Stack Pivot Policy”,“timestamp”: 1535569121667976755,“uuid”: “Example-StackPivot-Config-1”} VII. Alerts A. 
Getting Events from the Platform At startup, Gateway 404's instance registers its HandleEvent function as a callback with CommsClient 402's instance. It then gets the list of Events each Strategy needs by calling each Strategy's GetSubscription() method and combines them into a set to form a single telemetry subscription. Any time an Event is received via this telemetry subscription, it does the following. First, CommsClient 402 invokes its factory and creates a corresponding Event as a Platform Event. Second, it calls Gateway 404's HandleEvent method as a callback. B. Passing Events to Strategies FIG. 27 illustrates an example flow for a HandleEvent method. When Gateway 404's HandleEvent method is called, it first invokes all of its registered Event Utilities (2702). This is to allow them to update their internal state, as Strategies may use these utilities to determine whether an Alert should be emitted. After updating the state of registered utilities, it then calls each of the enabled Strategies' HandleEvent methods (2704). This method has the following signature in various embodiments: HandleEvent(event protectevent.Event) ([]alert.Alert, []metaevent.Event, error) and is defined in pkg/strategy/interface.go. When invoked, the Strategy consumes the Event and produces one or more Alerts (2706) and/or one or more MetaEvents (2708). The MetaEvents produced by the call to a Strategy's HandleEvent method are sent back to analytics framework 400 to be published by invoking Gateway 404's SendEventOfInterestToPlatform method, which sends them to CommsClient 402. CommsClient 402 translates each MetaEvent to a protobuf message and publishes it on a topic supplied inside of the specific MetaEvent struct. Each Alert is sent back to analytics framework 400 by calling Gateway 404's SendAlertToPlatform. This method does a few things. First, it applies any registered Alert Utilities to enrich the Alert data by calling their HandleAlert methods. Examples include adding process lineage, annotating an Alert with other metadata such as the number of alerts for the sensor node, etc. It then passes the Alert to Arbiter 408 by calling its HandleAlert method, which either returns the Alert or nil. If Arbiter 408 returns the Alert, then it is sent back to CommsClient 402, which converts it to an API protobuf format and then publishes it on the Alerting topic. If CommsClient 402 is a standalone or Embedded Server Library Client, then the Alert can be printed to Stdout, a local file, or a BLOB storage provider. C. Alert Filtering and the Arbiter In order to keep false positives from becoming an issue, Arbiter 408 supports filtering in the form of Alert filter logic, as follows: a set of filters is passed as strings and then compiled into an evaluation format. When an Alert is passed to Arbiter 408, it evaluates each filter in the list against the Alert. If any of the filters match, then the Alert is discarded. The filters themselves are described using the following Alert filter language. Arbiter 408's Alert filter language works by evaluating each predicate rule against a given Alert. If any of the rules evaluate to true, then the Alert is filtered. Rules start with an Alert field identifier that identifies the specific field in the Alert to compare. Rules may be combined into compound statements using the special “and” and “or” operators. 1. Value Types a. Alert Field Identifiers The Alert Field Identifiers type identifies a specific property of an Alert to check.
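The HandleEvent signature quoted above is defined against the platform's own packages. Purely as an illustration of the dispatch flow (Event Utilities, Alert Utilities, and the Arbiter step are omitted), a simplified Go sketch using stand-in types is presented below; all type and field names in the sketch are assumptions made for illustration only. The supported Alert field identifiers are then listed after the sketch.

package main

import "fmt"

// Simplified stand-ins for the platform's protectevent.Event, alert.Alert, and
// metaevent.Event types; the real definitions live in the platform's packages.
type Event struct{ Type, SensorID string }
type Alert struct{ Description string }
type MetaEvent struct{ Topic string }

// Strategy mirrors the shape of the HandleEvent signature quoted above, using
// the simplified stand-in types of this sketch.
type Strategy interface {
	HandleEvent(e Event) ([]Alert, []MetaEvent, error)
}

// alwaysAlert is a toy strategy that raises one Alert for every Event it sees.
type alwaysAlert struct{}

func (alwaysAlert) HandleEvent(e Event) ([]Alert, []MetaEvent, error) {
	return []Alert{{Description: "example alert for " + e.Type}}, nil, nil
}

// Gateway fans each incoming Event out to the enabled strategies and forwards
// whatever Alerts and MetaEvents they produce.
type Gateway struct {
	strategies []Strategy
	publish    func(MetaEvent) // stand-in for SendEventOfInterestToPlatform
	sendAlert  func(Alert)     // stand-in for SendAlertToPlatform
}

func (g *Gateway) HandleEvent(e Event) error {
	for _, s := range g.strategies {
		alerts, metas, err := s.HandleEvent(e)
		if err != nil {
			return err
		}
		for _, m := range metas {
			g.publish(m)
		}
		for _, a := range alerts {
			g.sendAlert(a)
		}
	}
	return nil
}

func main() {
	g := &Gateway{
		strategies: []Strategy{alwaysAlert{}},
		publish:    func(m MetaEvent) { fmt.Println("meta-event on topic:", m.Topic) },
		sendAlert:  func(a Alert) { fmt.Println("alert:", a.Description) },
	}
	_ = g.HandleEvent(Event{Type: "process_exec", SensorID: "0d76a2a9ede1"})
}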
The following are examples of supported fields: IdentifierAlert FieldType″container_id″alert.Location.ContainerIDstring (hex_string)″image_id″alert.Location.ImageIDstring (hex_string)″sensor_id″alert.Location.SensorIDstring (hex_string)″container_name″alert.Location.ContainerNamestring″image_name″alert.Location.ImageNamestring″program_name″alert.ProcessInfo.ProgamNamestring″strategy″alert.StrategyNamestring″priority″alert.Priorityint″confidence″alert.Confidencefloat64 b. String Strings are identifiers that are not keywords or operators, and do not need quotes. For example:foobarbazbasil Example Usage:container_name in foobar baz basilprogram_name==/bin/sh c. Hex String A hex string represents a string of only hexadecimal characters. It can be either 64 characters or 12 characters long. It is used to represent UUIDs that are commonly SHA256 hashes such as container IDs, image IDs, and sensor IDs. If a short form is specified only the first 12 characters of the specified field will be compared. For example:98e73918fad6ce45d2f84f76b0e61d2bf789fe6cda74b24184918133c3a32863 0d76a2a9ede1 Example Usage:sensor_id in 0d76a2a9ede1 7ef86f8e8b85container_id==0d76a2a9ede1 d. FLOAT64 The FLOAT64 type represents a 64-bit signed floating point number. It is used only with the alert confidence field. All operators are valid for the float. For example:3.145962 Example Usage:confidence>=0.95 and confidence <0.971245 e. Integer The integer type represents a 64-bit integer. This is used only for the priority field. Additionally there are special keywords such as LOW, MEDIUM, and HIGH which represent 1, 2, and 3. Example Usage:priority in HIGH LOWpriority <HIGH 2. Operators in: The “in” operator tests whether the Alert field's value is in the specified list of values. Example Usage:priority in HIGH LOWsensor_id in 0d76a2a9ede1 7ef86f8e8b85 not_in: The “not_in” operator tests whether the Alert field's value is not in the specified list of values. Example Usage:sensor_id not_in 0d76a2a9ede1 7ef86f8e8b85 ==: The equality operator (“==”) tests whether the Alert field's value is equal to the specified value. Example Usage:program_name==/bin/zsh !=: The negative equality operator (“!=”) tests whether the Alert field's value is not equal to the specified value. Example Usage:container_name !=steve >: The greater than operator (“>”) tests whether the Alert field's value is greater than the value specified. Example Usage:priority >1priority >LOW >=: The greater than or equal operator (“>=”) tests whether the Alert field's value is greater than or equal to the value specified. Example Usage:confidence >=0.90 <: The less than operator(“<”) tests whether the Alert field's value is less than the value specified. Example Usage:priority <3priority <HIGHconfidence <0.942 <=: The less than or equal operator (“<=”) tests whether the Alert field's value is less than or equal to the value specified. Example Usage:priority <=2priority <=MEDIUMconfidence <=0.942 OR: The OR operator joins two rules together into a single rule. This requires that one of the rules is joined by the OR operator to evaluate to true. Example Usage:container_name==steve or priority >LOW AND: The AND operator joins two or more rules together into a single rule. This requires that all of the rules joined by the AND operator evaluate to true. Example Usage:container_name==load_balancer:3.8and priority >LOW VIII. Query API In various embodiments, platform100provides a query API. 
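Before the query API is described in more detail, a minimal sketch of how a small subset of the Alert filter language above might be evaluated against an Alert is presented below. Only the ==, != and in operators over three fields are shown; tokenization in this sketch requires whitespace around operators (the language itself also accepts forms such as program_name==/bin/zsh), compound and/or statements are not handled, and the Alert struct is reduced to the fields used. It is an illustrative sketch, not the Arbiter's actual implementation.

package main

import (
	"fmt"
	"strings"
)

// Alert holds just the fields this sketch filters on; the real Alert struct
// has many more fields.
type Alert struct {
	ContainerName string
	ProgramName   string
	Priority      int
}

// matchRule evaluates a single whitespace-separated rule of the form
// "<field> <op> <value...>" for a small subset of the filter language.
func matchRule(rule string, a Alert) bool {
	parts := strings.Fields(rule)
	if len(parts) < 3 {
		return false
	}
	field, op, values := parts[0], parts[1], parts[2:]
	var actual string
	switch field {
	case "container_name":
		actual = a.ContainerName
	case "program_name":
		actual = a.ProgramName
	case "priority":
		actual = map[int]string{1: "LOW", 2: "MEDIUM", 3: "HIGH"}[a.Priority]
	default:
		return false
	}
	switch op {
	case "==":
		return actual == values[0]
	case "!=":
		return actual != values[0]
	case "in":
		for _, v := range values {
			if actual == v {
				return true
			}
		}
	}
	return false
}

// filtered reports whether any rule matches, in which case Arbiter-style logic
// would discard the Alert.
func filtered(rules []string, a Alert) bool {
	for _, r := range rules {
		if matchRule(r, a) {
			return true
		}
	}
	return false
}

func main() {
	a := Alert{ContainerName: "load_balancer", ProgramName: "/bin/sh", Priority: 1}
	fmt.Println(filtered([]string{"program_name == /bin/sh"}, a)) // true -> discarded
	fmt.Println(filtered([]string{"priority in HIGH MEDIUM"}, a)) // false -> kept
}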
The query API can be used for a variety of purposes, such as providing more context around Alerts in the form of high-level Events (also referred to herein as MetaEvents), providing a mechanism that an operator can use to see high-level Events leading up to an Alert, providing a mechanism for an operator to query other monitored hosts for MetaEvents, and to allow Sensors (e.g., Sensor 112) to stay within their performance budgets. FIG. 28A illustrates an embodiment of a Sensor (e.g., Sensor 112). Flush service 2802 is responsible for starting a gRPC stream to flush Sensors for MetaEvents (e.g., as request 2804, that opens the gRPC stream). MetaEvents can be flushed as a “response” (2806, 2808) and/or flushed to an external mount (e.g., an S3 bucket 2810) depending on Sensor configuration. Flight recorder 2812 is a ring buffer configured to hold a specified size limit of MetaEvents. Each entry in the Flight Recorder holds the following fields: Timestamp, EventType, and Payload. The Payload comprises Flatbuffer-encoded MetaEvents. Flusher 2814 is responsible for deciding where to flush the extracted MetaEvents from flight recorder 2812. FIG. 28B illustrates an embodiment of a security server (e.g., security server 110). Query service 2852 is responsible for handling requests to filter MetaEvents from Clients (e.g., via CLI 124 or Console 122). Filtering is applied for a specified time range and supplied NYQL query string. Query Parser 2854 is configured to parse the Query Statement using NYQL's syntax and create/execute a Query Filter 2856 using the parsed Query data. The Query Filter component is responsible for figuring out to which Sensors the flush should be sent, filtering flushed MetaEvent responses, and determining if Mount Query component 2858 is needed. Mount Query component 2858 is responsible for querying a mounted drive (e.g., S3) for a dump of MetaEvents. Flusher service 2860 contains the gRPC Streaming endpoint for flushing flight recorder 2812. The flush request starts a stream and receives a StartTime, EndTime, and EventType. FIG. 29A illustrates an example flow of requests and responses used in a scenario where no mount is configured for Sensors to store MetaEvents. FIG. 29B illustrates an example flow of requests and responses used in a scenario where there is a mount configured for Sensors to store MetaEvents. In this configuration, an S3 mount is used as the external MetaEvent store. Although the foregoing embodiments have been described in some detail for purposes of clarity of understanding, the invention is not limited to the details provided. There are many alternative ways of implementing the invention. The disclosed embodiments are illustrative and not restrictive.
11943239 | DETAILED DESCRIPTION OF CERTAIN EMBODIMENTS Overview Various embodiments provide tools and techniques for implementing call or data routing, and, more particularly, to methods, systems, and apparatuses for implementing fraud or distributed denial of service (“DDoS”) protection for session initiation protocol (“SIP”)-based communication. In various embodiments, a computing system may receive, from a first router in a network(s), first SIP data, the first SIP data indicating a request to initiate a SIP-based media communication session between a calling party at a source address in an originating network and a called party at a destination address in the network. In some embodiments, the SIP-based media communication may include, without limitation, at least one of a voice over Internet Protocol (“VoIP”) call, an IP-based video call, or an instant message over IP, and/or the like. The computing system may analyze the received first SIP data to determine whether the received first SIP data comprises any abnormalities indicative of potential fraudulent or malicious actions. In some instances, the potential fraudulent or malicious actions may include, but are not limited to, at least one of one or more attacks to obtain fraudulent access, one or more denial of service (“DoS”) attacks, one or more DDoS attacks, or one or more user datagram protocol (“UDP”) attacks (which may be a form of volumetric DDoS attack), and/or the like. Based on a determination that the received first SIP data does not comprise at least one abnormality indicative of potential fraudulent or malicious actions, the computing system may establish a media communication session between the calling party and the called party. On the other hand, based on a determination that the received first SIP data comprises at least one abnormality indicative of potential fraudulent or malicious actions, the computing system may reroute the first SIP data to a security deep packet inspection (“DPI”) engine. The security DPI engine may perform a deep scan of the received first SIP data to identify any known fraudulent or malicious attack vectors contained within the received first SIP data and to determine whether the calling party is a known malicious entity or whether the source address is associated with a known malicious entity. In the case that the deep scan of the received first SIP data does not reveal or identify any known fraudulent or malicious attack vectors contained within the received first SIP data, does not reveal or identify the calling party as a known malicious entity, and/or does not reveal or identify the source address as being associated with a known malicious entity, then the security DPI engine may establish a media communication session between the calling party and the called party. On the other hand, in the case that the deep scan of the received first SIP data reveals or identifies at least one known fraudulent or malicious attack vector contained within the received first SIP data, then the security DPI engine may initiate one or more mitigation actions. According to some embodiments, the computing system and/or the security DPI engine may normalize all network traffic to the destination address (e.g., from the source address) either after a predetermined period and/or after a predetermined number of SIP data checks showing no abnormalities indicative of potential fraudulent or malicious actions.
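Purely as an illustrative sketch of the decision flow described in this Overview, and not as the actual implementation of any embodiment, the following Go example strings the steps together. The SIPRequest structure and function names are assumptions of the sketch, and the abnormality check and deep scan are reduced to injected callbacks.

package main

import "fmt"

// SIPRequest is an illustrative stand-in for the first SIP data described
// above; only the fields used by this sketch are included.
type SIPRequest struct {
	SourceAddr string
	DestAddr   string
	Method     string // e.g., "INVITE"
}

// Outcome describes what the sketch decides to do with a request.
type Outcome string

const (
	EstablishSession Outcome = "establish media session"
	Mitigate         Outcome = "initiate mitigation actions"
)

// HandleSIPRequest mirrors the flow described in the Overview: screen the SIP
// data for abnormalities; if none are found, establish the session; otherwise
// reroute to a security DPI engine for a deep scan, and either establish the
// session or initiate mitigation depending on what the deep scan finds.
func HandleSIPRequest(
	req SIPRequest,
	looksAbnormal func(SIPRequest) bool, // lightweight screening step
	deepScanFindsAttack func(SIPRequest) bool, // security DPI engine step
) Outcome {
	if !looksAbnormal(req) {
		return EstablishSession
	}
	// Abnormality detected: reroute to the security DPI engine for a deep scan.
	if deepScanFindsAttack(req) {
		return Mitigate
	}
	return EstablishSession
}

func main() {
	req := SIPRequest{SourceAddr: "203.0.113.7", DestAddr: "198.51.100.20", Method: "INVITE"}
	out := HandleSIPRequest(req,
		func(r SIPRequest) bool { return true },  // pretend screening flagged the request
		func(r SIPRequest) bool { return false }, // deep scan finds no known attack vector
	)
	fmt.Println(out) // establish media session
}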
These and other aspects of the fraud or DDoS protection system and method for SIP-based communication are described in greater detail with respect to the figures. The following detailed description illustrates a few exemplary embodiments in further detail to enable one of skill in the art to practice such embodiments. The described examples are provided for illustrative purposes and are not intended to limit the scope of the invention. In the following description, for the purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of the described embodiments. It will be apparent to one skilled in the art, however, that other embodiments of the present invention may be practiced without some of these specific details. In other instances, certain structures and devices are shown in block diagram form. Several embodiments are described herein, and while various features are ascribed to different embodiments, it should be appreciated that the features described with respect to one embodiment may be incorporated with other embodiments as well. By the same token, however, no single feature or features of any described embodiment should be considered essential to every embodiment of the invention, as other embodiments of the invention may omit such features. Unless otherwise indicated, all numbers used herein to express quantities, dimensions, and so forth used should be understood as being modified in all instances by the term “about.” In this application, the use of the singular includes the plural unless specifically stated otherwise, and use of the terms “and” and “or” means “and/or” unless otherwise indicated. Moreover, the use of the term “including,” as well as other forms, such as “includes” and “included,” should be considered non-exclusive. Also, terms such as “element” or “component” encompass both elements and components comprising one unit and elements and components that comprise more than one unit, unless specifically stated otherwise. Various embodiments described herein, while embodying (in some cases) software products, computer-performed methods, and/or computer systems, represent tangible, concrete improvements to existing technological areas, including, without limitation, call routing technology, call routing management technology, data routing technology, data routing management technology, network management technology, communication fraud protection technology, DOS protection technology, and/or the like. 
In other aspects, certain embodiments, can improve the functioning of user equipment or systems themselves (e.g., call routing systems, call routing management systems, data routing systems, data routing management systems, network management systems, communication fraud protection systems, DOS protection systems, etc.), for example, by receiving, using a computing system and from a first router among a plurality of routers in a network, first session initiation protocol (“SIP”) data, the first SIP data indicating a request to initiate a SIP-based media communication session between a calling party at a source address in an originating network and a called party at a destination address in the network; analyzing, using the computing system, the received first SIP data to determine whether the received first SIP data comprises any abnormalities indicative of potential fraudulent or malicious actions; based on a determination that the received first SIP data comprises at least one abnormality indicative of potential fraudulent or malicious actions, rerouting, using the computing system, the first SIP data to a security deep packet inspection (“DPI”) engine; performing, using the security DPI engine, a deep scan of the received first SIP data to identify any known fraudulent or malicious attack vectors contained within the received first SIP data and to determine whether the calling party is a known malicious entity or whether the source address is associated with a known malicious entity; and in response to the security DPI engine identifying at least one known fraudulent or malicious attack vector contained within the received first SIP data, initiating one or more mitigation actions; and/or the like. In particular, to the extent any abstract concepts are present in the various embodiments, those concepts can be implemented as described herein by devices, software, systems, and methods that involve specific novel functionality (e.g., steps or operations), such as, analyzing the received first SIP data to determine whether the received first SIP data comprises any abnormalities indicative of potential fraudulent or malicious actions; based on a determination that the received first SIP data comprises at least one abnormality indicative of potential fraudulent or malicious actions, rerouting the first SIP data to a security DPI engine; performing a deep scan of the received first SIP data to identify any known fraudulent or malicious attack vectors contained within the received first SIP data and to determine whether the calling party is a known malicious entity or whether the source address is associated with a known malicious entity; and in response to the security DPI engine identifying at least one known fraudulent or malicious attack vector contained within the received first SIP data, initiating one or more mitigation actions; and/or the like, to name a few examples, that extend beyond mere conventional computer processing operations. These functionalities can produce tangible results outside of the implementing computer system, including, merely by way of example, optimized screening of SIP-based requests and data to identify potential fraudulent and/or malicious actions, optimized deep scans of SIP-based requests to identify potential fraudulent or malicious attack vectors, automatic routing or rerouting of SIP requests, and automatic initiation of mitigation actions, and/or the like, at least some of which may be observed or measured by customers and/or service providers. 
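The screening step recited above can be illustrated with a small sketch of the kind of per-source heuristic that might back it. The fields and thresholds below are assumptions made solely for this sketch; they are not values disclosed by any embodiment, and an actual screening implementation would draw on richer packet statistics.

package main

import "fmt"

// SourceStats is an illustrative per-source summary of recently observed SIP
// traffic; the fields and thresholds used here are assumptions of this sketch.
type SourceStats struct {
	RequestsPerMinute int
	AvgPacketBytes    int
	MalformedPackets  int
}

// looksAbnormal is a toy screening check: unusually high request rates,
// unusually large packets, or malformed packets from a source are treated as
// abnormalities that warrant a deep scan.
func looksAbnormal(s SourceStats) bool {
	const (
		maxRequestsPerMinute = 300
		maxAvgPacketBytes    = 4096
	)
	return s.RequestsPerMinute > maxRequestsPerMinute ||
		s.AvgPacketBytes > maxAvgPacketBytes ||
		s.MalformedPackets > 0
}

func main() {
	fmt.Println(looksAbnormal(SourceStats{RequestsPerMinute: 12, AvgPacketBytes: 800}))   // false: looks normal
	fmt.Println(looksAbnormal(SourceStats{RequestsPerMinute: 5000, AvgPacketBytes: 900})) // true: likely flood, reroute for deep scan
}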
In an aspect, a method may comprise receiving, using a computing system and from a first router among a plurality of routers in a network, first session initiation protocol (“SIP”) data, the first SIP data indicating a request to initiate a SIP-based media communication session between a calling party at a source address in an originating network and a called party at a destination address in the network; analyzing, using the computing system, the received first SIP data to determine whether the received first SIP data comprises any abnormalities indicative of potential fraudulent or malicious actions; based on a determination that the received first SIP data comprises at least one abnormality indicative of potential fraudulent or malicious actions, rerouting, using the computing system, the first SIP data to a security deep packet inspection (“DPI”) engine; performing, using the security DPI engine, a deep scan of the received first SIP data to identify any known fraudulent or malicious attack vectors contained within the received first SIP data and to determine whether the calling party is a known malicious entity or whether the source address is associated with a known malicious entity; and in response to the security DPI engine identifying at least one known fraudulent or malicious attack vector contained within the received first SIP data, initiating one or more mitigation actions. Merely by way of example, in some cases, the computing system may comprise at least one of a call server, a call controller, a call manager, a media gateway controller, a video call server, an instant messaging server, a network operations center (“NOC”), a centralized call server, a centralized call controller, a centralized call manager, a centralized media gateway controller, a centralized video call server, a centralized instant messaging server, a distributed computing-based call server, a distributed computing-based call controller, a distributed computing-based call manager, a distributed computing-based media gateway controller, a distributed computing-based video call server, a distributed computing-based instant messaging server, or a distributed computing-based NOC, and/or the like. In some instances, the SIP-based communication may comprise at least one of a voice over Internet Protocol (“VoIP”) call, an IP-based video call, or an instant message over IP, and/or the like. In some embodiments, the method may further comprise receiving, using the computing system, continually collected data from the plurality of routers in the network; wherein continually collecting data from the plurality of routers in the network may comprise receiving the first SIP data from the first router in a continual, periodic, or sporadic manner, and/or the like. In some cases, each of the plurality of routers may employ active data collection to collect network traffic data, wherein the collected network traffic data may comprise SIP data and non-SIP data, and wherein each of the plurality of routers may provide the SIP data as the continually collected data to the computing system. 
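As a rough illustration of this collection step only, the sketch below separates SIP signaling from other collected traffic by checking for the conventional SIP port (5060) or a SIP start-line, and forwards only the SIP data onward. The packet structure, the port-based test, and the channel used as a stand-in for delivery to the computing system are assumptions of the sketch; an actual router-based collector may identify and forward SIP traffic differently.

package main

import (
	"fmt"
	"strings"
)

// CollectedPacket is an illustrative stand-in for traffic data collected by a
// router; real collection would carry far more detail.
type CollectedPacket struct {
	SrcAddr string
	DstAddr string
	DstPort int
	Payload string
}

// looksLikeSIP applies two simple illustrative tests: the conventional SIP
// port (5060) or a SIP request/status start-line in the payload.
func looksLikeSIP(p CollectedPacket) bool {
	if p.DstPort == 5060 {
		return true
	}
	start := strings.ToUpper(p.Payload)
	for _, m := range []string{"INVITE ", "REGISTER ", "OPTIONS ", "BYE ", "SIP/2.0"} {
		if strings.HasPrefix(start, m) {
			return true
		}
	}
	return false
}

// forwardSIPData models a router continually providing only the SIP portion
// of its collected traffic to the computing system (here, a channel).
func forwardSIPData(collected []CollectedPacket, toComputingSystem chan<- CollectedPacket) {
	for _, p := range collected {
		if looksLikeSIP(p) {
			toComputingSystem <- p
		}
	}
	close(toComputingSystem)
}

func main() {
	out := make(chan CollectedPacket, 8)
	forwardSIPData([]CollectedPacket{
		{SrcAddr: "203.0.113.7", DstAddr: "198.51.100.20", DstPort: 5060, Payload: "INVITE sip:bob@example.com SIP/2.0"},
		{SrcAddr: "203.0.113.7", DstAddr: "198.51.100.20", DstPort: 443, Payload: "TLS application data"},
	}, out)
	for p := range out {
		fmt.Println("SIP data forwarded:", p.Payload)
	}
}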
According to some embodiments, analyzing the received first SIP data may comprise analyzing, using the computing system, the received first SIP data based at least in part on information regarding packets sent from the source address, to determine whether the received first SIP data comprises any abnormalities indicative of potential fraudulent or malicious actions, wherein the information regarding packets sent from the source address may comprise information including at least one of packet sizes of packets sent from the source address, frequency of packets sent from the source address, or types of packets sent from the source address, and/or the like. In some instances, the potential fraudulent or malicious actions may comprise at least one of one or more attacks to obtain fraudulent access, one or more denial of service (“DoS”) attacks, one or more distributed denial of service (“DDoS”) attacks, or one or more user datagram protocol (“UDP”) attacks, and/or the like. In some embodiments, rerouting the first SIP data to the security DPI engine may comprise sending, using the computing system, routing updates to one or more routers, via a provisioning layer of the network, to route the first SIP data from the source address, wherein the routing updates may be constructed such that latency is minimized, available capacity of the security DPI engine is taken into account, and there is no single point of failure. Alternatively, or additionally, rerouting the first SIP data to the security DPI engine may comprise: identifying, using the computing system, one or more security DPI engines among a plurality of security DPI engines in the network to reroute the first SIP data; selecting, using the computing system, a first security DPI engine from the identified one or more security DPI engines, based at least in part on at least one of proximity of each of the one or more security DPI engines to the destination address, latency caused by rerouting data to each of the one or more security DPI engines, available capacity of each of the one or more security DPI engines, or identification of points of failure due to selection of each of the one or more security DPI engines; and rerouting, using the computing system, the first SIP data to the selected first security DPI engine. According to some embodiments, the computing system may comprise a data collection engine and a policy engine, wherein receiving the first SIP data may be performed by the data collection engine, and wherein analyzing and rerouting the received first SIP data may be performed by the policy engine. Merely by way of example, in some instances, the one or more mitigation actions may comprise at least one of: rerouting or blocking any suspect network traffic to the destination address from the source address; rerouting or blocking all network traffic to the destination address from the source address; rerouting or blocking any suspect network traffic to all destinations from the source address; rerouting or blocking all network traffic to all destinations from the source address; rerouting or blocking any suspect network traffic to all destinations from one or more source addresses associated with the calling party or associated with an entity associated with the source address; or rerouting or blocking all network traffic to all destinations from one or more source addresses associated with the calling party or associated with the entity associated with the source address; and/or the like.
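As an illustration of the selection of a first security DPI engine described above, a sketch that ranks candidate engines by the listed criteria might look as follows. The scoring weights, structure, and field names below are assumptions made for this sketch only; they are not a disclosed selection algorithm.

package main

import (
	"fmt"
	"math"
)

// DPIEngine is an illustrative description of a candidate security DPI engine.
type DPIEngine struct {
	ID                string
	ProximityKm       float64 // distance from the destination address's region
	AddedLatencyMs    float64 // extra latency introduced by rerouting through it
	AvailableCapacity float64 // fraction of capacity currently free (0..1)
	SinglePointOfFail bool    // true if selecting it would create a single point of failure
}

// selectEngine picks the candidate with the lowest score; lower proximity and
// latency and higher free capacity are preferred, and engines that would
// create a single point of failure are skipped. The weights are arbitrary
// values chosen for the sketch.
func selectEngine(candidates []DPIEngine) (DPIEngine, bool) {
	best := DPIEngine{}
	bestScore := math.Inf(1)
	found := false
	for _, c := range candidates {
		if c.SinglePointOfFail {
			continue
		}
		score := 0.4*c.AddedLatencyMs + 0.2*c.ProximityKm + 100*(1-c.AvailableCapacity)
		if score < bestScore {
			best, bestScore, found = c, score, true
		}
	}
	return best, found
}

func main() {
	engines := []DPIEngine{
		{ID: "dpi-east-1", ProximityKm: 40, AddedLatencyMs: 3, AvailableCapacity: 0.2},
		{ID: "dpi-east-2", ProximityKm: 120, AddedLatencyMs: 8, AvailableCapacity: 0.9},
		{ID: "dpi-west-1", ProximityKm: 900, AddedLatencyMs: 35, AvailableCapacity: 0.95, SinglePointOfFail: true},
	}
	if chosen, ok := selectEngine(engines); ok {
		fmt.Println("reroute first SIP data to", chosen.ID) // dpi-east-2: best score among non-single-point-of-failure candidates
	}
}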
In some embodiments, at least one of identifying any known fraudulent or malicious attack vectors contained within the received first SIP data, determining whether the calling party is a known malicious entity or whether the source address is associated with a known malicious entity, or initiating the one or more mitigation actions may be performed, at least in part, using a machine learning engine. In some cases, the method may further comprise recording an event to the machine learning engine in response to at least one of identifying any known fraudulent or malicious attack vectors contained within the received first SIP data, determining whether the calling party is a known malicious entity or whether the source address is associated with a known malicious entity, or initiating the one or more mitigation actions, and/or the like. In some instances, the method may further comprise normalizing, using the computing system and after initiating the one or more mitigation actions, all network traffic to the destination address after at least one of a predetermined period or a predetermined number of SIP data checks showing no abnormalities indicative of potential fraudulent or malicious actions. According to some embodiments, the method may further comprise sending, using the computing system, a report to a system log in response to at least one of: rerouting the first SIP data to the security DPI engine, performing the deep scan of the received first SIP data, initiating the one or more mitigation actions, recording an event to the machine learning engine, or normalizing the network traffic to the destination address, and/or the like. In another aspect, a system might comprise a computing system and a first security deep packet inspection (“DPI”) engine among a plurality of security DPI engines. The computing system may comprise a data collection engine; a policy engine; at least one first processor; and a first non-transitory computer readable medium communicatively coupled to the at least one first processor. The first non-transitory computer readable medium might have stored thereon computer software comprising a first set of instructions that, when executed by the at least one first processor, causes the computing system to: receive, using the data collection engine and from a first router among a plurality of routers in a network, first session initiation protocol (“SIP”) data, the first SIP data indicating a request to initiate a SIP-based media communication session between a calling party at a source address in an originating network and a called party at a destination address in the network; analyze, using the policy engine, the received first SIP data to determine whether the received first SIP data comprises any abnormalities indicative of potential fraudulent or malicious actions; and based on a determination that the received first SIP data comprises at least one abnormality indicative of potential fraudulent or malicious actions, reroute, using the policy engine, the first SIP data to the first security DPI engine. The first security DPI engine may comprise at least one second processor and a second non-transitory computer readable medium communicatively coupled to the at least one second processor. 
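The following is a minimal Python sketch, under assumed defaults, of tracking when traffic to a destination address may be normalized again, i.e., after a predetermined period has elapsed or after a predetermined number of SIP data checks show no abnormalities; the threshold values and class name are illustrative assumptions, not values prescribed by the embodiments.

import time

class NormalizationTracker:
    def __init__(self, hold_seconds: float = 30 * 24 * 3600,
                 clean_checks_needed: int = 25):
        self.hold_seconds = hold_seconds
        self.clean_checks_needed = clean_checks_needed
        self.mitigation_started_at = None
        self.clean_checks = 0

    def start_mitigation(self) -> None:
        # Called when one or more mitigation actions are initiated.
        self.mitigation_started_at = time.time()
        self.clean_checks = 0

    def record_check(self, abnormal: bool) -> None:
        # Each subsequent SIP data check either resets or extends the clean streak.
        self.clean_checks = 0 if abnormal else self.clean_checks + 1

    def may_normalize(self) -> bool:
        # Traffic is already normal if no mitigation is active.
        if self.mitigation_started_at is None:
            return True
        elapsed = time.time() - self.mitigation_started_at
        return (elapsed >= self.hold_seconds
                or self.clean_checks >= self.clean_checks_needed)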
The second non-transitory computer readable medium might have stored thereon computer software comprising a second set of instructions that, when executed by the at least one second processor, causes the first security DPI engine to: perform a deep scan of the received first SIP data to identify any known fraudulent or malicious attack vectors contained within the received first SIP data and to determine whether the calling party is a known malicious entity or whether the source address is associated with a known malicious entity; and in response to the security DPI engine identifying at least one known fraudulent or malicious attack vector contained within the received first SIP data, initiate one or more mitigation actions. In some embodiments, the computing system may comprise at least one of a call server, a call controller, a call manager, a media gateway controller, a video call server, an instant messaging server, a network operations center (“NOC”), a centralized call server, a centralized call controller, a centralized call manager, a centralized media gateway controller, a centralized video call server, a centralized instant messaging server, a distributed computing-based call server, a distributed computing-based call controller, a distributed computing-based call manager, a distributed computing-based media gateway controller, a distributed computing-based video call server, a distributed computing-based instant messaging server, or a distributed computing-based NOC, and/or the like. Merely by way of example, in some cases, the one or more mitigation actions may comprise at least one of: rerouting or blocking any suspect network traffic to the destination address from the source address; rerouting or blocking all network traffic to the destination address from the source address; rerouting or blocking any suspect network traffic to all destinations from the source address; rerouting or blocking all network traffic to all destinations from the source address; rerouting or blocking any suspect network traffic to all destinations from one or more source addresses associated with the calling party or associated with an entity associated with the source address; or rerouting or blocking all network traffic to all destinations from one or more source addresses associated with the calling party or associated with the entity associated with the source address; and/or the like. According to some embodiments, at least one of identifying any known fraudulent or malicious attack vectors contained within the received first SIP data, determining whether the calling party is a known malicious entity or whether the source address is associated with a known malicious entity, or initiating the one or more mitigation actions may be performed, at least in part, using a machine learning engine. In some embodiments, either the first set of instructions, when executed by the at least one first processor, further causes the computing system to, or the second set of instructions, when executed by the at least one second processor, causes the first security DPI engine to: normalize, after initiating the one or more mitigation actions, all network traffic to the destination address after at least one of a predetermined period or a predetermined number of SIP data checks showing no abnormalities indicative of potential fraudulent or malicious actions. Various modifications and additions can be made to the embodiments discussed without departing from the scope of the invention. 
For example, while the embodiments described above refer to particular features, the scope of this invention also includes embodiments having different combinations of features and embodiments that do not include all of the above-described features.
Specific Exemplary Embodiments
We now turn to the embodiments as illustrated by the drawings.FIGS.1-6illustrate some of the features of the method, system, and apparatus for implementing call or data routing, and, more particularly, to methods, systems, and apparatuses for implementing fraud or distributed denial of service (“DDoS”) protection for session initiation protocol (“SIP”)-based communication, as referred to above. The methods, systems, and apparatuses illustrated byFIGS.1-6refer to examples of different embodiments that include various components and steps, which can be considered alternatives or which can be used in conjunction with one another in the various embodiments. The description of the illustrated methods, systems, and apparatuses shown inFIGS.1-6is provided for purposes of illustration and should not be considered to limit the scope of the different embodiments. With reference to the figures,FIG.1is a schematic diagram illustrating a system100for implementing fraud or distributed denial of service (“DDoS”) protection for session initiation protocol (“SIP”)-based communication, in accordance with various embodiments. In the non-limiting embodiment ofFIG.1, system100may comprise a calling device105that is associated with an originating party110aamong a plurality of originating parties110a-110n(collectively, “originating parties110” or “calling parties110” or the like) at corresponding source addresses115a-115n(collectively, “source addresses115” or the like) in an originating network(s)120. In some instances, the calling device105(also referred to as a “user device105” or the like) may include, but is not limited to, at least one of a telephone105a, a mobile phone105b, a smart phone105c, a tablet computer105d, or a laptop computer105e, and/or the like. System100likewise may comprise a called device125that is associated with a destination party130aamong a plurality of destination parties130a-130n(collectively, “destination parties130” or “called parties130” or the like) at corresponding destination addresses135a-135n(collectively, “destination addresses135” or the like) in a network(s)140. In some instances, the called device125(also referred to as a “user device125” or the like), similar to calling device105, may include, but is not limited to, at least one of a telephone125a, a mobile phone125b, a smart phone125c, a tablet computer125d, or a laptop computer125e, and/or the like. According to some embodiments, networks120and140may each include, without limitation, one of a local area network (“LAN”), including, without limitation, a fiber network, an Ethernet network, a Token-Ring™ network, and/or the like; a wide-area network (“WAN”); a wireless wide area network (“WWAN”); a virtual network, such as a virtual private network (“VPN”); the Internet; an intranet; an extranet; a public switched telephone network (“PSTN”); an infra-red network; a wireless network, including, without limitation, a network operating under any of the IEEE 802.11 suite of protocols, the Bluetooth™ protocol known in the art, and/or any other wireless protocol; and/or any combination of these and/or other networks. 
In a particular embodiment, the networks120and140may include an access network of the service provider (e.g., an Internet service provider (“ISP”)). In another embodiment, the networks120and140may include a core network of the service provider and/or the Internet. System100may further comprise a plurality of routers145a-145n(collectively, “routers145” or the like) that receives network traffic into network(s)140from at least one of calling devices105, source addresses115a-115n, and/or originating network(s)120, while sending network traffic from network(s)140to at least one of calling devices105, source addresses115a-115n, and/or originating network(s)120. System100may further comprise a plurality of routers150a-150n(collectively, “routers150” or the like) that relays network traffic within network(s)140to at least one of called devices125and/or destination addresses135a-135n, while receiving network traffic from at least one of called devices125and/or destination addresses135a-135n. Routers145and routers150may otherwise be similar, if not identical, to each other in terms of functionality, configurations, and/or form-factor, or the like. According to some embodiments, system100may further comprise a computing system160, which may include, without limitation, at least one of a call server, a call controller, a call manager, a media gateway controller, a video call server, an instant messaging server, a network operations center (“NOC”), a centralized call server, a centralized call controller, a centralized call manager, a centralized media gateway controller, a centralized video call server, a centralized instant messaging server, a distributed computing-based call server, a distributed computing-based call controller, a distributed computing-based call manager, a distributed computing-based media gateway controller, a distributed computing-based video call server, a distributed computing-based instant messaging server, or a distributed computing-based NOC, and/or the like. In some instances, the computing system160may further include, but is not limited to, a data collection engine160aand a policy engine160b, and/or the like. System100may further comprise a provisioning layer165, a plurality of security deep packet inspection (“DPI”) engines175a-175n(collectively, “security DPI engines175,” or the like), and a machine learning engine180aand corresponding database(s)180b. The data collection engine160amay be configured to collect network traffic data, while the policy engine160bmay be configured to analyze SIP data and to reroute the SIP data to a security DPI engine175for a deep scan in the case the policy engine160bencounters, discovers, or identifies signs or indications of potential fraudulent or malicious actions when analyzing the SIP data, or the like. The provisioning layer165may be configured to send instructions to one or more routers145and/or one or more routers150to control routing of network traffic data within network(s)140. 
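Merely as an illustrative sketch (not an actual interface of any component described herein), the following Python code models a data collection engine that continually receives actively collected traffic records from routers and forwards only the SIP data for analysis by a policy engine; the record fields, method names, and queue-based design are assumptions made for this example.

import queue

class DataCollectionEngine:
    def __init__(self):
        self.sip_queue = queue.Queue()

    def ingest(self, router_id: str, records: list) -> None:
        # Routers push collected traffic continually, periodically, or sporadically.
        for record in records:
            if record.get("protocol") == "SIP":
                self.sip_queue.put({"router": router_id, **record})
            # Non-SIP data is not forwarded to the policy engine in this sketch.

    def next_sip_record(self, timeout: float = 1.0):
        # The policy engine pulls SIP records for abnormality analysis.
        try:
            return self.sip_queue.get(timeout=timeout)
        except queue.Empty:
            return None

if __name__ == "__main__":
    engine = DataCollectionEngine()
    engine.ingest("router-145a", [{"protocol": "SIP", "src": "203.0.113.5"},
                                  {"protocol": "UDP", "src": "203.0.113.9"}])
    print(engine.next_sip_record())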
Each security DPI engine175may be configured to perform a deep scan of SIP requests or SIP data (a) to identify any known fraudulent or malicious attack vectors contained within the SIP request or SIP data, (b) to determine whether the calling party is a known malicious entity, and/or (c) to determine whether the source address is associated with a known malicious entity, or the like, and may also be configured to initiate one or more mitigation actions, such as those described below with reference toFIG.3(although not limited to those mitigation actions). Machine learning engine180amay be configured (1) to aid or assist the security DPI engines175in performing the one or more mitigation actions, (2) to aid or assist the security DPI engines175in analyzing the SIP requests or SIP data to identify any known fraudulent or malicious attack vectors, and (3) to aid or assist the security DPI engines175in identifying any known bad actors or known malicious entities involved with sending the SIP requests or SIP data, and/or the like. Database(s)180bmay be used by machine learning engine180ato store, to search for, and to add at least one of (A) information regarding configurations and implementations for each of the one or more mitigation actions, (B) information regarding known fraudulent or malicious SIP-based attack vectors, and/or (C) information regarding known bad actors or known malicious entities, and/or the like. In some cases, each of the plurality of routers145and/or150may employ active data collection to collect network traffic data. The collected network traffic data may include, but is not limited to, SIP data and non-SIP data, and each of the plurality of routers145and/or150may provide the SIP data as continually collected data to the computing system160and/or the data collection engine160a, where continually collecting data from the plurality of routers in the network comprises receiving the SIP data from each router in a continual, periodic, or sporadic manner, or the like. In some embodiments, system100may further comprise one or more network or system logs190configured to receive and store logs pertaining to at least one of each instance that a SIP request or data is rerouted to one of the security DPI engines175for a deep scan, each instance that a deep scan is performed on a SIP request or data by one of the security DPI engines175, each instance of one or more mitigation actions being initiated, each instance of an event being recorded to the machine learning engine, each instance of network traffic to a destination address135a-135nbeing normalized after mitigation actions have been initiated and after a suitable amount of time has elapsed and/or after a suitable number of subsequent checks of SIP requests or data have been performed, and/or the like. In operation, each of routers145a-145nand each of routers150a-150nmay employ active data collection to continually, periodically, or sporadically collect network traffic data, which may include, but is not limited to SIP data and non-SIP data. Computing system160and/or data collection engine160aof computing system160may continually collect data from the plurality of routers145and150(depicted inFIG.1by short dash lines between data collection engine160aand each of routers145a-145nand150a-150n), by receiving SIP data from the plurality of routers145and150in a continual, periodic, or sporadic manner. 
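As a hypothetical sketch of the deep-scan lookups described above, the following Python code checks a SIP record against stored information about known attack vectors, known malicious entities, and known malicious source addresses; the signature strings and data layout are invented for illustration and do not reflect an actual database schema.

KNOWN_ATTACK_SIGNATURES = {"sip-register-flood", "malformed-sdp", "toll-fraud-probe"}
KNOWN_MALICIOUS_ENTITIES = {"bad-actor-42"}
KNOWN_MALICIOUS_SOURCES = {"203.0.113.66"}

def deep_scan(sip_record: dict) -> dict:
    # Compare observed signatures against the stored attack-vector information,
    # and look up the calling party and source address against known bad actors.
    found = [sig for sig in sip_record.get("signatures", [])
             if sig in KNOWN_ATTACK_SIGNATURES]
    return {
        "attack_vectors": found,
        "known_malicious_caller":
            sip_record.get("calling_party") in KNOWN_MALICIOUS_ENTITIES,
        "known_malicious_source":
            sip_record.get("source_address") in KNOWN_MALICIOUS_SOURCES,
    }

if __name__ == "__main__":
    result = deep_scan({"calling_party": "bad-actor-42",
                        "source_address": "203.0.113.5",
                        "signatures": ["sip-register-flood"]})
    print(result)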
In some cases, computing system160and/or data collection engine160amay continually collect data from one or more of the plurality of security DPI engines175a-175n(also depicted inFIG.1by short dash lines, in this case, between data collection engine160aand security DPI engines175). Computing system160and/or data collection engine160amay receive, from a first router145aamong the plurality of routers145and150in network(s)140, first SIP data, the first SIP data indicating a request to initiate a SIP-based media communication session between a calling party or originating party110aat a source address115ain originating network(s)120and a called party or destination party130aat a destination address135ain the network(s)140. In some embodiments, the SIP-based media communication may include, without limitation, at least one of a voice over Internet Protocol (“VoIP”) call, an IP-based video call, or an instant message over IP, and/or the like. In some cases, the network(s)140and the originating network(s)120may be the same network (operated by the same service provider). Alternatively, the network(s)140and the originating network(s)120may be different networks (either operated by the same service provider or operated by different service providers). Although not shown inFIG.1, one or more session border controllers (“SBCs”) may be deployed to protect network(s)140and may be disposed between originating network(s)120and network(s)140, particularly in the case that originating network(s)120and network(s)140are different network(s) and/or operated by different service providers. In the case that the destination address135ais part of a customer network (e.g., local area network (“LAN”) or the like), one or more other SBCs may also be deployed to protect network(s)140and may be disposed between network(s)140and the customer network (also not shown inFIG.1). Computing system160and/or policy engine160bof computing system160may analyze the received first SIP data to determine whether the received first SIP data comprises any abnormalities indicative of potential fraudulent or malicious actions. According to some embodiments, analyzing the received first SIP data may comprise analyzing the received first SIP data based at least in part on information regarding packets sent from the source address, to determine whether the received first SIP data comprises any abnormalities indicative of potential fraudulent or malicious actions, where the information regarding packets sent from the source address may comprise information including at least one of packet sizes of packets sent from the source address, frequency of packets sent from the source address, or types of packets sent from the source address, and/or the like. In some instances, the potential fraudulent or malicious actions may include, but are not limited to, at least one of one or more attacks to obtain fraudulent access, one or more denial of service (“DoS”) attacks, one or more distributed denial of service (“DDoS”) attacks, or one or more user datagram protocol (“UDP”) attacks (which may be a form of volumetric DDoS attack), and/or the like. 
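A minimal sketch, assuming illustrative numeric thresholds, of how abnormalities might be flagged from packet sizes, packet frequency, and packet types observed from the source address is shown below in Python; the limits and the set of allowed SIP methods are assumptions for this example, not values taken from the embodiments.

from statistics import mean

def is_abnormal(packet_sizes: list,
                packets_per_second: float,
                packet_types: set,
                max_mean_size: int = 1200,
                max_rate: float = 50.0,
                allowed_types: frozenset = frozenset(
                    {"INVITE", "ACK", "BYE", "REGISTER", "OPTIONS"})) -> bool:
    if packet_sizes and mean(packet_sizes) > max_mean_size:
        return True   # unusually large SIP messages from this source
    if packets_per_second > max_rate:
        return True   # request rate consistent with a flood or DDoS attempt
    if not packet_types <= allowed_types:
        return True   # unexpected SIP methods or packet types
    return False

if __name__ == "__main__":
    print(is_abnormal([600, 640, 580], 400.0, {"INVITE"}))  # True: flood-like rate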
Based on a determination that the received first SIP data does not comprise at least one abnormality indicative of potential fraudulent or malicious actions, computing system160and/or policy engine160bmay establish a media communication session along an original Internet Protocol (“IP”) path155a(among a plurality of original IP paths155a-155nin network(s)140) between the calling party110aand the called party130a. On the other hand, based on a determination that the received first SIP data comprises at least one abnormality indicative of potential fraudulent or malicious actions, computing system160and/or policy engine160bmay reroute the first SIP data to a security DPI engine175. In some embodiments, establishing the media communication session or rerouting the first SIP data to the security DPI engine may comprise computing system160and/or policy engine160bsending routing updates to one or more routers145a-145nand/or150a-150n, via provisioning layer165of the network(s)140(depicted inFIG.1by the solid arrows from policy engine160bto provisioning layer165to the routers145a-145nand150a-150n), to correspondingly establish the media communication session along the original IP path155aor route the first SIP data from the source address115aalong modified IP path170a(among a plurality of modified IP paths170a-170nin network(s)140) to a selected one of the security DPI engines175a-175n(e.g., security DPI engine175a). In at least the latter case, the routing updates are constructed such that (i) latency is minimized, (ii) available capacity of the security DPI engine is taken into account, and (iii) there are no single points of failure, or the like. To meet these conditions, the selected security DPI engine175amay be the security DPI engine that is physically closest to destination address135a. Alternatively, to meet these conditions, the selected security DPI engine175amay not be the nearest one to the destination address135a, but may be selected because network connections to the selected security DPI engine175amay provide lower latency, the security DPI engine175bthat is physically closest to destination address135amay lack the necessary capacity for performing a deep scan, and/or selecting the security DPI engine175bthat is physically closest to the destination address135amay result in a single point of failure, or the like. The selected security DPI engine175amay perform a deep scan of the received first SIP data to identify any known fraudulent or malicious attack vectors contained within the received first SIP data and to determine whether the calling party is a known malicious entity or whether the source address is associated with a known malicious entity. In the case that the deep scan of the received first SIP data does not reveal or identify any known fraudulent or malicious attack vectors contained within the received first SIP data, does not reveal or identify the calling party as a known malicious entity, and/or does not reveal or identify the source address as being associated with a known malicious entity, then the selected security DPI engine175amay establish a media communication session along original IP path155abetween the calling party110aand the called party130a, in some cases, by routing the first SIP data or SIP request195along return IP path185aand via router150ato the called party130a. 
In some embodiments, establishing the media communication session may comprise the selected security DPI engine175aeither (A) directly sending routing updates to one or more routers145a-145nand/or150a-150n, via provisioning layer165of the network(s)140(depicted inFIG.1by the solid arrows from security DPI engine175to provisioning layer165to the routers145a-145nand150a-150n) or (B) indirectly communicating with computing system160and/or policy engine160bto send routing updates to one or more routers145a-145nand/or150a-150n, via provisioning layer165of the network(s)140(depicted inFIG.1by the short dash line between security DPI engine175and data collection engine160aof computing system160and by the solid arrows from policy engine160bto provisioning layer165to the routers145a-145nand150a-150n), to correspondingly establish the media communication session along the original IP path155a. On the other hand, in the case that the deep scan of the received first SIP data reveals or identifies at least one known fraudulent or malicious attack vector contained within the received first SIP data, then the selected security DPI engine175amay initiate one or more mitigation actions (which are shown in the non-limiting list provided inFIG.3, for instance), including, but not limited to, at least one of: rerouting or blocking any suspect network traffic to the destination address135afrom the source address115a; rerouting or blocking all network traffic to the destination address135afrom the source address115a; rerouting or blocking any suspect network traffic to all destinations from the source address115a; rerouting or blocking all network traffic to all destinations from the source address115a; rerouting or blocking any suspect network traffic to all destinations from one or more source addresses115a-115xassociated with the calling party110aor associated with an entity (not shown inFIG.1) associated with the source address115a; or rerouting or blocking all network traffic to all destinations from one or more source addresses115a-115xassociated with the calling party110aor associated with an entity (not shown inFIG.1) associated with the source address115a; and/or the like. In some embodiments, at least one of identifying any known fraudulent or malicious attack vectors contained within the received first SIP data, determining whether the calling party is a known malicious entity or whether the source address is associated with a known malicious entity, or initiating the one or more mitigation actions, and/or the like, may be performed, at least in part, using machine learning engine180a(and corresponding database(s)180b). According to some embodiments, computing system160, policy engine160b, and/or security DPI engine175amay normalize all network traffic to the destination address135a(e.g., from source address115a) either after a predetermined period (e.g., a set number of days, a set number of weeks, a set number of months, or longer, etc.) and/or after a predetermined number of SIP data checks (e.g., 5, 10, 15, 20, 25, 30, 35, 40, 45, 50, or more, etc.) showing no abnormalities indicative of potential fraudulent or malicious actions. 
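By way of a non-authoritative example, the following Python sketch maps deep-scan findings to an escalating subset of the mitigation actions enumerated above; the enumeration names and the escalation policy are illustrative assumptions and do not represent the actual decision logic of any embodiment.

from enum import Enum

class Mitigation(Enum):
    REROUTE_SUSPECT_TO_DESTINATION = 1
    BLOCK_ALL_TO_DESTINATION = 2
    BLOCK_ALL_FROM_SOURCE = 3
    BLOCK_ALL_FROM_RELATED_SOURCES = 4

def choose_mitigations(attack_vectors: list,
                       known_malicious_entity: bool) -> list:
    # Start with the least disruptive action and escalate as findings accumulate.
    actions = [Mitigation.REROUTE_SUSPECT_TO_DESTINATION]
    if attack_vectors:
        actions.append(Mitigation.BLOCK_ALL_TO_DESTINATION)
    if len(attack_vectors) > 1 or known_malicious_entity:
        actions.append(Mitigation.BLOCK_ALL_FROM_SOURCE)
    if known_malicious_entity:
        actions.append(Mitigation.BLOCK_ALL_FROM_RELATED_SOURCES)
    return actions

if __name__ == "__main__":
    print([a.name for a in choose_mitigations(["sip-register-flood"], True)])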
Merely by way of example, in some cases, computing system160and/or security DPI engine175amay send a report to a system log(s) or network log(s)190in response to at least one of: rerouting the first SIP data to the security DPI engine, performing the deep scan of the received first SIP data, initiating the one or more mitigation actions, recording an event to the machine learning engine, or normalizing the network traffic to the destination address, and/or the like. These and other functions of the system100(and its components) are described in greater detail below with respect toFIGS.2-4. FIGS.2A-2E(collectively, “FIG.2”) are process flow diagrams illustrating a set of non-limiting examples200of fraud or DDoS protection for SIP-based communication that may be implemented, in accordance with various embodiments. In particular,FIG.2depicts non-limiting examples of discrete aspects or portions of the fraud or DDoS protection for SIP-based communication that may be implemented, including, but not limited to, establishing a SIP-based media communication in response to determining an absence or lack of abnormalities in a SIP request or SIP data indicative of potential fraudulent or malicious actions (as shown inFIG.2A); rerouting the SIP request or SIP data to a security DPI engine in response to determining presence of at least one abnormality in the SIP request or SIP data indicative of potential fraudulent or malicious actions (as shown inFIG.2B); establishing a SIP-based media communication in response to determining an absence or lack of known fraudulent or malicious attack vectors (among other signs of fraud or attack) in the SIP request or SIP data after a deep scan of the SIP request or SIP data (as shown inFIG.2C); initiating one or more mitigation actions in response to identifying at least one known fraudulent or malicious attack vector in the SIP request or SIP data (as shown inFIG.2D); or normalizing all network traffic to the destination address after some conditions have been met (as shown inFIG.2E); and/or the like. The calling device205, the originating party210, the source address215, the originating network(s)220, the called device225, the destination party230, the destination address235, the network(s)240, the router245, the router250, the original IP path255, the computing system260, the data collection engine260a, the policy engine260b, the provisioning layer265, the modified IP path270, the security DPI engines275a-275n, the machine learning engine280aand the corresponding database(s)280b, the return IP path285, and the network log(s)290of system200inFIGS.2A-2Eare otherwise similar, if not identical, to the calling device105, the originating party110a, the source address115a, the originating network(s)120, the called device125, the destination party130a, the destination address135a, the network(s)140, the router145a, the router150a, the original IP path155a, the computing system160, the data collection engine160a, the policy engine160b, the provisioning layer165, the modified IP path170a, the security DPI engines175a-175n, the machine learning engine180aand the corresponding database(s)180b, the return IP path185a, and the network log(s)190, respectively, of system100inFIG.1, and the descriptions of these components of system100are applicable to the corresponding components of system200, respectively. 
Referring toFIG.2A, each of routers245and250may employ active data collection to continually, periodically, or sporadically collect network traffic data, which may include, but is not limited to SIP data and non-SIP data. Computing system260and/or data collection engine260aof computing system260may continually collect data from the plurality of routers245and250(depicted inFIG.2Aby short dash lines between data collection engine260aand each of routers245and250), by receiving SIP data from the plurality of routers245and250in a continual, periodic, or sporadic manner. Computing system260and/or data collection engine260amay receive, from a first router245in network(s)240, first SIP data or SIP request295, the first SIP data or SIP request295indicating a request to initiate a SIP-based media communication session between a calling party or originating party210at a source address215in originating network(s)220and a called party or destination party230at a destination address235in the network(s)240. Computing system260and/or policy engine260bof computing system260may analyze the received first SIP data or SIP request295to determine whether the received first SIP data or SIP request295comprises any abnormalities indicative of potential fraudulent or malicious actions, as described in detail above with respect toFIG.1. Based on a determination that the received first SIP data or SIP request295does not comprise at least one abnormality indicative of potential fraudulent or malicious actions, computing system260and/or policy engine260bmay establish a media communication session along an original Internet Protocol (“IP”) path255(among a plurality of original IP paths255a-255nin network(s)240) between the calling party210and the called party230. In some embodiments, as described above with respect toFIG.1, establishing the media communication session may comprise computing system260and/or policy engine260bsending routing updates to one or more routers245and/or250, via provisioning layer265of the network(s)240(depicted inFIG.2Aby the solid arrows from policy engine260bto provisioning layer265to the routers245and250), to establish the media communication session along the original IP path255. On the other hand, with reference toFIG.2B, based on a determination that the received first SIP data or SIP request295comprises at least one abnormality indicative of potential fraudulent or malicious actions, computing system260and/or policy engine260bmay reroute the first SIP data or SIP request295to a security DPI engine275(i.e., a selected one of security DPI engines275a-275n). According to some embodiments, as described above with respect toFIG.1, rerouting the first SIP data or SIP request295to the security DPI engine275may comprise computing system260and/or policy engine260bsending routing updates to one or more routers245and/or250, via provisioning layer265of the network(s)240(depicted inFIG.2Bby the solid arrows from policy engine260bto provisioning layer265to the router245), to route the first SIP data or SIP request295from the source address215along modified IP path270to a selected one (in this case, security DPI engine275n) of the security DPI engines275a-275n. In such a case, the routing updates are constructed such that (i) latency is minimized, (ii) available capacity of the security DPI engine is taken into account, and (iii) there are no single points of failure, or the like. 
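As a hedged illustration of pushing routing updates through a provisioning layer so that the SIP data follows a modified IP path toward a selected security DPI engine, the following Python sketch uses invented interfaces; in practice such updates might instead be carried by BGP, NETCONF, or an SDN controller API, and none of the names below are drawn from the embodiments.

class ProvisioningLayer:
    def __init__(self):
        self.routes = {}

    def push_route(self, router_id: str, destination: str, next_hop: str) -> None:
        # Record a routing update for the given router; a real provisioning layer
        # would translate this into protocol- or vendor-specific configuration.
        self.routes.setdefault(router_id, []).append(f"{destination} via {next_hop}")

def reroute_to_dpi(provisioning: ProvisioningLayer,
                   ingress_router: str,
                   source_address: str,
                   dpi_engine_address: str) -> None:
    # Steer traffic from the suspect source address toward the selected DPI engine.
    provisioning.push_route(ingress_router, source_address, dpi_engine_address)

if __name__ == "__main__":
    layer = ProvisioningLayer()
    reroute_to_dpi(layer, "router-245", "203.0.113.5", "dpi-275n.example.net")
    print(layer.routes)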
To meet these conditions, the selected security DPI engine275nmay be the security DPI engine that is physically closest to destination address235. Alternatively, to meet these conditions, the selected security DPI engine275nmay not be the nearest one to the destination address235, but may be selected because network connections to the selected security DPI engine275nmay provide lower latency, the security DPI engine275bthat is physically closest to destination address235may lack the necessary capacity for performing a deep scan, and/or selecting the security DPI engine275bthat is physically closest to the destination address235may result in a single point of failure, or the like. The selected security DPI engine275nmay perform a deep scan of the received first SIP data or SIP request295to identify any known fraudulent or malicious attack vectors contained within the received first SIP data or SIP request295and to determine whether the calling party is a known malicious entity or whether the source address is associated with a known malicious entity. Turning toFIG.2C, in the case that the deep scan of the received first SIP data or SIP request295does not reveal or identify any known fraudulent or malicious attack vectors contained within the received first SIP data or SIP request295, does not reveal or identify the calling party as a known malicious entity, and/or does not reveal or identify the source address as being associated with a known malicious entity, then the selected security DPI engine275nmay establish a media communication session along original IP path255between the calling party210and the called party230, in some cases, by routing the first SIP data or SIP request295along return IP path285and via router250to the called party230. In some embodiments, establishing the media communication session may comprise the selected security DPI engine275neither (A) directly sending routing updates to one or more routers245and/or250, via provisioning layer265of the network(s)240(depicted inFIG.2Cby the solid arrows from security DPI engine275nto provisioning layer265to the routers245and250) or (B) indirectly communicating with computing system260and/or policy engine260bto send routing updates to one or more routers245and/or250, via provisioning layer265of the network(s)240(depicted inFIG.2Cby the short dash line between security DPI engine275nand data collection engine260aof computing system260and by the solid arrows from policy engine260bto provisioning layer265to the routers245and250), to correspondingly establish the media communication session along the original IP path255. 
Alternatively, the selected security DPI engine275nmay route the first SIP data or SIP request295along return IP path285and via router250to the called party230, by either (C) directly sending routing updates to router250, via provisioning layer265of the network(s)240(depicted inFIG.2Cby the solid arrows from security DPI engine275nto provisioning layer265to the router250) or (D) indirectly communicating with computing system260and/or policy engine260bto send routing updates to router250, via provisioning layer265of the network(s)240(depicted inFIG.2Cby the short dash line between security DPI engine275nand data collection engine260aof computing system260and by the solid arrows from policy engine260bto provisioning layer265to the router250). On the other hand, with reference toFIG.2D, in the case that the deep scan of the received first SIP data or SIP request295reveals or identifies at least one known fraudulent or malicious attack vector contained within the received first SIP data or SIP request295, then the selected security DPI engine275nmay initiate one or more mitigation actions (which are described above with respect toFIG.1and shown in the non-limiting list provided inFIG.3, for instance). In some embodiments, at least one of identifying any known fraudulent or malicious attack vectors contained within the received first SIP data or SIP request295, determining whether the calling party is a known malicious entity or whether the source address is associated with a known malicious entity, or initiating the one or more mitigation actions, and/or the like, may be performed, at least in part, using machine learning engine280a(and corresponding database(s)280b). Turning toFIG.2E, computing system260, policy engine260b, and/or security DPI engine275nmay normalize all network traffic to the destination address235(e.g., from source address215) either after a predetermined period (e.g., a set number of days, a set number of weeks, a set number of months, or longer, etc.) and/or after a predetermined number of SIP data checks (e.g., 5, 10, 15, 20, 25, 30, 35, 40, 45, 50, or more, etc.) showing no abnormalities indicative of potential fraudulent or malicious actions. Merely by way of example, in some cases, computing system260and/or security DPI engine275nmay send a report to a system log(s) or network log(s)290in response to at least one of: rerouting the first SIP data or SIP request295to the security DPI engine (as shown inFIG.2B), performing the deep scan of the received first SIP data (as shown inFIGS.2B-2D), initiating the one or more mitigation actions (as shown inFIG.2D), recording an event to the machine learning engine (not shown inFIG.2), or normalizing the network traffic to the destination address (as shown inFIG.2E), and/or the like. FIG.3is a schematic diagram illustrating a non-limiting example300of various mitigation actions that may be initiated when implementing the fraud or DDoS protection for SIP-based communication, in accordance with various embodiments. 
As shown in the non-limiting example300ofFIG.3, initiating mitigation action(s)305(such as the mitigation actions described with respect toFIGS.1,2, and4) may include, but is not limited to, at least one of: rerouting any suspect network traffic to the destination address from the source address (block310); blocking any suspect network traffic to the destination address from the source address (block315); rerouting all network traffic to the destination address from the source address (block320); blocking all network traffic to the destination address from the source address (block325); rerouting any suspect network traffic to all destinations from the source address (block330); blocking any suspect network traffic to all destinations from the source address (block335); rerouting all network traffic to all destinations from the source address (block340); blocking all network traffic to all destinations from the source address (block345); rerouting any suspect network traffic to all destinations from one or more source addresses associated with the calling party or associated with an entity associated with the source address (block350); blocking any suspect network traffic to all destinations from one or more source addresses associated with the calling party or associated with an entity associated with the source address (block355); rerouting all network traffic to all destinations from one or more source addresses associated with the calling party or associated with the entity associated with the source address (block360); blocking all network traffic to all destinations from one or more source addresses associated with the calling party or associated with the entity associated with the source address (block365); and/or the like. FIGS.4A-4D(collectively, “FIG.4”) are flow diagrams illustrating a method400for implementing fraud or DDoS protection for SIP-based communication, in accordance with various embodiments. While the techniques and procedures are depicted and/or described in a certain order for purposes of illustration, it should be appreciated that certain procedures may be reordered and/or omitted within the scope of various embodiments. Moreover, while the method400illustrated byFIG.4can be implemented by or with (and, in some cases, is described below with respect to) the systems, examples, or embodiments100,200, and300ofFIGS.1,2, and3, respectively (or components thereof), such methods may also be implemented using any suitable hardware (or software) implementation. Similarly, while each of the systems, examples, or embodiments100,200, and300ofFIGS.1,2, and3, respectively (or components thereof), can operate according to the method400illustrated byFIG.4(e.g., by executing instructions embodied on a computer readable medium), the systems, examples, or embodiments100,200, and300ofFIGS.1,2, and3can each also operate according to other modes of operation and/or perform other suitable procedures. In the non-limiting embodiment ofFIG.4A, method400, at block405, may comprise receiving, using a computing system, continually collected data from a plurality of routers in a network. 
According to some embodiments, the computing system may include, without limitation, at least one of a call server, a call controller, a call manager, a media gateway controller, a video call server, an instant messaging server, a network operations center (“NOC”), a centralized call server, a centralized call controller, a centralized call manager, a centralized media gateway controller, a centralized video call server, a centralized instant messaging server, a distributed computing-based call server, a distributed computing-based call controller, a distributed computing-based call manager, a distributed computing-based media gateway controller, a distributed computing-based video call server, a distributed computing-based instant messaging server, or a distributed computing-based NOC, and/or the like. In some instances, each of the plurality of routers may employ active data collection to collect network traffic data. The collected network traffic data may include, but is not limited to, SIP data and non-SIP data, and each of the plurality of routers provides the SIP data as the continually collected data to the computing system, where continually collecting data from the plurality of routers in the network comprises receiving the SIP data from each router in a continual, periodic, or sporadic manner, or the like. At block410, the method400may comprise receiving, using the computing system and from a first router among the plurality of routers in the network, first session initiation protocol (“SIP”) data, the first SIP data indicating a request to initiate a SIP-based media communication session between a calling party at a source address in an originating network and a called party at a destination address in the network. In some cases, the network and the originating network may be the same network (operated by the same service provider). Alternatively, the network and the originating network may be different networks (either operated by the same service provider or operated by different service providers). In some embodiments, the SIP-based communication may include, without limitation, at least one of a voice over Internet Protocol (“VoIP”) call, an IP-based video call, or an instant message over IP, and/or the like. Method400may further comprise, at block415, analyzing, using the computing system, the received first SIP data to determine whether the received first SIP data comprises any abnormalities indicative of potential fraudulent or malicious actions. If not, method400would continue to the process at block420. If so, method400would continue to the process at block425. According to some embodiments, analyzing the received first SIP data may comprise analyzing, using the computing system, the received first SIP data based at least in part on information regarding packets sent from the source address, to determine whether the received first SIP data comprises any abnormalities indicative of potential fraudulent or malicious actions, where the information regarding packets sent from the source address may comprise information including at least one of packet sizes of packets sent from the source address, frequency of packets sent from the source address, or types of packets sent from the source address, and/or the like. 
In some instances, the potential fraudulent or malicious actions may include, but are not limited to, at least one of one or more attacks to obtain fraudulent access, one or more denial of service (“DoS”) attacks, one or more distributed denial of service (“DDoS”) attacks, or one or more user datagram protocol (“UDP”) attacks, and/or the like. At block420, method400may comprise, based on a determination that the received first SIP data does not comprise at least one abnormality indicative of potential fraudulent or malicious actions, establishing, using the computing system, a media communication session between the calling party and the called party. Method400then returns to the process at block405. Alternatively, at block425, method400may comprise, based on a determination that the received first SIP data comprises at least one abnormality indicative of potential fraudulent or malicious actions, rerouting, using the computing system, the first SIP data to a security deep packet inspection (“DPI”) engine. In some embodiments, rerouting the first SIP data to the security DPI engine may comprise sending, using the computing system, routing updates to one or more routers, via a provisioning layer of the network, to route the first SIP data from the source address, where the routing updates are constructed such that (i) latency is minimized, (ii) available capacity of the security DPI engine is taken into account, and (iii) there are no single points of failure, or the like. According to some embodiments, the computing system may comprise a data collection engine and a policy engine, where receiving the first SIP data (at block410) may be performed by the data collection engine, and where analyzing and rerouting the received first SIP data (at blocks415and425) may be performed by the policy engine. Method400may further comprise performing, using the security DPI engine, a deep scan of the received first SIP data to identify any known fraudulent or malicious attack vectors contained within the received first SIP data and to determine whether the calling party is a known malicious entity or whether the source address is associated with a known malicious entity. If the deep scan of the received first SIP data does not reveal or identify any known fraudulent or malicious attack vectors contained within the received first SIP data, does not reveal or identify the calling party as a known malicious entity, and/or does not reveal or identify the source address as being associated with a known malicious entity, then method400would proceed to the process at block420(which would then follow the processes as described above). On the other hand, if the deep scan of the received first SIP data reveals or identifies at least one known fraudulent or malicious attack vector contained within the received first SIP data, then method400would continue to the process at block435. If the deep scan of the received first SIP data reveals or identifies the calling party as a known malicious entity and/or reveals or identifies the source address as being associated with a known malicious entity, the method400may also continue to the process at block435. At block435, method400may comprise initiating one or more mitigation actions, a non-limiting list of which is provided inFIG.3, for instance. 
In some embodiments, at least one of identifying any known fraudulent or malicious attack vectors contained within the received first SIP data (at block430), determining whether the calling party is a known malicious entity or whether the source address is associated with a known malicious entity (also at block430), or initiating the one or more mitigation actions (at block435), and/or the like, may be performed, at least in part, using a machine learning engine. Method400may further comprise normalizing, using the computing system and after initiating the one or more mitigation actions, all network traffic to the destination address after at least one of a predetermined period or a predetermined number of SIP data checks showing no abnormalities indicative of potential fraudulent or malicious actions (block440). Turning toFIG.4B, rerouting the first SIP data to the security DPI engine (at block425) may comprise: identifying, using the computing system, one or more security DPI engines among a plurality of security DPI engines in the network to reroute the first SIP data (block445); selecting, using the computing system, a first security DPI engine from the identified one or more security DPI engines, based at least in part on at least one of proximity of each of the one or more security DPI engines to the destination address, latency caused by rerouting data to each of the one or more security DPI engines, available capacity of each of the one or more security DPI engines, or identification of points of failure due to selection of each of the one or more security DPI engines, and/or the like (block450); and rerouting, using the computing system, the first SIP data to the selected first security DPI engine (block455). Referring toFIG.4C, method400may further comprise, at block460, recording an event to the machine learning engine in response to at least one of identifying any known fraudulent or malicious attack vectors contained within the received first SIP data (at block430), determining whether the calling party is a known malicious entity or whether the source address is associated with a known malicious entity (also at block430), or initiating the one or more mitigation actions (at block435), and/or the like. With reference toFIG.4D, method400may further comprise, at block465, sending, using the computing system, a report to a system log in response to at least one of: rerouting the first SIP data to the security DPI engine (at block425), performing the deep scan of the received first SIP data (at block430), initiating the one or more mitigation actions (at block435), recording an event to the machine learning engine (at block460), or normalizing the network traffic to the destination address (at block440), and/or the like. 
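As a small, hypothetical sketch of the reporting described above, the following Python code uses the standard logging module to emit one structured record per rerouting, deep-scan, mitigation, machine-learning, or normalization event; the event names and fields are illustrative assumptions rather than an actual log format used by any embodiment.

import json
import logging

logger = logging.getLogger("sip_fraud_protection")
logging.basicConfig(level=logging.INFO)

def report_event(event_type: str, **details) -> None:
    # event_type might be one of: "rerouted_to_dpi", "deep_scan_performed",
    # "mitigation_initiated", "ml_event_recorded", or "traffic_normalized".
    logger.info(json.dumps({"event": event_type, **details}))

if __name__ == "__main__":
    report_event("mitigation_initiated",
                 source_address="203.0.113.5",
                 destination_address="198.51.100.7",
                 actions=["BLOCK_ALL_TO_DESTINATION"])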
Exemplary System and Hardware Implementation
FIG.5is a block diagram illustrating an exemplary computer or system hardware architecture, in accordance with various embodiments.FIG.5provides a schematic illustration of one embodiment of a computer system500of the service provider system hardware that can perform the methods provided by various other embodiments, as described herein, and/or can perform the functions of the computer or hardware system (i.e., calling devices105,105a-105e, and205, called devices125,125a-125e, and225, routers145a-145n,150a-150n,245, and250, computing systems160and260, data collection engines160aand260a, policy engines160band260b, provisioning layers165and265, security deep packet inspection (“DPI”) engines175,175a-175n, and275a-275n, machine learning engines180aand280a, etc.), as described above. It should be noted thatFIG.5is meant only to provide a generalized illustration of various components, of which one or more (or none) of each may be utilized as appropriate.FIG.5, therefore, broadly illustrates how individual system elements may be implemented in a relatively separated or relatively more integrated manner. The computer or hardware system500—which might represent an embodiment of the computer or hardware system (i.e., calling devices105,105a-105e, and205, called devices125,125a-125e, and225, routers145a-145n,150a-150n,245, and250, computing systems160and260, data collection engines160aand260a, policy engines160band260b, provisioning layers165and265, security DPI engines175,175a-175n, and275a-275n, machine learning engines180aand280a, etc.), described above with respect toFIGS.1-4— is shown comprising hardware elements that can be electrically coupled via a bus505(or may otherwise be in communication, as appropriate). The hardware elements may include one or more processors510, including, without limitation, one or more general-purpose processors and/or one or more special-purpose processors (such as microprocessors, digital signal processing chips, graphics acceleration processors, and/or the like); one or more input devices515, which can include, without limitation, a mouse, a keyboard, and/or the like; and one or more output devices520, which can include, without limitation, a display device, a printer, and/or the like. The computer or hardware system500may further include (and/or be in communication with) one or more storage devices525, which can comprise, without limitation, local and/or network accessible storage, and/or can include, without limitation, a disk drive, a drive array, an optical storage device, a solid-state storage device such as a random access memory (“RAM”) and/or a read-only memory (“ROM”), which can be programmable, flash-updateable, and/or the like. Such storage devices may be configured to implement any appropriate data stores, including, without limitation, various file systems, database structures, and/or the like. The computer or hardware system500might also include a communications subsystem530, which can include, without limitation, a modem, a network card (wireless or wired), an infra-red communication device, a wireless communication device and/or chipset (such as a Bluetooth™ device, an 802.11 device, a WiFi device, a WiMax device, a WWAN device, cellular communication facilities, etc.), and/or the like. 
The communications subsystem530may permit data to be exchanged with a network (such as the network described below, to name one example), with other computer or hardware systems, and/or with any other devices described herein. In many embodiments, the computer or hardware system500will further comprise a working memory535, which can include a RAM or ROM device, as described above. The computer or hardware system500also may comprise software elements, shown as being currently located within the working memory535, including an operating system540, device drivers, executable libraries, and/or other code, such as one or more application programs545, which may comprise computer programs provided by various embodiments (including, without limitation, hypervisors, VMs, and the like), and/or may be designed to implement methods, and/or configure systems, provided by other embodiments, as described herein. Merely by way of example, one or more procedures described with respect to the method(s) discussed above might be implemented as code and/or instructions executable by a computer (and/or a processor within a computer); in an aspect, then, such code and/or instructions can be used to configure and/or adapt a general purpose computer (or other device) to perform one or more operations in accordance with the described methods. A set of these instructions and/or code might be encoded and/or stored on a non-transitory computer readable storage medium, such as the storage device(s)525described above. In some cases, the storage medium might be incorporated within a computer system, such as the system500. In other embodiments, the storage medium might be separate from a computer system (i.e., a removable medium, such as a compact disc, etc.), and/or provided in an installation package, such that the storage medium can be used to program, configure, and/or adapt a general purpose computer with the instructions/code stored thereon. These instructions might take the form of executable code, which is executable by the computer or hardware system500and/or might take the form of source and/or installable code, which, upon compilation and/or installation on the computer or hardware system500(e.g., using any of a variety of generally available compilers, installation programs, compression/decompression utilities, etc.) then takes the form of executable code. It will be apparent to those skilled in the art that substantial variations may be made in accordance with specific requirements. For example, customized hardware (such as programmable logic controllers, field-programmable gate arrays, application-specific integrated circuits, and/or the like) might also be used, and/or particular elements might be implemented in hardware, software (including portable software, such as applets, etc.), or both. Further, connection to other computing devices such as network input/output devices may be employed. As mentioned above, in one aspect, some embodiments may employ a computer or hardware system (such as the computer or hardware system500) to perform methods in accordance with various embodiments of the invention. According to a set of embodiments, some or all of the procedures of such methods are performed by the computer or hardware system500in response to processor510executing one or more sequences of one or more instructions (which might be incorporated into the operating system540and/or other code, such as an application program545) contained in the working memory535. 
Such instructions may be read into the working memory535from another computer readable medium, such as one or more of the storage device(s)525. Merely by way of example, execution of the sequences of instructions contained in the working memory535might cause the processor(s)510to perform one or more procedures of the methods described herein. The terms “machine readable medium” and “computer readable medium,” as used herein, refer to any medium that participates in providing data that causes a machine to operate in a specific fashion. In an embodiment implemented using the computer or hardware system500, various computer readable media might be involved in providing instructions/code to processor(s)510for execution and/or might be used to store and/or carry such instructions/code (e.g., as signals). In many implementations, a computer readable medium is a non-transitory, physical, and/or tangible storage medium. In some embodiments, a computer readable medium may take many forms, including, but not limited to, non-volatile media, volatile media, or the like. Non-volatile media includes, for example, optical and/or magnetic disks, such as the storage device(s)525. Volatile media includes, without limitation, dynamic memory, such as the working memory535. In some alternative embodiments, a computer readable medium may take the form of transmission media, which includes, without limitation, coaxial cables, copper wire, and fiber optics, including the wires that comprise the bus505, as well as the various components of the communication subsystem530(and/or the media by which the communications subsystem530provides communication with other devices). In an alternative set of embodiments, transmission media can also take the form of waves (including without limitation radio, acoustic, and/or light waves, such as those generated during radio-wave and infra-red data communications). Common forms of physical and/or tangible computer readable media include, for example, a floppy disk, a flexible disk, a hard disk, magnetic tape, or any other magnetic medium, a CD-ROM, any other optical medium, punch cards, paper tape, any other physical medium with patterns of holes, a RAM, a PROM, and EPROM, a FLASH-EPROM, any other memory chip or cartridge, a carrier wave as described hereinafter, or any other medium from which a computer can read instructions and/or code. Various forms of computer readable media may be involved in carrying one or more sequences of one or more instructions to the processor(s)510for execution. Merely by way of example, the instructions may initially be carried on a magnetic disk and/or optical disc of a remote computer. A remote computer might load the instructions into its dynamic memory and send the instructions as signals over a transmission medium to be received and/or executed by the computer or hardware system500. These signals, which might be in the form of electromagnetic signals, acoustic signals, optical signals, and/or the like, are all examples of carrier waves on which instructions can be encoded, in accordance with various embodiments of the invention. The communications subsystem530(and/or components thereof) generally will receive the signals, and the bus505then might carry the signals (and/or the data, instructions, etc. carried by the signals) to the working memory535, from which the processor(s)505retrieves and executes the instructions. 
The instructions received by the working memory535may optionally be stored on a storage device525either before or after execution by the processor(s)510. As noted above, a set of embodiments comprises methods and systems for implementing call or data routing, and, more particularly, to methods, systems, and apparatuses for implementing fraud or distributed denial of service (“DDoS”) protection for session initiation protocol (“SIP”)-based communication.FIG.6illustrates a schematic diagram of a system600that can be used in accordance with one set of embodiments. The system600can include one or more user computers, user devices, or customer devices605. A user computer, user device, or customer device605can be a general purpose personal computer (including, merely by way of example, desktop computers, tablet computers, laptop computers, handheld computers, and the like, running any appropriate operating system, several of which are available from vendors such as Apple, Microsoft Corp., and the like), cloud computing devices, a server(s), and/or a workstation computer(s) running any of a variety of commercially-available UNIX™ or UNIX-like operating systems. A user computer, user device, or customer device605can also have any of a variety of applications, including one or more applications configured to perform methods provided by various embodiments (as described above, for example), as well as one or more office applications, database client and/or server applications, and/or web browser applications. Alternatively, a user computer, user device, or customer device605can be any other electronic device, such as a thin-client computer, Internet-enabled mobile telephone, and/or personal digital assistant, capable of communicating via a network (e.g., the network(s)610described below) and/or of displaying and navigating web pages or other types of electronic documents. Although the exemplary system600is shown with two user computers, user devices, or customer devices605, any number of user computers, user devices, or customer devices can be supported. Certain embodiments operate in a networked environment, which can include a network(s)610. The network(s)610can be any type of network familiar to those skilled in the art that can support data communications using any of a variety of commercially-available (and/or free or proprietary) protocols, including, without limitation, TCP/IP, SNA™ IPX™ AppleTalk™, and the like. Merely by way of example, the network(s)610(similar to networks120,140,220, and240ofFIGS.1and2, or the like) can each include a local area network (“LAN”), including, without limitation, a fiber network, an Ethernet network, a Token-Ring™ network, and/or the like; a wide-area network (“WAN”); a wireless wide area network (“WWAN”); a virtual network, such as a virtual private network (“VPN”); the Internet; an intranet; an extranet; a public switched telephone network (“PSTN”); an infra-red network; a wireless network, including, without limitation, a network operating under any of the IEEE 802.11 suite of protocols, the Bluetooth™ protocol known in the art, and/or any other wireless protocol; and/or any combination of these and/or other networks. In a particular embodiment, the network might include an access network of the service provider (e.g., an Internet service provider (“ISP”)). In another embodiment, the network might include a core network of the service provider, and/or the Internet. Embodiments can also include one or more server computers615. 
Each of the server computers615may be configured with an operating system, including, without limitation, any of those discussed above, as well as any commercially (or freely) available server operating systems. Each of the servers615may also be running one or more applications, which can be configured to provide services to one or more clients605and/or other servers615. Merely by way of example, one of the servers615might be a data server, a web server, a cloud computing device(s), or the like, as described above. The data server might include (or be in communication with) a web server, which can be used, merely by way of example, to process requests for web pages or other electronic documents from user computers605. The web server can also run a variety of server applications, including HTTP servers, FTP servers, CGI servers, database servers, Java servers, and the like. In some embodiments of the invention, the web server may be configured to serve web pages that can be operated within a web browser on one or more of the user computers605to perform methods of the invention. The server computers615, in some embodiments, might include one or more application servers, which can be configured with one or more applications accessible by a client running on one or more of the client computers605and/or other servers615. Merely by way of example, the server(s)615can be one or more general purpose computers capable of executing programs or scripts in response to the user computers605and/or other servers615, including, without limitation, web applications (which might, in some cases, be configured to perform methods provided by various embodiments). Merely by way of example, a web application can be implemented as one or more scripts or programs written in any suitable programming language, such as Java™, C, C#™ or C++, and/or any scripting language, such as Perl, Python, or TCL, as well as combinations of any programming and/or scripting languages. The application server(s) can also include database servers, including, without limitation, those commercially available from Oracle™, Microsoft™, Sybase™ IBM™, and the like, which can process requests from clients (including, depending on the configuration, dedicated database clients, API clients, web browsers, etc.) running on a user computer, user device, or customer device605and/or another server615. In some embodiments, an application server can perform one or more of the processes for implementing call or data routing, and, more particularly, to methods, systems, and apparatuses for implementing fraud or distributed denial of service (“DDoS”) protection for session initiation protocol (“SIP”)-based communication, as described in detail above. Data provided by an application server may be formatted as one or more web pages (comprising HTML, JavaScript, etc., for example) and/or may be forwarded to a user computer605via a web server (as described above, for example). Similarly, a web server might receive web page requests and/or input data from a user computer605and/or forward the web page requests and/or input data to an application server. In some cases, a web server may be integrated with an application server. In accordance with further embodiments, one or more servers615can function as a file server and/or can include one or more of the files (e.g., application code, data files, etc.) necessary to implement various disclosed methods, incorporated by an application running on a user computer605and/or another server615. 
Alternatively, as those skilled in the art will appreciate, a file server can include all necessary files, allowing such an application to be invoked remotely by a user computer, user device, or customer device605and/or server615. It should be noted that the functions described with respect to various servers herein (e.g., application server, database server, web server, file server, etc.) can be performed by a single server and/or a plurality of specialized servers, depending on implementation-specific needs and parameters. In certain embodiments, the system can include one or more databases620a-620n(collectively, “databases620”). The location of each of the databases620is discretionary: merely by way of example, a database620amight reside on a storage medium local to (and/or resident in) a server615a(and/or a user computer, user device, or customer device605). Alternatively, a database620ncan be remote from any or all of the computers605,615, so long as it can be in communication (e.g., via the network610) with one or more of these. In a particular set of embodiments, a database620can reside in a storage-area network (“SAN”) familiar to those skilled in the art. (Likewise, any necessary files for performing the functions attributed to the computers605,615can be stored locally on the respective computer and/or remotely, as appropriate.) In one set of embodiments, the database620can be a relational database, such as an Oracle database, that is adapted to store, update, and retrieve data in response to SQL-formatted commands. The database might be controlled and/or maintained by a database server, as described above, for example. According to some embodiments, system600might further comprise a computing system625(similar to computing systems160and260ofFIGS.1and2, or the like), which may comprise data collection engine625a(similar to data collection engines160aand260aofFIGS.1and2, or the like) and policy engine625b(similar to policy engines160band260bofFIGS.1and2, or the like), or the like. System600may further comprise a calling device630(similar to calling devices105,105a-105e, and205ofFIGS.1and2, or the like) that is associated with an originating party635(similar to originating party110a-110nor210ofFIGS.1and2, or the like) at a source address640(similar to source addresses115a-115nand215ofFIGS.1and2, or the like) in originating network(s)645(similar to originating networks120and220ofFIGS.1and2, or the like), a called device605(including user devices605aand605b, or the like; similar to called devices125,125a-125e, and225ofFIGS.1and2, or the like) that is associated with a destination party650(similar to destination party130a-130nor230ofFIGS.1and2, or the like) at a destination address655(similar to destination addresses135a-135nand235ofFIGS.1and2, or the like) in network(s)660(similar to networks140and240ofFIGS.1and2, or the like). System600may further comprise router665(similar to routers145a-145nand245ofFIGS.1and2, or the like), router670(similar to routers150a-150nand250ofFIGS.1and2, or the like), and provisioning layer675(similar to provisioning layers165and265ofFIGS.1and2, or the like), each disposed in network(s)660. 
System600may further comprise one or more security deep packet inspection (“DPI”) engines680or680a-680n(similar to security DPI engines175,175a-175n, and275a-275nofFIGS.1and2, or the like), machine learning engine685aand corresponding database(s)685b(similar to machine learning engines180aand280aand corresponding database(s)180band280bofFIGS.1and2, or the like), and one or more network logs690(similar to network log(s)190ofFIGS.1and2, or the like). In operation, each of routers665and670may employ active data collection to continually, periodically, or sporadically collect network traffic data, which may include, but is not limited to SIP data and non-SIP data. Computing system625and/or data collection engine625aof computing system625may continually collect data from the plurality of routers665and670, by receiving SIP data from the plurality of routers665and670in a continual, periodic, or sporadic manner. In some cases, computing system625and/or data collection engine625amay continually collect data from one or more of the plurality of security DPI engines680a-680n. Computing system625and/or data collection engine625amay receive, from a first router665in network(s)660, first SIP data, the first SIP data indicating a request to initiate a SIP-based media communication session between a calling party or originating party635at a source address640in originating network(s)645and a called party or destination party650at a destination address655in the network(s)660. In some embodiments, the SIP-based media communication may include, without limitation, at least one of a voice over Internet Protocol (“VoIP”) call, an IP-based video call, or an instant message over IP, and/or the like. In some cases, the network(s)660and the originating network(s)645may be the same network (operated by the same service provider). Alternatively, the network(s)660and the originating network(s)645may be different networks (either operated by the same service provider or operated by different service providers). Although not shown inFIG.6, one or more session border controllers (“SBCs”) may be deployed to protect network(s)660and may be disposed between originating network(s)645and network(s)660, particularly in the case that originating network(s)645and network(s)660are different network(s) and/or operated by different service providers. In the case that the destination address655is part of a customer network (e.g., local area network (“LAN”) or the like), one or more other SBCs may also be deployed to protect network(s)660and may be disposed between network(s)660and the customer network (also not shown inFIG.6). Computing system625and/or policy engine625bof computing system625may analyze the received first SIP data to determine whether the received first SIP data comprises any abnormalities indicative of potential fraudulent or malicious actions. According to some embodiments, analyzing the received first SIP data may comprise analyzing the received first SIP data based at least in part on information regarding packets sent from the source address, to determine whether the received first SIP data comprises any abnormalities indicative of potential fraudulent or malicious actions, where the information regarding packets sent from the source address may comprise information including at least one of packet sizes of packets sent from the source address, frequency of packets sent from the source address, or types of packets sent from the source address, and/or the like. 
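Merely by way of example, and not by way of limitation, one simplified way to structure such an abnormality check is sketched below in Python. The function names, field names, and thresholds shown here (e.g., check_sip_abnormalities, max_rate) are hypothetical illustrations rather than elements of any embodiment described herein, and a real deployment might derive its thresholds from historical traffic rather than fixed constants.

from dataclasses import dataclass, field
from typing import List

@dataclass
class SipObservation:
    # Hypothetical summary of packets recently seen from one source address.
    source_address: str
    packet_sizes: List[int] = field(default_factory=list)    # bytes per packet
    packets_per_second: float = 0.0                           # observed frequency
    packet_types: List[str] = field(default_factory=list)     # e.g., "INVITE", "OPTIONS"

def check_sip_abnormalities(obs: SipObservation,
                            max_rate: float = 50.0,
                            max_invites: int = 20) -> List[str]:
    """Return a list of abnormality labels; an empty list means no abnormality found."""
    abnormalities = []
    # Unusually high request frequency from one source may indicate a flood.
    if obs.packets_per_second > max_rate:
        abnormalities.append("excessive_packet_rate")
    # A burst of INVITE requests may indicate an attempt at fraudulent call setup.
    if obs.packet_types.count("INVITE") > max_invites:
        abnormalities.append("excessive_invites")
    # Uniform, repeated packet sizes can indicate scripted (non-human) traffic.
    if obs.packet_sizes and len(set(obs.packet_sizes)) == 1 and len(obs.packet_sizes) > 10:
        abnormalities.append("uniform_packet_sizes")
    return abnormalities

# Example: traffic that would be rerouted to a security DPI engine for a deep scan.
suspect = SipObservation("203.0.113.7",
                         packet_sizes=[512] * 40,
                         packets_per_second=120.0,
                         packet_types=["INVITE"] * 40)
print(check_sip_abnormalities(suspect))
# ['excessive_packet_rate', 'excessive_invites', 'uniform_packet_sizes']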
In some instances, the potential fraudulent or malicious actions may include, but are not limited to, at least one of one or more attacks to obtain fraudulent access, one or more denial of service (“DoS”) attacks, one or more distributed denial of service (“DDoS”) attacks, or one or more user datagram protocol (“UDP”) attacks (which may be a form of volumetric DDoS attack), and/or the like. Based on a determination that the received first SIP data does not comprise at least one abnormality indicative of potential fraudulent or malicious actions, computing system625and/or policy engine625bmay establish a media communication session along an original Internet Protocol (“IP”) path between the calling party635and the called party650. On the other hand, based on a determination that the received first SIP data comprises at least one abnormality indicative of potential fraudulent or malicious actions, computing system625and/or policy engine625bmay reroute the first SIP data to a security DPI engine680. In some embodiments, establishing the media communication session or rerouting the first SIP data to the security DPI engine may comprise computing system625and/or policy engine625bsending routing updates to router665and/or router670, via provisioning layer675of the network(s)660, to correspondingly establish the media communication session along the original IP path155aor route the first SIP data from the source address640along a modified IP path to a selected one of the security DPI engines680a-680n(e.g., security DPI engine680a). In at least the latter case, the routing updates are constructed such that (i) latency is minimized, (ii) available capacity of the security DPI engine is taken into account, and (iii) there are no single points of failure, or the like. To meet these conditions, the selected security DPI engine680amay be the security DPI engine that is physically closest to destination address655. Alternatively, the selected security DPI engine680amay not be the nearest one to the destination address655, but may be selected because network connections to the selected security DPI engine680amay provide lower latency, because the security DPI engine680bthat is physically closest to destination address655may lack the necessary capacity for performing a deep scan, and/or because selecting the security DPI engine680bthat is physically closest to the destination address655may result in a single point of failure, or the like. The selected security DPI engine680amay perform a deep scan of the received first SIP data to identify any known fraudulent or malicious attack vectors contained within the received first SIP data and to determine whether the calling party is a known malicious entity or whether the source address is associated with a known malicious entity. In the case that the deep scan of the received first SIP data does not reveal or identify any known fraudulent or malicious attack vectors contained within the received first SIP data, does not reveal or identify the calling party as a known malicious entity, and/or does not reveal or identify the source address as being associated with a known malicious entity, then the selected security DPI engine680amay establish a media communication session along original IP path155abetween the calling party635and the called party650, in some cases, by routing the first SIP data or SIP request along a return IP path and via router670to the called party650.
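Merely by way of example, the selection among candidate security DPI engines under the conditions noted above (minimizing latency, accounting for available capacity, and avoiding a single point of failure) might be sketched as follows. The attribute names and the capacity threshold are hypothetical placeholders, not a prescribed implementation, and illustrate why the physically closest engine is not automatically chosen.

from dataclasses import dataclass
from typing import List, Optional

@dataclass
class DpiEngine:
    # Hypothetical view of a candidate security DPI engine.
    name: str
    latency_ms: float              # estimated latency to the destination address
    available_capacity: float      # fraction of deep-scan capacity currently free (0..1)
    single_point_of_failure: bool  # True if all traffic would converge on this engine

def select_dpi_engine(candidates: List[DpiEngine],
                      min_capacity: float = 0.2) -> Optional[DpiEngine]:
    """Pick an engine with sufficient capacity and no single point of failure,
    preferring the lowest latency among the remaining candidates."""
    eligible = [e for e in candidates
                if e.available_capacity >= min_capacity
                and not e.single_point_of_failure]
    if not eligible:
        return None  # defer to some other mitigation policy
    return min(eligible, key=lambda e: e.latency_ms)

candidates = [
    DpiEngine("dpi-east-1", latency_ms=4.0, available_capacity=0.05, single_point_of_failure=False),
    DpiEngine("dpi-east-2", latency_ms=6.5, available_capacity=0.60, single_point_of_failure=False),
    DpiEngine("dpi-central", latency_ms=5.0, available_capacity=0.80, single_point_of_failure=True),
]
chosen = select_dpi_engine(candidates)
print(chosen.name if chosen else "no eligible engine")
# dpi-east-2: the nearest engine lacks capacity and the central engine is a single point of failure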
In some embodiments, establishing the media communication session may comprise the selected security DPI engine680aeither (A) directly sending routing updates to one or more routers665-665nand/or670-670n, via provisioning layer675of the network(s)660, or (B) indirectly communicating with computing system625and/or policy engine625bto send routing updates to one or more routers665-665nand/or670-670n, via provisioning layer675of the network(s)660, to correspondingly establish the media communication session along the original IP path. On the other hand, in the case that the deep scan of the received first SIP data reveals or identifies at least one known fraudulent or malicious attack vector contained within the received first SIP data, then the selected security DPI engine680amay initiate one or more mitigation actions (examples of which are shown in the non-limiting list provided inFIG.3, for instance). In some embodiments, at least one of identifying any known fraudulent or malicious attack vectors contained within the received first SIP data, determining whether the calling party is a known malicious entity or whether the source address is associated with a known malicious entity, or initiating the one or more mitigation actions, and/or the like, may be performed using, at least in part, machine learning engine685a(and corresponding database(s)685b). According to some embodiments, computing system625, policy engine625b, and/or security DPI engine680amay normalize all network traffic to the destination address655(e.g., from source address640) either after a predetermined period (e.g., a set number of days, a set number of weeks, a set number of months, or longer, etc.) and/or after a predetermined number of SIP data checks (e.g., 5, 10, 15, 20, 25, 30, 35, 40, 45, 50, or more, etc.) showing no abnormalities indicative of potential fraudulent or malicious actions. Merely by way of example, in some cases, computing system625and/or security DPI engine680amay send a report to a system log(s) or network log(s)690in response to at least one of: rerouting the first SIP data to the security DPI engine, performing the deep scan of the received first SIP data, initiating the one or more mitigation actions, recording an event to the machine learning engine, or normalizing the network traffic to the destination address, and/or the like. These and other functions of the system600(and its components) are described in greater detail above with respect toFIGS.1-4.
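Merely by way of example, the decision to normalize network traffic to a destination address after a predetermined period and/or a predetermined number of clean SIP data checks, as described above, might be tracked with simple counters, as in the following hypothetical sketch; the threshold values shown are illustrative only.

import time

# Hypothetical per-destination thresholds; placeholders, not prescribed values.
NORMALIZE_AFTER_CLEAN_CHECKS = 25
NORMALIZE_AFTER_SECONDS = 14 * 24 * 3600  # e.g., two weeks

class TrafficState:
    def __init__(self, destination_address: str):
        self.destination_address = destination_address
        self.clean_checks = 0
        self.first_clean_check = None

    def record_check(self, abnormal: bool) -> bool:
        """Record one SIP data check; return True if traffic should be normalized."""
        now = time.time()
        if abnormal:
            # Any abnormality resets the counters.
            self.clean_checks = 0
            self.first_clean_check = None
            return False
        if self.first_clean_check is None:
            self.first_clean_check = now
        self.clean_checks += 1
        return (self.clean_checks >= NORMALIZE_AFTER_CLEAN_CHECKS
                or now - self.first_clean_check >= NORMALIZE_AFTER_SECONDS)

state = TrafficState("198.51.100.20")
for _ in range(25):
    should_normalize = state.record_check(abnormal=False)
print(should_normalize)  # True once 25 consecutive clean checks have been recorded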
Moreover, while the procedures of the methods and processes described herein are described in a particular order for ease of description, unless the context dictates otherwise, various procedures may be reordered, added, and/or omitted in accordance with various embodiments. Moreover, the procedures described with respect to one method or process may be incorporated within other described methods or processes; likewise, system components described according to a particular structural architecture and/or with respect to one system may be organized in alternative structural architectures and/or incorporated within other described systems. Hence, while various embodiments are described with—or without—certain features for ease of description and to illustrate exemplary aspects of those embodiments, the various components and/or features described herein with respect to a particular embodiment can be substituted, added and/or subtracted from among other described embodiments, unless the context dictates otherwise. Consequently, although several exemplary embodiments are described above, it will be appreciated that the invention is intended to cover all modifications and equivalents within the scope of the following claims. | 99,710 |
11943240 | DETAILED DESCRIPTION The following discussion is presented to enable any person skilled in the art to make and use the technology disclosed, and is provided in the context of a particular application and its requirements. Various modifications to the disclosed implementations will be readily apparent to those skilled in the art, and the general principles defined herein may be applied to other implementations and applications without departing from the spirit and scope of the technology disclosed. Thus, the technology disclosed is not intended to be limited to the implementations shown, but is to be accorded the widest scope consistent with the principles and features disclosed herein. As noted above, cloud computing environments are used by organizations or other end-users to store a wide variety of different types of information in many contexts and for many uses. This data can often include sensitive and/or confidential information, and can be the target for malicious activity such as acts of fraud, privacy breaches, data theft, etc. These risks can arise from individuals that are both inside the organization as well as outside the organization. Cloud environments often include security infrastructure to enforce access control, data loss prevention, or other processes to secure data from potential vulnerabilities. However, even with such security infrastructures, it can be difficult for an organization to understand the data posture and breadth of access to the data stored in the cloud in the organization's cloud account. In other words, it can be difficult to identify which users have access to which data, and which data may be exposed to malicious or otherwise unauthorized users, both inside or outside the organization. The present system is directed to a cloud security posture analysis system configured to analyze and take action on the security posture of a cloud account. The system discovers sensitive data among the cloud storage resources and discovers access patterns to the sensitive data. The results are used to identify security vulnerabilities to understand the data security posture, detect and remediate the security vulnerabilities, and to prevent future breaches to sensitive data. The system provides real-time visibility and control on the control data infrastructure by discovering resources, sensitive data, and access paths, and tracking resource configuration, deep context and trust relationships in real-time as a graph or other visualization. It is noted that the technology disclosed herein can depict all graph embodiments in equivalent and analogous tabular formats or other visualization formats based on the data and logic disclosed herein. The system can further score breach paths based on sensitivity, volume, and/or permissions to show an attack surface and perform constant time scanning, by deploying scanners locally within the cloud account. Thus, the scanners execute in the cloud service itself, with metadata being returned indicative of the analysis. Thus, in one example, an organization's cloud data does not leave the organization's cloud account. Rather, the data can be scanned in place and metadata sent for analysis by the cloud security posture analysis system, which further enhances data security. FIG.1is a block diagram illustrating one example of a cloud architecture100in which a cloud environment102is accessed by one or more actors104through a network106, such as the Internet or other wide area network. 
Cloud environment102includes one or more cloud services108-1,108-2,108-N, collectively referred to as cloud services108. As noted above, cloud services108can include cloud storage services such as, but not limited to, AWS, GCP, Microsoft Azure, to name a few. Further, cloud services108-1,108-2,108-N can include the same type of cloud service, or can be different types of cloud services, and can be accessed by any of a number of different actors104. For example, as illustrated inFIG.1, actors104include users110, administrators112, developers114, organizations116, and/or applications118. Of course, other actors120can access cloud environment102as well. Architecture100includes a cloud security posture analysis system122configured to access cloud services108to identify and analyze cloud security posture data. Examples of system122are discussed in further detail below. Briefly, however, system122is configured to access cloud services108and identify connected resources, entities, actors, etc. within those cloud services, and to identify risks and violations against access to sensitive information. As shown inFIG.1, system122can reside within cloud environment102or outside cloud environment102, as represented by the dashed box inFIG.1. Of course, system122can be distributed across multiple items inside and/or outside cloud environment102. Users110, administrators112, developers114, or any other actors104, can interact with cloud environment102through user interface displays123having user interface mechanisms124. For example, a user can interact with user interface displays123provided on a user device (such as a mobile device, a laptop computer, a desktop computer, etc.) either directly or over network106. Cloud environment102can include other items125as well. FIG.2is a block diagram illustrating one example of cloud service108-1. For the sake of the present discussion, but not by limitation, cloud service108-1will be discussed in the context of an account within AWS. Of course, other types of cloud services and providers are within the scope of the present disclosure. Cloud service108-1includes a plurality of resources126and an access management and control system128configured to manage and control access to resources126by actors104. Resources126include compute resources130, storage resources132, and can include other resources134. Compute resources130include a plurality of individual compute resources130-1,130-2,130-N, which can be the same and/or different types of compute resources. In the present example, compute resources130can include elastic compute resources, such as elastic compute cloud (AWS EC2) resources, AWS Lambda, etc. An elastic compute cloud (EC2) is a cloud computing service designed to provide virtual machines called instances, where users can select an instance with a desired amount of computing resources, such as the number and type of CPUs, memory and local storage. An EC2 resource allows users to create and run compute instances on AWS, and can use familiar operating systems like Linux, Windows, etc. Users can select an instance type based on the memory and computing requirements needed for the application or software to be run on the instance. AWS Lambda is an event-based service that delivers short-term compute capabilities and is designed to run code without the need to deploy, use or manage virtual machine instances.
An example implementation is used by an organization to address specific triggers or events, such as database updates, storage changes or custom events generated from other applications. Such a compute resource can include a server-less, event-driven compute service that allows a user to run code for many different types of applications or backend services without provisioning or managing servers. Storage resources132are accessible through compute resources130, and can include a plurality of storage resources132-1,132-2,132-N, which can be the same and/or different types of storage resources. A storage resource132can be defined based on object storage. For example, AWS Simple Storage Service (S3) provides highly-scalable cloud object storage with a simple web service interface. An S3 object can contain both data and metadata, and objects can reside in containers called buckets. Each bucket can be identified by a unique user-specified key or file name. A bucket can be a simple flat folder without a file system hierarchy. A bucket can be viewed as a container (e.g., folder) for objects (e.g., files) stored in the S3 storage resource. Compute resources130can access or otherwise interact with storage resources132through network communication paths based on permissions data136and/or access control data138. System128illustratively includes identity and access management (IAM) functionality that controls access to cloud service108-1using entities (e.g., IAM entities) provided by the cloud computing platform. Permissions data136includes policies140and can include other permissions data142. Access control data138includes identities144and can include other access control data146as well. Examples of identities144include, but are not limited to, users, groups, roles, etc. In AWS, for example, an IAM user is an entity that is created in the AWS service and represents a person or service who uses the IAM user to interact with the cloud service. An IAM user provides the ability to sign into the AWS management console for interactive tasks and to make programmatic requests to AWS services using the API, and includes a name, password, and access keys to be used with the API. Permissions can be granted to the IAM user to make the IAM user a member of a user group with attached permission policies. An IAM user group is a collection of IAM users with specified permissions. Use of IAM groups can make management of permissions easier for those users. An IAM role in AWS is an IAM identity that has specific permissions, and has some similarities to an IAM user in that the IAM role is an AWS identity with permission policies that determine what the identity can and cannot do in AWS. However, instead of being uniquely associated with one person, a role is intended to be assumable by anyone who needs it. Roles can be used to delegate access to users, applications, and/or services that don't normally have access to the AWS resources. Roles can be used by IAM users in a same AWS account and/or in different AWS accounts than the role. Also, roles can be used by computer resources130, such as EC2 resources. A service role is a role assumed by a service to perform actions in an account on behalf of a user. Service roles include permissions required for the service to access the resources needed by the service. Service roles can vary from service to service. A service role for an EC2 instance, for example, is a special type of service role that an application running on an EC2 instance can assume to perform actions. 
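Merely by way of example, and not by way of limitation, the manner in which such a role might be assumed programmatically to obtain temporary credentials can be illustrated as follows. This sketch assumes the AWS boto3 SDK, valid caller credentials, and a purely hypothetical role ARN; it illustrates the role concept discussed above rather than forming part of any embodiment.

import boto3

def assume_scanner_role(role_arn: str):
    """Obtain temporary credentials by assuming an IAM role, as a service or
    deployed workload might do; the role ARN passed in here is hypothetical."""
    sts = boto3.client("sts")
    creds = sts.assume_role(RoleArn=role_arn,
                            RoleSessionName="example-scan-session")["Credentials"]
    # A session built from the temporary credentials carries only the
    # permissions granted by the role's attached policies.
    return boto3.Session(aws_access_key_id=creds["AccessKeyId"],
                         aws_secret_access_key=creds["SecretAccessKey"],
                         aws_session_token=creds["SessionToken"])

# Hypothetical role ARN; the role's trust policy must allow the caller to assume it.
session = assume_scanner_role("arn:aws:iam::111122223333:role/example-scan-role")
print(session.client("s3").list_buckets()["Buckets"])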
Policies140can include identity-based policies that, when attached to IAM identities, grant permissions to the identity. Policies140can also include resource-based policies that are attached to resources126. Examples include S3 bucket policies and IAM role trust policies. An example trust policy includes a JSON policy document that defines the principals that are trusted to assume a role. In AWS, a policy is an object that, when associated with an identity or resource, defines permissions of the identity or resource. AWS evaluates these policies when an IAM principal (a user or a role) makes a request. Permissions in the policy determine whether the request is allowed or denied. Policies are often stored as JSON documents that are attached to the IAM identities (user, groups of users, role). A permissions boundary is a managed policy for an IAM identity that defines the maximum permissions that the identity-based policies can grant to an entity, but does not grant the permissions. Further, access control lists (ACLs) control which principals in other accounts can access the resource to which the ACL is attached. ACLs can be similar to resource-based policies. In some implementations of the technology disclosed, the terms “roles” and “policies” are used interchangeably. Cloud service108-1includes one or more deployed cloud scanners148, and can include other items150as well. Cloud scanners148run locally on the cloud-based services and the server systems, and can utilize elastic compute resources, such as, but not limited to, AWS Lambda resources. Cloud scanner148is configured to access and scan the cloud service108-1on which the scanner is deployed. Examples are discussed in further detail below. Briefly, however, a scanner accesses the data stored in storage resources132, permissions data136, and access control data138to identify particular data patterns (such as, but not limited to, sensitive string patterns) and to traverse or trace network communication paths between pairs of compute resources130and storage resources132. The results of the scanner can be utilized to identify subject vulnerabilities, such as resources vulnerable to a breach attack, and to construct a cloud attack surface graph or other data structure that depicts propagation of a breach attack along the network communication paths. Given a graph of connected resources, such as compute resources130, storage resources132, etc., entities (e.g., accounts, roles, policies, etc.), and actors (e.g., users, administrators, etc.), risks and violations against access to sensitive information are identified. A directional graph can be built to capture nodes that represent the resources, and labels are assigned to the nodes for search and retrieval purposes. For example, a label can mark a node as a database or S3 resource, and can mark actors as users, administrators, developers, etc. Relationships between the nodes are created using information available from the cloud infrastructure configuration. For example, using the configuration information, system122can determine that a resource belongs to a given account, create a relationship between a resource and the policy attached to that resource, and/or identify the roles that can be taken up by a user. FIG.3is a block diagram illustrating one example of cloud security posture analysis system122. As noted above, system122can be deployed in cloud environment102and/or access cloud environment102through network106shown inFIG.1.
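For purposes of illustration only, the general shape of the JSON policy documents discussed above can be represented as follows; the account numbers, bucket names, and statements are hypothetical examples, and the small helper merely demonstrates how allowed actions might be collected from such a document.

# A hypothetical identity-based policy: grants read access to one S3 bucket.
identity_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["s3:GetObject", "s3:ListBucket"],
        "Resource": ["arn:aws:s3:::example-bucket",
                     "arn:aws:s3:::example-bucket/*"],
    }],
}

# A hypothetical trust policy on an IAM role: defines which principal may assume it.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"Service": "ec2.amazonaws.com"},
        "Action": "sts:AssumeRole",
    }],
}

def allowed_actions(policy: dict) -> set:
    """Collect the actions a policy allows; a scanner could use this to derive access types."""
    actions = set()
    for stmt in policy.get("Statement", []):
        if stmt.get("Effect") == "Allow":
            acts = stmt.get("Action", [])
            actions.update([acts] if isinstance(acts, str) else acts)
    return actions

print(allowed_actions(identity_policy))  # e.g., {'s3:GetObject', 's3:ListBucket'}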
System122includes a cloud account onboarding component202, a cloud scanner deployment component204, a cloud data scanning and analysis system206, a visualization system208, and a data store210. System122can also include one or more processors or servers212, and can include other items214as well. Cloud account onboarding component202is configured to onboard cloud services108for analysis by system122. After onboarding, cloud scanner deployment component204is configured to deploy a cloud scanner (e.g., deployed cloud scanner(s)148shown inFIG.2) to the cloud service. In one example, the deployed scanners are on-demand agent-less scanners configured to perform agent-less scanning within the cloud service. An example agent-less scanner does not require agents to be installed on each specific device or machine. The scanners operate on the resources126and access management and control system128directly within the cloud service, and generate metadata that is returned to system122. Thus, in one example, the actual cloud service data is not required to leave the cloud service for analysis. Cloud data scanning and analysis system206includes a metadata ingestion component216configured to receive the metadata generated by the deployed cloud scanner(s)148. System206also includes a query engine218, a policy engine220, a breach vulnerability evaluation component222, one or more application programming interfaces (APIs)224, a cloud security issue identification component226, a cloud security issue prioritization component228, a historical resource state analysis component230, and can include other items232as well. Query engine218is configured to execute queries against the received metadata and generated cloud security issue data. Policy engine220can execute security policies against the cloud data, and breach vulnerability evaluation component222is configured to evaluate potential breach vulnerabilities in the cloud service. APIs224are exposed to users, such as administrators, to interact with system122to access the cloud security posture data. Component226is configured to identify cloud security issues and component228can prioritize the identified cloud security issues based on any of a number of criteria. Historical resource state analysis component230is configured to analyze a history of states of resources126. Component230includes a triggering component234configured to detect a trigger to perform historical resource state analysis. Triggering component234is configured to identify an event that triggers component230to analyze the state of resources126. The event can be, for example, a user input to selectively trigger the analysis, or a detected event such as the occurrence of a time period, an update to a resource, etc. Accordingly, historical resource state can be tracked automatically and/or in response to user input. Component230includes a resource configuration change tracking component236configured to track changes in the configuration of resources126. Component230also includes an anomalous state detection component238, and can include other items240as well. Component238is configured to detect the occurrence of anomalous states in resources126. A resource anomaly can be identified where a given resource has an unexpected state, such as a difference from other similar resources identified in the cloud service. Visualization system208is configured to generate visualizations of the cloud security posture from system206.
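Merely by way of example, the kind of anomalous-state check attributed to component238above, in which a resource is flagged when its configuration differs from that of other similar resources, might be sketched as follows; the resource names and configuration keys are hypothetical.

from collections import Counter
from typing import Dict, List

def find_anomalous_resources(configs: Dict[str, Dict[str, object]]) -> List[str]:
    """Flag resources whose value for any setting differs from the value used by
    the clear majority of similar resources (one simple notion of an 'unexpected state')."""
    anomalous = []
    settings = {key for cfg in configs.values() for key in cfg}
    for setting in settings:
        values = Counter(cfg.get(setting) for cfg in configs.values())
        majority_value, majority_count = values.most_common(1)[0]
        if majority_count < 2:
            continue  # no meaningful majority to compare against
        for resource, cfg in configs.items():
            if cfg.get(setting) != majority_value and resource not in anomalous:
                anomalous.append(resource)
    return anomalous

configs = {
    "bucket-a": {"encryption": True, "public": False},
    "bucket-b": {"encryption": True, "public": False},
    "bucket-c": {"encryption": False, "public": False},  # differs from its peers
}
print(find_anomalous_resources(configs))  # ['bucket-c']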
Illustratively, system208includes a user interface component242configured to generate a user interface for a user, such as an administrator. In the illustrated example, component242includes a web interface generator244configured to generate web interfaces that can be displayed in a web browser on a client device. Visualization system208also includes a resource graph generator component246, a cloud attack surface graph generator component248, and can include other items250as well. Resource graph generator component246is configured to generate a graph or other representation of the relationships between resources126. For example, component246can generate a cloud infrastructure map that graphically depicts pairs of compute resources and storage resources as nodes and network communication paths as edges between the nodes. Cloud attack surface graph generator component248is configured to generate a surface graph or other representation of vulnerabilities of resources to a breach attack. In one example, the representation of vulnerabilities can include a cloud attack surface map that graphically depicts propagation of a breach attack along network communication paths as edges between nodes that represent the corresponding resources. Data store210stores the metadata252obtained by metadata ingestion component216, sensitive data profiles254, and can store other items256as well. Examples of sensitive data profiles are discussed in further detail below. Briefly, however, sensitive data profiles254can identify data patterns that are categorized as sensitive or as meeting some predefined pattern of interest. Pattern matching can be performed based on the target data profiles. For example, pattern matching can be performed to identify social security numbers, credit card numbers, other personal data, or medical information, to name a few. In one example, artificial intelligence (AI) is utilized to perform named entity recognition (e.g., natural language processing modules can identify sensitive data, in various languages, representing names, company names, locations, etc.). FIG.4is a block diagram illustrating one example of a deployed scanner148. Scanner148includes a resource identification component262, a permissions data identification component264, an access control data identification component266, a cloud infrastructure scanning component268, a cloud data scanning component270, a metadata output component272, and can include other items274as well. Resource identification component262is configured to identify the resources126within cloud service108-1(and/or other cloud services108) and to generate corresponding metadata that identifies these resources. Permissions data identification component264identifies the permissions data136and access control data identification component266identifies access control data138. Cloud infrastructure scanning component268scans the infrastructure of cloud service108to identify the relationships between resources130and132, and cloud data scanning component270scans the actual data stored in storage resources132. The generated metadata is output by component272to cloud security posture analysis system122. FIG.5is a flow diagram300showing an example operation of system122in on-boarding a cloud account and deploying one or more scanners. At block302, a request to on-board a cloud service to cloud security posture analysis system122is received. For example, an administrator can submit a request to on-board cloud service108-1.
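Merely by way of example, sensitive data profiles such as sensitive data profiles254discussed above might be expressed as named patterns and applied to scanned content as sketched below. The patterns are deliberately simplified placeholders (production profiles, and any accompanying named entity recognition models, would be far more precise), and combining them into one expression illustrates how many profiles can be checked in a single traversal of the content.

import re
from collections import Counter

# Simplified, illustrative sensitive data profiles; real profiles would be more precise.
PROFILES = {
    "ssn": r"\b\d{3}-\d{2}-\d{4}\b",
    "credit_card": r"\b(?:\d[ -]?){13,16}\b",
    "email": r"\b[\w.+-]+@[\w-]+\.[\w.]+\b",
}

# One combined expression lets each chunk of content be traversed once,
# regardless of how many profiles are registered.
COMBINED = re.compile("|".join(f"(?P<{name}>{expr})" for name, expr in PROFILES.items()))

def classify_chunks(chunks):
    """Count matches per profile across an iterable of text chunks."""
    hits = Counter()
    for chunk in chunks:
        for match in COMBINED.finditer(chunk):
            hits[match.lastgroup] += 1
    return hits

sample = ["Contact jane@example.com, SSN 123-45-6789", "card 4111 1111 1111 1111"]
print(classify_chunks(sample))  # each profile matched once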
FIG.6illustrates one example of a user interface display304provided for an administrator. Display304includes a display pane306including a number of display elements representing cloud accounts that have been on-boarded to system122. Display304includes a user interface control308that can be actuated to submit an on-boarding request at block302. Referring again toFIG.5, at block310, an on-boarding user interface display is generated. At block312, user input is received that defines a new cloud account to be on-boarded. The user input can define a cloud provider identification314, a cloud account identification316, a cloud account name318, access credentials to the cloud account320, and can include other input322defining the cloud account to be on-boarded. FIG.7illustrates one example of an on-boarding user interface display324that is displayed in response to user actuation of control308. Display324includes a user interface mechanism326configured to receive input to select or otherwise define a particular cloud account provider. In the illustrated example, mechanism326includes a plurality of selectable controls representing different cloud providers including, but not limited to, AWS, GCP, and Azure. Display324includes a user input mechanism328configured to receive input defining a cloud account identifier and an account nickname. User input mechanisms330allow the user to define other parameters for the on-boarding. A user input mechanism332is actuated to generate a cloud formation template, or other template, to be used in the on-boarding process based on the selected cloud account provider. Once the cloud account is connected to system122, display304inFIG.6can be updated to show the details of the cloud account as well as the scan status. InFIG.6, each entry includes a display name334, an account ID336, a data store count338, and a risk count340. Data store count338includes an indication of the number of data stores in the cloud account, and risk count340includes an indication of a number of identified security risks. A field342indicates the last scan status, such as whether the last scan has completed or whether a scan is currently in progress. A field344indicates the time at which the last scan was completed. Referring again toFIG.5, at block346, the cloud account is authorized using roles. For example, administrator access (block348) can be defined for the cloud scanner using IAM roles. One or more cloud scanners are defined at block350and can include, but are not limited to, cloud infrastructure scanners352, cloud data scanners354, vulnerability scanners356, or other scanners358. At block360, the cloud scanners are deployed to run locally on the cloud service, such as illustrated inFIG.2. The cloud scanners discover resources at block362, scan data in the resources at block364, and can find vulnerabilities at block366. As discussed in further detail below, a vulnerability can be identified based on finding a predefined risk signature in the cloud service resources. The risk signatures can be queried upon; they define expected behavior within the cloud service and are used to locate anomalies based on this data. At block368, if more cloud services are to be on-boarded, operation returns to block310. At block370, the scan results from the deployed scanners are received. As noted above, the scan results include metadata (block372) generated by the scanners running locally on the cloud service. At block374, one or more actions are performed based on the scan results.
At block376, the action includes security issue detection. For example, a breach risk on a particular resource (such as a storage resource storing sensitive data) is identified. At block378, security issue prioritization can be performed to prioritize the detected security issues. Examples of security issue detection and prioritization are discussed in further detail below. Briefly, security issues can be detected by executing a query against the scan results using vulnerability or risk signatures. The risk signatures identify criteria such as accessibility of the resources, access and/or permissions between resources, and data types in accessed data stores. Further, each risk signature can be scored and prioritized based on impact. For example, a risk signature can include weights indicative of the likelihood of occurrence of a breach and the impact if the breach occurs. The action can further include providing user interfaces at block380that indicate the scan status (block382), a cloud infrastructure representation (such as a map or graph) (block384), and/or a cloud attack surface representation (map or graph) (block386). The cloud attack surface representation can visualize vulnerabilities based on the flow of a breach attack along the network communication paths. Remedial actions can be taken at block388, such as creating a ticket (block390) for a developer or other user to address the security issues. Of course, other actions can be taken at block392. For instance, the system can make adjustments to cloud account settings/configurations to address/remedy the security issues. FIG.8illustrates one example of a user interface display400that can be displayed at block376. Display400provides a dashboard for a user that provides an overview of on-boarded cloud service accounts. The dashboard identifies a number of users402, a number of assets404, a number of data stores406, and a number of accounts408. A data sensitivity pane410includes a display element412that identifies a number of the data stores that include sensitive data, a display element413that identifies a number of users with access to the sensitive data, a display element414that identifies a number of resources having sensitive data, and a display element416that identifies a number of risks on the data stores having sensitive data. Further, graphs or charts can be generated to identify those risks based on factors such as status (display element418) or impact (display element420). Display element420illustratively categorizes the risks based on impact as well as the likelihood of occurrence of those risks. Risk categorization is discussed in further detail below. Briefly, however, display element420stratifies one or more of breach likelihood scores or breach impact scores into categories representing different levels of severity, such as high, medium, and low severity levels. In one example, display element420is color coded based on the degree of impact of the risk (e.g., high impact is highlighted in red, medium impact is highlighted in yellow, and low impact is highlighted in green). FIG.9is a flow diagram450illustrating one example of cloud infrastructure scanning performed by cloud scanner148deployed in cloud service108-1. At block452, an agent-less scanner is executed on the cloud service. The scanner can perform constant time scanning at block454. An example constant time scanner runs an algorithm in which the running time does not depend, or has little dependence on, the size of the input.
The scanner obtains a stream of bytes and looks for a multiplicity of patterns (one hundred patterns, two hundred patterns, three hundred patterns, etc.) in one pass through the stream of bytes, with the same or substantially similar performance. Further, the scanner can return real-time results at block456. Accordingly, cloud security posture analysis system122receives updates to the security posture data as changes are made to the cloud services. At block458, the scanner discovers the compute resources130and, at block460, the storage resources132. Sensitive data can be discovered at block462. The agent-less scanner does not require a proxy or agent running in the cloud service, and can utilize server-less containers and resources to scan the documents and detect sensitive data. The data can be accessed using APIs associated with the scanners. The sensitive data can be identified using pattern matching, such as by querying the data using predefined risk signatures. At block464, access paths between the resources are discovered based on permissions data136(block466), and/or access control data138(block468). A rule processing engine, such as one using JSON metadata, can be utilized to analyze the roles and policies, and can build access relationships between the nodes representing the resources. The policies can be decoded to get the access type (allow, deny, etc.), and the policy can be placed in a node to link from a source node to a target node and create the access relationship. At block470, metadata indicative of the scanning results is generated and outputted by metadata output component272. FIGS.10-1,10-2,10-3, and10-4(collectively referred to asFIG.10) provide a flow diagram500illustrating an example operation for streamlined analysis of security posture. For sake of illustration, but not by limitation,FIG.10will be discussed in the context of cloud security posture analysis system122illustrated inFIG.3. Security posture can be analyzed by system206using metadata252returned from the cloud service scanners. At block502, permissions data and access control data are accessed for pairs of compute and storage resources. The permissions and access control data can include identity-based permissions at block504, resource-based permissions at block506, or other permissions as well. At block508, network communication paths between the pairs of resources are traced based on the permissions and access control data. For example, the permissions and access control data can identify which paths have read access from a particular compute resource to a particular storage resource, as represented at block510. Similarly, paths with write access from compute to storage resources can be identified at block512, and paths with synchronization access between storage resources can be identified at block514. Of course, other types of paths can be identified as well. For sake of example, but not by limitation, a directional graph is constructed to capture all resources as nodes, with labels assigned to the nodes for search and retrieval. In the AWS example, labels can mark a node as a database or S3 resource. Similarly, labels can represent actors as normal users, admins, developers, etc. Then, known relationships are identified between the nodes, for example using the information available from the cloud infrastructure configuration (e.g., defining that a resource belongs to a given account).
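Merely by way of example, and not by way of limitation, the construction of such a directional graph of labeled nodes and access relationships decoded from policies might be sketched as follows; the node names, labels, and policy statements shown are hypothetical, and the decoding is a simplification of the rule processing described above.

# Hypothetical nodes with labels used for search and retrieval.
nodes = {
    "ec2/web-server": {"labels": {"compute"}},
    "s3/customer-data": {"labels": {"storage", "s3", "sensitive"}},
    "role/app-role": {"labels": {"identity", "role"}},
}

# Hypothetical decoded policy statements attached to the role.
policy_statements = [
    {"effect": "Allow", "action": "s3:GetObject", "source": "role/app-role",
     "target": "s3/customer-data"},
    {"effect": "Deny", "action": "s3:PutObject", "source": "role/app-role",
     "target": "s3/customer-data"},
]

def build_access_edges(statements):
    """Turn decoded policy statements into directional access edges
    (source node, target node, access type, action)."""
    edges = []
    for stmt in statements:
        access_type = "allow" if stmt["effect"] == "Allow" else "deny"
        edges.append((stmt["source"], stmt["target"], access_type, stmt["action"]))
    return edges

# Configuration-derived relationship: the compute instance can assume the role.
edges = [("ec2/web-server", "role/app-role", "allow", "sts:AssumeRole")]
edges += build_access_edges(policy_statements)
for src, dst, access, action in edges:
    print(f"{src} -[{access}:{action}]-> {dst}")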
Similarly, a relationship can be created between the policy attached to a resource, and/or the roles that can be taken up by a user. In addition to storing static information, a rule processing engine (e.g., using JavaScript Object Notation (JSON) metadata) can be used to analyze the roles and policies and build the “access” relationship between the nodes. The analysis can be used to decode the policy to get the access type (e.g., allow, deny, etc.), and the placement of the policy in a node can be used to link from the source node to the target node and create the access relationship (e.g., allow, deny, etc.). Similarly, role definitions can be analyzed to find the access type. The graph can therefore include various types of nodes, updated to reflect direct relationships. An iterative process can be performed to find transitive relationships between resources (e.g., resource access for a given entity/actors/resources). In one example, for each access relationship from a first node N1 to a second node N2, the process identifies all incoming access relationships of N1. Then, the access types targeting node N1 are analyzed and updated. Using the relationships identified to access N1, the relationships to N2 are updated, and a new set of access relationships to N2 through N1 is identified. The process continues in this manner to identify all such relationships, with the goal of creating relationships to all nodes that have sensitive data. In one example, block508identifies “access types” which include normalized forms of access permissions. For example, an access type “can read” can be defined to include a plurality of different read objects within AWS (e.g., defined in terms of allowable APIs). Similarly, the AWS permissions “PutObject” and “PutObjectAcl” are transformed to a normalized access type “can write” within system122. At block516, sensitivity classification data is accessed for objects in the storage resources. The sensitivity classification data can include sensitive data profiles at block518. At block520, crawlers can be selected for structured and/or unstructured databases. Crawling the databases can include executing a snapshot of structured databases, creating a dump of structured databases, and scanning the dump for sensitivity classification, as represented at block524. At block526, a subset of the pairs of resources is qualified as vulnerable to a breach attack. The qualification can be based on the permissions data at block528, the access control data at block530, and/or risk criterion at block532. The risk criterion can include any of a wide variety of different types of criteria. For example, a risk criterion can indicate a variety of access to the resources at block534. One example includes a number of different roles with access to the resource, as represented at block536. Also, a risk criterion can indicate a width of configured access to the resources, at block538. For example, the width of configured access can include a number of workloads with access to the resources (block540) and/or a type of workload with access to the resources (block542). A risk criterion can also indicate a number of users with access to the resources at block544, a volume of sensitive data in the resources at block546, and/or types of categories of sensitive data at block548. Of course, other types of risk criteria can be utilized as well.
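Merely by way of example, the normalization of raw permissions into access types such as “can read” or “can write,” together with the iterative propagation of transitive access relationships described above, might be sketched as follows; the mapping table and node names are hypothetical, and the loop mirrors the N1/N2 propagation only in outline.

from collections import defaultdict

# Normalization of raw cloud permissions into access types (illustrative subset).
NORMALIZED = {
    "s3:GetObject": "can read",
    "s3:PutObject": "can write",
    "s3:PutObjectAcl": "can write",
    "sts:AssumeRole": "can assume",
}

# Direct relationships (source -> set of (target, access type)), e.g., as decoded above.
direct = defaultdict(set)
direct["user/dev-1"].add(("role/app-role", NORMALIZED["sts:AssumeRole"]))
direct["role/app-role"].add(("s3/customer-data", NORMALIZED["s3:GetObject"]))

def propagate(direct):
    """Repeatedly extend relationships: whatever a target can reach, its sources
    can also reach, until no new relationships appear (a transitive closure)."""
    reach = {src: set(edges) for src, edges in direct.items()}
    changed = True
    while changed:
        changed = False
        for src, edges in list(reach.items()):
            for target, _ in list(edges):
                for extra in reach.get(target, set()):
                    if extra not in edges:
                        edges.add(extra)
                        changed = True
    return reach

paths = propagate(direct)
print(sorted(paths["user/dev-1"]))
# [('role/app-role', 'can assume'), ('s3/customer-data', 'can read')]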
In one example, the risk criterion can be defined based on user input.FIG.11illustrates one example of a user interface display550that facilitates user definition of risk criterion. Display550includes a set of user input mechanisms that allows a user to define likelihood weights, represented at numeral552, and impact weights, represented at554. For sake of illustration, a first user input mechanism556allows a user to set a weight that influences a likelihood score for variations in the variety of access to the resources (e.g., block534). Similarly, controls558,560, and562allow a user to set weights that influence likelihood scores for a width of configured access, a number of principals or users with access, and the type of workloads with access, respectively. Similarly, controls563,564,566,568, and570allow a user to set weights on impact scores for risk criteria associated with a volume of sensitive data, a type of sensitive data, and categories of sensitive data (i.e., legal data, medical data, financial data), respectively. Referring again toFIG.10, at block572, a first subset of the storage resources that satisfy a subject vulnerability signature is identified. A subject vulnerability signature illustratively includes a risk signature indicative of a risk of vulnerability or breach. FIG.12illustrates an example user interface display574that can be accessed from display304illustrated inFIG.6, and displays a set of risk signatures. The risk signatures can be predefined and/or user-defined. For example, display574can include user input mechanisms that allow a user to add, delete, or modify a set of risk signatures576. As noted above, each risk signature defines a set of criteria that the resources and data in cloud service108-1can be queried upon to identify indications of vulnerabilities in the cloud service. The risk signatures inFIG.12include a name field578, a unique risk signature ID field580, and a description identified in a description field582. A result header field584identifies types of data that will be provided in the results when the risk signature is matched. A resource field586identifies the type of resource, and a tags field588identifies tags that label or otherwise identify the risk signature. Additionally, a likelihood factor field590indicates a likelihood factor that is assigned to the risk signature and an impact factor field592indicates an impact factor assigned to the risk signature. The likelihood factor indicates a likelihood assigned to occurrence of the risk signature, and the impact factor indicates an impact on the cloud service assigned to occurrence of the risk signature. For sake of illustration, a likelihood factor of ten (out of a scale of ten) indicates that the vulnerability is likely to occur if the risk signature is identified in the cloud posture data, whereas a likelihood factor of one indicates a low likelihood. Similarly, an impact factor of ten (out of a scale of ten) indicates that the vulnerability is considered to have a high impact, whereas an impact factor of one indicates the vulnerability is considered to have a low impact on the cloud service. A risk signature can be defined based upon any of a wide variety of criteria. For example, a risk signature can identify one or more configurations or settings of compute resources130.
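For sake of illustration, but not by limitation, the following Python sketch shows one way risk signatures could be expressed as predicates over resource configuration metadata and evaluated against an inventory of resources. The signature identifiers, configuration field names, and sample inventory are assumptions for illustration only and do not reproduce the actual signature definitions described herein.

    # A few illustrative risk signatures, expressed as predicates over a
    # resource's configuration metadata; the field names are assumptions.
    RISK_SIGNATURES = [
        {"id": "SIG-001", "name": "Publicly accessible storage",
         "applies_to": "storage",
         "match": lambda r: r.get("public_access") is True},
        {"id": "SIG-002", "name": "Encryption disabled",
         "applies_to": "storage",
         "match": lambda r: r.get("encryption_enabled") is False},
        {"id": "SIG-003", "name": "Compute with public IP and no IAM role",
         "applies_to": "compute",
         "match": lambda r: bool(r.get("public_ip")) and not r.get("iam_role")},
    ]

    def evaluate(resources):
        """Return (resource_id, signature_id) pairs for every matched signature."""
        hits = []
        for res in resources:
            for sig in RISK_SIGNATURES:
                if res["type"] == sig["applies_to"] and sig["match"](res):
                    hits.append((res["id"], sig["id"]))
        return hits

    if __name__ == "__main__":
        inventory = [
            {"id": "s3://reports", "type": "storage",
             "public_access": True, "encryption_enabled": False},
            {"id": "i-0abc", "type": "compute", "public_ip": "203.0.113.7",
             "iam_role": None},
        ]
        for resource_id, signature_id in evaluate(inventory):
            print(resource_id, "matches", signature_id)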
Examples include, but are not limited to, a configuration that indicates whether the compute resource provides accessibility to a particular type of data, such as confidential data, medical data, financial data, personal data, or any other type of private and/or sensitive content. In another example, a risk signature indicates that a compute resource is publicly accessible, includes a public Internet protocol (IP) address, or has IP forwarding enabled. In another example, a risk signature indicates that a compute resource has monitoring disabled, has no IAM role assigned to the compute resource, has backup disabled, has data encryption disabled, and/or has a low or short backup retention policy. Also, a risk signature can identify password policies set for the compute resource. For instance, a risk signature can indicate a lack of minimum password policies, such as no minimum password length, no requirement of symbols, lowercase letters, uppercase letters, numbers, or password reuse policy. Also, a risk criterion can indicate a location of the compute resource, such as whether the compute resource is located outside of a particular region. Risk signatures can also indicate configurations and/or settings of storage resources132. For example, the configurations and settings can indicate authentication or permissions enforced by the storage resource, such as whether authentication is required for read, write, delete, synchronization, or any other operation. Also, the risk signature can indicate whether multi-factor authentication is disabled for the storage resource, as well as a breadth of permissions grants (e.g., whether all authenticated users are granted permissions within the storage resource). Also, a risk signature can indicate whether encryption is enabled by default, a password policy enforced by the storage resource, whether the storage resource is anonymously accessible, publicly accessible, has a key management service disabled, has logging disabled, life cycle management disabled, whether the storage resource is utilized for website hosting, has geo-restriction disabled, or has backup functionality disabled. Also, the risk signature can indicate a type of data stored by the storage resource, such as the examples discussed above. Referring again toFIG.10, the first subset of storage resources is identified at block572based on determining that the storage resources satisfy a risk signature of containing private and/or sensitive content, as represented at block594. In another example, the subject vulnerability signature is based on a prevalence of accessibility of a given role within a network exceeding a set threshold, as represented at block596. For instance, the given role can include principals (block598), workloads (block600), a cloud environment (block602), a company (block604), or other roles (block606). Also, the subject vulnerability signature can indicate that the storage resources are accessible by more than a threshold number of users, as represented at block608. Also, the subject vulnerability signature can indicate that the storage resources are accessible by a vulnerable compute resource that is publicly accessible, as represented at block610. This determination can be based on identifying that the compute resource is accessible through a public portal, at block612and/or is accessible by users outside a given company network at block614.
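For sake of illustration, but not by limitation, the following Python sketch shows one way the determination described at blocks610,612, and614 could be evaluated: a breadth-first traversal over the discovered access relationships identifies storage resources reachable from publicly accessible compute resources. The edge structure and node names are assumptions for illustration only.

    from collections import deque

    def reachable_storage(edges, public_compute, storage_nodes):
        """Breadth-first traversal over access edges {src: {dst, ...}} to find
        storage resources reachable from publicly accessible compute resources."""
        seen, frontier = set(public_compute), deque(public_compute)
        while frontier:
            node = frontier.popleft()
            for nxt in edges.get(node, ()):
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append(nxt)
        return seen & storage_nodes

    if __name__ == "__main__":
        edges = {
            "ec2-public": {"role:app"},
            "role:app": {"s3://customer-data"},
            "ec2-internal": {"s3://logs"},
        }
        print(reachable_storage(edges, {"ec2-public"},
                                {"s3://customer-data", "s3://logs"}))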
As represented at block616, the subject vulnerability signature can indicate that the storage resources are accessible by inactive users. For example, inactive users can include users who have not accessed the resources within a threshold time, at block618. At block620, a second subset of storage resources is identified that synchronizes data from the first subset. At block622, a particular compute resource is determined to have anomalous access to a given storage resource. The identification of anomalous access can be based on a comparison of a network communication path of the particular compute resource against paths of other compute resources. For example, the paths of other compute resources can be used to identify an expected communication path for the particular compute resource and/or expected permissions for the particular resource. Then, if a difference above a threshold is identified, the particular compute resource is identified as anomalous. At block624, a representation of the propagation of the breach attack along the network communication paths is generated. In one example, the representation includes a cloud attack surface map, as represented at block626. An example cloud attack surface map includes nodes representing the resources (block628) and edges representing the breach attack propagation (block630). The map graphically depicts the subset of storage resources (block632) and the subject vulnerability signature (block634). Also, the map can graphically depict the anomalous access to the particular compute resource (block636). For example, public accesses to the subset of storage resources can be graphically depicted at block638and storage resources that grant external access and/or resources that are initialized from outside a particular jurisdiction can be identified at blocks640and642, respectively. FIG.13illustrates one example of a user interface display650that graphically depicts vulnerability risks, in tabular form. In one example, display650renders the data discussed with respect to the cloud attack surface at block626ofFIG.10in a table. Display650includes a user input mechanism652to specify a time range for visualizing the risk, and includes a description654, a resource identifier656, and an account identifier658for the cloud service account. The display can also indicate the impact660and likelihood662of the vulnerability risk, as well as a signature identifier664that identifies the particular risk signature that was matched. Display650also includes a details control666that is actuatable to display details of the identified risk. One example of a details display pane668is illustrated inFIG.14. Display pane668shows a description of the risk at display element670and an indication672of the query utilized to match the risk signature. Referring again toFIG.10, at block676, a query is received for execution against the results of the metadata analysis. For example, a query can specify a subject vulnerability at block678and/or the query can request identification of resources with anomalous access at block680. At block682, the query is executed against the cloud attack surface map. For example, the cloud attack surface map can be filtered to identify results that match the query. The query results (e.g., the filtered map) are returned at block684. The filtered results can include identifying a subset of storage resources that match the query (block686) and/or resources having anomalous access at block688.
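For sake of illustration, but not by limitation, the following Python sketch illustrates one way anomalous access could be identified at block622by comparing a compute resource's access set against those of its peers. The majority-based expected access set and the Jaccard-distance threshold are simplifying assumptions made for illustration only, standing in for the path comparison described above.

    from collections import Counter

    def find_anomalous(access_map, threshold=0.5):
        """access_map maps a compute resource to the set of storage resources it
        can reach. A resource is flagged when its access set differs from the
        access held by a majority of its peers by more than the threshold."""
        # Expected access: storage targets reachable by more than half the peers.
        counts = Counter(t for targets in access_map.values() for t in targets)
        majority = len(access_map) / 2.0
        expected = {t for t, c in counts.items() if c > majority}

        anomalies = []
        for resource, targets in access_map.items():
            union = expected | targets
            if not union:
                continue
            distance = 1.0 - len(expected & targets) / len(union)
            if distance > threshold:
                anomalies.append(resource)
        return anomalies

    if __name__ == "__main__":
        access = {
            "ec2-a": {"s3://logs"},
            "ec2-b": {"s3://logs"},
            "ec2-c": {"s3://logs", "s3://payroll", "s3://customers"},
        }
        print(find_anomalous(access))  # ec2-c stands out from its peers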
The cloud attack surface graph is graphically filtered based on the results at block690. For example, the graph can be filtered based on applications running on the pairs of resources in the identified subset (block692). Breach likelihood scores and breach impact scores are determined for the resources at block694, and the scores can be depicted on the cloud attack surface map at block696. In one example, the scores are graphically categorized or stratified at block698into high, medium, or low risk. One example is discussed above with respect toFIG.8. FIG.15illustrates one example of a user interface display700configured to graphically depict breach likelihood and impact scores. Display700identifies data stores in storage resources132that are identified as meeting a subject vulnerability. Each entry shown in display700identifies a type702of the resource, an impact score704, a likelihood score706, a resource identifier708that identifies the resource, and a cloud service identifier710that identifies the particular cloud resource. Based on actuation of a risk item view generator mechanism712, display700shows details for the given resource in a details pane714, as shown inFIG.16. Display pane714can show users716that have access to the resource, roles718that have access to the resource, other resources720that have access to the resource, as well as external users722or external roles724. Display pane714also shows the access type726. FIG.17illustrates one example of a display pane730showing access details for a particular data store, along with a list of users who have access to that data store, and the access type for those users. Upon actuation of a roles actuator732, the display shows a list of roles that have access to the data store, as shown inFIG.18. Upon actuation of a resources actuator734, the display shows a list of resources that have access to the data store, as shown inFIG.19. FIGS.20-1,20-2,20-3, and20-4(collectively referred to asFIG.20) provide a flow diagram800illustrating one example of infrastructure analysis and query execution. At block802, permissions data and access control data for pairs of compute and storage resources is accessed. Policy data is accessed at block804. For example, the policy data can include identity-based policies (block806), resource-based policies (block808), permissions boundaries (block810), service control policies (SCP) (block812), session policies (block814) as well as other policies (block816). At block818, network communication paths are traced between the pairs of resources. Tracing the network communication path can be based on the permissions data at block820, the access control data at block822, the policy data at block824, and/or other data at block826. At block828, a cloud infrastructure map is constructed. An example of a cloud infrastructure map includes nodes that graphically represent pairs of compute and storage resources (block830), and edges that represent network communication paths between the resources (block832). At block834, the map graphically depicts metadata associated with the pairs of resources. For example, a graphical metadata depiction is expandable or collapsible via user selection, as represented at block836. The metadata can be grouped across metadata categories at block838, such as based on cloud-sourced metadata at block840, derived metadata at block842, locally annotated metadata at block844, or based on other metadata categories at block846. 
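For sake of illustration, but not by limitation, the following Python sketch shows one way the breach likelihood and impact scores determined at block694could be computed from user-configured weights (such as those discussed in connection withFIG.11) and then stratified into the high, medium, and low bands at block698. The weight names, the weighted-average combination, and the band boundaries are assumptions for illustration only.

    LIKELIHOOD_WEIGHTS = {"variety_of_access": 0.4, "width_of_access": 0.3,
                          "num_principals": 0.2, "workload_type": 0.1}
    IMPACT_WEIGHTS = {"sensitive_volume": 0.5, "sensitive_type": 0.3,
                      "sensitive_category": 0.2}

    def weighted_score(factors, weights):
        """Combine per-criterion factors (each on a 0..10 scale) into a single
        0..10 score using the configured weights."""
        total = sum(weights.values()) or 1.0
        return sum(weights[k] * factors.get(k, 0.0) for k in weights) / total

    def stratify(score):
        """Categorize a 0..10 score into the display bands (boundaries assumed)."""
        if score >= 7.0:
            return "high"
        if score >= 4.0:
            return "medium"
        return "low"

    if __name__ == "__main__":
        likelihood = weighted_score({"variety_of_access": 8, "width_of_access": 6,
                                     "num_principals": 9, "workload_type": 3},
                                    LIKELIHOOD_WEIGHTS)
        impact = weighted_score({"sensitive_volume": 10, "sensitive_type": 7,
                                 "sensitive_category": 9}, IMPACT_WEIGHTS)
        print("likelihood", round(likelihood, 1), stratify(likelihood))
        print("impact", round(impact, 1), stratify(impact))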
The cloud infrastructure map can also graphically depict anomalous configured access instances at block848. For example, block848can detect different levels of access among resources that connect to a common network component, as represented at block850. At block852, the map graphically depicts anomalous actual access instances in the cloud environment. For instance, the instances can be detected from access logs at block854. User annotated tags for the resources can be depicted in the map at block856as well. At block858, a query is received. The query can include a search term860, a content category (block862), a data privacy policy (block864), a temporal period (block866), and can include other items868as well. The query is executed at block870and query results are returned at block872. For example, the query results can identify a subset of the pairs of resources that contain the searched content at block874. At block876, resources are identified that do not have the search content, but have access to the subset. At block878, the query results can identify a subset of the pairs of resources that contain a searched content category. For example, at block880, resources are identified that do not have the content from the content category, but that have access to the subset of resources that have the searched content category. At block882, the query results can identify a subset of resources as complying with a given data privacy policy, specified in the query. Additionally, the results can identify resources that have access to the identified subset, at block884. At block886, a prior state of the resources is identified. Of course, the query results can identify other data888as well. At block890, a filter criterion is received. The filter criterion can be based on the metadata (block892), based on applications running on at least one pair of resources (block894), and/or based on one or more networks in the cloud environment (block896). The networks can include virtual private clouds (VPCs)898, regions900, Internet gateways902, network access control lists904, sub networks906, or other networks908. The filter criterion can also be based on tags at block910, such as users annotated tags represented at block912. The filter criterion can also be based on owners of the resources (block914), a creation date and/or time of the resources (block916), an inactive/stale criterion (block918), or other filter criterion (block920). At block922, the cloud infrastructure map is filtered based on the filter criterion and a filtered cloud infrastructure map is rendered at block924. FIGS.21-1and21-2(collectively referred to asFIG.21) provide a flow diagram1000illustrating one example of cloud data scanning in a cloud service. At block1002, administrative access to the cloud account is obtained. A scan schedule for scanning the cloud account is defined at block1004. FIGS.22and23illustrates example user interface displays for defining a scan schedule at block1004. As shown inFIG.22, a user interface display1006includes a list1008of currently defined scan schedules1010,1012,1014, etc. Each scan schedule is defined by a set of criteria1016for identifying which data stores are to be scanned, along with temporal criteria1018that define when the scan is to run. The scan schedule can be edited using an edit actuator1020. Further, the data scan can be executed manually, through a control1022. 
New schedules can be defined using a new schedule control1024.FIG.23illustrates user interface display1006when a given one of the data scans has been initiated and includes a scan status indicator1026. Referring again toFIG.21, block1028represents deployment and execution of a scanner locally on the cloud account. In one example, the data is accessed using APIs, and text is extracted using a text extraction method. Once the text is obtained, natural language processing (NLP) modules identify sensitive data in different languages. For instance, the scanner includes a file system crawler for each data store that is configured to identify pattern and context-based entities and/or machine learning-based entities, such as named entity recognition (names, company names, locations). Further, data loss prevention (DLP) engines can identify social security numbers, credit card numbers, etc. That is, the engine can identify which nodes contain particular types of sensitive data. A scanner is triggered and recognizers for sensitive entity detection are loaded, along with profiles for analysis. Text is extracted and entity detection is performed. In one example, the scanning is performed locally on the cloud service so that the organization's data does not leave the organization's cloud account, which can increase privacy and conformance with data policies. The scanners can be encapsulated as containers that are deployed in the cloud environment using elastic compute instances, such as EC2 resources, Lambda resources, etc. At block1030, objects in the cloud environment are queued and, at block1032, the objects are partitioned into a plurality of object chunks. At block1034, a number (M) of object chunks are identified. At block1036, depending upon the number M, a number (N) of instances of the server-less container-less scanners are initialized. In one example, the number M is significantly larger than the number N (block1038). For example, the number M can be ten times more (block1040) than the number N, one hundred times more (block1042) than the number N, etc. Of course, other numbers of object chunks and instances of the scanners can be utilized, as represented at block1044. The scanners are dynamically scalable (block1046), and each scanner can be portable and independently executable as a microservice (block1048). At block1050, a multiplicity of different data patterns to scan are obtained. For example, the data patterns can include sensitive string patterns (block1052), social security numbers (block1054), credit card numbers (block1056), or other data patterns (block1058). For each scanner, a corresponding object chunk is scanned exactly once to detect the multiplicity of different data patterns, as represented at block1060. Accordingly, each scanner can identify a number of different data patterns in a single pass through the object chunk. This single pass scanning increases efficiency by decreasing scanning latency. In one example, a multiplicity of object metadata can be detected at block1062. Sensitivity metadata is generated at block1064based on the detected data patterns. The system is controlled based on the sensitivity metadata at block1066. For example, the sensitivity metadata is sent to a metadata store in a control plane in the cloud environment at block1068. Alternatively, or in addition, the cloud attack surface graph is modified at block1070. For example, sensitivity annotation is applied to the graph at block1072.
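For sake of illustration, but not by limitation, the following Python sketch shows one way the queued objects could be partitioned into M object chunks and scanned by N concurrent scanner instances, with each chunk scanned exactly once against a combined set of data patterns and only sensitivity metadata being emitted. The patterns, chunk size, worker count, and sample objects are assumptions for illustration only.

    import re
    from concurrent.futures import ThreadPoolExecutor

    # Multiple data patterns combined into one expression so each chunk is
    # traversed in a single pass regardless of the number of patterns.
    SENSITIVE = re.compile(r"\b\d{3}-\d{2}-\d{4}\b|\b(?:\d[ -]?){13,16}\b")

    def partition(objects, chunk_size):
        """Split the queued objects into M chunks of at most chunk_size objects."""
        return [objects[i:i + chunk_size]
                for i in range(0, len(objects), chunk_size)]

    def scan_chunk(chunk):
        """Scan one chunk exactly once and emit sensitivity metadata only, so
        object contents never leave the (simulated) cloud account."""
        return [{"object": name, "matches": len(SENSITIVE.findall(body))}
                for name, body in chunk if SENSITIVE.search(body)]

    if __name__ == "__main__":
        objects = [("obj-%d" % i, "ssn 123-45-6789" if i % 7 == 0 else "clean")
                   for i in range(100)]
        chunks = partition(objects, chunk_size=5)        # M = 20 chunks
        with ThreadPoolExecutor(max_workers=4) as pool:  # N = 4 scanner instances
            metadata = [hit for result in pool.map(scan_chunk, chunks)
                        for hit in result]
        print(len(metadata), "objects flagged with sensitivity metadata")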
FIGS.24-1and24-2(collectively referred to asFIG.24) provide a flow diagram1100illustrating one example of depicting access links along communication paths between roles and resources. At block1102, an indication of access sub-networks (e.g., territories, regions, etc.) in a cloud environment between a plurality of resources and a plurality of users is obtained. For example, the indication can be obtained from memory at block1104. In one example, the access sub-networks are identified as subnetworks that make a subject resource accessible to one or more users, as represented at block1106. At block1108, user-to-role mappings for roles assigned to the plurality of users is obtained. For example, access management and control system128is used to identify roles defined at a particular resolution or level of the cloud environment, as represented at block1110. The access sub-networks are traversed at block1112and a number (U) of user-to-resource mappings between the users and the resources are built based on traversing the sub-networks, as represented at block1114. At block1116, the number U of user-to-resource mappings is evaluated against the user-to-role mappings to accumulate a number (R) of role-to-resource mapping. In one example, the number U is significantly larger than the number R, as represented at block1118. For example, the number U can be ten times more (block1120) or one hundred times more (block1122) than the number R. Of course, other numbers of mappings can be utilized as well, as represented at block1124. In one example, at block1126a role-to-resource mapping maps a particular role to a particular subset of resources. Also, new resources that are assigned to the particular role are automatically mapped to the particular subset, as represented at block1128. At block1130, access communication paths between the roles and the plurality of resources are traced based on the number R of role-to-resource mapping. At block1132, a compact access network graph is constructed that graphically depicts access links along the traced access communication path. For example, the graph can include nodes that represent roles and resources (block1134), and edges that represent access links along the access communication paths (block1136). At block1138, the compact access network graph can be graphically updated to reflect the new resource assigned at block1128. At block1140, a history of resource configuration changes and/or anomalous state (e.g., risks) detected for various resources is tracked. For example, this tracking can be manually triggered at block1142, or programmatically triggered at block1154. Further, the history can be tracked over a timeline, such as to indicate when a particular risk opened and/or closed, as represented at block1146. At block1148, a difference between a non-anomalous state and a successive anomalous state is tracked. The tracking can also include tracking a difference between successive anomalous states at block1150and/or a difference between successive versions of the resources at block1152. For example, the versions can be determined based on respective resource configurations of the successive versions, at block1144. The tracked difference can be compared to a threshold difference at block1156, to determine whether to track the instance of the resource configuration and/or state change. At block1158, the tracked history can be graphically rendered, such as on a timeline at block1160. The tracked difference can be graphically rendered at block1162. 
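For sake of illustration, but not by limitation, the following Python sketch shows one way the number U of user-to-resource mappings could be collapsed into the much smaller number R of role-to-resource mappings by evaluating them against the user-to-role mappings, as described at blocks1114and1116. The sample mappings are assumptions for illustration only.

    def compact(user_to_resources, user_to_roles):
        """Collapse U user-to-resource mappings into R role-to-resource mappings
        by replacing each user with the roles assigned to that user."""
        role_to_resources = {}
        for user, resources in user_to_resources.items():
            for role in user_to_roles.get(user, []):
                role_to_resources.setdefault(role, set()).update(resources)
        return role_to_resources

    if __name__ == "__main__":
        user_to_resources = {          # U mappings built by traversing sub-networks
            "alice": {"s3://reports", "s3://logs"},
            "bob": {"s3://reports", "s3://logs"},
            "carol": {"s3://payroll"},
        }
        user_to_roles = {"alice": ["analyst"], "bob": ["analyst"],
                         "carol": ["finance"]}
        for role, resources in compact(user_to_resources, user_to_roles).items():
            print(role, "->", sorted(resources))   # R mappings, with R << U

A new resource assigned to a role is then covered by the existing role-to-resource mapping without adding a per-user entry, consistent with the automatic mapping described at block1128.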
Further, the tracked history can be provided with a playback feature1164or a play forward feature1166, which allow a user to navigate through the tracked history. FIG.25illustrates a user interface display1200that includes a visualization of access communication paths. The visualization inFIG.25can be rendered as a cloud infrastructure graph (e.g., map) that shows relationships between compute and storage resources and/or mappings between users, roles, and resources, based on the permissions data and the access control data. Further, the visualization can be augmented using sensitivity classification data to represent propagation of breach attack along communication paths. For example, the visualization inFIG.25can be configured to render the subset(s) of resources identified inFIG.10. That is, display1200can include the cloud attack surface map at block626. As shown inFIG.25, nodes1202represent compute resources and nodes1204represent storage resources. Illustratively, the storage resources include data stores or buckets within a particular cloud service. Nodes1206represent roles and/or users. The links (e.g., access paths) or edges1208between nodes1202and1206represent the compute resources that can access the particular roles represented by nodes1206. The edges or links1210represent the storage resources that can be accessed by the particular roles or users represented by nodes1206. Based on these relationships between compute and storage resources, display elements can be rendered along, or otherwise visually associated with, the edges1208and/or1210, to identify and graphically depict the propagation of breach attack. For instance, vulnerability display elements can be rendered in association with edges1208and/or1210to identify that a subject vulnerability signature (e.g., one or more risk signatures shown inFIG.12) has been identified in the data, based on querying the permissions and access control data using the subject vulnerability signature. For example, display element1209represents a risk signature between nodes1203and1212and display element1211represents (such as by including a description, icon, label, etc.) a risk signature between nodes1212and1222. Each display element1209,1211can represent (such as by including a description, icon, label, etc.) corresponding likelihood and impact scores, and can be actuatable to render details of the subject vulnerability, such as in a display pane on display1200. The details can include which risk signature has been matched, which sensitive data is at risk, etc. The graph can be interactive at a plurality of different resolutions or levels. For example, a user can interact with the graph to zoom into a specific subset, e.g., based on cloud vendor concepts of proximity (regions, virtual private clouds (VPCs), subnets, etc.). Node1212includes an expand actuator1214that is actuatable to expand the display to show additional details of the roles, role groups, and/or users represented by node1212. When zooming into one region, such as when using the actuators discussed below, other regions can be zoomed out. This can be particularly advantageous when handling large diagrams. Further, the graph includes one or more filter mechanisms configured to filter the graph data by logical properties, such as names, values of various fields, IP addresses, etc.
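For sake of illustration, but not by limitation, the following Python sketch shows one way such a filter mechanism could behave, including the option described further below of keeping the direct neighbors of matching resources visible so that entities with access to the matched resources remain on the diagram. The node names and matching rule are assumptions for illustration only.

    def filter_graph(nodes, edges, term, include_neighbors=False):
        """Keep nodes whose identifier contains the search term; when
        include_neighbors is set, also keep nodes one edge away so entities
        with access to the matching resources stay visible."""
        matched = {n for n in nodes if term.lower() in n.lower()}
        keep = set(matched)
        if include_neighbors:
            for src, dst in edges:
                if src in matched or dst in matched:
                    keep.update((src, dst))
        kept_edges = [(s, d) for s, d in edges if s in keep and d in keep]
        return keep, kept_edges

    if __name__ == "__main__":
        nodes = ["s3://pii-exports", "s3://logs", "role:analyst", "ec2-a"]
        edges = [("role:analyst", "s3://pii-exports"), ("ec2-a", "s3://logs")]
        print(filter_graph(nodes, edges, "pii", include_neighbors=True))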
For example, a free form search box1215is configured to receive search terms and filter out all resources (e.g., by removing display of those resources) except those resources matching the search terms. In one example, the search terms include a subject vulnerability signature (e.g., containing private and sensitive content, public accessibility, accessibility by a particular user and/or role, particular applications running on the resources, access types, etc.). An input mechanism1217is configured to receive a temporal filter or search criterion. For example, a filter criterion is entered by a user to represent at least one of a creation time or date of compute resources and storage resources. Further, a query can be entered specifying at least one temporal period, wherein the cloud infrastructure map is updated to graphically return at least one prior state (e.g., a permissions state, an access control state, and/or a sensitivity data classification state) of compute resources and storage resources based on the temporal period. A checkbox (not shown inFIG.25, and which can be global to the diagram) provides the ability to toggle whether or not direct neighbors of the matching resources are also displayed, even if those neighbors themselves don't match the search terms. This allows users to search for specific resources and immediately visualize all entities that have access to the searched resources. To illustrate, assume a search for personally identifiable information (PII) matches a set of S3 buckets. In this case, the graph renders resources that have access to that PII. Further, the graph can show associated data and metadata (e.g., properties extracted from cloud APIs, properties derived such as presence of sensitive data, access paths, etc.). This data and metadata can be shown on a panel to the left or right of the diagram (such as shown inFIGS.27-30). Further, a user can actuate user interface controls to collapse/expand this panel. In one example, the panel remains collapsed or expanded until changed, even across different searches and login sessions. Additionally, the display can group properties in related categories (e.g., summary, all metadata retrieved from the cloud, all metadata derived, local annotations, etc.), and the diagram can be filtered (such as by using the free form search bar mentioned above) by metadata such as tags, applications running on them, identified owners, time since created, etc. The state of the resources can be shown as of a user defined date or time. A calendar component can allow users to select a particular date to visualize historical state data as of that particular date. In one example, a user interface control allows a user to define critical data (e.g., crown jewel data), such as through a filter mechanism (e.g., search box1215). The display then visually highlights that critical data along with all entities with access (defined by a filter such as CAN_READ/CAN_WRITE/CAN_SYNC, etc.) to the critical data. Anomalous configured access (different levels of access among similar resources) can be visually highlighted in the display. For example, if there are four EC2 instances in a worker group connected to the same load balancer, all of the EC2 instances are expected to have the same type of access. However, if one of the EC2 instances has different access, the EC2 instance is identified as anomalous and visually highlighted to the user. Similarly, the display can visually highlight anomalous actual access.
That is, instead of inspecting configured access, the system looks at actual access determined using, for example, access logs (e.g., cloudtrail logs, S3 access logs, etc.). Further, the display can be configured to allow the user to add tags to one or more selected resources in the diagram. For instance, when users visualize cloud assets in context, the user can add additional tags that let the user write policies, perform filtering etc. that further aid in visualization and understanding. The user interface allows the user to choose one or more resources and add tags (keys and values in AWS Tags, for example) to selected resources. FIG.26shows display1200after actuation of actuator1214. As shown inFIG.26, node1212has been expanded to show particular roles or role groups1216and the relationships between those roles and role groups (as represented by links1218), to the nodes1206. Role groups1216is represented by an actuatable display element, that is actuatable to display additional details associated with the corresponding role. For example, display element1220is actuatable to display details of the corresponding role, as shown inFIG.27. Referring again toFIG.25, the nodes1204representing the storage resources are also actuatable to show additional details. For example, node1222includes an actuator1224that is actuatable to display the view shown inFIG.28.FIG.28includes a representation1226of the constituents of the storage resource represented by node1222. One or more of the elements are further actuatable to show additional details of the constituent. For example, node display element1228includes an actuator1230to show, in the example display ofFIG.29, details of the virtual private cloud represented by node display element1228. Referring again toFIG.25, node1232is actuatable to show details of the corresponding compute resource. An example display for compute resource details is shown inFIG.30. FIG.31shows one example of a user interface display1250that visualizes resources identified based on the data scanning performed on cloud service108-1. Display1250includes a list of display elements1252, each representing a particular resource. Each entry includes an account ID1254, a resource type1256, a name1258, and a region1260. A details actuator1262can be actuated to show additional details of the corresponding resource. For example,FIG.32shows a display1264, that is displayed in response to actuation of actuator1262. Referring again toFIG.31, display1250includes navigation actuators1266, that are actuatable to navigate through different portions of the list.FIG.33illustrates a second page displayed in response to actuation of control1268. FIG.34shows an example of a user interface display1270displaying details of a particular resource, and includes a details actuator1272. Actuation of actuator1272displays the interface shown inFIG.35. As shown inFIG.35, the resource (illustratively “config-service-main”) is an AWS role having an access type identified at display element1274. The access type typically depends on the resource. In the present case, a principle1276identifies the entities that have the given role, and the access type identifies that the identified entities can assume the given role relative to the resource. This definition connects the roles to the resources. FIG.36illustrates a flow diagram1300for streamlined analysis of access sub-networks, such as regions or territories, in a cloud environment. 
At block1302, an indication of access sub-networks between a plurality of storage resources and compute resources is obtained. For example, the indication can be obtained from memory at block1304. In one example, each access sub-network makes a subject storage resource accessible to one or more compute resources, as represented at block1306. At block1308, compute resources-to-role mappings for roles assigned to the plurality of compute resources are obtained. Each mapping, in one example, maps a particular resource to a particular role defined in the cloud environment. The roles can be defined at a resolution or level of the cloud environment, as represented at block1310. At block1312, the access sub-networks are traversed to build, at block1314, a number (U) of compute resources-to-storage resource mappings between the compute resources and storage resources. Each mapping, in one example, maps a particular compute resource to a particular storage resource. At block1316, the number U of compute resources-to-storage resource mappings is evaluated against the compute resource-to-role mappings to accumulate a number (R) of role-to-storage resource mappings between the roles and the plurality of storage resources. Each mapping, in the number R, maps a particular role to a particular storage resource and indicates which storage resource that particular role can access. In one example, the number U is significantly larger than the number R, as represented at block1318. For example, the number U can be greater than approximately ten times the number R, as represented at block1320. In another example, the number U is greater than approximately one hundred times the number R, as represented at block1322. These, of course, are for sake of example only. At block1324, the access communication paths are traced between the roles and the plurality of storage resources based on the number R of the role-to-storage resource mappings. At block1326, a compact access network graph is constructed that graphically depicts access links along the traced access communication paths. Examples of a network graph are discussed above. Briefly, in one example, nodes in the graph represent roles and storage resources (block1328), and edges represent access links along the access communication paths (block1330). It can thus be seen that the present disclosure describes technology for security posture analysis of a cloud account. In some described examples, the technology can discover sensitive data among the cloud storage resources, as well as access patterns to the sensitive data, using local scanners that reduce or eliminate the need to send the cloud data outside the cloud environment. This improves data security. Further, the technology facilitates the discovery of security vulnerabilities to understand the data security posture, to detect and remediate the security vulnerabilities, and to prevent future breaches to sensitive data. The system provides real-time visibility and control on the cloud data infrastructure by discovering resources, sensitive data, and access paths, and tracking resource configuration, deep context, and trust relationships in real-time as a graph or other visualization. One or more implementations of the technology disclosed or elements thereof can be implemented in the form of a computer product, including a non-transitory computer readable storage medium with computer usable program code for performing the method steps indicated.
Furthermore, one or more implementations and clauses of the technology disclosed or elements thereof can be implemented in the form of an apparatus including a memory and at least one processor that is coupled to the memory and operative to perform exemplary method steps. Yet further, in another aspect, one or more implementations and clauses of the technology disclosed or elements thereof can be implemented in the form of means for carrying out one or more of the method steps described herein; the means can include (i) hardware module(s), (ii) software module(s) executing on one or more hardware processors, or (iii) a combination of hardware and software modules; any of (i)-(iii) implement the specific techniques set forth herein, and the software modules are stored in a computer readable storage medium (or multiple such media). Examples discussed herein include processor(s) and/or server(s). For sake of illustration, but not by limitation, the processors and/or servers include computer processors with associated memory and timing circuitry, and are functional parts of the corresponding systems or devices, and facilitate the functionality of the other components or items in those systems. Also, user interface displays have been discussed. Examples of user interface displays can take a wide variety of forms with different user actuatable input mechanisms. For instance, a user input mechanism can include icons, links, menus, text boxes, check boxes, etc., and can be actuated in a wide variety of different ways. Examples of input devices for actuating the input mechanisms include, but are not limited to, hardware devices (e.g., point and click devices, hardware buttons, switches, a joystick or keyboard, thumb switches or thumb pads, etc.) and virtual devices (e.g., virtual keyboards or other virtual actuators). For instance, a user actuatable input mechanism can be actuated using a touch gesture on a touch sensitive screen. In another example, a user actuatable input mechanism can be actuated using a speech command. The present figures show a number of blocks with corresponding functionality described herein. It is noted that fewer blocks can be used, such that functionality is performed by fewer components. Also, more blocks can be used with the functionality distributed among more components. Further, the data stores discussed herein can be broken into multiple data stores. All of the data stores can be local to the systems accessing the data stores, all of the data stores can be remote, or some data stores can be local while others can be remote. The above discussion has described a variety of different systems, components, logic, and interactions. One or more of these systems, components, logic and/or interactions can be implemented by hardware, such as processors, memory, or other processing components. Some particular examples include, but are not limited to, artificial intelligence components, such as neural networks, that perform the functions associated with those systems, components, logic, and/or interactions. In addition, the systems, components, logic and/or interactions can be implemented by software that is loaded into a memory and is executed by a processor, server, or other computing component, as described below. The systems, components, logic and/or interactions can also be implemented by different combinations of hardware, software, firmware, etc., some examples of which are described below. 
These are some examples of different structures that can be used to implement any or all of the systems, components, logic, and/or interactions described above. The elements of the described figures, or portions of the elements, can be disposed on a wide variety of different devices. Some of those devices include servers, desktop computers, laptop computers, tablet computers, or other mobile devices, such as palm top computers, cell phones, smart phones, multimedia players, personal digital assistants, etc. FIG.37is a simplified block diagram of one example of a client device1400, such as a handheld or mobile device, in which the present system (or parts of the present system) can be deployed.FIG.38illustrates an example of a handheld or mobile device. One or more communication links1402allows device1400to communicate with other computing devices, and can provide a channel for receiving information automatically, such as by scanning. An example includes communication protocols, such as wireless services used to provide cellular access to a network, as well as protocols that provide local wireless connections to networks. Applications or other data can be received on an external (e.g., removable) storage device or memory that is connected to an interface1404. Interface1404and communication links1402communicate with one or more processors1406(which can include processors or servers described with respect to the figures) along a communication bus (not shown inFIG.14), that can also be connected to memory1408and input/output (I/O) components1410, as well as clock1412and a location system1414. Components1410facilitate input and output operations for device1400, and can include input components such as microphones, touch screens, buttons, touch sensors, optical sensors, proximity sensors, orientation sensors, accelerometers. Components1410can include output components such as a display device, a speaker, and or a printer port. Clock1412includes, in one example, a real time clock component that outputs a time and date, and can provide timing functions for processor1406. Location system1414outputs a current geographic location of device1400and can includes a global positioning system (GPS) receiver, a LORAN system, a dead reckoning system, a cellular triangulation system, or other positioning system. Memory1408stores an operating system1416, network applications and corresponding configuration settings1418, communication configuration settings1420, communication drivers1422, and can include other items1424. Examples of memory1408include types of tangible volatile and non-volatile computer-readable memory devices. Memory1408can also include computer storage media that stores computer readable instructions that, when executed by processor1406, cause the processor to perform computer-implemented steps or functions according to the instructions. Processor1406can be activated by other components to facilitate functionality of those components as well. FIG.38illustrates one example of a tablet computer1450having a display screen1452, such as a touch screen or a stylus or pen-enabled interface. Screen1452can also provide a virtual keyboard and/or can be attached to a keyboard or other user input device through a mechanism, such as a wired or wireless link. Alternatively, or in addition, computer1450can receive voice inputs. FIG.39shows an example computer system5000that can be used to implement the technology disclosed. 
Computer system5000includes at least one central processing unit (CPU)5072that communicates with a number of peripheral devices via bus subsystem5055. These peripheral devices can include a storage subsystem5010including, for example, memory devices and a file storage subsystem5036, user interface input devices5038, user interface output devices5076, and a network interface subsystem5074. The input and output devices allow user interaction with computer system5000. Network interface subsystem5074provides an interface to outside networks, including an interface to corresponding interface devices in other computer systems. In one implementation, cloud security posture analysis system5018is communicably linked to the storage subsystem5010and the user interface input devices5038. User interface input devices5038can include a keyboard; pointing devices such as a mouse, trackball, touchpad, or graphics tablet; a scanner; a touch screen incorporated into the display; audio input devices such as voice recognition systems and microphones; and other types of input devices. In general, use of the term “input device” is intended to include all possible types of devices and ways to input information into computer system5000. User interface output devices5076can include a display subsystem, a printer, a fax machine, or non-visual displays such as audio output devices. The display subsystem can include an LED display, a cathode ray tube (CRT), a flat-panel device such as a liquid crystal display (LCD), a projection device, or some other mechanism for creating a visible image. The display subsystem can also provide a non-visual display such as audio output devices. In general, use of the term “output device” is intended to include all possible types of devices and ways to output information from computer system5000to the user or to another machine or computer system. Storage subsystem5010stores programming and data constructs that provide the functionality of some or all of the modules and methods described herein. These software modules are generally executed by processors5078. Processors5078can be graphics processing units (GPUs), field-programmable gate arrays (FPGAs), application-specific integrated circuits (ASICs), and/or coarse-grained reconfigurable architectures (CGRAs). Processors5078can be hosted by a deep learning cloud platform such as Google Cloud Platform™, Xilinx™, and Cirrascale™. Examples of processors5078include Google's Tensor Processing Unit (TPU)™, rackmount solutions like GX4 Rackmount Series™, GX50 Rackmount Series™, NVIDIA DGX-1™, Microsoft' Stratix V FPGA™, Graphcore's Intelligent Processor Unit (IPU)™, Qualcomm's Zeroth Platform™ with Snapdragon processors™, NVIDIA's Volta™, NVIDIA's DRIVE PX™, NVIDIA's JETSON TX1/TX2 MODULE™, Intel's Nirvana™, Movidius VPU™, Fujitsu DPI™, ARM's DynamicIQ™, IBM TrueNorth™, Lambda GPU Server with Testa V100s™, and others. Memory subsystem5022used in the storage subsystem5010can include a number of memories including a main random access memory (RAM)5032for storage of instructions and data during program execution and a read only memory (ROM)5034in which fixed instructions are stored. A file storage subsystem5036can provide persistent storage for program and data files, and can include a hard disk drive, a floppy disk drive along with associated removable media, a CD-ROM drive, an optical drive, or removable media cartridges. 
The modules implementing the functionality of certain implementations can be stored by file storage subsystem5036in the storage subsystem5010, or in other machines accessible by the processor. Bus subsystem5055provides a mechanism for letting the various components and subsystems of computer system5000communicate with each other as intended. Although bus subsystem5055is shown schematically as a single bus, alternative implementations of the bus subsystem can use multiple busses. Computer system5000itself can be of varying types including a personal computer, a portable computer, a workstation, a computer terminal, a network computer, a television, a mainframe, a server farm, a widely-distributed set of loosely networked computers, or any other data processing system or user device. Due to the ever-changing nature of computers and networks, the description of computer system5000depicted inFIG.50is intended only as a specific example for purposes of illustrating the preferred implementations of the present invention. Many other configurations of computer system5000are possible having more or less components than the computer system depicted inFIG.50. It should also be noted that the different examples described herein can be combined in different ways. That is, parts of one or more examples can be combined with parts of one or more other examples. All of this is contemplated herein. The technology disclosed can be practiced as a system, method, or article of manufacture. One or more features of an implementation can be combined with the base implementation. Implementations that are not mutually exclusive are taught to be combinable. One or more features of an implementation can be combined with other implementations. This disclosure periodically reminds the user of these options. Omission from some implementations of recitations that repeat these options should not be taken as limiting the combinations taught in the preceding sections—these recitations are hereby incorporated forward by reference into each of the following implementations. Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are disclosed as example forms of implementing the claims. | 85,751 |
11943241 | DETAILED DESCRIPTION The following discussion is presented to enable any person skilled in the art to make and use the technology disclosed, and is provided in the context of a particular application and its requirements. Various modifications to the disclosed implementations will be readily apparent to those skilled in the art, and the general principles defined herein may be applied to other implementations and applications without departing from the spirit and scope of the technology disclosed. Thus, the technology disclosed is not intended to be limited to the implementations shown, but is to be accorded the widest scope consistent with the principles and features disclosed herein. As noted above, cloud computing environments are used by organizations or other end-users to store a wide variety of different types of information in many contexts and for many uses. This data can often include sensitive and/or confidential information, and can be the target for malicious activity such as acts of fraud, privacy breaches, data theft, etc. These risks can arise from individuals that are both inside the organization as well as outside the organization. Cloud environments often include security infrastructure to enforce access control, data loss prevention, or other processes to secure data from potential vulnerabilities. However, even with such security infrastructures, it can be difficult for an organization to understand the data posture and breadth of access to the data stored in the cloud in the organization's cloud account. In other words, it can be difficult to identify which users have access to which data, and which data may be exposed to malicious or otherwise unauthorized users, both inside or outside the organization. The present system is directed to a cloud security posture analysis system configured to analyze and take action on the security posture of a cloud account. The system discovers sensitive data among the cloud storage resources and discovers access patterns to the sensitive data. The results are used to identify security vulnerabilities to understand the data security posture, detect and remediate the security vulnerabilities, and to prevent future breaches to sensitive data. The system provides real-time visibility and control on the control data infrastructure by discovering resources, sensitive data, and access paths, and tracking resource configuration, deep context and trust relationships in real-time as a graph or other visualization. It is noted that the technology disclosed herein can depict all graph embodiments in equivalent and analogous tabular formats or other visualization formats based on the data and logic disclosed herein. The system can further score breach paths based on sensitivity, volume, and/or permissions to show an attack surface and perform constant time scanning, by deploying scanners locally within the cloud account. Thus, the scanners execute in the cloud service itself, with metadata being returned indicative of the analysis. Thus, in one example, an organization's cloud data does not leave the organization's cloud account. Rather, the data can be scanned in place and metadata sent for analysis by the cloud security posture analysis system, which further enhances data security. FIG.1is a block diagram illustrating one example of a cloud architecture100in which a cloud environment102is accessed by one or more actors104through a network106, such as the Internet or other wide area network. 
Cloud environment102includes one or more cloud services108-1,108-2,108-N, collectively referred to as cloud services108. As noted above, cloud services108can include cloud storage services such as, but not limited to, AWS, GCP, Microsoft Azure, to name a few. Further, cloud services108-1,108-2,108-N can include the same type of cloud service, or can be different types of cloud services, and can be accessed by any of a number of different actors104. For example, as illustrated inFIG.1, actors104include users110, administrators112, developers114, organizations116, and/or applications118. Of course, other actors120can access cloud environment102as well. Architecture100includes a cloud security posture analysis system122configured to access cloud services108to identify and analyze cloud security posture data. Examples of system122are discussed in further detail below. Briefly, however, system122is configured to access cloud services108and identify connected resources, entities, actors, etc. within those cloud services, and to identify risks and violations against access to sensitive information. As shown inFIG.1, system122can reside within cloud environment102or outside cloud environment102, as represented by the dashed box inFIG.1. Of course, system122can be distributed across multiple items inside and/or outside cloud environment102. Users110, administrators112, developers114, or any other actors104can interact with cloud environment102through user interface displays123having user interface mechanisms124. For example, a user can interact with user interface displays123provided on a user device (such as a mobile device, a laptop computer, a desktop computer, etc.) either directly or over network106. Cloud environment102can include other items125as well. FIG.2is a block diagram illustrating one example of cloud service108-1. For the sake of the present discussion, but not by limitation, cloud service108-1will be discussed in the context of an account within AWS. Of course, other types of cloud services and providers are within the scope of the present disclosure. Cloud service108-1includes a plurality of resources126and an access management and control system128configured to manage and control access to resources126by actors104. Resources126include compute resources130, storage resources132, and can include other resources134. Compute resources130include a plurality of individual compute resources130-1,130-2,130-N, which can be the same and/or different types of compute resources. In the present example, compute resources130can include elastic compute resources, such as elastic compute cloud (AWS EC2) resources, AWS Lambda, etc. An elastic compute cloud (EC2) is a cloud computing service designed to provide virtual machines called instances, where users can select an instance with a desired amount of computing resources, such as the number and type of CPUs, memory and local storage. An EC2 resource allows users to create and run compute instances on AWS, and can use familiar operating systems like Linux, Windows, etc. Users can select an instance type based on the memory and computing requirements needed for the application or software to be run on the instance. AWS Lambda is an event-based service that delivers short-term compute capabilities and is designed to run code without the need to deploy, use or manage virtual machine instances.
An example implementation is used by an organization to address specific triggers or events, such as database updates, storage changes or custom events generated from other applications. Such a compute resource can include a server-less, event-driven compute service that allows a user to run code for many different types of applications or backend services without provisioning or managing servers. Storage resources132are accessible through compute resources130, and can include a plurality of storage resources132-1,132-2,132-N, which can be the same and/or different types of storage resources. A storage resource132can be defined based on object storage. For example, AWS Simple Storage Service (S3) provides highly-scalable cloud object storage with a simple web service interface. An S3 object can contain both data and metadata, and objects can reside in containers called buckets. Each bucket can be identified by a unique user-specified key or file name. A bucket can be a simple flat folder without a file system hierarchy. A bucket can be viewed as a container (e.g., folder) for objects (e.g., files) stored in the S3 storage resource. Compute resources130can access or otherwise interact with storage resources132through network communication paths based on permissions data136and/or access control data138. System128illustratively includes identity and access management (IAM) functionality that controls access to cloud service108-1using entities (e.g., IAM entities) provided by the cloud computing platform. Permissions data136includes policies140and can include other permissions data142. Access control data138includes identities144and can include other access control data146as well. Examples of identities144include, but are not limited to, users, groups, roles, etc. In AWS, for example, an IAM user is an entity that is created in the AWS service and represents a person or service who uses the IAM user to interact with the cloud service. An IAM user provides the ability to sign into the AWS management console for interactive tasks and to make programmatic requests to AWS services using the API, and includes a name, password, and access keys to be used with the API. Permissions can be granted to the IAM user to make the IAM user a member of a user group with attached permission policies. An IAM user group is a collection of IAM users with specified permissions. Use of IAM groups can make management of permissions easier for those users. An IAM role in AWS is an IAM identity that has specific permissions, and has some similarities to an IAM user in that the IAM role is an AWS identity with permission policies that determine what the identity can and cannot do in AWS. However, instead of being uniquely associated with one person, a role is intended to be assumable by anyone who needs it. Roles can be used to delegate access to users, applications, and/or services that don't normally have access to the AWS resources. Roles can be used by IAM users in a same AWS account and/or in different AWS accounts than the role. Also, roles can be used by computer resources130, such as EC2 resources. A service role is a role assumed by a service to perform actions in an account on behalf of a user. Service roles include permissions required for the service to access the resources needed by the service. Service roles can vary from service to service. A service role for an EC2 instance, for example, is a special type of service role that an application running on an EC2 instance can assume to perform actions. 
Policies140can include identity-based policies that are attached to IAM identities and grant permissions to the identity. Policies140can also include resource-based policies that are attached to resources126. Examples include S3 bucket policies and IAM role trust policies. An example trust policy includes a JSON policy document that defines the principals that are trusted to assume a role. In AWS, a policy is an object that, when associated with an identity or resource, defines permissions of the identity or resource. AWS evaluates these policies when an IAM principal (a user or a role) makes a request. Permissions in the policy determine whether the request is allowed or denied. Policies are often stored as JSON documents that are attached to the IAM identities (user, groups of users, role). A permissions boundary is a managed policy for an IAM identity that defines the maximum permissions that the identity-based policies can grant to an entity, but does not grant the permissions. Further, access control lists (ACLs) control which principals in other accounts can access the resource to which the ACL is attached. ACLs can be similar to resource-based policies. In some implementations of the technology disclosed, the terms "roles" and "policies" are used interchangeably. Cloud service108-1includes one or more deployed cloud scanners148, and can include other items150as well. Cloud scanners148run locally on the cloud-based services and the server systems, and can utilize elastic compute resources, such as, but not limited to, AWS Lambda resources. A cloud scanner148is configured to access and scan the cloud service108-1on which the scanner is deployed. Examples are discussed in further detail below. Briefly, however, a scanner accesses the data stored in storage resources132, permissions data136, and access control data138to identify particular data patterns (such as, but not limited to, sensitive string patterns) and to traverse or trace network communication paths between pairs of compute resources130and storage resources132. The results of the scanner can be utilized to identify subject vulnerabilities, such as resources vulnerable to a breach attack, and to construct a cloud attack surface graph or other data structure that depicts propagation of a breach attack along the network communication paths. Given a graph of connected resources, such as compute resources130, storage resources132, etc., entities (e.g., accounts, roles, policies, etc.), and actors (e.g., users, administrators, etc.), risks and violations against access to sensitive information are identified. A directional graph can be built with nodes that represent the resources and labels that are assigned to the nodes for search and retrieval purposes. For example, a label can mark a node as a database or S3 resource, and can mark actors as users, administrators, developers, etc. Relationships between the nodes are created using information available from the cloud infrastructure configuration. For example, using the configuration information, system122can determine that a resource belongs to a given account, create a relationship based on the policy attached to a resource, and/or identify the roles that can be taken up by a user. FIG.3is a block diagram illustrating one example of cloud security posture analysis system122. As noted above, system122can be deployed in cloud environment102and/or access cloud environment102through network106shown inFIG.1.
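For sake of illustration, but not by limitation, the following Python sketch shows how a policy can be represented as a JSON-style document and reduced to an allow/deny decision. The evaluation logic is deliberately simplified and is not the AWS policy evaluation engine; the statement values and the is_allowed helper are hypothetical examples rather than part of the disclosed system.

```python
# Minimal sketch (not AWS's evaluation engine): a policy as a JSON-style dict
# and a simplistic allow/deny check. Sample values and helper are assumptions.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {"Effect": "Allow", "Action": ["s3:GetObject"], "Resource": "arn:aws:s3:::customer-data/*"},
        {"Effect": "Deny", "Action": ["s3:DeleteObject"], "Resource": "arn:aws:s3:::customer-data/*"},
    ],
}

def is_allowed(policy: dict, action: str, resource_prefix: str) -> bool:
    """Explicit deny wins; otherwise any matching allow grants access."""
    decision = False
    for stmt in policy["Statement"]:
        if action in stmt["Action"] and stmt["Resource"].startswith(resource_prefix):
            if stmt["Effect"] == "Deny":
                return False
            decision = True
    return decision

print(is_allowed(policy, "s3:GetObject", "arn:aws:s3:::customer-data"))     # True
print(is_allowed(policy, "s3:DeleteObject", "arn:aws:s3:::customer-data"))  # False
```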
System122includes a cloud account onboarding component202, a cloud scanner deployment component204, a cloud data scanning and analysis system206, a visualization system208, and a data store210. System122can also include one or more processors or servers212, and can include other items214as well. Cloud account onboarding component202is configured to onboard cloud services108for analysis by system122. After onboarding, cloud scanner deployment component204is configured to deploy a cloud scanner (e.g., deployed cloud scanner(s)148shown inFIG.2) to the cloud service. In one example, the deployed scanners are on-demand agent-less scanners configured to perform agent-less scanning within the cloud service. One example of an agent-less scanner does not require agents to be installed on each specific device or machine. The scanners operate on the resources126and access management and control system128directly within the cloud service, and generate metadata that is returned to system122. Thus, in one example, the actual cloud service data is not required to leave the cloud service for analysis. Cloud data scanning and analysis system206includes a metadata ingestion component216configured to receive the metadata generated by the deployed cloud scanner(s)148. System206also includes a query engine218, a policy engine220, a breach vulnerability evaluation component222, one or more application programming interfaces (APIs)224, a cloud security issue identification component226, a cloud security issue prioritization component228, historical resource state analysis component230, and can include other items232as well. Query engine218is configured to execute queries against the received metadata and generated cloud security issue data. Policy engine220can execute security policies against the cloud data and breach vulnerability evaluation component222is configured to evaluate potential breach vulnerabilities in the cloud service. APIs224are exposed to users, such as administrators, to interact with system122to access the cloud security posture data. Component226is configured to identify cloud security issues and component228can prioritize the identified cloud security issues based on any of a number of criteria. Historical resource state analysis component230is configured to analyze a history of states of resources126. Component230includes a triggering component234configured to detect a trigger that to perform historical resource state analysis. Triggering component234is configured to identify an event that triggers component230to analyze the state of resources126. The event can be, for example, a user input to selectively trigger the analysis, or a detected event such as the occurrence of a time period, an update to a resource, etc. Accordingly, historical resource state can be tracked automatically and/or in response to user input. Component230includes a resource configuration change tracking component236configured to track changes in the configuration of resources126. Component230also includes an anomalous state detection component238, and can include other items240as well. Component238is configured to detect the occurrence of anomalous states in resources126. A resource anomaly can be identified where a given resource has an unexpected state, such as a difference from other similar resources identified in the cloud service. Visualization system208is configured to generate visualizations of the cloud security posture from system206. 
Illustratively, system208includes a user interface component242configured to generate a user interface for a user, such as an administrator. In the illustrated example, component242includes a web interface generator244configured to generate web interfaces that can be displayed in a web browser on a client device. Visualization system208also includes a resource graph generator component246, a cloud attack surface graph generator component248, and can include other items250as well. Resource graph generator component246is configured to generate a graph or other representation of the relationships between resources126. For example, component246can generate a cloud infrastructure map that graphically depicts pairs of compute resources and storage resources as nodes and network communication paths as edges between the nodes. Cloud attack surface graph generator component248is configured to generate a surface graph or other representation of vulnerabilities of resources to a breach attack. In one example, the representation of vulnerabilities can include a cloud attack surface map that graphically depicts propagation of a breach attack along network communication paths as edges between nodes that represent the corresponding resources. Data store210stores the metadata252obtained by metadata ingestion component216, sensitive data profiles254, and can store other items256as well. Examples of sensitive data profiles are discussed in further detail below. Briefly, however, sensitive data profiles254can identify data patterns that are categorized as sensitive or that meet some predefined pattern of interest. Pattern matching can be performed based on the target data profiles. For example, pattern matching can be performed to identify social security numbers, credit card numbers, other personal data, and medical information, to name a few. In one example, artificial intelligence (AI) is utilized to perform named entity recognition (e.g., natural language processing modules can identify sensitive data, in various languages, representing names, company names, locations, etc.). FIG.4is a block diagram illustrating one example of a deployed scanner148. Scanner148includes a resource identification component262, a permissions data identification component264, an access control data identification component266, a cloud infrastructure scanning component268, a cloud data scanning component270, a metadata output component272, and can include other items274as well. Resource identification component262is configured to identify the resources126within cloud service108-1(and/or other cloud services108) and to generate corresponding metadata that identifies these resources. Permissions data identification component264identifies the permissions data136and access control data identification component266identifies access control data138. Cloud infrastructure scanning component268scans the infrastructure of cloud service108to identify the relationships between resources130and132, and cloud data scanning component270scans the actual data stored in storage resources132. The generated metadata is output by component272to cloud security posture analysis system122. FIG.5is a flow diagram300showing an example operation of system122in on-boarding a cloud account and deploying one or more scanners. At block302, a request to on-board a cloud service to cloud security posture analysis system122is received. For example, an administrator can submit a request to on-board cloud service108-1.
FIG.6illustrates one example of a user interface display304provided for an administrator. Display304includes a display pane306including a number of display elements representing cloud accounts that have been on-boarded to system122. Display304includes a user interface control308that can be actuated to submit an on-boarding request at block302. Referring again toFIG.5, at block310, an on-boarding user interface display is generated. At block312, user input is received that defines a new cloud account to be on-boarded. The user input can define a cloud provider identification314, a cloud account identification316, a cloud account name318, access credentials to the cloud account320, and can include other input322defining the cloud account to be on-boarded. FIG.7illustrates one example of an on-boarding user interface display324that is displayed in response to user actuation of control308. Display324includes a user interface mechanism326configured to receive input to select or otherwise define a particular cloud account provider. In the illustrated example, mechanism326includes a plurality of selectable controls representing different cloud providers including, but not limited to, AWS, GCP, Azure. Display324includes a user input mechanism328configured to receive input defining a cloud account identifier, and an account nickname. User input mechanisms330allow the user to define other parameters for the on-boarding. A user input mechanism332is actuated to generate a cloud formation template, or other template, to be used in the on-boarding process based on the selected cloud account provider. Once the cloud account is connected to system122, display304inFIG.6can be updated to show the details of the cloud account as well as the scan status. InFIG.6, each entry includes a display name334, an account ID336, a data store count338, and a risk count340. Data store count338includes an indication of the number of data stores in the cloud account and the risk count340includes an indication of a number of identified security risks. A field342indicates the last scan status, such as whether the last scan has completed or is currently in progress. A field344indicates the time at which the last scan was completed. Referring again toFIG.5, at block346, the cloud account is authorized using roles. For example, administrator access (block348) can be defined for the cloud scanner using IAM roles. One or more cloud scanners are defined at block350and can include, but are not limited to, cloud infrastructure scanners352, cloud data scanners354, vulnerability scanners356, or other scanners358. At block360, the cloud scanners are deployed to run locally on the cloud service, such as illustrated inFIG.2. The cloud scanners discover resources at block362, scan data in the resources at block364, and can find vulnerabilities at block366. As discussed in further detail below, a vulnerability can be identified based on finding a predefined risk signature in the cloud service resources. The risk signatures can be queried upon, define expected behavior within the cloud service, and are used to locate anomalies based on this data. At block368, if more cloud services are to be on-boarded, operation returns to block310. At block370, the scan results from the deployed scanners are received. As noted above, the scan results include metadata (block372) generated by the scanners running locally on the cloud service. At block374, one or more actions are performed based on the scan results.
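As a purely illustrative sketch, the following shows one way a scanner could use an IAM role granted during on-boarding to obtain temporary credentials and enumerate data stores. The role ARN and session name are placeholders, boto3 is assumed to be available, and this is not presented as the actual on-boarding or scanner code.

```python
# Hypothetical sketch of authorizing scan access with an IAM role. The role
# ARN and session name are placeholders; boto3 (AWS SDK for Python) is assumed.
import boto3

sts = boto3.client("sts")
creds = sts.assume_role(
    RoleArn="arn:aws:iam::123456789012:role/posture-scanner",  # hypothetical role
    RoleSessionName="posture-scan",
)["Credentials"]

# Use the temporary credentials to enumerate data stores in the on-boarded account.
s3 = boto3.client(
    "s3",
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)
bucket_names = [b["Name"] for b in s3.list_buckets()["Buckets"]]
print(len(bucket_names), "data stores discovered")
```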
At block376, the action includes security issue detection. For example, a breach risk on a particular resource (such as a storage resource storing sensitive data) is identified. At block378, security issue prioritization can be performed to prioritize the detected security issues. Examples of security issue detection and prioritization are discussed in further detail below. Briefly, security issues can be detected by executing a query against the scan results using vulnerability or risk signatures. The risk signatures identify criteria such as accessibility of the resources, access and/or permissions between resources, and data types in accessed data stores. Further, each risk signature can be scored and prioritized based on impact. For example, a risk signature can include weights indicative of likelihood of occurrence of a breach and impact if the breach occurs. The action can further include providing user interfaces at block380that indicate the scan status (block382), a cloud infrastructure representation (such as a map or graph) (block384), and/or a cloud attack surface representation (map or graph) (block386). The cloud attack surface representation can visualize the identified vulnerabilities. Remedial actions can be taken at block388, such as creating a ticket (block390) for a developer or other user to address the security issues. Of course, other actions can be taken at block392. For instance, the system can make adjustments to cloud account settings/configurations to address/remedy the security issues. FIG.8illustrates one example of a user interface display400that can be displayed at block376. Display400provides a dashboard that gives a user an overview of on-boarded cloud service accounts. The dashboard identifies a number of users402, a number of assets404, a number of data stores406, and a number of accounts408. A data sensitivity pane410includes a display element412that identifies a number of the data stores that include sensitive data, a display element413that identifies a number of users with access to the sensitive data, a display element414that identifies a number of resources having sensitive data, and a display element416that identifies a number of risks on the data stores having sensitive data. Further, graphs or charts can be generated to identify those risks based on factors such as status (display element418) or impact (display element420). Display element420illustratively categorizes the risks based on impact as well as the likelihood of occurrence of those risks. Risk categorization is discussed in further detail below. Briefly, however, display element420stratifies one or more of breach likelihood scores or breach impact scores into categories representing different levels of severity, such as high, medium, and low severity levels. In one example, display element420is color coded based on the degree of impact of the risk (e.g., high impact is highlighted in red, medium impact is highlighted in yellow, and low impact is highlighted in green). FIG.9is a flow diagram450illustrating one example of cloud infrastructure scanning performed by cloud scanner148deployed in cloud service108-1. At block452, an agent-less scanner is executed on the cloud service. The scanner can perform constant time scanning at block454. An example constant time scanner runs an algorithm in which the running time does not depend on, or has little dependence on, the size of the input.
The scanner obtains a stream of bytes and looks for a multiplicity of patterns (one hundred patterns, two hundred patterns, three hundred patterns, etc.) in one pass through the stream of bytes, with the same or substantially similar performance. Further, the scanner can return real-time results at block456. Accordingly, cloud security posture analysis system122receives updates to the security posture data as changes are made to the cloud services. At block458, the scanner discovers the compute resources130and, at block460, the storage resources132. Sensitive data can be discovered at block462. The agent-less scanner does not require a proxy or agent running in the cloud service, and can utilize server-less containers and resources to scan the documents and detect sensitive data. The data can be accessed using APIs associated with the scanners. The sensitive data can be identified using pattern matching, such as by querying the data using predefined risk signatures. At block464, access paths between the resources are discovered based on permissions data136(block466), and/or access control data138(block468). A rule processing engine, such as using JSON metadata, can be utilized to analyze the roles and policies, and can build access relationships between the nodes representing the resources. The policies can be decoded to get access type (allow, deny, etc.) and the policy can be placed in a node to link from a source to target node and create the access relationship. At block470, metadata indicative of the scanning results is generated and outputted by metadata output component272. FIGS.10-1,10-2,10-3, and10-4(collectively referred to asFIG.10) provide a flow diagram500illustrating an example operation for streamlined analysis of security posture. For sake of illustration, but not by limitation,FIG.10will be discussed in the context of cloud security posture analysis system122illustrated inFIG.3. Security posture can be analyzed by system206using metadata252returned from the cloud service scanners. At block502, permissions data and access control data are accessed for pairs of compute and storage resources. The permissions and access control data can include identity-based permissions at block504, resource-based permissions at block506, or other permissions as well. At block508, network communication paths between the pairs of resources are traced based on the permissions and access control data. For example, the permissions and access control data can identify which paths have read access from a particular compute resource to a particular storage resource, as represented at block510. Similarly, paths with write access from compute to storage resources can be identified at block512, and paths with synchronization access between storage resources can be identified at block514. Of course, other types of paths can be identified as well. For sake of example, but not by limitation, a directional graph is constructed to capture all resources as nodes, with labels assigned to the nodes for search and retrieval. In the AWS example, labels can mark a node as a database or S3 resource. Similarly, labels can represent actors as normal users, admins, developers, etc. Then, known relationships are identified between the nodes, for example using the information available from the cloud infrastructure configuration (e.g., defining that a resource belongs to a given account).
Similarly, a relationship can be created between the policy attached to a resource, and/or the roles that can be taken up by a user. In addition to storing static information, a rule processing engine (e.g., using JavaScript Object Notation (JSON) metadata) can be used to analyze the roles and policies and build the "access" relationship between the nodes. The analysis can be used to decode the policy to get the access type (e.g., allow, deny, etc.), and the placement of the policy in a node can be used to link from the source node to target node and create the access relationship (e.g., allow, deny, etc.). Similarly, role definitions can be analyzed to find the access type. The graph can therefore include various types of nodes, updated to reflect direct relationships. An iterative process can be performed to find transitive relationships between resources (e.g., resource access for a given entity/actors/resources). In one example, for each access relationship from a first node N1 to a second node N2, the process identifies all incoming access relationships of N1. Then, the access types targeting node N1 are analyzed and updated. Using the relationships identified to access N1, the relationships to N2 are updated, and a new set of access relationships is identified to N2 through N1. The process continues until all such relationships have been identified, with the goal of creating relationships to all nodes that have sensitive data. In one example, block508identifies "access types" which include normalized forms of access permissions. For example, an access type "can read" can be defined to include a plurality of different read objects within AWS (e.g., defined in terms of allowable APIs). Similarly, the AWS permissions "PutObject" and "PutObjectAcl" are transformed to a normalized access type "can write" within system122. At block516, sensitivity classification data is accessed for objects in the storage resources. The sensitivity classification data can include sensitive data profiles at block518. At block520, crawlers can be selected for structured and/or unstructured databases. Crawling the databases can include executing a snapshot of structured databases, creating a dump of structured databases, and scanning the dump for sensitivity classification, as represented at block524. At block526, a subset of the pairs of resources is qualified as vulnerable to a breach attack. The qualification can be based on the permissions data at block528, the access control data at block530, and/or risk criterion at block532. The risk criterion can include any of a wide variety of different types of criteria. For example, a risk criterion can indicate a variety of access to the resources at block534. One example includes a number of different roles with access to the resource, as represented at block536. Also, a risk criterion can indicate a width of configured access to the resources, at block538. For example, the width of configured access can include a number of workloads with access to the resources (block540) and/or a type of workload with access to the resources (block542). A risk criterion can also indicate a number of users with access to the resources at block544, a volume of sensitive data in the resources at block546, and/or types of categories of sensitive data at block548. Of course, other types of risk criterion can be utilized as well.
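For sake of illustration, but not by limitation, the following sketch outlines two of the steps described above under assumed data structures: normalizing raw AWS permissions into access types, and iteratively propagating access relationships so that transitive access is captured. The mappings and node names are examples only.

```python
# Sketch of (1) normalizing raw permissions into access types and (2) the
# iterative transitive propagation described above. Data values are assumed.
NORMALIZED = {
    "s3:GetObject": "can read",
    "s3:PutObject": "can write",
    "s3:PutObjectAcl": "can write",
}

def normalize(permission: str) -> str:
    return NORMALIZED.get(permission, "other")

# access[(source, target)] = set of access types from source to target
access = {
    ("user/alice", "role/app-reader"): {"can assume"},
    ("role/app-reader", "s3-customer-data"): {"can read"},
}

def propagate(access):
    """Repeatedly add A->C relationships when A can reach B and B has access to C."""
    changed = True
    while changed:
        changed = False
        for (b, c), types_bc in list(access.items()):
            for (a, b2), _ in list(access.items()):
                if b2 != b or a == c:
                    continue
                merged = access.setdefault((a, c), set())
                if not types_bc <= merged:
                    merged |= types_bc
                    changed = True
    return access

propagate(access)
print(access[("user/alice", "s3-customer-data")])  # {'can read'}
```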
In one example, the risk criterion can be defined based on user input.FIG.11illustrates one example of a user interface display550that facilitates user definition of risk criterion. Display550includes a set of user input mechanisms that allows a user to define likelihood weights, represented at numeral552, and impact weights, represented at554. For sake of illustration, a first user input mechanism556allows a user to set a weight that influences a likelihood score for variations in the variety of access to the resources (e.g., block534). Similarly, controls558,560, and562allow a user to set weights that influence likelihood scores for a width of configured access, a number of principals or users with access, and the type of workloads with access, respectively. Similarly, controls563,564,566,568, and570allow a user to set weights on impact scores for risk criterion associated with a volume of sensitive data, a type of sensitive data, and categories of sensitive data (e.g., legal data, medical data, financial data), respectively. Referring again toFIG.10, at block572, a first subset of the storage resources that satisfy a subject vulnerability signature is identified. A subject vulnerability signature illustratively includes a risk signature indicative of a risk of vulnerability or breach. FIG.12illustrates an example user interface display574that can be accessed from display304illustrated inFIG.6, and displays a set of risk signatures. The risk signatures can be predefined and/or user-defined. For example, display574can include user input mechanisms that allow a user to add, delete, or modify a set of risk signatures576. As noted above, each risk signature defines a set of criteria that the resources and data in cloud service108-1can be queried upon to identify indications of vulnerabilities in the cloud service. The risk signatures inFIG.12include a name field578, a unique risk signature ID field580, and a description identified in a description field582. A result header field584identifies types of data that will be provided in the results when the risk signature is matched. A resource field586identifies the type of resource, and a tags field588identifies tags that label or otherwise identify the risk signature. Additionally, a likelihood factor field590indicates a likelihood factor that is assigned to the risk signature and an impact factor field592indicates an impact factor assigned to the risk signature. The likelihood factor indicates a likelihood assigned to occurrence of the risk signature, and the impact factor indicates an impact on the cloud service assigned to the occurrence of the risk signature. For sake of illustration, a likelihood factor of ten (out of a scale of ten) indicates that the vulnerability is likely to occur if the risk signature is identified in the cloud posture data, whereas a likelihood factor of one indicates a low likelihood. Similarly, an impact factor of ten (out of a scale of ten) indicates that the vulnerability is considered to have a high impact, whereas an impact factor of one indicates the vulnerability is considered to have a low impact on the cloud service. A risk signature can be defined based upon any of a wide variety of criteria. For example, a risk signature can identify one or more configurations or settings of compute resources130.
Examples include, but are not limited to, a configuration that indicates whether the compute resource provides accessibility to a particular type of data, such as confidential data, medical data, financial data, personal data, or any other type of private and/or sensitive content. In another example, a risk signature indicates that a compute resource is publicly accessible, includes a public Internet protocol (IP) address, or has IP forwarding enabled. In another example, a risk signature indicates that a compute resource has monitoring disabled, has no IAM role assigned to the compute resource, has backup disabled, data encryption disabled, and/or a low or short backup retention policy. Also, a risk signature can identify password policies set for the compute resource. For instance, a risk signature can indicate a lack of minimum password policies, such as no minimum password length, no requirement of symbols, lowercase letters, uppercase letters, numbers, or password reuse policy. Also, a risk criterion can indicate a location of the compute resource, such as whether the compute resource is located outside of a particular region. Risk signatures can also indicate configurations and/or settings of storage resources132. For example, the configurations and settings can indicate authentication or permissions enforced by the storage resource, such as whether authentication is required for read, write, delete, synchronization, or any other operation. Also, the risk signature can indicate whether multi-factor authentication is disabled for the storage resource, as well as a breadth of permissions grants (e.g., whether all authenticated users are granted permissions within the storage resource). Also, a risk signature can indicate whether encryption is enabled by default, a password policy enforced by the storage resource, whether the storage resource is anonymously accessible, publicly accessible, has a key management service disabled, has logging disabled, life cycle management disabled, whether the storage resource is utilized for website hosting, has geo-restriction disabled, or has backup functionality disabled. Also, the risk signature can indicate a type of data stored by the storage resource, such as the examples discussed above. Referring again toFIG.10, the first subset of storage resources identified at block572, are based on determining that the storage resources satisfy a risk signature of containing private and/or sensitive content, as represented at block594. In another example, the subject vulnerability signature is based on a prevalence of accessibility of a given role within a network exceeding a set threshold, as represented at block596. For instance, the given role can include principles (block598), workloads (block600), a cloud environment (block602), a company (block604), or other roles (block606). Also, the subject vulnerability signature can indicate that the storage resources are accessible by more than a threshold number of users, as represented at block608. Also, the subject vulnerability signature can indicate that the storage resources are accessible by a vulnerable compute resource that is publicly accessible, as represented at block610. This determination can be based on identifying that the compute resource is accessible through a public portal, at block612and/or is accessible by users outside a given company network at block614. 
As represented at block616, the subject vulnerability signature can indicate that the storage resources are accessible by inactive users. For example, inactive users can include users who have not accessed the resources within a threshold time, at block618. At block620, a second subset of storage resources is identified that synchronizes data from the first subset. At block622, a particular compute resource is determined to have anomalous access to a given storage resource. The identification of anomalous access can be based on a comparison of a network communication path of the particular compute resource against paths of other compute resources. For example, the paths of other compute resources can be used to identify an expected communication path for the particular compute resource and/or expected permissions for the particular resource. Then, if a difference above a threshold is identified, the particular compute resource is identified as anomalous. At block624, a representation of the propagation of the breach attack along the network communication paths is generated. In one example, the representation includes a cloud attack surface map, as represented at block626. An example cloud attack surface map includes nodes representing the resources (block628) and edges representing the breach attack propagation (block630). The map graphically depicts the subset of storage resources (block632) and the subject vulnerability signature (block634). Also, the map can graphically depict the anomalous access to the particular compute resource (block636). For example, public accesses to the subset of storage resources can be graphically depicted at block638and storage resources that grant external access and/or resources that are initialized from outside a particular jurisdiction can be identified at blocks640and642, respectively. FIG.13illustrates one example of a user interface display650that graphically depicts vulnerability risks, in tabular form. In one example, display650renders the data discussed with respect to the cloud attack surface at block626ofFIG.10in a table. Display650includes a user input mechanism652to specify a time range for visualizing the risk, and includes a description654, a resource identifier656, and an account identifier658for the cloud service account. The display can also indicate the impact660and likelihood662of the vulnerability risk, as well as a signature identifier664that identifies the particular risk signature that was matched. Display650also includes a details control666that is actuatable to display details of the identified risk. One example of a details display pane668is illustrated inFIG.14. Display pane668shows a description of the risk at display element670and an indication672of the query utilized to match the risk signature. Referring again toFIG.10, at block676, a query is received for execution against the results of the metadata analysis. For example, a query can specify a subject vulnerability at block678and/or the query can request identification of resources with anomalous access at block680. At block682, the query is executed against the cloud attack surface map. For example, the cloud attack surface map can be filtered to identify results that match the query. The query results (e.g., the filtered map) are returned at block684. The filtered results can include identifying a subset of storage resources that match the query (block686) and/or resources having anomalous access at block688.
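For sake of illustration, but not by limitation, the following sketch shows the peer-comparison idea described at block622: a compute resource is flagged as anomalous when its access differs from the access expected of similar resources by more than a threshold. The data values, threshold, and helper names are examples only.

```python
# Hedged sketch of anomalous-access detection by comparison against peers.
from collections import Counter

def expected_access(peer_access: dict[str, set]) -> set:
    """Access granted to a majority of the peer resources."""
    counts = Counter(a for perms in peer_access.values() for a in perms)
    majority = len(peer_access) / 2
    return {a for a, c in counts.items() if c > majority}

def is_anomalous(resource: str, peer_access: dict[str, set], threshold: int = 1) -> bool:
    expected = expected_access(peer_access)
    difference = peer_access[resource] ^ expected  # symmetric difference
    return len(difference) > threshold

workers = {
    "ec2-worker-1": {"can read s3-logs"},
    "ec2-worker-2": {"can read s3-logs"},
    "ec2-worker-3": {"can read s3-logs"},
    "ec2-worker-4": {"can read s3-logs", "can write s3-customer-data"},
}
print(is_anomalous("ec2-worker-4", workers, threshold=0))  # True
```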
The cloud attack surface graph is graphically filtered based on the results at block690. For example, the graph can be filtered based on applications running on the pairs of resources in the identified subset (block692). Breach likelihood scores and breach impact scores are determined for the resources at block694, and the scores can be depicted on the cloud attack surface map at block696. In one example, the scores are graphically categorized or stratified at block698into high, medium, or low risk. One example is discussed above with respect toFIG.8. FIG.15illustrates one example of a user interface display700configured to graphically depict breach likelihood and impact scores. Display700identifies data stores in storage resources132that are identified as meeting a subject vulnerability. Each entry shown in display700identifies a type702of the resource, an impact score704, a likelihood score706, a resource identifier708that identifies the resource, and a cloud service identifier710that identifies the particular cloud resource. Based on actuation of a risk item view generator mechanism712, display700shows details for the given resource in a details pane714, as shown inFIG.16. Display pane714can show users716that have access to the resource, roles718that have access to the resource, other resources720that have access to the resource, as well as external users722or external roles724. Display pane714also shows the access type726. FIG.17illustrates one example of a display pane730showing access details for a particular data store, along with a list of users who have access to that data store, and the access type for those users. Upon actuation of a roles actuator732, the display shows a list of roles that have access to the data store, as shown inFIG.18. Upon actuation of a resources actuator734, the display shows a list of resources that have access to the data store, as shown inFIG.19. FIGS.20-1,20-2,20-3, and20-4(collectively referred to asFIG.20) provide a flow diagram800illustrating one example of infrastructure analysis and query execution. At block802, permissions data and access control data for pairs of compute and storage resources is accessed. Policy data is accessed at block804. For example, the policy data can include identity-based policies (block806), resource-based policies (block808), permissions boundaries (block810), service control policies (SCP) (block812), session policies (block814) as well as other policies (block816). At block818, network communication paths are traced between the pairs of resources. Tracing the network communication path can be based on the permissions data at block820, the access control data at block822, the policy data at block824, and/or other data at block826. At block828, a cloud infrastructure map is constructed. An example of a cloud infrastructure map includes nodes that graphically represent pairs of compute and storage resources (block830), and edges that represent network communication paths between the resources (block832). At block834, the map graphically depicts metadata associated with the pairs of resources. For example, a graphical metadata depiction is expandable or collapsible via user selection, as represented at block836. The metadata can be grouped across metadata categories at block838, such as based on cloud-sourced metadata at block840, derived metadata at block842, locally annotated metadata at block844, or based on other metadata categories at block846. 
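By way of a non-limiting example, the following sketch shows one way the breach likelihood and impact scores determined at block694could be computed from per-signature factors and user-defined weights, and then stratified into high, medium, and low severity. The formula, weight values, and thresholds are assumptions rather than the disclosed scoring method.

```python
# Illustrative scoring sketch: user-configured weights combined with a risk
# signature's likelihood and impact factors (1-10 scale), then stratified.
def weighted_score(factor: int, weights: list[float]) -> float:
    """factor is on a 1-10 scale; weights are the user-defined sliders."""
    return factor * (sum(weights) / len(weights))

def stratify(score: float) -> str:
    if score >= 7.0:
        return "high"
    if score >= 4.0:
        return "medium"
    return "low"

signature = {"id": "S3_PUBLIC_WITH_PII", "likelihood_factor": 8, "impact_factor": 9}
likelihood_weights = [1.0, 0.8, 0.6, 1.0]   # variety/width/users/workload weights
impact_weights = [1.0, 0.9, 0.7]            # volume/type/category weights

likelihood = weighted_score(signature["likelihood_factor"], likelihood_weights)
impact = weighted_score(signature["impact_factor"], impact_weights)
print(stratify(likelihood), stratify(impact))  # medium high
```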
The cloud infrastructure map can also graphically depict anomalous configured access instances at block848. For example, block848can detect different levels of access among resources that connect to a common network component, as represented at block850. At block852, the map graphically depicts anomalous actual access instances in the cloud environment. For instance, the instances can be detected from access logs at block854. User annotated tags for the resources can be depicted in the map at block856as well. At block858, a query is received. The query can include a search term860, a content category (block862), a data privacy policy (block864), a temporal period (block866), and can include other items868as well. The query is executed at block870and query results are returned at block872. For example, the query results can identify a subset of the pairs of resources that contain the searched content at block874. At block876, resources are identified that do not have the search content, but have access to the subset. At block878, the query results can identify a subset of the pairs of resources that contain a searched content category. For example, at block880, resources are identified that do not have the content from the content category, but that have access to the subset of resources that have the searched content category. At block882, the query results can identify a subset of resources as complying with a given data privacy policy, specified in the query. Additionally, the results can identify resources that have access to the identified subset, at block884. At block886, a prior state of the resources is identified. Of course, the query results can identify other data888as well. At block890, a filter criterion is received. The filter criterion can be based on the metadata (block892), based on applications running on at least one pair of resources (block894), and/or based on one or more networks in the cloud environment (block896). The networks can include virtual private clouds (VPCs)898, regions900, Internet gateways902, network access control lists904, sub networks906, or other networks908. The filter criterion can also be based on tags at block910, such as users annotated tags represented at block912. The filter criterion can also be based on owners of the resources (block914), a creation date and/or time of the resources (block916), an inactive/stale criterion (block918), or other filter criterion (block920). At block922, the cloud infrastructure map is filtered based on the filter criterion and a filtered cloud infrastructure map is rendered at block924. FIGS.21-1and21-2(collectively referred to asFIG.21) provide a flow diagram1000illustrating one example of cloud data scanning in a cloud service. At block1002, administrative access to the cloud account is obtained. A scan schedule for scanning the cloud account is defined at block1004. FIGS.22and23illustrates example user interface displays for defining a scan schedule at block1004. As shown inFIG.22, a user interface display1006includes a list1008of currently defined scan schedules1010,1012,1014, etc. Each scan schedule is defined by a set of criteria1016for identifying which data stores are to be scanned, along with temporal criteria1018that define when the scan is to run. The scan schedule can be edited using an edit actuator1020. Further, the data scan can be executed manually, through a control1022. 
New schedules can be defined using a new schedule control1024.FIG.23illustrates user interface display1006when a given one of the data scans has been initiated and includes a scan status indicator1026. Referring again toFIG.21, block1028represents deployment and execution of a scanner locally on the cloud account. In one example, the data is accessed using APIs, and text is extracted using a text extraction method. Once the text is obtained, natural language processing (NLP) modules identify sensitive data in different languages. For instance, the scanner includes a file system crawler for each data store that is configured to identify pattern and context-based entities and/or machine learning-based entities, such as named entity recognition (names, company names, locations). Further, data loss prevention (DLP) engines can identify social security numbers, credit card numbers, etc. That is, the engine can identify which nodes contain particular types of sensitive data. A scanner is triggered and recognizers for sensitive entity detection are loaded, along with profiles for analysis. Text is extracted and entity detection is performed. In one example, the scanning is performed locally on the cloud service so that the organization's data does not leave the organization's cloud account, which can increase privacy and conformance with data policies. The scanners can be encapsulated as containers that are deployed in the cloud environment using elastic compute instances, such as EC2 resources, Lambda resources, etc. At block1030, objects in the cloud environment are queued and, at block1032, the objects are partitioned into a plurality of object chunks. At block1034, a number (M) of object chunks are identified. At block1036, depending upon the number M, a number (N) of instances of the server-less container-less scanners are initialized. In one example, the number M is significantly larger than the number N (block1038). For example, the number M can be ten times more (block1040) than the number N, one hundred times more (block1042) than the number N, etc. Of course, other numbers of object chunks and instances of the scanners can be utilized, as represented at block1044. The scanners are dynamically scalable (block1046), and each scanner can be portable and independently executable as a microservice (block1048). At block1050, a multiplicity of different data patterns to scan is obtained. For example, the data patterns can include sensitive string patterns (block1052), social security numbers (block1054), credit card numbers (block1056), or other data patterns (block1058). For each scanner, a corresponding object chunk is scanned exactly once to detect the multiplicity of different data patterns, as represented at block1060. Accordingly, each scanner can identify a number of different data patterns in a single pass through the object chunk. This single pass scanning increases efficiency by decreasing scanning latency. In one example, a multiplicity of object metadata can be detected at block1062. Sensitivity metadata is generated at block1064based on the detected data patterns. The system is controlled based on the sensitivity metadata at block1066. For example, the sensitivity metadata is sent to a metadata store in a control plane in the cloud environment at block1068. Alternatively, or in addition, the cloud attack surface graph is modified at block1070. For example, sensitivity annotation is applied to the graph at block1072.
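For sake of illustration, but not by limitation, the following sketch shows single-pass, multi-pattern scanning of an object chunk using one combined regular expression, in the spirit of block1060. The patterns are intentionally naive examples and not production detectors.

```python
# Simplified sketch of single-pass, multi-pattern scanning of an object chunk.
import re

PATTERNS = {
    "ssn": r"\b\d{3}-\d{2}-\d{4}\b",
    "credit_card": r"\b(?:\d[ -]?){15}\d\b",
    "aws_secret_hint": r"aws_secret_access_key",
}
# One combined regex with named groups lets a chunk be scanned exactly once.
COMBINED = re.compile("|".join(f"(?P<{name}>{pat})" for name, pat in PATTERNS.items()))

def scan_chunk(chunk: str) -> dict:
    """Return counts of each sensitive pattern found in a single pass."""
    counts = {name: 0 for name in PATTERNS}
    for match in COMBINED.finditer(chunk):
        counts[match.lastgroup] += 1
    return counts

chunk = "ssn 123-45-6789 and card 4111 1111 1111 1111"
print(scan_chunk(chunk))  # {'ssn': 1, 'credit_card': 1, 'aws_secret_hint': 0}
```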
FIGS.24-1and24-2(collectively referred to asFIG.24) provide a flow diagram1100illustrating one example of depicting access links along communication paths between roles and resources. At block1102, an indication of access sub-networks (e.g., territories, regions, etc.) in a cloud environment between a plurality of resources and a plurality of users is obtained. For example, the indication can be obtained from memory at block1104. In one example, the access sub-networks are identified as subnetworks that make a subject resource accessible to one or more users, as represented at block1106. At block1108, user-to-role mappings for roles assigned to the plurality of users is obtained. For example, access management and control system128is used to identify roles defined at a particular resolution or level of the cloud environment, as represented at block1110. The access sub-networks are traversed at block1112and a number (U) of user-to-resource mappings between the users and the resources are built based on traversing the sub-networks, as represented at block1114. At block1116, the number U of user-to-resource mappings is evaluated against the user-to-role mappings to accumulate a number (R) of role-to-resource mapping. In one example, the number U is significantly larger than the number R, as represented at block1118. For example, the number U can be ten times more (block1120) or one hundred times more (block1122) than the number R. Of course, other numbers of mappings can be utilized as well, as represented at block1124. In one example, at block1126a role-to-resource mapping maps a particular role to a particular subset of resources. Also, new resources that are assigned to the particular role are automatically mapped to the particular subset, as represented at block1128. At block1130, access communication paths between the roles and the plurality of resources are traced based on the number R of role-to-resource mapping. At block1132, a compact access network graph is constructed that graphically depicts access links along the traced access communication path. For example, the graph can include nodes that represent roles and resources (block1134), and edges that represent access links along the access communication paths (block1136). At block1138, the compact access network graph can be graphically updated to reflect the new resource assigned at block1128. At block1140, a history of resource configuration changes and/or anomalous state (e.g., risks) detected for various resources is tracked. For example, this tracking can be manually triggered at block1142, or programmatically triggered at block1154. Further, the history can be tracked over a timeline, such as to indicate when a particular risk opened and/or closed, as represented at block1146. At block1148, a difference between a non-anomalous state and a successive anomalous state is tracked. The tracking can also include tracking a difference between successive anomalous states at block1150and/or a difference between successive versions of the resources at block1152. For example, the versions can be determined based on respective resource configurations of the successive versions, at block1144. The tracked difference can be compared to a threshold difference at block1156, to determine whether to track the instance of the resource configuration and/or state change. At block1158, the tracked history can be graphically rendered, such as on a timeline at block1160. The tracked difference can be graphically rendered at block1162. 
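As a purely illustrative sketch of the mapping compaction described with respect to blocks1114and1116, the following shows how a large number of user-to-resource mappings can be collapsed, through user-to-role mappings, into a much smaller set of role-to-resource mappings. The data values are examples only.

```python
# Sketch (with made-up data) of collapsing U user-to-resource mappings into a
# much smaller set of R role-to-resource mappings.
user_to_role = {
    "alice": "role/analyst",
    "bob": "role/analyst",
    "carol": "role/admin",
}
user_to_resource = [
    ("alice", "s3-reports"),
    ("bob", "s3-reports"),
    ("carol", "s3-reports"),
    ("carol", "s3-billing"),
]

role_to_resources: dict[str, set] = {}
for user, resource in user_to_resource:          # U mappings in
    role = user_to_role[user]
    role_to_resources.setdefault(role, set()).add(resource)

# R mappings out; new resources assigned to a role inherit its mapping.
print(role_to_resources)
# {'role/analyst': {'s3-reports'}, 'role/admin': {'s3-reports', 's3-billing'}}
```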
Further, the tracked history can be provided with a playback feature1164or a play forward feature1166, which allow a user to navigate through the tracked history. FIG.25illustrates a user interface display1200that includes a visualization of access communication paths. The visualization inFIG.25can be rendered as a cloud infrastructure graph (e.g., map) that shows relationships between compute and storage resources and/or mappings between users, roles, and resources, based on the permissions data and the access control data. Further, the visualization can be augmented using sensitivity classification data to represent propagation of breach attack along communication paths. For example, the visualization inFIG.25can be configured to render the subset(s) of resources identified inFIG.10. That is, display1200can include the cloud attack surface map at block626. As shown inFIG.25, nodes1202represent compute resources and nodes1204represent storage resources. Illustratively, the storage resources include data stores or buckets within a particular cloud service. Nodes1206represent roles and/or users. The links (e.g., access paths) or edges1208between nodes1202and1206represent that compute resources that can access the particular roles represented by nodes1206. The edges or links1210represent the storage resources that can be accessed by the particular roles or users represented by nodes1206. Based on these relationships between compute and storage relationships, display elements can be rendered along, or otherwise visually associated with, the edges1208and/or1210, to identify and graphically depict the propagation of breach attack. For instance, vulnerability display elements can be rendered in association with edges1208and/or1210to identify that a subject vulnerability signature (e.g., one or more risk signatures shown inFIG.12) has been identified in the data, based on querying the permissions and access control data using the subject vulnerability signature. For example, display element1209represents a risk signature between nodes1203and1212and display element1211represents (such as by including a description, icon, label, etc.) a risk signature between nodes1212and1222. Each display element1209,1211can represent (such as by including a description, icon, label, etc.) corresponding likelihood and impact scores, can be actuatable to render details of the subject vulnerability, such as in a display pane on display1200. The details can include which risk signature has been matched, which sensitive data is at risk, etc. The graph can be interactive at a plurality of different resolutions or levels. For example, a user can interact with the graph to zoom into a specific subset, e.g., based on cloud vendor concepts of proximity (regions, virtual private clouds (VPCs), subnets, etc.). Node1212includes an expand actuator1214that is actuatable to expand the display to show additional details of the roles, role groups, and/or users represented by node1212. When zooming into one region, such as when using the actuators discussed below, other regions can be zoomed out. This can be particularly advantageous when handling large diagrams. Further, the graph includes one or more filter mechanisms configured to filter the graph data by logical properties, such as names, values of various fields, IP addresses, etc. 
For example, a free form search box1215is configured to receive search terms and filter out all resources (e.g., by removing display of those resources) except those resources matching the search terms. In one example, the search terms include a subject vulnerability signature (e.g., containing private and sensitive content, public accessibility, accessibility by a particular user and/or role, particular applications running on the resources, access types, etc.). An input mechanism1217is configured to receive a temporal filter or search criterion. For example, a filter criterion is entered by a user to represent at least one of a creation time or date of compute resources and storage resources. Further, a query can be entered specifying at least one temporal period, wherein the cloud infrastructure map is updated to graphically return at least one prior state (e.g., a permissions state, an access control state, and/or a sensitivity data classification state) of compute resources and storage resources based on the temporal period. A checkbox (not shown inFIG.25, and which can be global to the diagram) provides the ability to toggle whether or not direct neighbors of the matching resources are also displayed, even if those neighbors themselves don't match the search terms. This allows users to search for specific resources and immediately visualize all entities that have access to the searched resources. To illustrate, assume a search for personally identifiable information (PII) matches a set of S3 buckets. In this case, the graph renders resources that have access to that PII. Further, the graph can show associated data and metadata (e.g., properties extracted from cloud APIs, properties derived such as presence of sensitive data, access paths, etc.). This data and metadata can be shown on a panel to the left or right of the diagram (such as shown inFIGS.27-30). Further, a user can actuate user interface controls to collapse/expand this panel. In one example, the panel remains collapsed or expanded until changed, even across different searches and login sessions. Additionally, the display can group properties in related categories (e.g., summary, all metadata retrieved from the cloud, all metadata derived, local annotations, etc.), and the diagram can be filtered (such as by using the free form search bar mentioned above) by metadata such as tags, applications running on the resources, identified owners, time since created, etc. The state of the resources can be shown as of a user defined date or time. A calendar component can allow users to select a particular date to visualize historical state data as of that particular date. In one example, a user interface control allows a user to define critical data (e.g., crown jewel data), such as through a filter mechanism (e.g., search box1215). The display then visually highlights that critical data along with all entities with access (defined by a filter such as CAN_READ/CAN_WRITE/CAN_SYNC, etc.) to the critical data. Anomalous configured access (different levels of access among similar resources) can be visually highlighted in the display. For example, if there are four EC2 instances in a worker group connected to the same load balancer, all of the EC2 instances are expected to have the same type of access. However, if one of the EC2 instances has different access, the EC2 instance is identified as anomalous and visually highlighted to the user. Similarly, the display can visually highlight anomalous actual access.
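For illustration only, the following sketch approximates the search-and-filter behavior described above, including the option to retain direct neighbors of matching resources, over a toy adjacency-list graph. The resource names and attributes are examples and not the disclosed implementation.

```python
# Sketch of search filtering with an "include direct neighbors" toggle.
edges = [
    ("role/app-reader", "s3-customer-data"),
    ("ec2-web-01", "role/app-reader"),
    ("ec2-batch-02", "s3-public-assets"),
]
attributes = {
    "s3-customer-data": {"contains": "PII"},
    "s3-public-assets": {"contains": "none"},
}

def filter_graph(term: str, include_neighbors: bool = True) -> set:
    """Keep nodes matching the term; optionally keep their direct neighbors."""
    matched = {n for n, attrs in attributes.items() if term in attrs.get("contains", "")}
    if include_neighbors:
        direct = set(matched)
        for src, dst in edges:
            if dst in direct:
                matched.add(src)
            if src in direct:
                matched.add(dst)
    return matched

print(filter_graph("PII"))
# {'s3-customer-data', 'role/app-reader'} -- the PII bucket plus who can reach it
```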
That is, instead of inspecting configured access, the system looks at actual access determined using, for example, access logs (e.g., CloudTrail logs, S3 access logs, etc.). Further, the display can be configured to allow the user to add tags to one or more selected resources in the diagram. For instance, when users visualize cloud assets in context, the user can add additional tags that let the user write policies, perform filtering, etc., that further aid in visualization and understanding. The user interface allows the user to choose one or more resources and add tags (keys and values in AWS Tags, for example) to selected resources. FIG.26shows display1200after actuation of actuator1214. As shown inFIG.26, node1212has been expanded to show particular roles or role groups1216and the relationships between those roles and role groups (as represented by links1218), to the nodes1206. Each of role groups1216is represented by a display element that is actuatable to display additional details associated with the corresponding role. For example, display element1220is actuatable to display details of the corresponding role, as shown inFIG.27. Referring again toFIG.25, the nodes1204representing the storage resources are also actuatable to show additional details. For example, node1222includes an actuator1224that is actuatable to display the view shown inFIG.28.FIG.28includes a representation1226of the constituents of the storage resource represented by node1222. One or more of the elements are further actuatable to show additional details of the constituent. For example, node display element1228includes an actuator1230to show, in the example display ofFIG.29, details of the virtual private cloud represented by node display element1228. Referring again toFIG.25, node1232is actuatable to show details of the corresponding compute resource. An example display for compute resource details is shown inFIG.30. FIG.31shows one example of a user interface display1250that visualizes resources identified based on the data scanning performed on cloud service108-1. Display1250includes a list of display elements1252, each representing a particular resource. Each entry includes an account ID1254, a resource type1256, a name1258, and a region1260. A details actuator1262can be actuated to show additional details of the corresponding resource. For example,FIG.32shows a display1264that is displayed in response to actuation of actuator1262. Referring again toFIG.31, display1250includes navigation actuators1266that are actuatable to navigate through different portions of the list.FIG.33illustrates a second page displayed in response to actuation of control1268. FIG.34shows an example of a user interface display1270displaying details of a particular resource, and includes a details actuator1272. Actuation of actuator1272displays the interface shown inFIG.35. As shown inFIG.35, the resource (illustratively “config-service-main”) is an AWS role having an access type identified at display element1274. The access type typically depends on the resource. In the present case, a principal1276identifies the entities that have the given role, and the access type identifies that the identified entities can assume the given role relative to the resource. This definition connects the roles to the resources. FIG.36illustrates a flow diagram1300for streamlined analysis of access sub-networks, such as regions or territories, in a cloud environment.
At block1302, an indication of access sub-networks between a plurality of storage resources and compute resources is obtained. For example, the indication can be obtained from memory at block1304. In one example, each access sub-network makes a subject storage resource accessible to one or more compute resources, as represented at block1306. At block1308, compute resource-to-role mappings for roles assigned to the plurality of compute resources are obtained. Each mapping, in one example, maps a particular resource to a particular role defined in the cloud environment. The roles can be defined at a resolution or level of the cloud environment, as represented at block1310. At block1312, the access sub-networks are traversed to build, at block1314, a number (U) of compute resources-to-storage resource mappings between the compute resources and storage resources. Each mapping, in one example, maps a particular compute resource to a particular storage resource. At block1316, the number U of compute resources-to-storage resource mappings is evaluated against the compute resource-to-role mappings to accumulate a number (R) of role-to-storage resource mappings between the roles and the plurality of storage resources. Each mapping, in the number R, maps a particular role to a particular storage resource and indicates which storage resource that particular role can access. In one example, the number U is significantly larger than the number R, as represented at block1318. For example, the number U can be greater than approximately ten times the number R, as represented at block1320. In another example, the number U is greater than approximately one hundred times the number R, as represented at block1322. These, of course, are for sake of example only. At block1324, the access communication paths are traced between the roles and the plurality of storage resources based on the number R of the role-to-storage resource mappings. At block1326, a compact access network graph is constructed that graphically depicts access links along the traced access communication paths. Examples of a network graph are discussed above. Briefly, in one example, nodes in the graph represent roles and storage resources (block1328), and edges represent access links along the access communication paths (block1330). It can thus be seen that the present disclosure describes technology for security posture analysis of a cloud account. In some described examples, the technology can discover sensitive data among the cloud storage resources as well as access patterns to the sensitive data, using local scanners that reduce or eliminate the need to send the cloud data outside the cloud environment. This improves data security. Further, the technology facilitates the discovery of security vulnerabilities to understand the data security posture, to detect and remediate the security vulnerabilities, and to prevent future breaches to sensitive data. The system provides real-time visibility and control on the cloud data infrastructure by discovering resources, sensitive data, and access paths, and tracking resource configuration, deep context, and trust relationships in real-time as a graph or other visualization. One or more implementations of the technology disclosed or elements thereof can be implemented in the form of a computer product, including a non-transitory computer readable storage medium with computer usable program code for performing the method steps indicated.
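For illustration only, and not as a description of the claimed implementation, the mapping accumulation and compact graph construction of blocks1312-1330can be sketched in Python as follows; the resource names, role names, and data shapes are hypothetical assumptions.

```python
# Hypothetical sketch of blocks 1312-1330: traverse access sub-networks to build
# compute-to-storage mappings (U), fold them through compute-to-role mappings,
# and accumulate the smaller set of role-to-storage mappings (R) used to trace
# access communication paths for the compact access network graph.
from collections import defaultdict

# Invented example inputs; real data would come from the scanned cloud account.
access_subnetworks = {            # storage resource -> compute resources that can reach it
    "bucket/payroll": ["ec2-web-1", "ec2-web-2", "lambda-etl"],
    "bucket/logs":    ["ec2-web-1", "lambda-etl"],
}
compute_to_role = {               # compute resource -> role assigned in the cloud environment
    "ec2-web-1": "role/app",
    "ec2-web-2": "role/app",
    "lambda-etl": "role/etl",
}

def build_compact_access_graph(access_subnetworks, compute_to_role):
    compute_to_storage = set()            # the U mappings (block 1314)
    role_to_storage = defaultdict(set)    # the R mappings (block 1316)
    for storage, computes in access_subnetworks.items():
        for compute in computes:
            compute_to_storage.add((compute, storage))
            role = compute_to_role.get(compute)
            if role is not None:
                role_to_storage[role].add(storage)
    # Blocks 1324-1330: nodes are roles and storage resources, edges are access links.
    nodes = set(role_to_storage) | set(access_subnetworks)
    edges = [(role, storage)
             for role, targets in role_to_storage.items()
             for storage in sorted(targets)]
    return {"nodes": nodes, "edges": edges,
            "U": len(compute_to_storage), "R": len(edges)}

graph = build_compact_access_graph(access_subnetworks, compute_to_role)
print(graph["U"], graph["R"])  # U = 5, R = 4 in this toy example
```

In this toy account the U mappings outnumber the R mappings only slightly; in a realistic account, as noted at blocks1318-1322, U can exceed R by one or two orders of magnitude.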
Furthermore, one or more implementations and clauses of the technology disclosed or elements thereof can be implemented in the form of an apparatus including a memory and at least one processor that is coupled to the memory and operative to perform exemplary method steps. Yet further, in another aspect, one or more implementations and clauses of the technology disclosed or elements thereof can be implemented in the form of means for carrying out one or more of the method steps described herein; the means can include (i) hardware module(s), (ii) software module(s) executing on one or more hardware processors, or (iii) a combination of hardware and software modules; any of (i)-(iii) implement the specific techniques set forth herein, and the software modules are stored in a computer readable storage medium (or multiple such media). Examples discussed herein include processor(s) and/or server(s). For sake of illustration, but not by limitation, the processors and/or servers include computer processors with associated memory and timing circuitry, and are functional parts of the corresponding systems or devices, and facilitate the functionality of the other components or items in those systems. Also, user interface displays have been discussed. Examples of user interface displays can take a wide variety of forms with different user actuatable input mechanisms. For instance, a user input mechanism can include icons, links, menus, text boxes, check boxes, etc., and can be actuated in a wide variety of different ways. Examples of input devices for actuating the input mechanisms include, but are not limited to, hardware devices (e.g., point and click devices, hardware buttons, switches, a joystick or keyboard, thumb switches or thumb pads, etc.) and virtual devices (e.g., virtual keyboards or other virtual actuators). For instance, a user actuatable input mechanism can be actuated using a touch gesture on a touch sensitive screen. In another example, a user actuatable input mechanism can be actuated using a speech command. The present figures show a number of blocks with corresponding functionality described herein. It is noted that fewer blocks can be used, such that functionality is performed by fewer components. Also, more blocks can be used with the functionality distributed among more components. Further, the data stores discussed herein can be broken into multiple data stores. All of the data stores can be local to the systems accessing the data stores, all of the data stores can be remote, or some data stores can be local while others can be remote. The above discussion has described a variety of different systems, components, logic, and interactions. One or more of these systems, components, logic and/or interactions can be implemented by hardware, such as processors, memory, or other processing components. Some particular examples include, but are not limited to, artificial intelligence components, such as neural networks, that perform the functions associated with those systems, components, logic, and/or interactions. In addition, the systems, components, logic and/or interactions can be implemented by software that is loaded into a memory and is executed by a processor, server, or other computing component, as described below. The systems, components, logic and/or interactions can also be implemented by different combinations of hardware, software, firmware, etc., some examples of which are described below. 
These are some examples of different structures that can be used to implement any or all of the systems, components, logic, and/or interactions described above. The elements of the described figures, or portions of the elements, can be disposed on a wide variety of different devices. Some of those devices include servers, desktop computers, laptop computers, tablet computers, or other mobile devices, such as palm top computers, cell phones, smart phones, multimedia players, personal digital assistants, etc. FIG.37is a simplified block diagram of one example of a client device1400, such as a handheld or mobile device, in which the present system (or parts of the present system) can be deployed.FIG.38illustrates an example of a handheld or mobile device. One or more communication links1402allow device1400to communicate with other computing devices, and can provide a channel for receiving information automatically, such as by scanning. An example includes communication protocols, such as wireless services used to provide cellular access to a network, as well as protocols that provide local wireless connections to networks. Applications or other data can be received on an external (e.g., removable) storage device or memory that is connected to an interface1404. Interface1404and communication links1402communicate with one or more processors1406(which can include processors or servers described with respect to the figures) along a communication bus (not shown inFIG.14) that can also be connected to memory1408and input/output (I/O) components1410, as well as clock1412and a location system1414. Components1410facilitate input and output operations for device1400, and can include input components such as microphones, touch screens, buttons, touch sensors, optical sensors, proximity sensors, orientation sensors, and accelerometers. Components1410can include output components such as a display device, a speaker, and/or a printer port. Clock1412includes, in one example, a real time clock component that outputs a time and date, and can provide timing functions for processor1406. Location system1414outputs a current geographic location of device1400and can include a global positioning system (GPS) receiver, a LORAN system, a dead reckoning system, a cellular triangulation system, or other positioning system. Memory1408stores an operating system1416, network applications and corresponding configuration settings1418, communication configuration settings1420, communication drivers1422, and can include other items1424. Examples of memory1408include types of tangible volatile and non-volatile computer-readable memory devices. Memory1408can also include computer storage media that stores computer readable instructions that, when executed by processor1406, cause the processor to perform computer-implemented steps or functions according to the instructions. Processor1406can be activated by other components to facilitate functionality of those components as well. FIG.38illustrates one example of a tablet computer1450having a display screen1452, such as a touch screen or a stylus or pen-enabled interface. Screen1452can also provide a virtual keyboard and/or can be attached to a keyboard or other user input device through a mechanism, such as a wired or wireless link. Alternatively, or in addition, computer1450can receive voice inputs. FIG.39shows an example computer system3900that can be used to implement the technology disclosed.
Computer system3900includes at least one central processing unit (CPU)3972that communicates with a number of peripheral devices via bus subsystem3955. These peripheral devices can include a storage subsystem3910including, for example, memory devices and a file storage subsystem3936, user interface input devices3938, user interface output devices3976, and a network interface subsystem3974. The input and output devices allow user interaction with computer system3900. Network interface subsystem3974provides an interface to outside networks, including an interface to corresponding interface devices in other computer systems. In one implementation, cloud security posture analysis system3918is communicably linked to the storage subsystem3910and the user interface input devices3938. User interface input devices3938can include a keyboard; pointing devices such as a mouse, trackball, touchpad, or graphics tablet; a scanner; a touch screen incorporated into the display; audio input devices such as voice recognition systems and microphones; and other types of input devices. In general, use of the term “input device” is intended to include all possible types of devices and ways to input information into computer system3900. User interface output devices3976can include a display subsystem, a printer, a fax machine, or non-visual displays such as audio output devices. The display subsystem can include an LED display, a cathode ray tube (CRT), a flat-panel device such as a liquid crystal display (LCD), a projection device, or some other mechanism for creating a visible image. The display subsystem can also provide a non-visual display such as audio output devices. In general, use of the term “output device” is intended to include all possible types of devices and ways to output information from computer system3900to the user or to another machine or computer system. Storage subsystem3910stores programming and data constructs that provide the functionality of some or all of the modules and methods described herein. These software modules are generally executed by processors3978. Processors3978can be graphics processing units (GPUs), field-programmable gate arrays (FPGAs), application-specific integrated circuits (ASICs), and/or coarse-grained reconfigurable architectures (CGRAs). Processors3978can be hosted by a deep learning cloud platform such as Google Cloud Platform™, Xilinx™, and Cirrascale™. Examples of processors3978include Google's Tensor Processing Unit (TPU)™, rackmount solutions like GX4 Rackmount Series™, GX50 Rackmount Series™, NVIDIA DGX-1™, Microsoft's Stratix V FPGA™, Graphcore's Intelligent Processor Unit (IPU)™, Qualcomm's Zeroth Platform™ with Snapdragon processors™, NVIDIA's Volta™, NVIDIA's DRIVE PX™, NVIDIA's JETSON TX1/TX2 MODULE™, Intel's Nirvana™, Movidius VPU™, Fujitsu DPI™, ARM's DynamIQ™, IBM TrueNorth™, Lambda GPU Server with Tesla V100s™, and others. Memory subsystem3922used in the storage subsystem3910can include a number of memories including a main random access memory (RAM)3932for storage of instructions and data during program execution and a read only memory (ROM)3934in which fixed instructions are stored. A file storage subsystem3936can provide persistent storage for program and data files, and can include a hard disk drive, a floppy disk drive along with associated removable media, a CD-ROM drive, an optical drive, or removable media cartridges.
The modules implementing the functionality of certain implementations can be stored by file storage subsystem3936in the storage subsystem3910, or in other machines accessible by the processor. Bus subsystem3955provides a mechanism for letting the various components and subsystems of computer system3900communicate with each other as intended. Although bus subsystem3955is shown schematically as a single bus, alternative implementations of the bus subsystem can use multiple busses. Computer system3900itself can be of varying types including a personal computer, a portable computer, a workstation, a computer terminal, a network computer, a television, a mainframe, a server farm, a widely-distributed set of loosely networked computers, or any other data processing system or user device. Due to the ever-changing nature of computers and networks, the description of computer system3900depicted inFIG.39is intended only as a specific example for purposes of illustrating the preferred implementations of the present invention. Many other configurations of computer system3900are possible having more or fewer components than the computer system depicted inFIG.39. It should also be noted that the different examples described herein can be combined in different ways. That is, parts of one or more examples can be combined with parts of one or more other examples. All of this is contemplated herein. The technology disclosed can be practiced as a system, method, or article of manufacture. One or more features of an implementation can be combined with the base implementation. Implementations that are not mutually exclusive are taught to be combinable. One or more features of an implementation can be combined with other implementations. This disclosure periodically reminds the user of these options. Omission from some implementations of recitations that repeat these options should not be taken as limiting the combinations taught in the preceding section—these recitations are hereby incorporated forward by reference into each of the following implementations. Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are disclosed as example forms of implementing the claims.
Corresponding reference characters indicate corresponding parts throughout the drawings. Although specific features may be shown in some of the drawings and not in others, this is for convenience only. In accordance with the examples described herein, any feature of a drawing may be referenced and/or claimed in combination with any feature of any other drawing.
DETAILED DESCRIPTION
The present disclosure relates to systems and methods for deep automation anomaly detection in machinery and other equipment used in manufacturing and production environments. The systems and methods described herein provide anomaly detection capabilities for a broad variety of machines provided that the state, sequence, and timing information for machines can be obtained. An anomaly detection server is provided that is configured to identify states, sequences, and timing patterns that are associated with anomalous operation of a machine or machines. To accomplish this, the anomaly detection server creates a bitwise representation of the state, the sequence, and the timing of machine operation and learns the representations associated with normal operation. Thereby, the anomaly detection server learns the conditions under which a machine is not in normal operation, indicated when the bitwise representations for such conditions are outside the learned patterns defining normal states, sequences, and timing of operation. As described above, known approaches to anomaly detection are deficient for multiple reasons. First, most known anomaly detection approaches are reliant upon human expertise and resultantly are difficult to scale or to automate. Second, most known approaches to anomaly detection are inherently limited to a particular machine or a particular category of machines because the approaches focus on known trends or patterns indicating anomalous behavior in particular machines. However, the inherent variety of machine types makes it difficult to extend known anomaly detection approaches to varying classes of machines using such a machine-centric approach. Third, known anomaly detection approaches often result in false positives or false negatives because the approaches overly rely upon the past experience with machine performance (typically as experienced by human expert users) and fail to capture trends and variations in particular machines. The anomaly detection server and related systems and methods overcome the known deficiencies in anomaly detection approaches by providing a technologically driven solution rooted in computing technology and machine learning capable of detecting anomalous states across varying machine types. Thus, the server, systems, and methods described herein provide a technological improvement to the field of industrial machine maintenance, repair, and management. The systems described specifically provide robust methods for detecting anomalies in industrial machines to reduce time to investigate machine performance, reduce mean time to repair (“MTTR”) machinery, reduce downtime, and improve industrial stability. At a granular level, the systems and methods described use state, sequence, and timing information to identify anomalies while also identifying machine misalignments or system misalignments. As used herein, the term “misalignment” is used to describe a status when portions of a machine or machines are not in compatible states, thereby undermining process performance.
Anomaly detection based on state, sequence, and timing may be used to isolate components or parts of a machine that are anomalous. These components or parts may be focused on for repair. Misalignments may similarly be focused on to avoid systemic issues. As used herein, the term “anomaly” is used to describe a condition or state that is a statistical outlier, a novelty, or something outside of normal conditions. In the examples described, an anomaly may exist when a machine or a piece of equipment in a manufacturing process is not functioning in a normal or expected fashion. By contrast, as used herein, the term “normal” is used to describe a condition or state that is expected or within normal statistical boundaries. As used herein, the term “false positive” or “false anomaly” refers to an incorrect determination that an anomaly condition exists in a machine. A false positive or a false anomaly may be detected based on inaccurate modeling to describe the range of normal operating states, or based on incorrect or incomplete input regarding the machine. The anomaly detection server described herein is designed to avoid the risk of false positives based on efficient and adaptive modeling. The anomaly detection server described is configured to provide anomaly detection for a wide variety of machines in manufacturing settings. In an example embodiment, the anomaly detection server includes a processor and a memory. In some examples, the anomaly detection server includes a suitable communications interface for interacting with any other devices or networks and an input/output for receiving input and presenting output using any suitable interfaces or displays. In some embodiments, the anomaly detection server is in communication with or coupled to sensors that are attached to at least one machine. Such sensors capture machine condition data about each machine and transmit the machine condition data to the anomaly detection server via a machine-side communications interface which transmits to the anomaly detection server via any suitable networking protocol. (In at least some examples, the anomaly detection server is integrated with one or more machines and directly accesses such machine condition data.) In such embodiments, the anomaly detection server is part of an anomaly detection system that may include such sensors, networks, machines, and communications interfaces. In further embodiments, machine condition data is captured at least partially using one or more cameras that monitor the condition of machines and thereby capture the physical state of the machines. In some such examples, some cameras may be located within the machine and transmit internal machine condition information. In such examples, the cameras are also capable of communicating with the anomaly detection server using a suitable communications interface along any suitable networking protocol. Image or video data captured by such cameras may be processed to obtain suitable representations of the machine state including a bitwise or hexadecimal representation, as described below. In many embodiments, the anomaly detection server is configured to provide anomaly detection software (“ADS”) that, when executed, provides the anomaly detection functions described. In some embodiments, the ADS further utilizes descriptive analytics to process state, sequence, and timing (“SQT”) information associated with each machine or group of machines that are monitored for potential anomalies.
As used herein, the term “state” refers to unique condition of the system as determined by input information. As used herein, the term “sequence” refers to the order of steps performed within a given machine to carry out the actions of a normal process. As used herein, the term “timing” refers to the timing of steps, between steps, and between steps and function changes. As used herein, a “step” is a number reflecting a unit value of progress in a step counter in a machine. Steps are used to monitor the progress of machines through their purposes. As the “State”, “SeQuence”, and “Timing” are used to identify normal operations of each machine involved in the process, the SQT functions as a building block used by the anomaly detection server. As used herein, “descriptive analytics” are analytics that are actionable and allow a user to, for example, identify a possible anomaly and further to diagnose it based on possible underlying causes of the anomaly. In some embodiments, the ADS is also configured to provide an anomaly detection interface (“ADI”) that allows for interaction with users via a suitable input/output interface (e.g., a monitor, touchscreen, keyboard, and/or mouse) to present detected anomalies and aid in resolving such anomalies. The anomaly detection server is configured to receive registration input defining machines within a production environment. In some examples, multiple machines may be used in a specific order (e.g., within an assembly line) or multiple orders. Each machine is capable of performing one or more steps reflecting physical actions that a machine may take. In most examples, the registration input defines physical attributes of each machine including the input that may be provided by each machine with respect to its performance. The registration input may also define the sequence of steps performed by each machine. In some examples, the registration input may also define the steps of multiple machines with respect to one another in a process and allow the anomaly detection server to therefore identify anomalies within the sequenced process across machines. The registration input may also include purpose information defining the purpose of each machine. As used herein, a “purpose” describes the role of each machine in the relevant process, describing what result each machine provides for the process. The registration input may also include step information defining each step taken by the machine within the relevant process. The registration input may also include phase information defining each phase of operation provided by each machine. As used herein, a “phase” is a group or combination of steps which typically are associated with a real-world task performed by the machine at issue. In one example, a machine may be used to weld together several pieces of a chassis. The purpose may therefore be welding, there may be several distinct welding phases for each welded section, and each welding phase may have individual steps for each action performed by the machine in each welding phase (e.g., moving the welding arm from one position to another, activating the welding torch, or changing the orientation of the welding torch). Thus, the anomaly detection server is configured to receive registration input defining each machine including purpose definitions, phase definitions, and step definitions. The anomaly detection server applies the registration input to create a first map configured to convert condition data into an analysis vector for the machine. 
Specifically, the first map allows the anomaly detection server to generate the analysis vector based on condition data received, directly or indirectly, from each machine. (In some examples, the location of each machine is referred to as a “station” and the anomaly detection server defines the model for operation across all such stations.) The analysis vector describes states of each machine (or component). State information may be reflected in a bitwise binary format, a hexadecimal format, or any other suitable form. As described below, the anomaly detection server similarly may be configured to map each machine with respect to sequence and timing and to learn normal operating states, sequences, and timings (“SQT”) based on the corresponding data. In so doing, the anomaly detection server is configured to essentially reverse engineer a finite-state machine (“FSM”) model for each machine as the anomaly detection server learns all (or nearly all) possible normal patterns for each of SQT. As used herein, a finite-state machine model is a mathematical model of computation whereby a real machine is analyzed as an abstract machine having n number of possible states at any given time. However, in an FSM model, the real machine can only be in one of the n states. In one example, the registration input represents a functional map created by a human, as indicated below inFIG.6. Specifically, the functional map of the registration input defines which registers (e.g., inputs or outputs from sensor inputs or outputs, typically represented as numeric binary, decimal, or hexadecimal values with lengths of one or more units) are associated with each function. Therefore, in some embodiments, a human provides the definition of the functional map, the registration inputs, and, indirectly, the register relationships and vectors. In other examples, a computer system may act independently or with human guidance to define the functional map, the registration inputs, and, indirectly, the register relationships and vectors. In an example embodiment, the first map defines a vector of bits that contain all possible representations of machine state data for a particular machine. In one case, the analysis vector is a one-dimensional vector. In the example embodiment, the first map is created specific to each machine based on registration input whereby the first map allows for the creation of n unique, finite states for a particular machine and a possible vector length of m. Assuming the number of possible values for each digit of the vector is x, the value of n may be described as n=x^m. If a particular machine has thirty-two possible binary values for encoded machine condition data, there are n=2^32 possible finite states for the machine. The first map also creates any necessary encoding of received condition data into the analysis vector. The condition data may be initially received as, for example, analog signal data, digital signal data, audio data, video data, or image data. In an example embodiment, machines utilize sensors that provide the analysis vector directly to the anomaly detection server. However, the anomaly detection server is capable of any necessary processing and encoding if the condition data is not provided in the form of an analysis vector. As described above, the anomaly detection server learns the possible states, sequences, and timing of each machine in normal operation. To accomplish this, the anomaly detection server also receives condition data from each machine.
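As a hedged illustration of the first map described above, the following Python sketch packs hypothetical register readings into a fixed-length bit vector; the register names, bit positions, and vector length are assumptions rather than the actual registration input of any particular machine.

```python
# Hypothetical "first map": convert raw register readings into a fixed-length
# binary analysis vector. With a vector length of m digits and x possible values
# per digit, a machine has up to n = x**m finite states (2**32 for a 32-bit vector).
FIRST_MAP = [              # invented register layout: (register name, bit position)
    ("start_process", 0),
    ("continue_flag", 1),
    ("ss_relay_mode", 2),
    ("rinse_valve_3", 21),
]
VECTOR_LENGTH = 32         # m

def to_analysis_vector(condition_data):
    """Encode a dict of register readings as an m-bit analysis vector string."""
    bits = ["0"] * VECTOR_LENGTH
    for register, position in FIRST_MAP:
        if condition_data.get(register):
            bits[position] = "1"
    return "".join(bits)

print(to_analysis_vector({"continue_flag": True, "ss_relay_mode": True}))
print(2 ** VECTOR_LENGTH)  # n = x**m possible finite states for x = 2, m = 32
```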
The anomaly detection server applies the first map, as needed, to convert condition data where condition data is not provided in the form of an analysis vector. The condition data provided may reflect functions performed by each machine including, for example, inputs, outputs, and steps. As used herein, the term “function” represents a collection of values (e.g., hexadecimal values or bits) that represent sub-systems in a particular machine. In some examples, an analysis vector may be defined specific to each function (or sub-system) of a machine. This approach may allow the analysis vectors to be shorter and to only include values likely to be used by each component. In operation, the anomaly detection server receives condition data in order to learn and recognize acceptable normal patterns of states across sequences and timing. Specifically, the anomaly detection server receives condition data reflecting the state of each register of each machine or sub-system (encoded in any suitable form including binary or hexadecimal) while the process is performed. In a first example, a machine learning model in the anomaly detection server is trained actively by using the machine in all suitable variations to carry out a process. In a second example, the machine learning model in the anomaly detection server is trained passively by letting the machine execute the process a suitable minimum number of times in order to capture the variations of the process. As the learning process continues, the anomaly detection server obtains analysis vectors (directly from the machine, or after suitable processing of machine condition data) sufficient to provide a minimum sample size to define normal conditions. In at least some examples, the minimum sample size is bound by the subset o of finite-states n that are possible to create for analysis vectors. Thus, as explained above, the process of learning at least partially represents reverse engineering of an FSM model for each machine. In one aspect, the state information reflected in the analysis vectors used to train the machine learning model are time-invariant, meaning that the first order of training performed on analysis vectors is not a direct function of time. However, as the anomaly detection server receives and processes the analysis vectors defining states to learn normal operation, the anomaly detection server also obtains suitable sequencing information and timing information. Each analysis vector for a given state is also obtained with necessary time stamps. As a result, the anomaly detection server is able to learn all known patterns of analysis vectors, and therefore learn the normal models of operation of the FSM for each machine. In one aspect, the patterns learned are a function of sequencing such that a series of states a, b, c, d, and e are analyzed to determine which sequences are normal and which are not. (As a practical example, where a machine performs welds at various joints, welds to outer joints may be performed last. In such an example, the anomaly detection server may be aware of a state x for performing an inner joint weld and state y for performing an outer joint weld. The sequence of xy is expected to be reflected in training data and thus describing a normal pattern of sequencing while the sequence of yx is not expected to be in the training data.) Similarly, in many examples the intervals of time between steps may be relevant to the proper performance of a process. 
Thus, in many examples, the machine learning model trains by learning time stamps associated with each state and learns the appropriate intervals between steps. In some examples, the anomaly detection server may therefore identify increasing delays between a first and second step and identify an anomaly that may reflect deterioration of parts that is causing the lag. In other examples, the anomaly detection server may identify undue variations in timing between steps indicating that some component(s) is behaving in an erratic fashion and may indicate anomalous conditions. In operation, to train on sequencing and timing, machine condition data and analysis vectors include at least a time stamp to allow for time sorting used to organize the analysis vectors. Further, in some examples, the anomaly detection server also receives a sequencing identifier for each analysis vector to facilitate such time sorting. The above description on machine learning assumes that training occurs during normal processing. In some examples, the training of the anomaly detection system may be conducted when a machine is (intentionally or unintentionally) in an anomalous state. In many examples, the ADI allows a user to categorize, tag, or otherwise annotate state, sequence, and timing data for a machine to indicate whether the underlying data is normal or anomalous. Thus, in some contexts, training may be performed using anomalous data. In such cases, the anomalous data is not used to describe the normal conditions but rather to better define anomalous data patterns for states, sequences, and timing. As it effectively applies an FSM model to identify all normal system states and behaviors (reflected in the changes in states over sequences and timing), the SQT data provides fundamental measures of normal behavior for a machine (or machines) within a process. In one respect, this approach may be referred to as “machine sanity” which allows the anomaly detection server, related systems, and human users to have a clear understanding of when states of machines, components, or multiple machines are anomalous. A machine learning model for a machine or machines may be deemed to be sufficiently trained when: (i) the machines have run for a predetermined amount of cycles that are expected to encompass normal behavior; (ii) the machines have completed cycles for all possible states of the n finite states; or (iii) the anomaly detection server is able to establish a sufficient statistical sample size to determine normal states, sequencing, and timing. In one example, a machine learning model is trained upon completion of four hundred (400) cycles. However, the determination of sufficient training time may vary depending upon the context of the machine, the age of the machine, and the complexity of the machine. Upon sufficient training, the anomaly detection server may use the machine learning model to create a signature or boundary conditions indicating normal behavior. In some examples, the anomaly detection server learns all trained combinations of data (i.e., all relevant combinations of states, sequences, and timing data) and categorizes such trained combination data for rapid checking. Such categorized data may be used to create the signature or boundary conditions. The patterns for normal states, sequences, and timing may be stored in any suitable database that is included within or in connection with the anomaly detection server. 
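One non-limiting way to picture the learning phase described above is the following Python sketch, which accumulates observed states, state-to-state transitions (sequences), and per-transition timing bounds from time-stamped training vectors; the sample data and simple bookkeeping are illustrative assumptions rather than the claimed machine learning model.

```python
# Hypothetical sketch of the learning phase: from time-stamped training analysis
# vectors, accumulate (i) the set of observed normal states, (ii) the observed
# state-to-state transitions (normal sequencing), and (iii) per-transition timing
# bounds. The data shapes below are assumptions made for illustration only.
def learn_normal_patterns(training_samples):
    """training_samples: list of (timestamp_in_seconds, analysis_vector) tuples."""
    samples = sorted(training_samples)      # time-sort before extracting sequences
    normal_states = set()
    normal_transitions = set()
    timing_bounds = {}                      # transition -> (min interval, max interval)
    for (t_prev, v_prev), (t_curr, v_curr) in zip(samples, samples[1:]):
        normal_states.update((v_prev, v_curr))
        transition = (v_prev, v_curr)
        normal_transitions.add(transition)
        interval = t_curr - t_prev
        low, high = timing_bounds.get(transition, (interval, interval))
        timing_bounds[transition] = (min(low, interval), max(high, interval))
    return {"states": normal_states,
            "transitions": normal_transitions,
            "timing": timing_bounds}

# Two training cycles through states a -> b -> c; the reverse orders are never
# seen, so a later b -> a observation would fall outside the learned transitions.
model = learn_normal_patterns(
    [(0.0, "a"), (1.2, "b"), (2.4, "c"), (3.6, "a"), (4.8, "b"), (6.0, "c")])
print(model["transitions"])
```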
The anomaly detection server also monitors each relevant machine for a process and scans the machine condition data obtained in such monitoring to obtain analysis vectors. The monitoring analysis vectors are compared to the learned patterns described above to determine whether the monitoring analysis vectors are within normal behaviors or outside of them. In other words, the anomaly detection server scans for discrepancies between the models for normal operations of the machine and the machine condition data by comparing: (i) analysis vectors for each state to known normal states; (ii) analysis vector groups reflecting sequences to known normal sequences; and (iii) analysis vector groups reflecting timing sequences to known normal timing sequences. To compare sequences, the anomaly detection server repeatedly processes the monitoring analysis vectors (and any timing or sequence indicators) to extract an order and processes the analysis vectors into ordered groups to identify monitored sequences. The anomaly detection server compares each monitored sequence to known normal sequences that are stored in the signature, boundary conditions, or otherwise. Similarly, to compare timing sequences the anomaly detection server repeatedly processes the monitoring analysis vectors (and any timing indicators such as timestamps) to extract a time sequenced set of analysis vectors. The anomaly detection server compares the time sequenced sets of analysis vectors to known timed sequences that are stored in the signature, boundary conditions, or otherwise. If the anomaly detection server determines that a discrepancy exists between the monitoring analysis vectors (whether individually or in sequence or timed sequence) and normal patterns, it transmits an alert indicating an anomaly or a potential anomaly. In one example, the alert is provided via the ADI or any other interface within the ADS. In one example, the anomaly detection server generates a visual indicator if the computer system identifies such a discrepancy between the model(s) for normal states, sequences, and timing and the actual operation of the machine indicated in the machine condition data. In one example, the anomaly detection server generates a “heat map” showing in which location (or station) the anomaly was detected, which machine is implicated, and approximately the percentage of completion of the cycle being performed when the anomaly was detected. (In such an example, the ADS is configured to track the physical or organizational location of each machine and component based on registration input. The ADS may further be configured to track the processing cycles based on registration input.) The heat map also may display colors associated with a predefined number of detected anomalies. In some examples, where anomalous data is used to train the machine learning model of the anomaly detection server, the analysis vectors for such anomalies may also be associated with details identifying components or parts that are anomalous and possible solutions to resolve the anomaly. In other examples, upon each alert of an anomaly, the ADS may capture information for resolution of the detected anomaly. In one example, the captured information may allow a user to input at the ADI that the anomaly was a false positive. In some examples, such false positive information is used by the anomaly detection server to retrain the models for normal states, sequences, and timing and avoid a future false positive that is the same or similar. 
In a second example, the captured information may allow a user to enter input at the ADI identifying the diagnosis of the anomaly, the applied solution, and any result information. The anomaly detection server uses such input to provide more sophisticated diagnostic and analytic recommendations when future anomalies are detected. As described above, in some examples the anomaly detection server applies the same approach to an entire process spanning multiple machines. Thus, in such examples an anomaly detection server may analyze varying analysis vector types (for each machine) to determine sequences and timed sequences of patterns for the process as a whole. In such examples, the anomaly detection server may then train to the entire process and monitor based on input from multiple machines. If the anomaly detection server detects an error, the ADI may present anomaly detection alerts and diagnoses in the context of the process as a whole. Once an abnormal state has been identified, partial pattern recognition may be used, which is much more complex and cannot be described by simple rules. Machine learning and hierarchical pattern matching may be used to more completely describe a normal running state and a partial anomalous state. Video data may also be correlated to more accurately capture the normal machine state by bringing in more supporting data from the environment. In some examples, the anomaly detection server is also configured to provide reporting and analytics on the health of a machine, a group of machines, a process, or an entire facility and indicate rates of anomalies and trends in anomalies. Such analytics may be used to ascertain the effectiveness of the ADS and to identify potential machines or processes that experience unusual anomaly rates or other patterns. The ADS may also provide data on, for example, sensitivity of the anomaly detection server or selectivity of the anomaly detection server. Sensitivity is a measure of total anomalies detected (“TA”) divided by total anomalies plus false normal (“FN”). As used herein, “false normal” is an indication where the ADS incorrectly reports that no anomaly is present when one is present. To facilitate capture of FNs, the ADI may receive input from users indicating that a particular pattern that was identified as normal was actually anomalous. Thus, the formula for sensitivity s may be described as: s=TA/(TA+FN). The anomaly detection server may further retrain based on sensitivity calculations to become more or less sensitive, depending upon parameters provided by users at the ADI. Specifically, if a user seeks the ADS to report more potential anomalies with a risk of false positives, the user may set a parameter to increase sensitivity. Conversely, if a user seeks the ADS to report fewer potential anomalies with a risk of false normal, the user may set a parameter to reduce sensitivity. In most examples, because users seek to reduce MTTR, parameters will be set in the ADI to avoid oversensitivity or under-sensitivity. The ADS may also report on selectivity, which is given as the total number of normal states detected (“TN”) divided by the total number of normal states plus false anomalies (“FA”). Thus, the formula for selectivity se may be described as: se=TN/(TN+FA). Like sensitivity, users may adjust the selectivity with parameters to cause the anomaly detection server to be more or less selective.
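The sensitivity and selectivity measures defined above reduce to simple ratios; a brief illustrative sketch follows, with invented example counts.

```python
# Illustrative computation of the sensitivity and selectivity measures above.
def sensitivity(total_anomalies, false_normals):
    # s = TA / (TA + FN)
    return total_anomalies / (total_anomalies + false_normals)

def selectivity(total_normals, false_anomalies):
    # se = TN / (TN + FA)
    return total_normals / (total_normals + false_anomalies)

print(sensitivity(total_anomalies=18, false_normals=2))      # 0.9
print(selectivity(total_normals=380, false_anomalies=20))    # 0.95
```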
Generally, the systems and methods described herein are configured to perform at least the following steps: receive a plurality of training analysis vectors associated with a monitored machine during a training period from the at least one sensor, wherein each of the training analysis vectors describe a condition of the monitored machine at a corresponding point in time; apply the training analysis vectors to a machine learning model to create a trained machine learning model configured to describe normal states in the monitored machine as indicated by training analysis vectors; receive a plurality of monitoring analysis vectors associated with the monitored machine during a monitoring period from the at least one sensor, wherein each of the monitoring analysis vectors describe a condition of the monitored machine at a corresponding point in time; apply the plurality of monitoring analysis vectors to the trained machine learning model to identify at least one discrepancy indicating an anomalous state in the monitored machine; transmit an alert indicating that an anomaly is detected in the monitored machine; receive a registration input defining finite states of the monitored machine; process the registration input to create a first map configured to convert machine condition data into a corresponding analysis vector representing a corresponding state of the monitored machine; receive a plurality of training machine condition data associated with the monitored machine during the training period; process the plurality of training machine condition data with the first map to receive the plurality of training analysis vectors; receive a plurality of monitoring machine condition data associated with the monitored machine during a monitoring period; process the plurality of monitoring machine condition data with the first map to receive the plurality of monitoring analysis vectors; receive the plurality of training analysis vectors associated with the monitored machine during the training period, wherein each of the training analysis vectors is associated with a timing indicator representing the relative time of each of the training analysis vectors; apply the training analysis vectors to a machine learning model to create the trained machine learning model configured to identify normal sequenced patterns in the monitored machine, wherein the normal sequenced patterns represent an ordered sequence of identified normal states; receive the plurality of monitoring analysis vectors associated with the monitored machine during the monitoring period, wherein each of the monitoring analysis vectors is associated with a timing indicator representing the relative time of each of the monitoring analysis vectors; process the plurality of monitoring analysis vectors with the corresponding timing indicators to obtain a sequenced monitoring pattern of monitored states; apply the sequenced monitoring pattern to the trained machine learning model to identify at least one discrepancy indicating an anomalous pattern in the monitored machine; transmit the alert indicating that an anomaly is detected in the monitored machine; receive a first user resolution input regarding the alert indicating whether the alert was associated with a false positive anomaly or a confirmed anomaly; train the machine learning model with the first user resolution input; receive a first user resolution input regarding the alert indicating a determined reason for the anomaly; train the machine learning model with the first user resolution 
input such that the machine learning model is configured to provide the determined reason for the anomaly when a corresponding future anomaly is detected; receive a user sensitivity input parameter indicating a preference for sensitivity of the machine learning model; train the machine learning model based on the sensitivity input; receive a user selectivity input parameter indicating a preference for sensitivity of the machine learning model; train the machine learning model based on the selectivity input; receive the plurality of training analysis vectors from a sensor in communication with the monitored machine; receive a set of image input from a camera of the monitored machine; and process the set of image input into the plurality of training analysis vectors. FIG.1is a schematic diagram of an example environment100for anomaly detection in a machine120by an anomaly detection server110. Specifically, anomaly detection server110includes processor111, memory112, input/output113, and communications device114. Anomaly detection server110is in communication with sensors121,122,123, and124in machine120. Sensors121,122,123, and124are configured to capture machine condition data and provide it via a suitable networking protocol (such as via network104or direct communication) to anomaly detection server110as described herein. In some examples, machine condition data is provided as analysis vectors and in others machine condition data is provided in any suitable format including analog signal data, digital signal data, audio data, video data, or image data. In some examples, machine120also includes camera125capable of capturing machine images and/or video showing internal or external conditions of machine120. Anomaly detection server110may also be in communication with user computing device130which includes corresponding processor131, memory132, input/output133, and communications device134. In some embodiments, the ADS and associated human machine interfaces (“HMI”) such as ADI may be provided via user computing device130or via anomaly detection server110directly. Thus, interfaces for registration, reporting, anomaly alerts, anomaly diagnoses, and anomaly resolution may be provided through either system110or130or any other suitable device connected thereto. FIG.2is a hierarchical flowchart200describing relationships between concepts used in anomaly detection of machines described herein. As described above, each machine120(shown inFIG.1) is associated with a machine purpose210. As used herein, a machine purpose describes the role of each machine in the relevant process, describing what result each machine provides for the process. To fulfill its machine purpose210, each machine120performs one or more phases220of operation. As used herein, a “phase” is a group or combination of steps which typically are associated with a real-world task performed by the machine at issue. In executing each phase220, each machine120performs multiple steps and experiences multiple states230. As used herein, the term “state” refers to unique condition of the system as determined by machine condition data or other data. In each state230, each machine120performs multiple functions240. As used herein, the term “function” represents a collection of values (e.g., hexadecimal values or bits) that represent sub-systems in a particular machine. In performing each “function”, each machine120has an associated bit value250. 
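For illustration only, the purpose/phase/state/function/bit hierarchy ofFIG.2can be pictured as a nested data structure; the welding-related names and bit values below are hypothetical.

```python
# Hypothetical nested representation of the FIG. 2 hierarchy for the welding
# example described above: purpose 210 -> phases 220 -> states 230 -> functions 240
# -> bit values 250. All names and values are invented for illustration.
welder = {
    "purpose": "weld chassis sections",                       # 210
    "phases": {                                               # 220
        "weld_inner_joint": {
            "steps": ["move_arm", "activate_torch", "reorient_torch"],
            "states": {                                       # 230
                "torch_active": {                             # functions 240 -> bits 250
                    "arm_position": "0b0101",
                    "torch_control": "0b0011",
                },
            },
        },
    },
}

# Walk the hierarchy from phases down to the individual function bit values.
for phase, detail in welder["phases"].items():
    for state, functions in detail["states"].items():
        for function, bits in functions.items():
            print(phase, state, function, bits)
```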
As used herein, “bits” are any suitable digit (e.g., binary or hexadecimal) representing the condition of a part or component. Bit250may represent a physical input, physical output, or a processing step. FIG.3is a block diagram of an example computing system300that may be used to perform one or more computing operations. While examples of the disclosure are illustrated and described herein with reference to the anomaly detection server110or user computing device130being a computing system300, aspects of the disclosure are operable with any computing system that executes instructions to implement the operations and functionality associated with the computing system300. The computing system300shows only one example of a computing environment for performing one or more computing operations and is not intended to suggest any limitation as to the scope of use or functionality of the disclosure. In some examples, the computing system300includes a system memory310(e.g., computer storage media) and a processor320coupled to the system memory310. The processor320may include one or more processing units (e.g., in a multi-core configuration). Although the processor320is shown separate from the system memory310, examples of the disclosure contemplate that the system memory310may be onboard the processor320, such as in some embedded systems. The processor320is programmed or configured to execute computer-executable instructions stored in the system memory310to detect anomalies in a machine or machines and to train a machine learning model to perform such detection. The system memory310includes one or more computer-readable media that allow information, such as the computer-executable instructions and other data, to be stored and/or retrieved by the processor320. Some examples include a computer program product embodied on a non-transitory computer-readable medium (e.g., system memory310) that the processor320executes to perform the steps described inFIG.4and to otherwise perform the functions of the anomaly detection server including provision of the ADS and ADI. By way of example, and not limitation, computer-readable media may include computer storage media and communication media. Computer storage media are tangible and mutually exclusive to communication media. For example, the system memory310may include computer storage media in the form of volatile and/or nonvolatile memory, such as read-only memory (ROM), programmable ROM (PROM), erasable PROM (EPROM), electrically erasable PROM (EEPROM), solid-state storage (SSS), flash memory, magnetic tape, a floppy disk, a hard disk, a compact disc (CD), a digital versatile disc (DVD), a memory card, random-access memory (RAM), static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), and/or any other medium that may be used to store desired information that may be accessed by the processor320. Computer storage media are implemented in hardware and exclude carrier waves and propagated signals. That is, computer storage media for purposes of this disclosure are not signals per se. A user (e.g., an engineer or technician) may enter commands and other input into the computing system300through one or more input devices330(e.g., a touchscreen, a keyboard, or a tablet) coupled to the processor320or via a user computing device130in communication with the anomaly detection server110. The input devices330are configured to receive information. 
Example input devices330include, without limitation, a pointing device (e.g., mouse, trackball, touch pad, joystick), a keyboard, a game pad, a controller, a microphone, a camera, a gyroscope, an accelerometer, a position detector, and an electronic digitizer (e.g., on a touchscreen). Information, such as text, images, video, audio, and the like, may be presented to a user via one or more output devices340(e.g., actuator130) coupled to the processor320. The output devices340are configured to convey information. Example output devices340include, without limitation, a monitor, a projector, a printer, a speaker, and a vibrating component. In some examples, an output device340is integrated with an input device330(e.g., a capacitive touch-screen panel, a controller including a vibrating component). As used herein, output devices340are configured to and capable of providing output that may be a signal to actuator130from the perspective of the computing system300or components embodied thereon, including a programmable logic controller ("PLC"). In one example, the output signal is a numeric value such as a binary value. The output signal may be represented with a length of one or more digits. In the example embodiment, a vector can contain sixteen (16) output signals or input signals from systems or machines that are sent to input device330and thereby represent the state of a system or machine. The vector may alternately be referred to as an analysis vector or a machine vector. One or more network components350may be used to operate the computing system300in a networked environment using one or more logical connections. Logical connections include, for example, local area networks, wide area networks, and the Internet. The network components350allow the processor320, for example, to convey information to and/or receive information from one or more remote devices, such as another computing system or one or more remote computer storage media. Network components350may include a network adapter, such as a wired or wireless network adapter or a wireless data transceiver. FIG.4is a flowchart of an example method400of anomaly detection, such as in the environment ofFIG.1. Anomaly detection server110(shown inFIG.1) is configured to cause processor111to receive410a plurality of training analysis vectors associated with a monitored machine during a training period from the at least one sensor, wherein each of the training analysis vectors describes a condition of the monitored machine at a corresponding point in time. Anomaly detection server110also causes processor111to apply420the training analysis vectors to a machine learning model to create a trained machine learning model configured to describe normal states in the monitored machine as indicated by the training analysis vectors. Anomaly detection server110also causes processor111to receive430a plurality of monitoring analysis vectors associated with the monitored machine during a monitoring period from the at least one sensor, wherein each of the monitoring analysis vectors describes a condition of the monitored machine at a corresponding point in time. Anomaly detection server110also causes processor111to apply440the plurality of monitoring analysis vectors to the trained machine learning model to identify at least one discrepancy indicating an anomalous state in the monitored machine. Anomaly detection server110also causes processor111to transmit450an alert indicating that an anomaly is detected in the monitored machine.
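For illustration only, the following Python sketch mirrors the training and monitoring flow of method400under simplifying assumptions: each analysis vector is a fixed-length tuple of bit values, and the machine learning model is stood in for by the set of vectors observed during the training period together with a Hamming-distance threshold. The function names, the sensitivity parameter, and the example vectors are illustrative assumptions and are not taken from the disclosure.

    # Minimal sketch of method 400: learn normal states from training vectors,
    # then flag monitoring vectors that deviate from every learned normal state.
    # The "model" here is simply the set of vectors seen during training; a real
    # deployment would substitute an actual machine learning model.

    def hamming(v1, v2):
        """Number of differing bit positions between two analysis vectors."""
        return sum(b1 != b2 for b1, b2 in zip(v1, v2))

    def train_model(training_vectors):
        """Step 420: capture the normal states indicated by the training vectors."""
        return {tuple(v) for v in training_vectors}

    def detect_anomalies(model, monitoring_vectors, sensitivity=1):
        """Steps 430-450: flag vectors whose distance from every normal state
        meets or exceeds the sensitivity threshold, and emit an alert for each."""
        alerts = []
        for t, vector in enumerate(monitoring_vectors):
            discrepancy = min(hamming(vector, normal) for normal in model)
            if discrepancy >= sensitivity:
                alerts.append((t, tuple(vector), discrepancy))
        return alerts

    # Example: 16-bit analysis vectors captured from a monitored machine.
    training = [(0,) * 16, (1,) + (0,) * 15, (1, 1) + (0,) * 14]
    monitoring = [(0,) * 16, (1, 0, 1, 1) + (0,) * 12]
    model = train_model(training)
    for timestamp, vector, score in detect_anomalies(model, monitoring, sensitivity=2):
        print(f"ALERT: anomaly at sample {timestamp}, discrepancy={score}")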
FIG.5is an illustration500of a series of analysis vectors or vectors or machine vectors that may be used by the example computing system300(shown inFIG.3) to perform anomaly detection as indicated inFIG.4. In this example, each system or machine provides, via a suitable sensor, input, or output, output or input signals to computing system300. In the illustrated example of illustration500, a series of vectors are shown for a particular machine. For simplicity of illustration, the indicated vectors are shown in binary with a bit indicated in a dash ("-") showing a 0 (or off) and a bit indicated in a filled-in space showing a 1 (or on). Thus, each vector captures the condition of a machine at a particular state, sequence, or timing. In the illustrated column510, a series of bit values are shown for a series of vectors. As shown in the associated text, at vector1, the bit value is off and indicates a Start process is OFF, at vector2, the bit value is on and indicates that a continue value is ON, at vector3, the bit value is on and indicates that a SS Relay Mode is ON, and at vector21, the bit value is on and indicates that F/O #3 Rinse is ON. Thus, over distinct vectors, the distinct bit values vary and capture the machine state distinctly. As described herein, bits are binary values but can be compressed into other units such as hexadecimal. FIG.6is an illustration of a functional map600that identifies portions of a system from which associated Functions and analysis vectors or vectors or machine vectors may be derived, which may be used by the example computing system300(shown inFIG.3) to perform anomaly detection as indicated inFIG.4. InFIG.6, functional states for Functions 1-6 are shown as represented in vectors (described in more detail inFIG.7) and associated with corresponding physical portions of a machine or system indicated in610,620,630,640,650, and660. As described above, defining the portions610,620,630,640,650, and660associated with Functions 1-6 requires the creation of a functional map (whether by a human, through automation, or a combination thereof). In some examples, portions may be defined by discrete and obvious portions of a machine, while in others, portions may span the components of one or more machines as shown in portion660and corresponding Function 6. (For clarity, portion610corresponds to Function 1, portion620corresponds to Function 2, portion630corresponds to Function 3, portion640corresponds to Function 4, portion650corresponds to Function 5, and portion660corresponds to Function 6. Further, the illustrative examples adjacent to the right of each Function are illustrative vectors or analysis vectors describing the state of each corresponding portion610,620,630,640,650, and660.) FIG.7is a graphical description and illustration700of example analysis vectors or vectors or machine vectors that may be used by the example computing system ofFIG.3to perform anomaly detection as indicated inFIG.4. For clarity, a vector (or analysis vector or machine vector) may be defined as follows. In one example, a vector or a machine vector or an analysis vector is represented as a numeric integer or string of values (of hexadecimals, decimals, binary, or any other suitable base) that contain one or more machine state(s) as shown in710. Within the vector are substrings (indicated in bold in720) that contain one or more complete function states and are based on registers from one or more input values or output values.
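As an illustration of the vector structure just described, the sketch below packs sixteen input or output signals into a single machine vector and then slices that vector into per-function substrings using a functional map. The bit positions and function names in the map are hypothetical examples for illustration only; they are not values taken from FIG.6or FIG.7.

    # Illustrative sketch: a machine vector packs individual bit signals, and a
    # functional map slices the vector into substrings that each hold one
    # complete function state.

    def pack_vector(bits):
        """Pack a list of 0/1 signals (index 0 = most significant) into an integer."""
        value = 0
        for bit in bits:
            value = (value << 1) | bit
        return value

    def function_states(vector_value, width, functional_map):
        """Slice a packed vector into per-function substrings using a functional
        map of {function_name: (start_bit, end_bit)}, most-significant bit first."""
        bit_string = format(vector_value, f"0{width}b")
        return {name: bit_string[start:end + 1]
                for name, (start, end) in functional_map.items()}

    signals = [0, 1, 1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 1]   # 16 input/output signals
    vector = pack_vector(signals)                                 # 0x6081 in this example
    functional_map = {"Function 1": (0, 3), "Function 2": (4, 9), "Function 3": (10, 15)}
    print(hex(vector), function_states(vector, 16, functional_map))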
FIG.8is a second illustration800of a series of analysis vectors or vectors or machine vectors that may be used by the example computing system300(shown inFIG.3) to perform anomaly detection as indicated inFIG.4, from the perspective of an anomaly detection interface ("ADI") that may be provided by the example computing system. As described above, the ADI provides an example of the output a user will see. In the illustration800, the ADI simplifies output so that minimal user interaction is required. In one aspect, the ADI presents anomaly severity using suitable user indications (e.g., color coding, styling, or alerts) to indicate when an anomaly is severe. In the illustrated example of illustration800, anomalies are differently shaded to indicate severity. In one example, anomalies are shaded according to a scale of colors or gradients that move from normal to mild anomalies to severe anomalies. In illustration800, the ADI also indicates the presence of anomalies at varying cycle completion percentages. In the example, the percentages vary by decile, moving from 0-100 percent in ten percent increments. In illustration800, an anomaly scale810is shown to indicate the relative severity of anomalies given by a particular color value. The following includes definitions of selected terms employed herein. The definitions include various examples and/or forms of components that fall within the scope of a term and that can be used for implementation. The examples are not intended to be limiting. A "bus", as used herein, refers to an interconnected architecture that is operably connected to other computer components inside a computer or between computers. The bus can transfer data between the computer components. The bus can be a memory bus, a memory controller, a peripheral bus, an external bus, a crossbar switch, and/or a local bus, among others. The bus can also be a vehicle bus that interconnects components inside a vehicle using protocols such as Media Oriented Systems Transport (MOST), Controller Area Network (CAN), Local Interconnect Network (LIN), among others. "Computer communication", as used herein, refers to a communication between two or more computing devices (e.g., computer, personal digital assistant, cellular telephone, network device) and can be, for example, a network transfer, a file transfer, an applet transfer, an email, a hypertext transfer protocol (HTTP) transfer, and so on. A computer communication can occur across, for example, a wireless system (e.g., IEEE 802.11), an Ethernet system (e.g., IEEE 802.3), a token ring system (e.g., IEEE 802.5), a local area network (LAN), a wide area network (WAN), a point-to-point system, a circuit switching system, a packet switching system, among others. A "disk", as used herein can be or include, for example, magnetic tape, a floppy disk, a hard disk, a compact disc (CD), a digital versatile disc (DVD), a memory card, and/or a flash drive. The disk can store an operating system that controls or allocates resources of a computing device. A "database", as used herein can refer to a table, a set of tables, and a set of data stores and/or methods for accessing and/or manipulating those data stores. Some databases can be incorporated with a disk as defined above. A "memory", as used herein can include non-volatile memory and/or volatile memory. Non-volatile memory can include, for example, read-only memory (ROM), programmable ROM (PROM), erasable PROM (EPROM), electrically erasable PROM (EEPROM), solid-state drives, and/or disks.
Volatile memory can include, for example, random-access memory (RAM), static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), and/or double data rate SDRAM (DDR SDRAM). The memory can store an operating system that controls or allocates resources of a computing device. An “operable connection”, or a connection by which entities are “operably connected”, is one in which signals, physical communications, and/or logical communications can be sent and/or received. An operable connection can include a wireless interface, a physical interface, a data interface and/or an electrical interface. A “processor”, as used herein, processes signals and performs general computing and arithmetic functions. Signals processed by the processor can include digital signals, data signals, computer instructions, processor instructions, messages, a bit, a bit stream, or other means that can be received, transmitted and/or detected. Generally, the processor can be a variety of various processors including multiple single and multicore processors and co-processors and other multiple single and multicore processor and co-processor architectures. The processor can include various units to execute various functions. A “unit” or “module”, as used herein, includes, but is not limited to, non-transitory computer readable medium that stores instructions, instructions in execution on a machine, hardware, firmware, software in execution on a machine, and/or combinations of each to perform a function(s) or an action(s), and/or to cause a function or action from another unit, module, method, and/or system. A unit or module may also include logic, a software controlled microprocessor, a discrete logic circuit, an analog circuit, a digital circuit, a programmed logic device, a memory device containing executing instructions, logic gates, a combination of gates, and/or other circuit components. Multiple units or modules may be combined into one unit and single units or modules may be distributed among multiple units or modules. A “value” and “level”, as used herein can include, but is not limited to, a numerical or other kind of value or level such as a percentage, a non-numerical value, a discrete state, a discrete value, a continuous value, among others. The term “value of X” or “level of X” as used throughout this detailed description and in the claims refers to any numerical or other kind of value for distinguishing between two or more states of X. For example, in some cases, the value or level of X may be given as a percentage. In other cases, the value or level of X could be a value in a range. In still other cases, the value or level of X may not be a numerical value, but could be associated with a given discrete state, such as “not X”, “slightly X”, “X”, “very X” and “extremely X.” Examples are described herein and illustrated in the accompanying drawings to disclose aspects of the disclosure and also to enable a person skilled in the art to practice the aspects, including making or using the above-described systems and executing or performing the above-described methods. The anomaly detection systems and methods described function to improve the technological field of machine maintenance, anomaly detection, and industrial production. Having described aspects of the disclosure in terms of various examples with their associated operations, it will be apparent that modifications and variations are possible without departing from the scope of the disclosure as defined in the appended claims. 
That is, aspects of the disclosure are not limited to the specific examples described herein, and all matter contained in the above description and shown in the accompanying drawings shall be interpreted as illustrative and not in a limiting sense. Components of the systems and/or operations of the methods described herein may be utilized independently and separately from other components and/or operations described herein. Moreover, the methods described herein may include additional or fewer operations than those disclosed, and the order of execution or performance of the operations described herein is not essential unless otherwise specified. That is, the operations may be executed or performed in any order, unless otherwise specified, and it is contemplated that executing or performing a particular operation before, contemporaneously with, or after another operation is within the scope of the disclosure. Although specific features of various examples of the disclosure may be shown in some drawings and not in others, this is for convenience only. In accordance with the principles of the disclosure, any feature of a drawing may be referenced and/or claimed in combination with any feature of any other drawing. It should be apparent from the foregoing description that various examples may be implemented in hardware. Furthermore, various examples may be implemented as instructions stored on a non-transitory machine-readable storage medium, such as a volatile or non-volatile memory, which may be read and executed by at least one processor to perform the operations described in detail herein. A machine-readable storage medium may include any mechanism for storing information in a form readable by a machine, such as a personal or laptop computer, a server, or other computing device. Thus, a non-transitory machine-readable storage medium excludes transitory signals but may include both volatile and non-volatile memories, including but not limited to read-only memory (ROM), random-access memory (RAM), magnetic disk storage media, optical storage media, flash-memory devices, and similar storage media. It should be appreciated by those skilled in the art that any block diagrams herein represent conceptual views of illustrative circuitry embodying the principles of the disclosure. Similarly, it will be appreciated that any flow charts, flow diagrams, state transition diagrams, pseudo code, and the like represent various processes which may be substantially represented in machine readable media and so executed by a computer or processor, whether or not such computer or processor is explicitly shown. When introducing elements of the disclosure or the examples thereof, the articles “a”, “an”, “the” and “said” are intended to mean that there are one or more of the elements. References to an “embodiment” or an “example” of the present disclosure are not intended to be interpreted as excluding the existence of additional embodiments or examples that also incorporate the recited features. The terms “comprising,” “including,” and “having” are intended to be inclusive and mean that there may be elements other than the listed elements. The phrase “one or more of the following: A, B, and C” means “at least one of A and/or at least one of B and/or at least one of C.” The patentable scope of the invention is defined by the claims, and may include other examples that occur to those skilled in the art. 
Such other examples are intended to be within the scope of the claims if they have structural elements that do not differ from the literal language of the claims, or if they include equivalent structural elements with insubstantial differences from the literal language of the claims.
11943243 | DESCRIPTION OF EMBODIMENTS (Underlying Knowledge Forming Basis of the Present Disclosure) When an anomaly occurs in a communication network, it is important not only to specify an anomalous frame contained in frames that have been transmitted, but also to understand the details of the anomaly and respond appropriately and quickly according to the details in order to prevent or minimize damage. Here, the “details of the anomaly” are, for example, the location of the anomaly in the frame, a conceivable cause of the anomaly, and a danger level of the anomaly. However, when using the past methods described in the Background Art section, even if the degree of anomaly of the frame or communication can be understood, information on the details of the anomaly, mentioned here, cannot be obtained. An anomaly detection method according to one aspect of the present disclosure, conceived of in order to solve such a problem, is an anomaly detection method that, in a communication network system, determines whether each of frames, which are contained in observation data constituted by a collection of frames transmitted and received over the communication network system and observed in a predetermined period, is anomalous, and outputs an anomalous part of a payload in a frame determined to be anomalous. The anomaly detection method includes obtaining a data distribution of a plurality of feature amounts pertaining to a part of the payload included in the frame the part being at least one bit, detecting whether or not the frame contained in the observation data is anomalous, and outputting the anomalous part. In the obtaining, the data distribution is obtained for a collection of frames that are sent and received over the communication network system, the collection being obtained at a different timing from a timing at which the observation data is obtained. In the detecting, a difference between the data distribution obtained in the obtaining and a data distribution of a feature amount extracted from the frame contained in the observation data is calculated, and the frame is determined to be an anomalous frame when the frame has a feature amount for which the difference is at least a predetermined value. In the outputting, when a frame determined to be an anomalous frame in the detecting is present, an anomaly contribution level is calculated for the plurality of feature amounts that have been extracted from the anomalous frame, and an anomalous payload part is output, the anomalous payload part being at least one part in the payload that corresponds to a feature amount for which the anomaly contribution level is at least a predetermined value. Through this, not only are anomalous frames detected from a large number of frames being transmitted and received on the communication network, but information pertaining to anomalous parts of the payloads in the frames is obtained as well. Using the details of the anomaly identified in this manner makes it possible to respond more quickly and appropriately to the anomaly. Additionally, the anomaly detection method may further include determining an anomaly type, wherein in the determining of an anomaly type, an anomalous payload part length is specified based on the anomalous payload part, and the anomaly type is determined according to the anomalous payload part length. Through this, information pertaining to what type of anomaly has occurred is obtained with respect to the anomalous frame that has been detected. 
Using the additional details of the anomaly identified in this manner makes it possible to respond more quickly and appropriately to the anomaly. Additionally, in the determining of an anomaly type, the anomaly type may be determined to be a state value anomaly when the anomalous payload part length is within a first range, a sensor value anomaly when the anomalous payload part length is within a second range greater than the first range, and a trial attack anomaly when the anomalous payload part length is within a third range longer than the second range. For example, the first range may be a range having an upper limit of no greater than 4 bits, the second range may be a range having a lower limit of at least 8 bits and an upper limit of no greater than 16 bits, and the third range may be a range having a lower limit of 32 bits. In this manner, the type of the anomaly, which is a detail of the anomaly, can be determined based on the bit length of the anomalous part. Using the details of the anomaly identified in this manner makes it possible to respond more quickly and appropriately to the anomaly. Additionally, the anomaly detection method may further include determining an anomaly level, wherein in the determining of an anomaly level, the anomaly level is determined to be higher when a plurality of types of frames have been determined to be anomalous in the detecting and the anomalous payload part output in the outputting differs among the plurality of types of frames than when the anomalous payload part is the same among the plurality of types of frames. Through this, the level of danger of the anomaly (the danger level) can be determined from the type of the frame determined to be anomalous and information pertaining to the part of the payload that contributes to the anomaly in the frame. By using the details of the anomaly identified in this manner, when, for example, a plurality of anomalies have occurred, a response that is more appropriate in terms of safety can be carried out, i.e., prioritizing the response to an anomaly having a higher danger level. Additionally, the anomaly detection method may further include determining an anomaly level, wherein in the determining of an anomaly level, the anomaly level is determined to be higher when a plurality of types of frames have been determined to be anomalous in the detecting and the anomaly type determined in the determining of an anomaly type differs among the plurality of types of frames than when the anomaly type is the same among the plurality of types of frames. Through this, the danger level of an anomaly can be determined from a combination of the number of types of frames determined to be anomalous on the communication network and the number of types of anomalies occurring in the frames. By using the details of the anomaly identified in this manner, when, for example, a plurality of anomalies have occurred, a response that is more appropriate in terms of safety can be carried out, i.e., prioritizing the response to an anomaly having a higher danger level. Additionally, the anomaly detection method may further include determining an anomaly level, wherein in the determining of an anomaly level, the anomaly level is determined to be lower when at least one type of frame has been determined to be anomalous in the detecting and the anomaly type determined in the determining of an anomaly type is only a trial attack anomaly than when the anomaly type determined does not include the trial attack anomaly.
Through this, it can be determined, from the type of the anomaly that has occurred, whether or not the anomaly has a low danger level. By using the details of the anomaly identified in this manner, when, for example, a plurality of anomalies have occurred, a response that is more appropriate in terms of safety can be carried out, i.e., prioritizing the response to an anomaly having a higher danger level. Additionally, when the danger level is low, restrictions on the functionality of the communication network system, made as a response to the anomaly, can be loosened, which makes it possible to reduce the convenience sacrificed for the user. Additionally, the anomaly detection method may further include determining an anomaly level, wherein in the determining of an anomaly level, when at least one type of frame has been determined to be anomalous in the detecting, the anomaly level is determined based on a predetermined formula that takes, as a parameter, at least one of the type of the frame determined to be anomalous, a number of types of frames determined to be anomalous, the anomalous payload part output in the outputting, and the anomaly type determined in the determining of an anomaly type. Through this, the danger level of an anomaly can be determined from a plurality of conditions pertaining to the details of an anomaly in a frame detected as being anomalous. By using the details of the anomaly identified in this manner, when, for example, a plurality of anomalies have occurred, a response that is more appropriate in terms of safety can be carried out, i.e., prioritizing the response to an anomaly having a higher danger level. Additionally, in the determining of an anomaly type, when a plurality of the anomalous payload parts are included in a single frame and a number of intermediate bits between the plurality of the anomalous payload parts is no greater than a predetermined standard, the anomalous payload part and the intermediate bits may be collectively treated as a single anomalous payload part. This increases the likelihood of more accurately determining the anomaly type based on the anomalous part length in the payload in the frame determined to be anomalous. Additionally, the communication network system may be an in-vehicle network system. Through this, a large number of frames transmitted and received in the in-vehicle network system can be monitored, and anomalous frames included therein can be detected; furthermore, the details of an anomaly can be understood, and an appropriate response can be taken more quickly. This makes it possible to improve the safety of the automobile. Additionally, an anomaly detection device according to one embodiment of the present disclosure is an anomaly detection device that, in a communication network system, determines whether a frame, which is contained in observation data constituted by a collection of frames transmitted and received over the communication network system and observed in a predetermined period, is anomalous, and outputs an anomalous part of a payload in a frame determined to be anomalous. 
The anomaly detection device includes: a reference model holder that holds a data distribution of a plurality of feature amounts pertaining to a part of the payload included in the frame, the part being at least one bit; an anomaly detector that determines whether or not the frame contained in the observation data is anomalous; and an anomalous part outputter that, when the anomaly detector has detected an anomalous frame, calculates an anomaly contribution level for the plurality of feature amounts that have been extracted from the anomalous frame, and outputs an anomalous payload part, the anomalous payload part being at least one part contained in the frame and corresponding to a feature amount for which the anomaly contribution level is at least a predetermined value. The reference model holder holds the data distribution for a collection of frames that are sent and received over the communication network system, the collection being obtained at a different timing from a timing at which the observation data is obtained. The anomaly detector calculates a difference between the data distribution held by the reference model holder and a data distribution of a feature amount extracted from the frame contained in the observation data, and determines that the frame is an anomalous frame when the frame has a feature amount for which the difference is at least a predetermined value. Through this, not only are anomalous frames detected from a large number of frames being transmitted and received on the communication network, but information pertaining to anomalous parts of the payloads in the frames is obtained as well. Using the details of the anomaly identified in this manner makes it possible to respond more quickly and appropriately to the anomaly. Note that these comprehensive or specific aspects may be realized by a system, a method, an integrated circuit, a computer program, or a computer-readable recording medium such as a CD-ROM, or may be implemented by any desired combination of devices, systems, methods, integrated circuits, computer programs, and recording media. Embodiments of an anomaly detection method and an anomaly detection device according to the present disclosure will be described hereinafter with reference to the drawings. Note that the following embodiments describe comprehensive or specific examples of the present disclosure. The numerical values, shapes, materials, constituent elements, arrangements and connection states of constituent elements, steps, orders of steps, and the like described in the following embodiments are provided only for exemplary purposes, and are not intended to limit the present disclosure. Embodiment The following will describe a method for detecting an anomalous frame mixed in with frames transmitted and received in a communication network system, specifying an anomalous part in the frame, and determining the type and danger level of the anomaly. These descriptions will use, as an example, an in-vehicle network anomaly detection system including a vehicle and a server, the vehicle being provided with an in-vehicle network system in which a plurality of electronic control units (ECUs) communicate over a network configured using a CAN bus, and the server detecting an anomalous frame. 1.1 Overview of In-Vehicle Network Anomaly Detection System FIG.1is a diagram illustrating an overview of the in-vehicle network anomaly detection system according to the present embodiment. 
An in-vehicle network anomaly detection system is configured by connecting anomaly detection server60and vehicle10over network20, which serves as a communication path. Network20can include the Internet or a dedicated line. The in-vehicle network system provided in vehicle10includes a plurality of ECUs that communicate over an in-vehicle bus (a CAN bus). These ECUs are connected to various types of devices in the vehicle, such as control devices, sensors, actuators, user interface devices, and the like. In the present embodiment, each ECU in the in-vehicle network system communicates according to the CAN protocol. Types of frames in the CAN protocol include data frames, remote frames, overload frames, and error frames. Here, the descriptions will focus mainly on data frames. The CAN protocol defines a data frame as including a data field that stores data, a DLC (Data Length Code) that indicates the data length of the data field, and an ID field that stores an ID indicating the type based on the data stored in the data field. Note that the anomaly detection method or the anomaly detection device according to the present embodiment can also be applied in an communication network system that uses a CAN protocol frame type aside from data frame, or uses a different communication protocol entirely. 1.2 Configuration of In-Vehicle Network System FIG.2is a diagram illustrating an example of the configuration of the in-vehicle network system provided in vehicle10. The in-vehicle network system in vehicle10includes nodes such as a plurality of ECUs (ECUs100,101,200,201,300,301,302,400,401) connected to buses (CAN buses)1000,2000,3000,4000and5000, as well as gateway900that relays communication among these buses. Note that gateway900is also an ECU. Although not illustrated inFIG.2, the in-vehicle network system can include many more ECUs. An ECU is a device that includes, for example, a processor (a microprocessor), digital circuits such as memory, analog circuits, communication circuits, and the like. The memory is ROM (Read-Only Memory) and RAM (Random Access Memory), which can store a control program (computer program) executed by the processor. For example, the processor realizes various functions of the ECU by operating in accordance with the control program. Note that the computer program is a combination of a plurality of command codes for the processor to realize a predetermined function. Powertrain system ECUs pertaining to driving vehicle10, such as controlling a motor, fuel, a battery, and the like, are connected to bus1000. ECU (engine ECU)100connected to engine110and ECU (transmission ECU)101connected to transmission111are examples of the powertrain system ECUs in the present embodiment. Chassis system ECUs relating to the control of steering and braking of vehicle10, such as “turning”, “stopping”, and the like, are connected to bus2000. ECU (brake ECU)200connected to brakes210and ECU (steering ECU)201connected to steering211are examples of the chassis system ECUs in the present embodiment. ECUs related to information systems, such as functions that recognize, determine, and control driving assistance based on image information, functions related to an audio head unit, and vehicle-to-vehicle communication, are connected to bus3000. ECU300, ECU301, and ECU302, which are connected to camera310, car navigation system311, and telematics control unit (TCU)312, respectively, are examples of the ECUs related to information systems in the present embodiment. 
Body system ECUs related to control of vehicle equipment such as doors, air conditioning, blinkers, and the like are connected to bus4000. ECU400connected to doors410and ECU401connected to lights411are examples of the body system ECUs in the present embodiment. Diagnostic port510, which is an interface for communicating with an external diagnostic tool (fault diagnostic tool), such as OBD2 (On-Board Diagnostics second generation), is connected to bus5000. Each of the above-described ECUs (ECU100,200, and the like) obtains information indicating a state of the connected device (engine110, brakes210, and the like), and periodically transmits a data frame and the like expressing that state (data frames may be referred to simply as “frames” hereinafter) to the in-vehicle network system, i.e., to the CAN bus. Gateway900is an ECU that transfers data among a plurality of different communication paths. To describe this with reference to the example inFIG.2, gateway900is connected to bus1000, bus2000, bus3000, bus4000, and bus5000. In other words, gateway900is an ECU having a function of transferring frames received from one bus to another bus under set conditions (i.e., a destination bus selected according to the conditions). ECU302has a function of receiving and holding frames flowing in bus3000and periodically uploading those frames to anomaly detection server60. The frames are uploaded from TCU312to anomaly detection server60over network20, which includes a communication line such as a mobile phone line or the like. 1.3 Configuration of Anomaly Detection Server FIG.3is a block diagram illustrating the functional configuration of server (anomaly detection server)60. Anomaly detection server60, which is for handling an improper frame transmitted over the in-vehicle network system of vehicle10, is implemented by, for example, at least one computer including a processor, memory, a communication interface, and the like. Anomaly detection server60includes communicator610, anomaly detector620, anomalous part specifier630, attack type determiner640, attack level determiner650, result outputter660, reference model holder670, attack type determination table680, and attack level determination table690. The functions of reference model holder670, attack type determination table680, and the attack level determination table690can be realized by data held in a predetermined configuration in a storage medium such as memory or a hard disk, for example. These data will be described later using examples. Additionally, the functions of anomaly detector620, anomalous part specifier630, attack type determiner640, attack level determiner650, and result outputter660can be realized by a processor executing a control program stored in memory, for example. Communicator610is realized by a communication interface, a processor that executes a control program stored in memory, and the like. Communicator610receives information pertaining to the in-vehicle network system of vehicle10by communicating with vehicle10over network20. The information pertaining to the in-vehicle network system can include, for example, the details of frames flowing in the CAN buses of the in-vehicle network system (payload information) and information pertaining to reception timings (intervals, frequencies, and the like). Anomaly detector620determines whether data (observation data set D′) of a log of the in-vehicle network system, communicated from communicator610, is anomalous. 
At this time, an in-vehicle network log during normal travel (reference model data set D), held in reference model holder670, is referenced, and it is determined whether or not the observation data set D′ contains data of an anomalous frame based on a difference between the observation data set D′ and the reference model data set D. An anomaly detection method performed by density ratio estimation, for example, can be used to determine whether observation data set D′ is anomalous. Density ratio estimation is a technique to detect locations where a distribution of reference model data set D differs from a distribution of observation data set D′. For example, an anomaly caused by an attack using a data frame that does not contain an outlier is difficult to detect with methods that use only outlier detection. However, this technique can detect even this kind of anomaly based on a difference in the distribution of values from normal data. An example of the density ratio estimation algorithm will be described below. The density ratio estimation algorithm trains a classifier to classify normal data and observation data by setting a label of the data of each data frame during normal travel, which constitutes reference model data set D, to 0, and setting a label of the data of each data frame in observation data set D′ to 1. Models such as an MLP (Multi Layer Perceptron), logistic regression, random forest, the k-nearest neighbor method, and the like can be used for the classifier. When the observation data corresponding to one data frame (described in detail later) is represented by x, density ratio r(x) can be found from Bayes' theorem, using Equation 1 below. r(x) = p_D(x)/p_D′(x) = p(x|y=0)/p(x|y=1) = [p(y=1)p(y=0|x)]/[p(y=0)p(y=1|x)] (Equation 1) Here, p(y=1|x) represents a probability that observation data x belongs to observation data set D′, and is the output of the classifier. Additionally, p(y=0|x) represents a probability that observation data x belongs to the normal data set (i.e., reference model data set D), and is obtained by subtracting the output of the classifier from 1. p(y=1) and p(y=0) are ratios of the sizes of the observation data set and reference model data set to the entire data set, respectively. When the absolute value of r(x) exceeds a predetermined threshold, i.e., when the classifier determines that the probability of observation data x belonging to observation data set D′ (or reference model data set D) is high, it is determined that (the data frame corresponding to) observation data x is anomalous. Observation data x is a feature amount extracted from the payload in a single data frame, e.g., a 64-dimensional feature amount in which each bit value of the data field contained in the CAN data frame is one feature. Note that each feature amount does not have to be a corresponding bit value of the data field, and can be obtained by segmenting the entire data field value at predetermined bit lengths. As such, observation data x may be, for example, a 16-dimensional feature amount taking 16 values obtained by segmenting the values of a 64-bit data field every 4 bits as a single feature amount, or an 8-dimensional feature amount taking 8 values obtained by segmenting the values every 8 bits as a single feature amount. The segmentation need not be performed at bits of a fixed length. For example, feature amounts corresponding to each of sub-fields contained in the payload, each of which has a meaning, may be extracted.
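By way of illustration only, the following Python sketch (assuming the numpy and scikit-learn libraries are available, and using logistic regression as one of the classifier choices listed above) trains such a classifier on the labeled reference and observation data and evaluates Equation 1 for each observation. The function name, the threshold value, and the small smoothing constant are illustrative assumptions rather than part of the embodiment.

    # Sketch of the density ratio estimation step: a classifier separates the
    # reference model data set D (label 0) from the observation data set D'
    # (label 1), and Equation 1 converts its output into the ratio r(x).

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    def density_ratio(reference_data, observation_data, threshold=10.0):
        X = np.vstack([reference_data, observation_data])            # e.g. 64-dim bit features
        y = np.concatenate([np.zeros(len(reference_data)), np.ones(len(observation_data))])
        clf = LogisticRegression(max_iter=1000).fit(X, y)

        p_y1 = len(observation_data) / len(X)                         # p(y=1)
        p_y0 = len(reference_data) / len(X)                           # p(y=0)
        p_y1_given_x = clf.predict_proba(observation_data)[:, 1]      # p(y=1|x), classifier output
        p_y0_given_x = 1.0 - p_y1_given_x                             # p(y=0|x)

        r = (p_y1 * p_y0_given_x) / (p_y0 * p_y1_given_x + 1e-12)     # Equation 1
        anomalous = np.abs(r) > threshold                             # criterion described above
        return clf, r, anomalous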
The above-described classifier may be prepared and trained for each of the IDs contained in the data frames, or a single classifier may be trained using data frames of a predetermined combination of IDs or data frames of all IDs. Anomaly detector620communicates the ID and the feature amount of the data frame determined to be anomalous, and information of the classifier used in the determination, to anomalous part specifier630. Based on the information communicated by anomaly detector620, anomalous part specifier630calculates a degree of contribution to the anomaly (also called an "anomaly contribution level" hereinafter) of each feature amount in the payload in the data frame determined to be anomalous (here referring to each feature amount constituting a high-dimensional feature amount extracted from one data frame). Anomaly contribution level c_i of feature amount i is obtained by differentiating density ratio r(x) with respect to the i-th component of input x (see Equation 2). c_i = ∂r(x)/∂x_i (Equation 2) Specifically, the anomaly contribution level is an amount of change in density ratio r(x) when a (bit) inversion or a small change is made to the value of feature amount i. Anomalous part specifier630calculates this anomaly contribution level for each feature amount i of the data frame determined to be anomalous, and determines that feature amount i indicating an anomaly contribution level of at least a predetermined threshold is a feature amount that contributes to the anomaly of the data frame in question. Anomalous part specifier630then specifies a bit position, in the payload, of the part indicating this feature amount as an anomalous payload part. In addition to the information communicated from anomaly detector620, anomalous part specifier630communicates the feature amount determined to be contributing to the anomaly and information on the specified anomalous payload part to attack type determiner640. Attack type determiner640has a function of determining a type of an attack that produced the anomalous frame by referring to the information communicated from anomalous part specifier630and attack type determination table680. Attack type determiner640first specifies an anomalous part length in the payload from the anomalous payload part communicated from anomalous part specifier630. For example, when a first feature amount, a second feature amount, a third feature amount, and so on correspond to the most significant bit, the second-most significant bit, the third-most significant bit, and so on of the payload, respectively, a range in which the bit positions of the anomalous part are continuous is determined to be the anomalous part contributing to a common anomaly. Then, when the first feature amount to a tenth feature amount are determined to be anomalous payload parts, the 10 bits from bit positions 1 through 10 of the payload are specified as the anomalous part length. Note that the method of specifying the anomalous part length is not limited to the method described above. For example, when the first feature amount to a fourth feature amount and a sixth feature amount to the tenth feature amount, described above, are anomalous parts, the first feature amount to the tenth feature amount, which include a fifth feature amount, may be handled as one continuous anomalous payload part, and the anomalous part length may be set to 10 bits.
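As an illustration of this part-specification step, the sketch below approximates Equation 2 numerically: it flips each bit-valued feature, measures the resulting change in the density ratio supplied by a caller-provided ratio function, and groups bit positions whose contribution meets a threshold into anomalous payload parts, merging runs separated by no more than one intermediate bit. The function names, the threshold, and the max_gap parameter are illustrative assumptions, not elements of the embodiment.

    # Sketch of anomalous-part specification: c_i is approximated by the change
    # in the density ratio when bit i is inverted, and high-contribution bit
    # positions are grouped into (start, end) anomalous payload parts.

    def anomaly_contributions(ratio_fn, x):
        """c_i ~ |r(x with bit i flipped) - r(x)| for each bit-valued feature i."""
        base = ratio_fn(x)
        contributions = []
        for i in range(len(x)):
            flipped = list(x)
            flipped[i] = 1 - flipped[i]
            contributions.append(abs(ratio_fn(flipped) - base))
        return contributions

    def anomalous_payload_parts(contributions, threshold, max_gap=1):
        """Group bit positions with contribution >= threshold into (start, end) parts,
        merging adjacent parts separated by at most max_gap intermediate bits."""
        positions = [i for i, c in enumerate(contributions) if c >= threshold]
        parts = []
        for pos in positions:
            if parts and pos - parts[-1][1] <= max_gap + 1:
                parts[-1][1] = pos            # extend the current part across a small gap
            else:
                parts.append([pos, pos])      # start a new anomalous payload part
        return [(start, end) for start, end in parts]

    # e.g. high contributions at bit positions 0-3 and 5-9 merge into one
    # 10-bit anomalous payload part (0, 9), matching the example above.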
Although when the number of bits between two adjacent anomalous payload parts (called “intermediate bits” hereinafter) is 1, the adjacent anomalous payload parts and the intermediate bits are handled together as a single anomalous payload part in this example, the configuration is not limited thereto. The length (number of bits) of the intermediate bits handled as a single anomalous payload part by combining adjacent anomalous payload parts and intermediate bits is a matter of design, which can be determined separately. In other words, the number of bits serving as a reference for intermediate bits to be handled in this manner may be a value greater than 1. Based on the ratio of the number of bits in the intermediate bits to the total number of bits in the plurality of anomalous payload parts and the intermediate bits therebetween, those items may be handled as a single anomalous payload part when the ratio is a predetermined value or lower. Next, attack type determiner640determines an attack type in accordance with the anomalous part length (number of bits). For example, when the anomalous part length is 4 bits or less, attack type determiner640determines a “state value spoof”, assuming that through the attack, the value has been spoofed or tampered with in the part occupied by a flag indicating a state or a value indicating the state. Additionally, for example, when the anomalous part length is between 5 bits and 31 bits inclusive, attack type determiner640determines a “sensor value spoof”. Additionally, for example, when the anomalous part length is at least 32 bits, a “trial attack” is determined, which is an exploratory attack performed by injecting a random value, a value based on some analogy, a brute force attack using all possible values, or the like. Attack type determiner640executes the above-described determination on the data frames that anomaly detector620has determined to be anomalous, which have been communicated from anomalous part specifier630, in an in-vehicle network log corresponding to data frames observed in the in-vehicle network system at a predetermined time or over a predetermined length of time (also called a “predetermined period” hereinafter when there is no particular need to distinguish between the two), which have been communicated from vehicle10, and communicates results of a series of determinations of attack types to attack level determiner650. Attack level determiner650refers to the attack type and attack level determination table690communicated from attack type determiner640; determines an attack level indicating a danger level of the anomaly that has occurred by using a combination of conditions pertaining to the type of data frame in which the anomaly has occurred, conditions pertaining to the anomalous payload part, and conditions pertaining to the attack type that has been determined; and communicates a result of the determination to result outputter660. In this example, the attack level is determined according to three levels, namely low, mid, and high. Result outputter660outputs the information communicated from attack level determiner650in a data format appropriate for the application. For example, to communicate such information to an administrator of the in-vehicle network anomaly detection system as an alert, result outputter660outputs image data for displaying the fact that an anomaly has occurred due to an attack on a connected display, as well as the attack level, in the display. 
Additionally, for example, result outputter660may have a configuration enabling part of anomaly detection server60to function as a web server that is accessed by the administrator using software for viewing such information (e.g., a general-purpose web browser or dedicated application software). Additionally, for example, result outputter660may have a configuration enabling part of anomaly detection server60to function as a mail server that communicates such information to the administrator by email. Additionally, for example, result outputter660may output such information in a data format for recording on an electronic medium or the like as an incident log. Note that the aforementioned administrator is an example of a notification destination for anomalies occurring in the in-vehicle network system of vehicle10from the in-vehicle network anomaly detection system. A security analyst at a security operation center that has been entrusted with monitoring the in-vehicle network system may be another example of the notification destination. An example of the information output from result outputter660is illustrated inFIG.4. The example illustrated inFIG.4indicates that the time at which an anomaly occurred in the in-vehicle network system of a vehicle of model A (or at which an anomaly was detected from the in-vehicle network log) is 13:15 on Jan. 15, 2020. This example further indicates that the attack level of the attack that caused the anomaly is high; the IDs in the frames in which the anomaly was detected, i.e., the types of the data frames, are 0x100 and 0x200; for the data frame with the ID of 0x100, the part of the payload (data field) in bit positions 0 to 15 is an anomalous part caused by sensor value spoofing; and for the data frame with the ID of 0x200, the part of the payload (data field) in bit positions 33 to 36 is an anomalous part caused by state value spoofing. By receiving a notification of such information, the aforementioned administrator or security analyst can not only prioritize an order in which to respond to each anomaly according to the danger level (attack level, in the above example), but can also determine the details of the response more quickly and appropriately by understanding the type of the attack that is causing the anomaly. Reference model holder670holds a reference model indicating a data distribution of frames transmitted and received in the in-vehicle network system during normal travel of vehicle10(this will also be called a “normal data distribution model” hereinafter). The data during normal travel is data obtained at a different timing from the observation data. This data may be, for example, data collected during test travel prior to shipment of vehicle10or another vehicle having the same specifications, or may be in-vehicle network data uploaded from vehicle10, or another vehicle having the same specifications, that is determined not to be under attack.FIG.5illustrates an example of the normal data distribution model held in reference model holder670, and will be described in detail later. Attack type determination table680holds a table for determining the attack type based on the anomalous part length.FIG.6illustrates an example of the attack type determination table, and will be described in detail later. 
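As a simple illustration of such a lookup, the sketch below maps an anomalous part length to an attack type using the example thresholds described above (4 bits or less suggesting a state value spoof, 5 to 31 bits a sensor value spoof, and 32 bits or more a trial attack). Actual tables, including the example shown in FIG.6, may use different boundaries, and the function name is an illustrative assumption.

    # Illustrative lookup in the spirit of attack type determination table 680.

    def determine_attack_type(anomalous_part_length_bits):
        if anomalous_part_length_bits <= 0:
            return None                      # no anomalous payload part
        if anomalous_part_length_bits <= 4:
            return "state value spoof"
        if anomalous_part_length_bits <= 31:
            return "sensor value spoof"
        return "trial attack"

    # e.g. a 16-bit anomalous part (bit positions 0-15) maps to a sensor value spoof
    print(determine_attack_type(16))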
Attack level determination table690holds a table for determining the danger level using a combination of conditions pertaining to a number of types of IDs for which an anomaly has occurred, conditions pertaining to the anomalous part of the payload, and conditions pertaining to the attack type that has been determined.FIG.7illustrates an example of the attack level determination table, and will be described in detail later. 1.4 Normal Data Distribution Model FIG.5is a diagram illustrating an example of the normal data distribution model held in reference model holder670of anomaly detection server60. As illustrated inFIG.5, a frequency distribution of payload values is held in the normal data distribution model for each ID of the CAN data frames (see the “CAN ID” column in the drawing). Specifically, for the data frame having an ID of 0x100, the frequency of a payload value 0x0000000000000000 is 10, the frequency of a payload value 0x0000000000000011 is 22, the frequency of a payload value 0x00FF000000000000 is 10000, and the frequency of a payload value 0x00FF008888000011 is 8000. Additionally, for the data frame having an ID of 0x200, the frequency of a payload value 0xFF00FFFF00000088 is 50. Although the model illustrated in the example inFIG.5is in the form of a frequency distribution that uses an actual measured value of the frequency of the payload value as-is, the model may instead be in the form of a frequency distribution of values normalized for each ID, e.g., relative frequencies. The frequency distribution of normal data may also be held for each of vehicle statuses, such as stopped, traveling, and so on. The model held by reference model holder670may also be encrypted. 1.5 Attack Type Determination Table FIG.6is a diagram illustrating an example of the attack type determination table held in attack type determination table680of anomaly detection server60. According to attack type determination table680illustrated inFIG.6, when the bit length of the anomalous payload part, i.e., the part corresponding to the feature amount contributing to the anomaly in the payload in the data frame (the anomalous part length), is between 1 bit and 4 bits inclusive, attack type determiner640determines that the attack type is state value spoofing. Additionally, when the anomalous part length is between 8 bits and 31 bits inclusive, the attack type is determined to be sensor value spoofing, and when the anomalous part length is at least 32 bits, the attack type is determined to be a trial attack. 1.6 Attack Level Determination Table FIG.7is a diagram illustrating an example of the attack level determination table held in attack level determination table690of anomaly detection server60. According to attack level determination table690illustrated inFIG.7, when there are a plurality of data frames determined to be anomalous, the attack level is determined by combining a first condition pertaining to a number of types of data frames, i.e., whether the data frames have a single ID or a plurality of IDs, and a second condition pertaining to the attack type or the number of attack types and the anomalous payload part (the bit position in the payload). Regardless of whether there is one ID or a plurality of IDs for the data frame determined to be anomalous, which is the first condition, attack level determiner650determines that the attack level is low when the attack type, which is the second condition, is only a trial attack. 
This is because the attack is likely to have been carried out by an attacker who does not know the vehicle control commands, and has a low impact on the vehicle control. When there is one ID for the data frame determined to be anomalous, which is the first condition, attack level determiner650determines that the attack level is mid when the attack type, which is the second condition, is an attack aside from a trial attack. This is because the attack is likely to have been carried out by an attacker who has identified the type of data frame to be attacked, and is more dangerous due to its impact on vehicle control. Additionally, when there are a plurality of IDs for the data frame determined to be anomalous, which is the first condition, and the attack type, which is the second condition, is one type aside from a trial attack and the anomalous payload parts are the same, the attack level is determined to be mid. This is because the attack is likely to have been carried out by an attacker who has at least identified the part of the data field to be attacked, and the danger level is higher. When there are a plurality of IDs for the data frame determined to be anomalous, which is the first condition, and a plurality of types of attacks are being carried out in combination or the anomalous part differs for each data frame ID, which is the second condition, the attack level is determined to be high. This is because the attack is likely to be a highly-dangerous attack by an attacker who can spoof or alter the minimum number of data frames required for improper control. 1.7 Configuration of ECUs FIG.8is a diagram illustrating the configuration of ECU302and TCU312. Note that the other ECUs have the same basic configuration as ECU302, and devices connected to external device controller350differ depending on the ECU. As illustrated inFIG.8, ECU302includes frame transmitter/receiver330, frame interpreter340, external device controller350, frame generator360, and reception history holder370. The functions of these constituent elements are realized, for example, by a communication circuit, a processor, a digital circuit, or the like that executes a control program stored in memory, and the like. Frame transmitter/receiver330is connected to bus3000, and communicates a data frame received from bus3000to frame interpreter340. Frame interpreter340interprets the data frame communicated from frame transmitter/receiver330, and in accordance with a result of the interpreting, makes a control notification for an external device to external device controller350. With ECU302, the received data frame is temporarily held in reception history holder370as reception history. This reception history is uploaded, as an in-vehicle network log, to anomaly detection server60at predetermined intervals via TCU312. External device controller350has a function for controlling an external device connected to ECU302, which in the example ofFIG.8is TCU312. External device controller350also instructs frame generator360to generate a frame based on a state of the external device or details of communication with the external device. Upon receiving the instruction to generate a frame, frame generator360generates a frame and requests frame transmitter/receiver330to transmit the frame. Reception history holder370holds a history of data frames received from bus3000at predetermined intervals, i.e., the reception history.FIG.9illustrates an example of the reception history held in reception history holder370. 
The reception history will be described in detail later. TCU312includes server communicator380. Server communicator380communicates with anomaly detection server60over network20. For example, server communicator380uploads, to anomaly detection server60, the reception history received from ECU302. 1.8 Frame Reception History FIG.9is a diagram illustrating an example of the reception history held in reception history holder370of ECU302. As illustrated inFIG.9, a frequency distribution of payload values is held in the reception history for each ID of the CAN data frames (see the “CAN ID” column in the drawing). Specifically, for the data frame having an ID of 0x100, the frequency of a payload value 0x00FF000000000022 is 4, the frequency of a payload value 0x00FF000000000011 is 6, and the frequency of a payload value 0x00FF000000000000 is 10. Additionally, for the data frame having an ID of 0x200, the frequency of a payload value 0xFF00FFFF00000088 is 3, and the frequency of a payload value 0xFF00FFF0000000F0 is 2. Furthermore, for the data frame having an ID of 0x300, the frequency of a payload value 0x5500FF00330011E4 is 3. Although the reception history illustrated in the example inFIG.9is in the form of a frequency distribution that uses an actual measured value of the frequency of the payload value as-is, the form may instead be values normalized for each ID, e.g., relative frequencies. The frequency distribution of payload values may also be held for each of vehicle statuses, such as stopped, traveling, and so on. The reception history held by reception history holder370may also be encrypted. The data structure of the reception history is not limited to the example described here. For example, the data may be in a format in which the reception times and payload values of the data frames are arranged in chronological order. 1.9 Configuration of Gateway FIG.10illustrates the configuration of gateway900in the in-vehicle network system of vehicle10. As illustrated inFIG.10, gateway900includes frame transmitter/receiver910, frame interpreter920, transfer controller930, frame generator940, and transfer rule holder950. The functions of these constituent elements are realized, for example, by a communication circuit, a processor, a digital circuit, or the like that executes a control program stored in memory, and the like. Frame transmitter/receiver910is connected to bus1000, bus2000, bus3000, bus4000, and bus5000, and transmits/receives frames to each of the buses according to the CAN protocol. Frame transmitter/receiver910receives frames from each bus, one bit at a time, and communicates the frames to frame interpreter920. Additionally, upon receiving bus information indicating a transfer destination bus and a frame to be transmitted from frame generator940, frame transmitter/receiver910transmits that frame, one bit at a time, to the bus, among bus1000, bus2000, bus3000, bus4000, and bus5000, indicated by the bus information. Frame interpreter920interprets the values of the bits constituting the frame received from frame transmitter/receiver910so as to map those values to each field in the frame format defined by the CAN protocol. Frame interpreter920then communicates information pertaining to the received data frame to transfer controller930. If it is determined that the received frame is not in the CAN protocol, frame interpreter920notifies frame generator940that an error frame is to be transmitted. 
Additionally, if an error frame has been received, i.e., if a received frame is interpreted to be an error frame on the basis of the values of the bits constituting that frame, frame interpreter920discards that frame thereafter, i.e., stops interpreting the frame. In accordance with transfer rules held by transfer rule holder950, transfer controller930selects a transfer destination bus in accordance with the ID and transfer source bus of the received frame, i.e., the bus that received that frame, and makes a notification to frame generator940to request that the bus information indicating the transfer destination bus, as well as the details in the frame to be transferred, e.g., the ID, DLC, data field, and the like communicated from frame interpreter920, be transmitted to the transfer destination bus. In response to the transmission request from transfer controller930, frame generator940generates a frame for transmission using the frame details communicated from transfer controller930, and communicates the frame for transmission and transfer destination information based on the bus information, e.g., an identifier or the like of the transfer destination bus, to frame transmitter/receiver910. Transfer rule holder950holds transfer rule information indicating rules for transferring frames, for each of the buses. For example, the transfer rule information indicates, for each bus serving as a transfer source, the correspondence between the ID of the data frame to be transferred, which has been received from that bus, the transfer destination bus, and the ID of the data frame at the transfer destination. 1.10 Sequence of Processing Between Vehicle and Anomaly Detection Server FIG.11is a diagram illustrating an example of a processing sequence in the in-vehicle network anomaly detection system including anomaly detection server60and vehicle10. To provide more detail,FIG.11illustrates an example of processing in which an in-vehicle network log, which includes information pertaining to the payloads of data frames transmitted/received by the CAN buses in the in-vehicle network system included in vehicle10, is transmitted to anomaly detection server60, and anomaly detection server60analyzes that log. Specifically,FIG.11illustrates an example of processing performed when ECU302of vehicle10has received a data frame transmitted to bus3000. When one of the ECUs connected to bus3000in the in-vehicle network of vehicle10(camera ECU300, car navigation system ECU301, or gateway900) transmits a CAN data frame to bus3000, the data frame flows in bus3000(steps S101, S103, and S105). ECU302of vehicle10receives the data frames transmitted to bus3000in steps S101, S103, and S105, and holds a reception history of the collection of received data frames (see the example inFIG.9) (steps S102, S104, and S106). Once a predetermined period has elapsed, ECU302uploads the in-vehicle network log (denoted as “log” in the drawing), which includes information pertaining to a distribution of the payloads of the received data frames, from TCU312to anomaly detection server60over network20(step S107). Anomaly detection server60receives the in-vehicle network log transmitted from vehicle10(step S108). Then, using the received in-vehicle network log and the normal model stored in anomaly detection server60(see the example inFIG.5), anomaly detection server60analyzes the in-vehicle network log (step S109). Finally, anomaly detection server60outputs a result of analyzing the in-vehicle network log (step S110).
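As a rough sketch of steps S102 through S107, the reception history of FIG.9 can be kept as a per-ID frequency distribution of received payload values and serialized for the periodic upload. The class layout and the JSON wire format below are assumptions made purely for illustration; the embodiment does not prescribe a concrete data structure or serialization.

```python
import json
from collections import defaultdict

class ReceptionHistoryHolder:
    """Per-ID frequency distribution of received payload values (cf. FIG.9)."""
    def __init__(self):
        self.history = defaultdict(lambda: defaultdict(int))

    def record(self, can_id, payload):
        # Steps S102/S104/S106: count each received (ID, payload) pair.
        self.history[can_id][payload] += 1

    def to_log(self):
        # Step S107: serialize the distribution as the in-vehicle network log
        # uploaded via the TCU (JSON here is only an illustrative format).
        return json.dumps({hex(i): {hex(p): c for p, c in dist.items()}
                           for i, dist in self.history.items()})

holder = ReceptionHistoryHolder()
for payload in (0x00FF000000000000,) * 10 + (0x00FF000000000011,) * 6:
    holder.record(0x100, payload)
print(holder.to_log())
```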
1.11 In-Vehicle Network Log Analysis Processing by Anomaly Detection Server FIG.12is a flowchart illustrating an example of a sequence of processing for analyzing the in-vehicle network log received from vehicle10, executed by anomaly detection server60. The processing for analyzing log information of vehicle10will be described hereinafter based on this flowchart. Using the in-vehicle network log uploaded from vehicle10, i.e., the log containing information pertaining to the distribution of payloads in the data frames transmitted and received in the in-vehicle network system of vehicle10, and the normal data distribution model held in reference model holder670of anomaly detection server60, anomaly detection server60trains a classifier to classify data observed for the purpose of detecting an anomaly (the observation data) and the normal data (step S201). Next, anomaly detection server60inputs the payload in each data frame (also called “received data” hereinafter) contained in the in-vehicle network log uploaded from vehicle10for anomaly detection processing into the classifier trained in step S201(step S202). Note that the in-vehicle network log uploaded from vehicle10in step S202is based on a collection of data frames transmitted and received in the in-vehicle network system and obtained to be observed for the purpose of actual anomaly detection (observation data). However, the in-vehicle network log uploaded from vehicle10in step S201is based on data frames transmitted and received in the in-vehicle network system on a different occasion from the observation data based on the in-vehicle network log uploaded in step S202, and is used as training data for training the classifier. If, as a result of inputting the received data into the classifier, the received data has a score belonging to the observation data that is at least a predetermined value (or a score belonging to the normal data that is less than a predetermined value), i.e., it is determined that (the data frame corresponding to) the received data is anomalous (Yes in step S203), anomaly detection server60executes step S205. However, if the received data is not anomalous (No in step S203), anomaly detection server60executes step S204. For a series of received data to be processed, anomaly detection server60confirms whether or not the anomaly determination for the corresponding data frames has ended, i.e., whether or not there is received data which has not yet been input to the classifier (step S204). If there is received data which has not yet been input to the classifier (Yes in step S204), anomaly detection server60executes step S202on the received data not yet input. However, if there is no received data which has not yet been input to the classifier (No in step S204), anomaly detection server60executes step S206. Anomaly detection server60calculates the bit position of the part indicating the feature amount contributing to the anomaly (the anomalous part), and the bit length of the anomalous part (the anomalous part length), in the payload in the data frame corresponding to the received data determined to be anomalous, and holds those items along with the ID and the payload data (step S205). Anomaly detection server60confirms whether there is a data frame determined to be anomalous for the in-vehicle network log uploaded from vehicle10(step S206). 
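Before continuing with the branches of step S206, steps S201 through S205 can be condensed into the following sketch: a classifier is trained to separate the normal-model data from the uploaded observation data, each observed payload is scored, and the bit positions contributing most to an anomalous score are recorded as the anomalous part. Logistic regression over per-bit features, the 0.9 threshold, and the contribution heuristic are assumptions for illustration only; the embodiment does not fix a particular classifier.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def payload_to_bits(payload, n_bits=64):
    """Expand a 64-bit CAN payload value into a per-bit feature vector."""
    return np.array([(payload >> (n_bits - 1 - i)) & 1 for i in range(n_bits)])

def analyze_log(normal_payloads, observed_payloads, threshold=0.9):
    # Step S201: train a classifier separating normal data from observation data.
    X = np.array([payload_to_bits(p) for p in normal_payloads + observed_payloads])
    y = np.array([0] * len(normal_payloads) + [1] * len(observed_payloads))
    clf = LogisticRegression(max_iter=1000).fit(X, y)

    anomalies = []
    for payload in observed_payloads:
        bits = payload_to_bits(payload)
        # Steps S202/S203: a high "observation" score is treated as anomalous.
        score = clf.predict_proba([bits])[0][1]
        if score >= threshold:
            # Step S205: take per-bit weight * bit value as a crude anomaly
            # contribution level and derive the anomalous part from it.
            contrib = clf.coef_[0] * bits
            anomalous_bits = np.where(contrib > np.abs(contrib).mean())[0]
            if anomalous_bits.size:
                length = int(anomalous_bits[-1] - anomalous_bits[0] + 1)
                anomalies.append({"payload": payload,
                                  "anomalous_part": anomalous_bits.tolist(),
                                  "anomalous_part_length": length})
    return anomalies
```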
If there is a data frame determined to be anomalous (Yes in step S206), anomaly detection server60executes step S207, whereas if there is no received data determined to be anomalous (No in step S206), anomaly detection server60ends the processing. Anomaly detection server60refers to the attack type determination table stored in attack type determination table680, and for each data frame determined to be anomalous, determines the attack type from the anomalous part length (step S207). The processing of step S207will be described in detail later with reference toFIG.13. Next, anomaly detection server60determines the attack level from the combination of the number of types of IDs of the data frames determined to be anomalous, the position of the anomalous part in the payload, and the attack type (step S208). The processing of step S208will be described in detail later with reference toFIG.14. Finally, anomaly detection server60outputs a result of the determination (corresponding to step S110inFIG.11), and ends the processing. 1.12 Attack Type Determination Processing by Anomaly Detection Server FIG.13is a flowchart illustrating an example of a sequence of processing for determining the attack type in anomaly detection server60. This exemplary sequence corresponds to details of step S207in the processing for analyzing the in-vehicle network log, indicated inFIG.12. Anomaly detection server60confirms whether or not the anomalous part length of the data frame determined to be anomalous is between 1 bit and 4 bits inclusive (step S2071). If the anomalous part length is between 1 bit and 4 bits inclusive (Yes in step S2071), anomaly detection server60determines that the attack type is state value spoofing (step S2072). However, if the anomalous part length is not between 1 bit and 4 bits inclusive (No in step S2071), anomaly detection server60confirms whether or not the anomalous part length is between 5 bits and 31 bits inclusive (step S2073). If the anomalous part length is between 5 bits and 31 bits inclusive (Yes in step S2073), anomaly detection server60determines that the attack type is sensor value spoofing (step S2074). If the anomalous part length is not between 5 bits and 31 bits inclusive (No in step S2073), i.e., if the anomalous part length is at least 32 bits, anomaly detection server60determines that the attack type is trial attack (step S2075). Anomaly detection server60performs the above-described processing until there are no more data frames which have been determined to be anomalous but for which the attack type has not yet been determined. 1.13 Attack Level Determination Processing by Anomaly Detection Server FIG.14is a flowchart illustrating an example of a sequence of processing for determining the attack level in anomaly detection server60. This exemplary sequence corresponds to details of step S208in the processing for analyzing the in-vehicle network log, indicated inFIG.12. Anomaly detection server60confirms whether or not there is a data frame for which the attack type has not yet been determined (step S2081). If there is a data frame for which the attack type has not yet been determined (Yes in step S2081), anomaly detection server60stands by until there are no data frames for which the attack type has not yet been determined. If there are no data frames for which the attack type has not yet been determined (No in step S2081), anomaly detection server60confirms whether or not the determined attack type is only trial attack (step S2082). 
If the attack type is only trial attack (Yes in step S2082), anomaly detection server60determines that the attack level is “low” (step S2083). If the attack type is not only trial attack (No in step S2082), anomaly detection server60confirms whether or not there is only one type of ID for the data frame determined to be anomalous (step S2084). If there is only one type of ID for the data frame determined to be anomalous (Yes in step S2084), anomaly detection server60determines that the attack level is “mid” (step S2085). If there is not only one type of ID for the data frame determined to be anomalous, i.e., there are a plurality (No in step S2084), anomaly detection server60confirms whether or not the attack type determined in step S207, as well as the anomalous part, are the same among the data frames having the different IDs (step S2086). If both the attack type and the attack location are the same (Yes in step S2086), anomaly detection server60determines that the attack level is “mid” (step S2085). If not (No in step S2086), anomaly detection server60determines that the attack level is “high” (step S2087). 1.14 Effects of the Embodiment With the in-vehicle network anomaly detection system according to the present embodiment, anomaly detection server60obtains, from vehicle10, information pertaining to a distribution of payload values in frames transmitted and received in the in-vehicle network system, and an anomalous data frame is detected by comparing that distribution with a distribution of payload values in normal data frames held by anomaly detection server60. This enables anomaly detection server60to find changes in the distribution of the payload values within a predetermined period of observation for anomaly detection. Thus even if the payload in a data frame has been injected with a payload value within a normal range, as opposed to an outlier, in an attack, the data frame will be detected as anomalous based on the stated change. This anomaly detection server60has a high accuracy for detecting anomalous data frames, and can therefore increase the security of the in-vehicle network system. Furthermore, for the data frame determined to be anomalous, anomaly detection server60calculates the anomaly contribution level indicating which of the plurality of feature amounts corresponding to different parts in the payload in the data frame contributes to the anomaly. This makes it possible not only to detect anomalous data frames, but also to understand the anomalous payload part of the data frames, which makes it easier to understand the details of the attack. Furthermore, anomaly detection server60determines the type of attack that produced the anomaly based on the length of the anomalous payload part (the anomalous part length) of the data frame determined to be anomalous. This makes it possible to determine which sub-fields in the payload are being spoofed, and to understand the details of the attack more abstractly, which leads to a faster and more appropriate response to the anomaly. Furthermore, anomaly detection server60determines the attack level, which indicates the danger level, based on conditions pertaining to the attack type, which is found from the ID indicating the type of the data frame determined to be anomalous, the anomalous payload part, and the anomalous part length. 
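Taken together, the attack type determination of steps S2071 to S2075 and the attack level determination of steps S2081 to S2087 reduce to simple range checks and set comparisons. The sketch below mirrors the two flowcharts; representing each anomalous frame as a dictionary is an assumption for illustration, and, as variation (12) later notes, other bit-length boundaries are equally possible.

```python
def determine_attack_type(anomalous_part_length):
    """Steps S2071-S2075: map the anomalous part length (bits) to an attack type."""
    if 1 <= anomalous_part_length <= 4:
        return "state value spoofing"      # step S2072
    if 5 <= anomalous_part_length <= 31:
        return "sensor value spoofing"     # step S2074
    return "trial attack"                  # step S2075: 32 bits or more

def determine_attack_level(anomalous_frames):
    """Steps S2081-S2087: anomalous_frames is a list of dicts with keys
    "id", "attack_type", and "anomalous_part" (bit positions)."""
    attack_types = {f["attack_type"] for f in anomalous_frames}
    if attack_types == {"trial attack"}:                 # steps S2082/S2083
        return "low"
    if len({f["id"] for f in anomalous_frames}) == 1:    # steps S2084/S2085
        return "mid"
    parts = {tuple(f["anomalous_part"]) for f in anomalous_frames}
    if len(attack_types) == 1 and len(parts) == 1:       # step S2086
        return "mid"
    return "high"                                        # step S2087
```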
This makes it possible to prioritize responses to attacks having a high danger level, such as attacks involving improper control of vehicles, which makes it possible to preemptively reduce the risk of an accident. Variations and Supplemental Descriptions Although an anomaly detection method or an anomaly detection device according to one or more aspects has been described thus far based on the embodiment, the anomaly detection method and the anomaly detection device according to the present disclosure are not intended to be limited to the embodiment. Embodiments implemented by combining constituent elements from different other embodiments and variations on the embodiments conceived by one skilled in the art may be included in the scope of one or more aspects as well, as long as they do not depart from the essential spirit of the present disclosure. Such variations on the foregoing embodiment, as well as supplements to the descriptions provided in the foregoing embodiment, will be described hereinafter. (1) Although the foregoing embodiment describes the in-vehicle network system as being based on the CAN protocol, the communication network system to which the anomaly detection method and the anomaly detection device according to the present disclosure can be applied is not limited thereto. The in-vehicle network system may be compliant with another standard, such as CAN-FD (CAN with Flexible Data rate), Ethernet (registered trademark), LIN (Local Interconnect Network), or FlexRay (registered trademark). The in-vehicle network system may have a combination of a plurality of networks that each complies with one of the stated standards. Furthermore, although the foregoing embodiment describes the anomaly detection method and the anomaly detection device according to the present disclosure as a security countermeasure technique applied in an in-vehicle network system installed in an automobile, the scope of application is not limited thereto. The anomaly detection method and the anomaly detection device according to the present disclosure are not limited to automobiles, and may also be applied in a communication network system for a mobility device such as a construction machine, an agricultural machine, a ship, a train, an aircraft, a drone, and the like. These may also be applied in a communication network system used in an industrial control system in a facility such as a factory or a building, and to a communication network system for controlling an embedded device. (2) Although the foregoing embodiment describes the cause of the detected anomaly as being an attack on the communication network system, and the type of the attack being determined, the cause of the anomaly detected by the anomaly detection method and the anomaly detection device according to the present disclosure is not limited to an attack. For example, an anomaly type caused by a malfunction, damage, or defect in various types of devices connected to the communication network, or by an external factor (e.g., temperature, humidity, or external noise), may be determined. The attack type determined by attack type determiner640in the foregoing embodiment can be said to be one example of such anomaly types. These conditions pertaining to the anomaly type, not limited to attacks, may also be used to determine the anomaly level indicating the danger level. The attack level determined by attack level determiner650in the foregoing embodiment can be said to be one example of this anomaly level. 
(3) Although the server performs the anomaly detection processing in the foregoing embodiment, the processing may be executed locally, within the communication network system of the vehicle or the like. For example, the processing may be performed by a GPU (Graphics Processing Unit) of a head unit constituting the in-vehicle network system. This makes it possible to increase the immediacy of the anomaly detection compared to when the processing is performed by the server. In this case, the server may aggregate the results of anomaly detection processing executed locally, such as by each vehicle. Additionally, the reference model used locally at this time may be held in advance in a storage device in the local communication network system, or may be downloaded from a server as appropriate. Additionally, the anomaly detection processing may be divided between local communication network systems and the server, e.g., with the communication network system executing the processing up to specifying the anomalous part, and the server executing the subsequent determination of the attack type and determination of the attack level. (4) Although the foregoing embodiment describes the reference model as being held in the anomaly detection server in advance, the reference model need not be held in advance. For example, log information that has been determined to be free of anomalies may be used as a reference model indicating the distribution of data when no anomalies have occurred in the next and subsequent anomaly determinations. Additionally, the reference model held in the anomaly detection server may be updated using the in-vehicle network log. (5) Although the foregoing embodiment does not describe any particular examples of the form of the anomaly detection server, the processing may be executed by a server which is local, i.e., framed in terms of the embodiment described above, a server prepared as an edge server close to the vehicle. Doing so results in a lower impact of network latency than when the anomaly detection processing is handled by a cloud server. For example, the edge server is a roadside device, the roadside device is connected to a cloud server over a network, and the vehicle uploads the in-vehicle network log to the roadside device. The roadside device may execute the anomaly detection processing on the received in-vehicle network log and return the results to the vehicle, and may also upload the results to the cloud server. (6) Although the foregoing embodiment describes an administrator or security analyst of the in-vehicle network anomaly detection system as being set as the recipient of the information communicated as an alert when an anomaly is detected in the vehicle or server, the configuration is not limited thereto. For example, the information may be provided to the car manufacturer or the ECU supplier, or to an information terminal used by a user of the vehicle, such as the driver or owner. The information may also be provided to a security provider that can be used in common among a plurality of car manufacturers. (7) Although the foregoing embodiment describes a log of the data frames received by the ECU connected to the TCU being uploaded from the TCU to the anomaly detection server, the form of the upload of the data frames from the vehicle to the anomaly detection server is not limited thereto. 
For example, a log of data frames received by a gateway that receives data frames from a wider range within an in-vehicle network system may be uploaded to the anomaly detection server. This log information may also be uploaded from the gateway to the anomaly detection server. (8) Although the foregoing embodiment describes the ECU as periodically uploading a log of the data frame of the in-vehicle network, the occasion or frequency of this uploading is not limited thereto. The in-vehicle network log may, for example, be uploaded in response to a request from the anomaly detection server, or may be uploaded only when an anomaly is detected by an IDS (Intrusion Detection System) installed in the vehicle. Network congestion and the anomaly detection server being overloaded delay the anomaly detection processing, which in turn leads to delays in responses taken based on the results. However, this configuration leads to a reduction in network communication volume and a reduced processing load on the anomaly detection server, which in turn suppresses delays in the response. (9) Although the foregoing embodiment describes the anomaly detection server as subjecting all data frames indicated by the in-vehicle network log uploaded from the vehicle to the anomaly detection processing, only some data frames may be subjected to the processing instead. For example, only data frames with a specific ID may be subjected to the anomaly detection processing. This reduces the processing load on the anomaly detection server. The IDs of the data frames subject to the processing may also be switched dynamically. This makes it possible for the anomaly detection server to perform the anomaly detection processing on the data frames for all the IDs while reducing the load of the anomaly detection processing, which in turn makes it possible to strike a balance between maintaining safety and avoiding delays in responding to anomalies. (10) In the foregoing embodiment, although the ECU that uploads the in-vehicle network log uploads the log information based on the payload information of all data frames received in a predetermined period, the log information uploaded to the anomaly detection server does not have to be based on the payload information of all data frames. The uploaded log information may be based on the payload information of a data frame having a specific ID, for example. This configuration leads to a reduction in network communication volume and a reduced processing load on the anomaly detection server. The IDs of the data frames to be uploaded may also be switched dynamically. This makes it possible for the anomaly detection server to perform the anomaly detection processing on the data frames for all the IDs while reducing the load of the anomaly detection processing, which in turn makes it possible to strike a balance between maintaining safety and avoiding delays in responding to anomalies. (11) Although the foregoing embodiment describes the anomaly detection server as performing the anomaly detection processing by taking all of multidimensional feature amounts corresponding to the payload values of the data frames as an input, the number of dimensions of the input feature amounts may be reduced. For example, when a counter or checksum sub-field included in the payload is known, feature amounts corresponding to those subfields may be excluded from the input for the anomaly detection processing. 
This makes it possible to reduce the amount of calculations by excluding parts that do not directly affect improper control from the anomaly detection processing, and execute the anomaly detection appropriately. (12) Although the foregoing embodiment describes the anomaly detection server as classifying the attack types into three types, namely sensor value spoofing, state value spoofing, and trial attack, the classifications of these attacks are not limited thereto. For example, a classification may be used in which a compound attack of the aforementioned attacks is included in the same data frame. Furthermore, the values of the anomalous part lengths of the payload, used to determine the attack type and other anomaly types in the foregoing embodiment, are merely examples, and are not limited thereto. When a possible range of the anomalous part length in the event of a state value anomaly is taken as a first range, a possible range of the anomalous part length in the event of a sensor value anomaly is taken as a second range, and a possible range of the anomalous part length in the event of an anomaly caused by a trial attack is taken as a third range, it is assumed that the ranges will become longer in order from the first range, the second range, and the third range, and the foregoing example reflects that assumption. Additionally, the first range, the second range, and the third range do not necessarily have to be contiguous. For example, when the upper limit of the first range is 4 bits, the lower limit of the second range need not be 5 bits, and may instead be 8 bits, for example. The upper limit and lower limit of these ranges can be defined as possible values derived based on the design, specifications, compliant standard, and so on of the in-vehicle network system. Additionally, a range of an anomalous part length indicating the occurrence of a compound attack such as that described above may be used as well. (13) Although the foregoing embodiment describes the anomaly detection server as classifying the attack level as low, mid, or high, the classification method is not limited thereto. A score having more levels may be used instead, for example. The score of the attack level may be calculated using a predetermined calculation formula including parameters based on, for example, the ID in the frame determined to be anomalous, the number of IDs of frames determined to be anomalous (i.e., the number of types of data frames), the attack type, or the anomalous part length and the position of the anomalous part in the payload. This makes it possible to respond to an anomaly according to a more detailed danger level, and to prioritize the analysis more precisely. (14) Although the foregoing embodiment describes the anomaly detection server as determining the attack type based on the bit length of the anomalous part, i.e., the part of the payload contributing to the anomaly, the method for determining the attack type is not limited thereto. For example, the anomaly contribution level may further be used to determine the attack type. Additionally, the attack type may be determined by inputting an anomaly contribution level, which has been calculated for the payload in the data frame subject to the anomaly detection processing, into an attack type classifier which has been trained with anomaly contribution levels. Additionally, a database having payload sub-field information may be held, and the attack type may be determined by verifying the anomalous part against that database.
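As one purely illustrative reading of the multi-level score mentioned in item (13), the attack level could be computed as a weighted sum over the listed parameters. The weights and terms below are invented for the sake of the example and are not taken from the embodiment.

```python
def attack_level_score(anomalous_frames, w_ids=2.0, w_types=3.0, w_len=0.1):
    """Hypothetical multi-level score combining the number of anomalous IDs,
    the number of distinct attack types, and the longest anomalous part."""
    n_ids = len({f["id"] for f in anomalous_frames})
    n_types = len({f["attack_type"] for f in anomalous_frames})
    max_len = max((f["anomalous_part_length"] for f in anomalous_frames), default=0)
    return w_ids * n_ids + w_types * n_types + w_len * max_len
```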
(15) Although the foregoing embodiment describes an example in which one reference model is used, the configuration is not limited thereto. For example, in the case of a vehicle, different normal models may be used in accordance with the vehicle model, year, options, in-vehicle network system configuration, and so on. (16) Although the foregoing embodiment describes the reference model as being a model indicating a distribution of data obtained during normal travel of the vehicle, the details indicated by the reference model are not limited thereto. For example, the reference model may be a model indicating a distribution of data obtained during an anomaly, based on data collected from a communication network system in which an anomaly is known to be occurring. (17) Although the foregoing embodiment describes the anomaly detection server as determining the attack level based on a combination of conditions pertaining to the number of types of IDs of data frames determined to be anomalous, the determined attack type, and the position of the anomalous part in the payload, the attack level may be determined without using all of these conditions. For example, the attack level may always be determined to be mid when there is only one type of ID determined to be anomalous. This makes it possible to more flexibly calculate the attack level. (18) Although the foregoing embodiment describes the anomaly detection server as determining that a corresponding frame is anomalous when the density ratio exceeds a predetermined threshold, the predetermined threshold may be a value arising when the density ratio is at a maximum in the feature amounts of the reference model. This reduces the likelihood that a normal frame will be erroneously determined to be anomalous, and leads to a reduction in analysis costs. (19) Although the foregoing embodiment describes the anomaly detection server as holding a distribution of payload values for each ID as the reference model, a distribution of payload values may be held without separating the values by ID. This makes it possible to effectively reduce the data size of the reference model. (20) Although the foregoing embodiment describes the vehicle log communicated to the anomaly detection server as information pertaining to CAN frames, the vehicle log communicated to the anomaly detection server is not limited thereto. For example, the frames may be Ethernet frames, CAN-FD frames, or FlexRay frames, and do not have to be in-vehicle network frames. For example, GPS information indicating the current position of the vehicle, a log of accesses to an audio head unit, a log pertaining to operational processes, firmware version information, or the like may be used as well. (21) Each device in the foregoing embodiments is specifically a computer system constituted by a microprocessor, ROM, RAM, a hard disk unit, a display unit, a keyboard, a mouse, and the like. A computer program is recorded in the RAM or hard disk unit. Each device realizes the functions thereof by the microprocessor operating in accordance with the computer program. Here, the computer program is constituted by a combination of a plurality of command codes that indicate commands made to a computer to achieve a predetermined function. (22) Some or all of the constituent elements constituting the devices in the foregoing embodiments may be implemented by a single integrated circuit through system LSI (Large-Scale Integration). 
“System LSI” refers to very-large-scale integration in which multiple constituent elements are integrated on a single chip, and specifically, refers to a computer system configured including a microprocessor, ROM, RAM, and the like. A computer program is recorded in the RAM. The system LSI circuit realizes the functions thereof by the microprocessor operating in accordance with the computer program. The parts of the constituent elements constituting the foregoing devices may be implemented individually as single chips, or may be implemented with a single chip including some or all of the devices. Although the term “system LSI” is used here, other names, such as IC, LSI, super LSI, ultra LSI, and so on may be used, depending on the level of integration. Further, the manner in which the circuit integration is achieved is not limited to LSIs, and it is also possible to use a dedicated circuit or a general purpose processor. An FPGA (Field Programmable Gate Array) capable of post-production programming or a reconfigurable processor in which the connections and settings of the circuit cells within the LSI can be reconfigured may be used as well. Further, if other technologies that improve upon or are derived from semiconductor technology enable integration technology to replace LSI circuits, then naturally it is also possible to integrate the function blocks using that technology. Biotechnology applications are one such foreseeable example. (23) Some or all of the constituent elements constituting the foregoing devices may be constituted by IC cards or stand-alone modules that can be removed from and mounted in the apparatus. The IC card or module is a computer system constituted by a microprocessor, ROM, RAM, and the like. The IC card or module may include the above very-large-scale integration LSI circuit. The IC card or module realizes the functions thereof by the microprocessor operating in accordance with the computer program. The IC card or module may be tamper-resistant. (24) The present disclosure may be realized by the methods described above. This may be a computer program that implements these methods on a computer, or a digital signal constituting the computer program. Additionally, the present disclosure may also be computer programs or digital signals recorded in a computer-readable recording medium such as a flexible disk, a hard disk, a CD-ROM, an MO, a DVD, a DVD-ROM, a DVD-RAM, a BD (Blu-ray (registered trademark) Disc), semiconductor memory, or the like. The constituent elements may also be the digital signals recorded in such a recording medium. Additionally, the present disclosure may be realized by transmitting the computer program or digital signal via a telecommunication line, a wireless or wired communication line, a network such as the Internet, a data broadcast, or the like. Additionally, the present disclosure may be a computer system including a microprocessor and memory, where the memory records the above-described computer program and the microprocessor operates in accordance with the computer program. Additionally, the present disclosure may be implemented by another independent computer system, by recording the program or the digital signal in the recording medium and transferring the recording medium, or by transferring the program or the digital signal over the network or the like. (25) The above-described embodiments and variations may be combined as well. 
INDUSTRIAL APPLICABILITY According to the present disclosure, in a communication network such as an in-vehicle network system, even when an attacker has transmitted an improper frame that does not contain any outliers, whether or not that frame is anomalous can be determined. Furthermore, an anomalous part in the payload in an anomalous frame is calculated, and details such as the type and level of the anomaly can be quickly understood and responded to based on that anomalous part, which is effective in terms of improving safety.
11943244 | DETAILED DESCRIPTION Currently, there is no method to test the accuracy of a machine learning based anomaly detector built into user behavior analytics (UBA). Current UBA methods artificially limit the number of users and events that a system can track due to cost constraints associated with building and maintaining systems sufficient for efficient performance when evaluating new datapoints (e.g., events). Further, traditional clustering methods that are popular with current UBA systems are laborious and resource inefficient. Embodiments of the present invention improve current UBA systems and associated anomaly detectors by reducing system requirements while maintaining highly accurate systems. Embodiments of the present invention reduce system resource requirements by utilizing a binary vector for the event categories such that each event is represented only using 188 bytes. Embodiments of the present invention improve system performance by scaling the evaluation of high-dimensional data through automatic labeling of datapoints using a rule based engine with optimized rules. Embodiments of the present invention recognize that this correlation allows the system to function in a more reliable manner while determining the accuracy of the UBA machine learning based anomaly detection system/engine. Embodiments of the present invention improve scoring reliability through the use of fuzzy labels as a third class, rather than utilizing a score, to be assigned to an existing class as applied to the evaluation of an unknown datapoint. Embodiments of the present invention reduce system requirements through the utilization of incremental training with large datasets. Embodiments of the present invention provide robust security orchestration, automation, and response capabilities. Implementation of embodiments of the invention may take a variety of forms, and exemplary implementation details are discussed subsequently with reference to the Figures. The present invention will now be described in detail with reference to the Figures. FIG.1is a functional block diagram illustrating a computational environment, generally designated100, in accordance with one embodiment of the present invention. The term “computational” as used in this specification describes a computer system that includes multiple, physically, distinct devices that operate together as a single computer system.FIG.1provides only an illustration of one implementation and does not imply any limitations with regard to the environments in which different embodiments may be implemented. Many modifications to the depicted environment may be made by those skilled in the art without departing from the scope of the invention as recited by the claims. Computational environment100includes server computer120connected over network102. Network102can be, for example, a telecommunications network, a local area network (LAN), a wide area network (WAN), such as the Internet, or a combination of the three, and can include wired, wireless, or fiber optic connections. Network102can include one or more wired and/or wireless networks that are capable of receiving and transmitting data, voice, and/or video signals, including multimedia signals that include voice, data, and video information. In general, network102can be any combination of connections and protocols that will support communications between server computer120, and other computing devices (not shown) within computational environment100. 
In various embodiments, network102operates locally via wired, wireless, or optical connections and can be any combination of connections and protocols (e.g., personal area network (PAN), near field communication (NFC), laser, infrared, ultrasonic, etc.). Server computer120can be a standalone computing device, a management server, a web server, a mobile computing device, or any other electronic device or computing system capable of receiving, sending, and processing data. In other embodiments, server computer120can represent a server computing system utilizing multiple computers as a server system, such as in a cloud computing environment. In another embodiment, server computer120can be a laptop computer, a tablet computer, a netbook computer, a personal computer (PC), a desktop computer, a personal digital assistant (PDA), a smart phone, or any programmable electronic device capable of communicating with other computing devices (not shown) within computational environment100via network102. In another embodiment, server computer120represents a computing system utilizing clustered computers and components (e.g., database server computers, application server computers, etc.) that act as a single pool of seamless resources when accessed within computational environment100. In the depicted embodiment, server computer120includes database122and program150. In other embodiments, server computer120may contain other applications, databases, programs, etc. which have not been depicted in computational environment100. Server computer120may include internal and external hardware components, as depicted and described in further detail with respect toFIG.4. Database122is a repository for data used by program150. In the depicted embodiment, database122resides on server computer120. In another embodiment, database122may reside elsewhere within computational environment100provided program150has access to database122. A database is an organized collection of data. Database122can be implemented with any type of storage device capable of storing data and configuration files that can be accessed and utilized by program150, such as a database server, a hard disk drive, or a flash memory. In an embodiment, database122contains corpus124. Corpus124contains a plurality of ground truths with associated events (i.e., information that is known to be true). In an embodiment, the contained ground truths include user and entity events contained in identity logs, device logs, domain name service (DNS) logs, application logs, database logs, and network logs. Here, program150mines UBA to reveal anomalies, even when said anomalies occur at a low frequency and over extended periods of time. In an embodiment, program150collects and stores information, associated with one or more users, from access logs, authentication logs, account changes, network logs (e.g., proxies, firewalls, IPS, and VPNs), endpoint and application logs. Program150is a program for scaling anomaly detector evaluations over high-dimensional space. 
In various embodiments, program150may implement the following steps: create a binary cluster of events by bootstrapping a set of ground truths obtained with a rule engine applied to a set of high-dimensional datapoints, wherein the binary cluster contains two clusters each containing a plurality of high-dimensional datapoints; determine one or more peer groups for a set of unknown high-dimensional datapoints utilizing a trained multiclass classifier, wherein the high-dimensional datapoints are assigned to one or more peer groups by the trained multiclass classifier using an incremental learning algorithm in order to reduce system resources; create an activity distribution for each unknown high-dimensional datapoint associated with a user in the set of unknown high-dimensional datapoints and each peer group; calculate a deviation percentage between the activity distribution of the user and each peer group associated with the user; and, responsive to exceeding a deviation threshold, classify the user or associated high-dimensional datapoints as risky. In the depicted embodiment, program150is a standalone software program. In another embodiment, the functionality of program150, or any combination of programs thereof, may be integrated into a single software program. In some embodiments, program150may be located on separate computing devices (not depicted) but can still communicate over network102. In various embodiments, client versions of program150reside on any other computing device (not depicted) within computational environment100. In the depicted embodiment, program150includes anomaly detector152. Program150is depicted and described in further detail with respect toFIGS.2and3. Anomaly detector152is representative of a model utilizing deep learning techniques to train, calculate weights, ingest inputs, and output a plurality of solution vectors (e.g., user risk scores based on associated events). Anomaly detector152is utilized by program150for risk profiling and unified user identities. Anomaly detector152profiles risk by assigning risk to different security use cases. Risk is assigned to each event associated with a user and increases depending on the severity and reliability of the detected event. In an embodiment, anomaly detector152generates security insights and profiles risks of users. In a further embodiment, program150provides full data security and audit data visibility, real-time controls and automated workflows that span disparate data environments. In another embodiment, program150utilizes events contained in corpus124to make recommendations based on sales trends, analyze usage and user preferences for existing and future product releases, determine how users interact with an application to predict future usage and preferences, and detect compromised credentials and insider threats by locating anomalous behavior. In an embodiment, anomaly detector152is comprised of any combination of deep learning models, techniques, and algorithms (e.g., decision trees, Naive Bayes classification, support vector machines for classification problems, random forest for classification and regression, linear regression, least squares regression, logistic regression). In an embodiment, anomaly detector152utilizes transferrable neural network algorithms and models (e.g., long short-term memory (LSTM), deep stacking network (DSN), deep belief network (DBN), convolutional neural networks (CNN), compound hierarchical deep models, etc.) that can be trained with supervised or unsupervised methods.
The training of anomaly detector152is depicted and described in further detail with respect toFIG.2. The present invention may contain various accessible data sources, such as database122, that may include personal storage devices, data, content, or information the user wishes not to be processed. Processing refers to any, automated or unautomated, operation or set of operations such as collection, recording, organization, structuring, storage, adaptation, alteration, retrieval, consultation, use, disclosure by transmission, dissemination, or otherwise making available, combination, restriction, erasure, or destruction performed on personal data. Program150provides informed consent, with notice of the collection of personal data, allowing the user to opt in or opt out of processing personal data. Consent can take several forms. Opt-in consent can impose on the user to take an affirmative action before the personal data is processed. Alternatively, opt-out consent can impose on the user to take an affirmative action to prevent the processing of personal data before the data is processed. Program150enables the authorized and secure processing of user information, such as tracking information, as well as personal data, such as personally identifying information or sensitive personal information. Program150provides information regarding the personal data and the nature (e.g., type, scope, purpose, duration, etc.) of the processing. Program150provides the user with copies of stored personal data. Program150allows the correction or completion of incorrect or incomplete personal data. Program150allows the immediate deletion of personal data. FIG.2depicts flowchart200illustrating operational steps of program150for scaling anomaly detector evaluations over high-dimensional space, in accordance with an embodiment of the present invention. Program150bootstraps ground truth (step202). In an embodiment, program150initiates responsive to a retrieved or inputted training set or set of ground truths. In another embodiment, program150initiates responsive to a request for UBA classification of one or more events and/or users. Program150bootstraps ground truth by identifying trigger rules; aggregating the identified trigger rules into a labeled dataset with associated binary choices; correlating the aggregated dataset with representation in high-dimensional space; and clustering the labeled high-dimensional datapoints. Program150measures cluster performance (step204). In an embodiment, responsive to the clustered high-dimensional datapoints, program150labels each datapoint with labels generated through the identification of the trigger rules, as described in step302. Responsive to the applied labels, program150utilizes a classification metric (e.g., confusion matrix, receiver operating characteristic (ROC) curve, precision-recall curve, logarithmic loss, silhouette coefficient, etc.) to evaluate the correctness of the utilized clustering algorithm. In another embodiment, program150utilizes any desired classification metric to compare the clustering algorithm performance against the ground truth established by the identified rules. Said metrics allow program150to directly compare the measurement of ground truth between the output of anomaly detector152and the results of the ground truth generated from the rules bootstrapped in step202and steps302-308.
In the previous embodiments, program150calculates said metrics as if the classifier had been trained based upon ground truth as opposed to predicted values, thus fostering a reliable method of collecting evaluative metrics. Responsively, program150calculates a majority class for each cluster utilizing the aforementioned labels. In an embodiment, if program150utilizes a pure assignment algorithm, then program150skips calculating the majority class of each cluster. In this embodiment, the clustering algorithm itself serves as a predictor, thus no classifier is needed. In the situation where the corresponding clustering algorithm creates overlapping clusters (i.e., clusters with shared datapoints), program150utilizes fuzzy labelling in which program150assigns a score or probability for each of the true labels identified above. In an embodiment, program150assigns the class or label with the highest associated score or probability. For example, program150labels either an event or a user utilizing the corresponding score, or, alternately, an additional fuzzy label is created and assigned. If said fuzzy label is created, then program150recalculates classification metrics to account for the additional fuzzy label. In an embodiment, program150utilizes the fuzzy label as an alternative class in order to compare the fuzzy label to the two ground truth classes (e.g., binary cluster). If a fuzzy label matches a particular ground truth class while exceeding a certain threshold of the classification metric, then program150randomly assigns the overlapping datapoints (i.e., events) as one ground truth class or the other. In another embodiment, program150presents the datapoints and associated information to a user and assigns said datapoints based on a user response. In another embodiment, program150discards or deletes the overlapping datapoints. In an embodiment, program150applies the aforementioned embodiments to each datapoint in the labeled dataset. Step204results in a set of true labels and a set of labels assigned by a clustering algorithm. Program150evaluates unlabeled datapoints (step206). Program150subsequently calculates a label for unknown events associated with one or more users if the clustering algorithm, utilized in the steps above, has served as a reliable predictor. In an embodiment, program150utilizes a predetermined classification metric threshold to determine if the clustering algorithm is a reliable predictor (e.g., >90% classification accuracy). In a further embodiment, if program150determines that the clustering algorithm is not reliable, then program150adjusts clustering parameters (i.e., K clusters, etc.) and recalculates classification metrics. In a further embodiment, if program150determines that the clustering algorithm is reliable, then program150clusters unknown samples (unlabeled datapoints) into one of the two existing clusters contained in the binary cluster described in step204. In an embodiment, the clusters are preconstructed per model and are not formed by collections of unknown events or users. In this embodiment, program150utilizes classification via clustering, allowing an unknown sample label to be predicted based upon which cluster the sample is most related to. Further, the clusters have been formed from the ground truth, thus this method should be more reliable.
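The bootstrapped ground truth and the "classification via clustering" of steps 202-206 could be sketched as follows: fit a two-cluster model on the rule-labeled datapoints, assign each cluster its majority label, check agreement against the ground truth, and, if the agreement clears a reliability threshold, label unknown datapoints by the preconstructed cluster they fall into. KMeans, accuracy as the metric, and the 0.9 threshold are assumptions; the embodiment leaves the clustering algorithm and metric open.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import accuracy_score

def build_binary_cluster(X_labeled, y_true, reliability_threshold=0.9):
    """X_labeled: high-dimensional datapoints; y_true: rule-based labels (0/1)."""
    y_true = np.asarray(y_true)
    km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X_labeled)
    # Majority label per cluster, derived from the bootstrapped ground truth.
    majority = {}
    for c in (0, 1):
        members = y_true[km.labels_ == c]
        majority[c] = int(np.round(members.mean())) if members.size else c
    # Compare the clustering-derived labels against the ground truth labels.
    y_pred = np.array([majority[c] for c in km.labels_])
    reliable = accuracy_score(y_true, y_pred) >= reliability_threshold
    return km, majority, reliable

def classify_unknown(km, majority, X_unknown):
    """Classification via clustering: an unknown sample takes the majority
    label of the preconstructed cluster it is closest to."""
    return [majority[c] for c in km.predict(X_unknown)]
```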
In an embodiment, program150continuously performs steps202-206on a recurring basis at a predetermined frequency (i.e., a user preference) in order to maintain the integrity of the preconstructed clusters, and thereby classification via clustering. Program150scales evaluative capabilities (step208). Program150scales the evaluative capabilities described in step206in order to support a large user base (e.g., >10000 users). The procedure described in steps202-206can be limited due to the system resource requirement of the high-dimensional datapoints, especially as the numbers of users increase. To compensate for such restrictions, program150trains and utilizes a multiclass classification algorithm, such as k-nearest-neighbors, multinomial naive Bayes, random forest, any deep learning solution with an output layer using softmax regression, or any similar algorithm which yields probabilities as outputs over high-dimensional datapoints. Program150extends the classification procedure from steps202-206from binary classification to multiclass classification. In an embodiment, program150maintains a set of defined peer groups, each peer group containing one or more users. In this embodiment, a user belongs to one or more peer groups. For example, user X could be a developer, an administrator, or a member of both peer groups. In an embodiment, program150ensures that users are a part of one or more peer groups, where peer groups yield a set of natural labels for a multiclass classifier. In an embodiment, program150utilizes a plurality (e.g., 200,000 users) of clustered users to train the multiclass classifier; specifically, input points are used to create the user clusters prior to a dimensionality reduction. Responsively, program150collects events which are identified as related to a particular peer group. For example, during multiclass classifier training, all datapoints contributing to a user labeled as “administrator”, “developers”, “hr”, etc. are conglomerated together resulting in a set of datapoints labeled with a respective peer group. In another embodiment, events are mapped to multiple peer groups. In the situation of an event with multiple mapped peer groups, program150either duplicates the events and labels each event with a different label, or program150utilizes multilabel learning in which an event belongs to multiple classes which must be predicted. In a further embodiment, program150utilizes activity distribution to perform peer group clustering to create a set of events in high dimensional space which are collected for use by the multiclass classifier. In order to account for large event datasets, program150utilizes incremental learning modifications of the aforementioned algorithms, where training will occur similarly to batch training but instead the models are incrementally fed training samples (e.g., clustered high-dimensional datapoints as described in step206), thus avoiding the need for large system resources. In this embodiment, incremental algorithms account for concept drift better than batch training, thus avoiding overfitting. Responsive to a trained classifier, program150evaluates each collected event to determine a respective peer group label. In an embodiment, program150utilizes the trained multiclass classifier to output a set of probabilities representing probable peer groups. In this embodiment, program150considers the top N probabilities of the result of the classification, where N is predetermined.
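The incremental, multiclass peer-group classifier described here might look like the sketch below, assuming scikit-learn's SGDClassifier with a logistic loss so that class probabilities are available and partial_fit can be used to feed training batches incrementally. The peer group label set, batching scheme, and feature layout are placeholders.

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

PEER_GROUPS = ["administrators", "developers", "hr"]   # illustrative label set

def train_incrementally(batches, classes=PEER_GROUPS):
    """batches: iterable of (X, y) pairs streamed from storage, so the full
    high-dimensional dataset never has to be resident in memory."""
    clf = SGDClassifier(loss="log_loss")  # "log" in older scikit-learn releases
    first = True
    for X, y in batches:
        if first:
            clf.partial_fit(X, y, classes=np.array(classes))
            first = False
        else:
            clf.partial_fit(X, y)
    return clf

def top_n_peer_groups(clf, event_vector, n=3):
    """Return the N most probable peer groups for one event (the top-N check)."""
    probs = clf.predict_proba([event_vector])[0]
    order = np.argsort(probs)[::-1][:n]
    return [(clf.classes_[i], float(probs[i])) for i in order]
```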
For example, program150utilizes the top three probabilities to identify a peer group for an event. In an embodiment, if the correct peer group of a user appears in the top three probable labels, then program150considers the corresponding event to be safe or without increased risk. In this embodiment, increasing N decreases the chance of a false positive. For example, a setting of N=1 results in an increased risk score if a user mapped to a specific peer group does not match the most probable label as determined by the classifier. In a further embodiment, each peer group has specific considerations. For example, program150handles an "administrators" group (as well as reports on peer groups not containing any valid users such as system or service accounts) as a special peer group because of elevated permission requirements. In this example, if an event ranks in the top N probabilities as belonging to the "administrators" peer group and the corresponding user does not belong to that group, then program150increases the user risk score and flags the event. Program150creates an activity distribution for each user utilizing the multiclass classifier and compares each activity distribution against respective peer groups (i.e., aggregated activity distributions for a peer group based on constituent datapoints (i.e., events)). In an embodiment, the activity distribution for a user or entity is a set of events associated with said user or entity. For example, each event is defined by 1,500 associated categories or features. Program150utilizes activity distribution to calculate how much one or more events associated with the user deviate from that of an associated peer group or peer groups. For example, program150calculates deviation from a peer group by calculating a percentage representing how much a user has deviated from said peer group. In an embodiment, program150utilizes the created activity distribution as a feature vector for the multiclass classifier. In an embodiment, program150utilizes a binary vector to represent event categories. For example, program150utilizes the binary vector such that an event is represented in 188 bytes. Further, in the binary vector, one bit represents the presence or absence of a particular low-level category, further reducing the required amount of system storage and system processing. Responsive to a user or event exceeding a deviation threshold based on created activity distributions, program150evaluates the user or event as risky or assigns an elevated risk score indicating the level of risk. In an embodiment, program150performs the classification procedure described above for every new event collected in a predetermined period, frequency, or pattern (e.g., every 7 days). In an embodiment, if the evaluations from the clustering classifier, as described in steps202-206, and the multiclass classifier concur and exceed a concurrence threshold, then for that period the classifier is not modified. In a further embodiment, if program150detects drift (i.e., degradation of prediction accuracy due to changes in the environment) in the classifiers, then evaluations from each classifier of historical periods are used as a base for the previous event while the datapoints classified via clustering for that same period are appended (i.e., reclassified). In this embodiment, program150utilizes incremental learning to account for the multiclass classifier slowly drifting, while maintaining a historical perspective of the data.
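The following is a demonstrative, non-limiting sketch (not part of the original disclosure) of the top-N probability check described above, including the special handling of the "administrators" peer group; the function name, the predict_proba interface, and the risk increments are illustrative assumptions.

# Demonstrative sketch only; assumes a classifier exposing predict_proba.
import numpy as np

def score_event(clf, event_vector, user_peer_groups, classes, n_top=3):
    """Returns an illustrative risk increment and the top-N probable peer groups."""
    probs = clf.predict_proba(event_vector.reshape(1, -1))[0]
    top_n = [classes[i] for i in np.argsort(probs)[::-1][:n_top]]

    risk_increase = 0
    if not any(group in top_n for group in user_peer_groups):
        risk_increase += 1   # event does not resemble any of the user's peer groups
    if "administrators" in top_n and "administrators" not in user_peer_groups:
        risk_increase += 1   # admin-like activity by a user outside that group
    return risk_increase, top_n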
Responsively, the multiclass classifier is retrained over this new set of samples, and a new model is created. Responsively, program150deploys the newly created model and continues to utilize said model when evaluating new unknown datapoints. In an example, responsive to the detection (i.e., evaluation) of one or more risky user events through the evaluation of unknown events collected from user network behavior, program150restricts user network activity, removes user permissions, and/or notifies an administrator with information regarding the user, identified risky events, and associated probabilities. In a further example, the notification includes classification metrics associated with the utilized clustering classifier and multiclass classifier. FIG.3depicts flowchart300illustrating operational steps of program150for bootstrapping ground truths, in accordance with an embodiment of the present invention. Program150identifies trigger rules (step302). In an embodiment, program150identifies a set of rules as applied by a rule engine over a set of events as contained within corpus124. In this embodiment, program150identifies rules as applied to events which may or may not trigger said rules in order to create a set of ground truths that accurately describe the dataset. In a further embodiment, program150identifies and labels each event that triggers one or more rules (e.g., user risk score rules). In this embodiment, program150labels the remaining events (i.e., events that do not trigger any rules). In the aforementioned embodiments, said rules or set of rules are arbitrarily complex, but for each rule in the set of rules the end result is a binary choice of whether or not an associated score (i.e., user risk) increases. For example, if a rule is triggered on a specific event, an associated risk score increases (e.g., the user risk score for a user is increased), while if no rule is triggered, then the risk score is unchanged or reduced. Additionally, a user may trigger an arbitrary number of rules or none at all. In another embodiment, program150develops and maintains a set of all triggers identified by a plurality of events. In this embodiment, program150identifies a set of ground truths utilizing the maintained set of all triggers as expressed by the rule engine. For example, program150identifies a set of ground truths relating to user risk elevation in a corporate environment. Program150aggregates identified triggers into a labeled dataset with associated binary choices (step304). Responsive to the collection and identification of trigger rules, program150creates a labeled dataset of events and binary choices. For example, if an event has triggered a rule, program150labels the event as potentially malicious or risky. In another example, if the event does not trigger a rule, then program150labels the event as normal. In an embodiment, events are associated with users, and thus the final outcome of the rule evaluation is a user risk score. Program150correlates the aggregated dataset with representation in high-dimensional space (step306). In an embodiment, program150correlates the events within the labeled dataset with event representation in a high-dimensional space used by a machine learning clustering algorithm, further described in step308.
Here, program150uses a high-dimensional space where the labeled dataset is directly represented in a space spanned by constituent event attributes, with each event represented as a point in the space (i.e., high-dimensional datapoint) with its position depending on respective attribute values (e.g., labels or risk scores). In this embodiment, program150translates each labeled event in the labeled dataset into a respective point in high-dimensional space to be subsequently clustered, further described in step308. For example, a translated point includes low level categories (e.g., risk categories) associated with the event and one or more corresponding rules or triggers. Program150clusters the labelled high-dimensional datapoints (step308). Responsive to a plurality of labelled high-dimensional datapoints, program150places each high-dimensional datapoint into a cluster or plurality of clusters utilizing a cluster algorithm (e.g., binary classifier, etc.). For example, program150clusters the labeled high-dimensional datapoints of events and draws said datapoints into a binary cluster containing two clusters. In an embodiment, program150utilizes a hierarchical agglomerative algorithm to force the assignment of datapoints to only two clusters. In another embodiment, program150utilizes any clustering algorithm (i.e., cluster classifier) where K is specified, such as K-Means or K-Medoids. In this embodiment, the clustering algorithm under test is the algorithm which denotes the increase in risk. Further, the purpose of the cluster classifier algorithm must match the purpose of the rules, described in step302. In another embodiment, program150utilizes a density-based clustering algorithm to responsively merge a plurality of clusters into two distinct clusters. In another embodiment, program150utilizes the same algorithm corresponding to anomaly detector152. For example, if anomaly detector152uses a Sum of Norms algorithm, then program150utilizes the same algorithm for bootstrapping the ground truths and clustering. Step308results in a cluster classifier outputting a pair of clusters, where one cluster is composed of those events which indicate an increased risk, while the other is composed of those events which do not indicate an increased risk. FIG.4depicts block diagram400illustrating components of server computer120in accordance with an illustrative embodiment of the present invention. It should be appreciated thatFIG.4provides only an illustration of one implementation and does not imply any limitations with regard to the environments in which different embodiments may be implemented. Many modifications to the depicted environment may be made. Server computer120includes communications fabric404, which provides communications between cache403, memory402, persistent storage405, communications unit407, and input/output (I/O) interface(s)406. Communications fabric404can be implemented with any architecture designed for passing data and/or control information between processors (such as microprocessors, communications, and network processors, etc.), system memory, peripheral devices, and any other hardware components within a system. For example, communications fabric404can be implemented with one or more buses or a crossbar switch. Memory402and persistent storage405are computer readable storage media. In this embodiment, memory402includes random access memory (RAM). In general, memory402can include any suitable volatile or non-volatile computer readable storage media.
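As a demonstrative, non-limiting illustration of the ground-truth bootstrapping flow of steps302through308above (not part of the original disclosure), the following Python sketch labels events by whether they trigger any rule, embeds them as high-dimensional points, and forces a two-cluster assignment with a hierarchical agglomerative algorithm; the rule_engine and vectorize callables are illustrative placeholders.

# Demonstrative sketch only; rule_engine and vectorize are hypothetical callables.
import numpy as np
from sklearn.cluster import AgglomerativeClustering

def bootstrap_ground_truths(events, rule_engine, vectorize):
    """rule_engine(event) -> list of triggered rules;
    vectorize(event, triggered) -> high-dimensional feature vector."""
    labels, points = [], []
    for event in events:
        triggered = rule_engine(event)
        labels.append(1 if triggered else 0)         # 1 = potentially malicious/risky
        points.append(vectorize(event, triggered))   # categories + triggers as features

    X, y = np.vstack(points), np.array(labels)
    clusters = AgglomerativeClustering(n_clusters=2).fit_predict(X)
    # One cluster should gather risk-increasing events, the other the remainder;
    # y provides the bootstrapped ground truth for evaluating that split.
    return X, y, clusters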
Cache403is a fast memory that enhances the performance of computer processor(s)401by holding recently accessed data, and data near accessed data, from memory402. Program150may be stored in persistent storage405and in memory402for execution by one or more of the respective computer processor(s)401via cache403. In an embodiment, persistent storage405includes a magnetic hard disk drive. Alternatively, or in addition to a magnetic hard disk drive, persistent storage405can include a solid-state hard drive, a semiconductor storage device, a read-only memory (ROM), an erasable programmable read-only memory (EPROM), a flash memory, or any other computer readable storage media that is capable of storing program instructions or digital information. The media used by persistent storage405may also be removable. For example, a removable hard drive may be used for persistent storage405. Other examples include optical and magnetic disks, thumb drives, and smart cards that are inserted into a drive for transfer onto another computer readable storage medium that is also part of persistent storage405. Software and data412can be stored in persistent storage405for access and/or execution by one or more of the respective processors401via cache403. Communications unit407, in these examples, provides for communications with other data processing systems or devices. In these examples, communications unit407includes one or more network interface cards. Communications unit407may provide communications through the use of either or both physical and wireless communications links. Program150may be downloaded to persistent storage405through communications unit407. I/O interface(s)406allows for input and output of data with other devices that may be connected to server computer120. For example, I/O interface(s)406may provide a connection to external device(s)408, such as a keyboard, a keypad, a touch screen, and/or some other suitable input device. External devices408can also include portable computer readable storage media such as, for example, thumb drives, portable optical or magnetic disks, and memory cards. Software and data used to practice embodiments of the present invention, e.g., program150, can be stored on such portable computer readable storage media and can be loaded onto persistent storage405via I/O interface(s)406. I/O interface(s)406also connect to a display409. Display409provides a mechanism to display data to a user and may be, for example, a computer monitor. The programs described herein are identified based upon the application for which they are implemented in a specific embodiment of the invention. However, it should be appreciated that any particular program nomenclature herein is used merely for convenience, and thus the invention should not be limited to use solely in any specific application identified and/or implied by such nomenclature. The present invention may be a system, a method, and/or a computer program product. The computer program product may include a computer readable storage medium (or media) having computer readable program instructions thereon for causing a processor to carry out aspects of the present invention. The computer readable storage medium can be a tangible device that can retain and store instructions for use by an instruction execution device. 
The computer readable storage medium may be, for example, but is not limited to, an electronic storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the foregoing. A non-exhaustive list of more specific examples of the computer readable storage medium includes the following: a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or Flash memory), a static random access memory (SRAM), a portable compact disc read-only memory (CD-ROM), a digital versatile disk (DVD), a memory stick, a floppy disk, a mechanically encoded device such as punch-cards or raised structures in a groove having instructions recorded thereon, and any suitable combination of the foregoing. A computer readable storage medium, as used herein, is not to be construed as being transitory signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through a waveguide or other transmission media (e.g., light pulses passing through a fiber-optic cable), or electrical signals transmitted through a wire. Computer readable program instructions described herein can be downloaded to respective computing/processing devices from a computer readable storage medium or to an external computer or external storage device via a network, for example, the Internet, a local area network, a wide area network and/or a wireless network. The network may comprise copper transmission cables, optical transmission fibers, wireless transmission, routers, firewalls, switches, gateway computers and/or edge servers. A network adapter card or network interface in each computing/processing device receives computer readable program instructions from the network and forwards the computer readable program instructions for storage in a computer readable storage medium within the respective computing/processing device. Computer readable program instructions for carrying out operations of the present invention may be assembler instructions, instruction-set-architecture (ISA) instructions, machine instructions, machine dependent instructions, microcode, firmware instructions, state-setting data, or either source code or object code written in any combination of one or more programming languages, including an object oriented programming language such as Smalltalk, C++ or the like, conventional procedural programming languages, such as the “C” programming language or similar programming languages, and quantum programming languages such as the “Q” programming language, Q#, quantum computation language (QCL) or similar programming languages, low-level programming languages, such as the assembly language or similar programming languages. The computer readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server. In the latter scenario, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider). 
In some embodiments, electronic circuitry including, for example, programmable logic circuitry, field-programmable gate arrays (FPGA), or programmable logic arrays (PLA) may execute the computer readable program instructions by utilizing state information of the computer readable program instructions to personalize the electronic circuitry, in order to perform aspects of the present invention. Aspects of the present invention are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer readable program instructions. These computer readable program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. These computer readable program instructions may also be stored in a computer readable storage medium that can direct a computer, a programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer readable storage medium having instructions stored therein comprises an article of manufacture including instructions which implement aspects of the function/act specified in the flowchart and/or block diagram block or blocks. The computer readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other device to cause a series of operational steps to be performed on the computer, other programmable apparatus or other device to produce a computer implemented process, such that the instructions which execute on the computer, other programmable apparatus, or other device implement the functions/acts specified in the flowchart and/or block diagram block or blocks. The flowchart and block diagrams in the Figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods, and computer program products according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems that perform the specified functions or acts or carry out combinations of special purpose hardware and computer instructions. 
The descriptions of the various embodiments of the present invention have been presented for purposes of illustration but are not intended to be exhaustive or limited to the embodiments disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the invention. The terminology used herein was chosen to best explain the principles of the embodiment, the practical application or technical improvement over technologies found in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein. | 41,518 |
11943245 | DETAILED DESCRIPTION OF SOME DEMONSTRATIVE EMBODIMENTS Some embodiments include systems, devices, and methods of protecting electronic and/or Internet-connected devices against fraudulent and malicious activities. The Applicants have realized that a large portion of electronic devices, end-user devices, Internet-connected devices and end-points are not protected, at all or properly, against malicious activity or attacks. Such devices are often exposed to malware, which in turn may cause identity theft, theft of personal data, unauthorized access to privileged information or to a privileged account, unauthorized use of the Internet-connected device itself, and/or other malicious or harmful activities that are not authorized by the owner or the legitimate operator of the Internet-connected device. The Applicants have also realized that the utilization of conventional anti-virus or anti-malware software still does not provide proper and complete protection to such Internet-connected devices. In accordance with some embodiments, a novel protection system utilizes a combination of two methods or processes or components. Some embodiments may be used in conjunction with non-encrypted Internet traffic, or in conjunction with encrypted Internet traffic, or in conjunction with both encrypted and non-encrypted Internet traffic. A first method or process or component performs anomaly detection, by detecting an abnormally sharp increase (or other irregular increase) in the number of requests per second that are outgoing from the Internet-connected device, and/or by detecting a large number of requests to access websites or domains or other online venues that are known to be associated with negative reputation or with questionable reputation or with suspicious activities (e.g., "phishing" websites, social engineering websites, malware-serving or malware-containing websites). A second method or process or component performs analysis of the behavioral variation of the Internet-connected device; such as, detecting changes in (or a deviation from) the navigation patterns that the device typically exhibited, thereby indicating a fraudulent or malicious activity. For example, an Internet-connected smoke detector or refrigerator is typically configured to access the same online destination or website or server; and new attempt(s) by such device to access a new, different, website or domain or server, may indicate that the device was compromised or that a malicious actor has taken control over the device (e.g., via malware, via a man-in-the-middle attack, via theft or switching of a SIM card, or the like). In another example, a legitimate human user may spend approximately the same time visiting the same types of websites (e.g., spending 60% of the time in Social Networks; spending 30% of the time in consuming News; and spending 10% of the time in Online Shopping); some embodiments may detect a change, or a deviation from, such exhibited usage patterns, of that user and/or of other users of the same type, thereby indicating fraudulent or malicious or unauthorized activity. Some embodiments may provide a Machine Learning (ML) based system, able to detect anomalies for a non-labeled multi-variate time series. Such a system may include, for example, a data collector and mediator unit; a predictor unit; a re-training unit; and/or other suitable components as described herein.
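The following is a demonstrative, non-limiting Python sketch (not part of the original disclosure) of the first detection component described above: flagging an abnormally sharp rise in outgoing requests per second, or a large number of requests to destinations known to have negative reputation; the category names, the spike factor, and the thresholds are illustrative assumptions.

# Demonstrative sketch only; thresholds and category names are illustrative.
BAD_REPUTATION = {"malware", "phishing", "botnet", "social-engineering"}

def looks_anomalous(requests_per_second, baseline_rps, visited_categories,
                    spike_factor=5.0, bad_hits_threshold=3):
    """requests_per_second: currently observed outgoing rate for the device;
    baseline_rps: the device's typical rate; visited_categories: reputation
    categories of the domains accessed in the observation window."""
    sharp_increase = requests_per_second > spike_factor * max(baseline_rps, 1.0)
    bad_hits = sum(1 for category in visited_categories if category in BAD_REPUTATION)
    return sharp_increase or bad_hits >= bad_hits_threshold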
Reference is made toFIG.1, which is a schematic block diagram illustration of a system100for protecting electronic and/or Internet-connected devices against fraudulent and malicious activities, in accordance with some demonstrative embodiments. One or more end-user devices or electronic devices or Internet-connected devices, such as devices101and102and103, may be or may include, for example, a smartphone, a tablet, a laptop computer, a desktop computer, a smart-watch, a smart television, a gaming device, an Internet Protocol (IP) connected device, an Internet-of-Things (IoT) device, an Internet-connected home appliance, an Internet-connected camera or security camera, an Internet-connected sensor, an Internet-connected smoke detector, an Internet-connected vending machine, or other electronic device or Internet-connected device or device having capability to connect to the Internet105. For example, device101(e.g., a smartphone) may connect to the Internet via a Cellular Service Provider (CSP); whereas device102(e.g., a laptop computer) may connect to the Internet via an Internet Service Provider (ISP); whereas device103(e.g., a desktop computer) may connect to the Internet over a wired communication link. Accordingly, system100may include a CSP/ISP network110, which includes one or more network elements, communication units, radios, switches, hubs, wired links, wireless links, and/or other elements that together provide the functionality of a CSP network and/or of an ISP network, and which provide Internet access or Internet connectivity to devices101-103. In some embodiments, a Data Collector and Mediator Unit112is connected within the CSP network, or within the ISP network, or at an exit node of the CSP network, or at an exit node of the ISP network, or at a communication segment that connects the CSP network to the Internet, or at a communication segment that connects the ISP network to the Internet, or at a communication segment that connects the CSP/ISP network to an entry node of the Internet. In some embodiments, Data Collector and Mediator Unit112is deployed as an in-line network element or an in-line network node, between the CSP/ISP network110and the Internet105, or between the CSP/ISP network110and the public network. In other embodiments, Data Collector and Mediator Unit112is deployed in parallel to the communication segment that connects the CSP/ISP network110and the Internet105, operating in tap mode or as a network tap element. In some embodiments, Data Collector and Mediator Unit112intercepts traffic, or monitors traffic, or listens to traffic, or collects traffic, or duplicates or replicates traffic for monitoring. The monitored traffic may include packets, data packets, outgoing traffic, incoming traffic, outbound traffic, inbound traffic, payload, headers, meta-data of packets (e.g., origin, destination, packet number, packet size, timestamp), TCP/IP traffic, HTTP traffic, HTTPS traffic, FTP traffic, and/or other types of traffic. In some embodiments, Data Collector and Mediator Unit112collects or gathers traffic (e.g., packets), and replicates them with their respective timestamps; and stores them towards further analysis by the Predictor Unit120. 
In some embodiments, Data Collector and Mediator Unit112collects and provides traffic to the Predictor Unit120, or selectively generates and provides data records for selected monitored traffic; for example, traffic corresponding to (or associated with) a particular Internet-connected device, or traffic corresponding to (or associated with) a particular cellular subscriber or CSP subscriber or CSP customer, or traffic corresponding to (or associated with) a particular Internet subscriber or ISP subscriber or ISP customer, or traffic corresponding to (or associated with) a particular account or CSP account or ISP account, or traffic corresponding to (or associated with) a particular type of Internet-connected devices (e.g., traffic of smartphones; or, traffic of Android smartphones; or, traffic of Samsung Galaxy smartphones; or, traffic of Internet-connected smoke detectors; or the like), or traffic corresponding to a particular type of users or subscribers (e.g., traffic of Internet-connected devices of cellular service subscribers that are known to be males in the age range of 21 to 35; or traffic of devices of Internet subscribers that are known to be subscribed to a particular guaranteed bandwidth level), or other types of traffic or categories of traffic. In some embodiments, the type of traffic to be monitored may be pre-configured in the system; or may be dynamically re-configured or modified based on one or more rules or conditions (for example, specifically monitoring traffic that is outgoing from smoke detectors, based on a discovery of a new exploit in smoke detectors). The Predictor Unit120operates to detect fraudulent or malicious activities, and/or to estimate or to determine that a particular traffic portion (e.g., a particular payload, or a particular set of packets) is associated with fraud or with a malicious activity or with an unauthorized use, based on Machine Learning/Deep Learning (ML/DL) analysis of the collected traffic and its features. Based on such analysis, Predictor Unit120sends a notification or a triggering signal or other signal or message to a Policy Enforcer Unit111, which may be located in the ISP/CSP network110or may be part of the ISP/CSP network110or may be operably associated with the ISP/CSP network110or may otherwise perform enforcement of a traffic-related policy with regard to traffic of ISP/CSP network110or with regard to traffic passing through ISP/CSP network110or with regard to traffic outgoing from ISP/CSP network110or with regard to traffic incoming to ISP/CSP network110.
Policy Enforcer Unit111may include one or more suitable sub-units or components, for example, a firewall which may be dynamically configured or re-configured based on the analysis results, a switch, a hub, a router, a traffic discarding unit to discard packets or communication flows, a traffic blocking unit to block packets or communication flows, a traffic quarantine unit to temporarily quarantine packets or communication flows, a traffic transport delay unit to intentionally inject or add a time delay to the transport or to the passage or to the relay of particular packets or communication flows, a traffic shaping unit, a traffic limiting or constraining unit, a traffic bandwidth limiting unit or filter or filtering mechanism, a traffic steering unit, a traffic re-routing unit (e.g., to re-route certain packets or flows to alternate servers and/or through other communication routes or communication links), a traffic modification unit which may be capable of dropping and/or adding and/or replacing and/or re-writing packets or packet-portions (e.g., in order to selectively remove or discard or replace malicious components), a traffic-related billing unit or charging unit (e.g., configured to increase or to introduce a particular monetary charge to a particular Internet subscriber or Cellular subscriber due to detection of malicious activity), and/or other suitable traffic enforcement policies or operations. In some embodiments, the above-mentioned traffic-related policies or traffic enforcement policies, may be stored in a Bank of Traffic Policies148, for example, as sets of rules that pertain to each such policy and/or that describe or define each such policy; and a Traffic Policy Selector Unit147may select one or more of those traffic policies, that the Policy Enforcer Unit111then enforces towards the ISP/CSP network110and/or within the ISP/CSP network110and/or towards the traffic that pass through ISP/CSP network110. In some embodiments, Bank of Traffic Policies148and/or Traffic Policy Selector Unit147may be implemented as part of Enforcer Unit111, or as a unit that is operably associated with Enforcer Unit111, or as part of Predictor Unit120, or as a unit that is operably associated with Predictor Unit120, or as other suitable component of system100. Additionally, a Notification Generator Unit145, which may be part of Predictor Unit120or may be operably associated with it, or which may be part of Policy Enforcer Unit111or may be operably associated with it, or may be implemented elsewhere in system100, may generate a notification message or signal, and may deliver or send or transmit it to one or more pre-defined recipients and/or to one or more dynamically-selected recipients; for example, to an administrator or operator of system100or of Predictor Unit120or of Policy Enforcer Unit111, to an administrator or operator of ISP/CSP network110, to an administrator or operator of a fraud detection/fraud mitigation/fraud investigation department, or the like. In some embodiments, the recipient's identity may be dynamically determined; for example, it may be or may include an email address of a contact person or an owner of a domain name that is associated with an outgoing traffic-portion or with an incoming traffic-portion, or with a payload or a source or a destination of a particular set of packets. 
In some embodiments, the notification may optionally include a triggering signal or a triggering message, which may cause a remote server or a remote enforcement unit to perform one or more pre-defined operations (e.g., traffic blocking, traffic discarding, traffic quarantining, traffic steering, traffic re-routing, traffic delaying, or the like). In some embodiments, a mitigation unit, such as an Attack/Fraud Mitigation Unit146, may be triggered or activated by Predictor Unit120, or by the Notification Generator145; and may select and enforce (or deploy, or activate, or launch) one or more attack mitigation operations and/or fraud mitigation operations, which may be selected from a Bank of Mitigation Operations149which describes or defines such operations and/or rules for their activation or enforcement. The mitigation operations may include one or more of the operations described above; and/or other suitable operations, for example, adding a "fraud" label or tag or a "malicious activity" label or tag to a particular account or subscriber or device or source or destination or payload; adding a source address and/or a destination address and/or a sender and/or a receiver to a blacklist, and/or removing it from a whitelist (if it had appeared in such whitelist); enforcing a blacklist and/or a whitelist of senders and/or recipients and/or payloads, which should not be transported (blacklist) or which should be transported (whitelist); performing one or more monetary operations as a result of the mitigation operations; putting a freeze or a hold on an account; and/or other suitable operations. Returning now again to the operation of Predictor Unit120, this unit receives the data from the Data Collector and Mediator Unit112. The data may be received as raw data; or as partially-processed data; or as data that is gathered or grouped into dataset(s) or data-clusters or data troves, for example, each dataset corresponding to a particular communication flow, or corresponding to a particular time-slot of communications, or to a particular recipient, or to a particular sender, or to a particular payload (e.g., set of packets that are transported from a particular sender to a particular recipient), or to a type of payload, or to a type of recipients, or to a type of senders; or other dataset(s) which may be grouped based on one or more other parameters or traffic-related characteristics and/or sender-related characteristics and/or recipient-related characteristics and/or payload-related characteristics and/or other relevant characteristics (e.g., time-of-day; allocated time-slot or time-interval; day-of-week; calendar date; size of payload; size of packets; number or frequency of packets sent by a particular sender; number or frequency of packets that are destined to a particular recipient; or the like). The raw data and/or the grouped data (as datasets) may be received from Data Collector and Mediator Unit112, and/or may be grouped or re-grouped by Predictor Unit120into other dataset(s) based on one or more grouping criteria or rules or parameters. In some embodiments, Data Collector and Mediator Unit112aggregates traffic data that is observed or that is monitored for a time interval of T seconds; for example, T=1 second (or 2 seconds, or 5 seconds); and generates raw datasets, each dataset corresponding to the traffic of that time-interval of T seconds; and supplies those datasets to Predictor Unit120(and also to the Re-Training Unit130) for further processing.
Reference is made toFIG.2, which is a schematic illustration of a dataset record200, as constructed by the Data Collector and Mediator Unit112and as provided to the Predictor Unit120and to the Re-Training Unit130, in accordance with some demonstrative embodiments. For example, dataset record200may include the following fields: (a) Client-ID, indicating an identifier of the Internet-connected device that is associated with this traffic; (b) date-stamp and time-stamp; (c) Internet Protocol (IP) address of the destination; (d) the URL that was accessed or navigated to, or the destination's URL; (e) one or more categories to which the site (or domain) belong (e.g., it is a Webmail service; it is a Social Network; it is a Streaming Video website; it is a News website); (f) the number of bytes that were downloaded from the visited server or the destination's server; (g) name or identifier of a virus or a malware (if it is detected; or a pre-defined indicator, such as “-”, if it is not detected); (h) Connection ID, indicating an identifier of the connection. Referring again toFIG.1, in some embodiments, Predictor Unit120is implemented by (or may comprise) a Machine Learning (ML)/Deep Learning (DL) unit141, able to generate ML/DL based insights or estimations, or determinations (e.g., if an estimated output is associated with a numeric certainty level that is greater than a pre-defined threshold level of certainty). For example, a Dataset(s) Generator143may receive the raw data or other data from the Data Collector and Mediator Unit112, during a particular time-window or time-interval (denoted T); and may organize the data into dataset(s), or into group(s) of data-items; which are fed into the ML/DL Unit141. A Features Extractor142operates to extract one or more features from the dataset(s), for ML/DL analysis. For example, the time-window T may be 10 seconds, or 20 seconds, or 30 seconds, or 40 seconds, or 60 seconds, or 90 seconds, or other time-window which may be manually configured by a system administrator, and/or which may be dynamically set or dynamically re-configured or dynamically modified by the Dataset(s) Generator143itself, for example, based on the volume of traffic data that is pending for analysis (e.g., dynamically setting a time-window of T seconds, which corresponds to an average traffic volume of N packets or to N payload-items; wherein N is a pre-defined value or a configurable value). The extracted features are used by the ML/DL unit141to generate the estimations or determinations or insights. Optionally, a Classification Unit144operates to classify packets or payload-items or payload-units, or other types of analyzed data or monitored data, into a class or a cluster or a type, or to classify such item(s) as belonging to a particular class or type (e.g., a type of “estimated to be associated with fraudulent or malicious activity”, or a type of “estimated not to be associated with fraudulent or malicious activity”). 
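The following is a demonstrative, non-limiting Python sketch (not part of the original disclosure) of the dataset record200fields listed above with reference toFIG.2; the class and field names, and the chosen types, are illustrative assumptions.

# Demonstrative sketch only; field names and types are illustrative.
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class DatasetRecord:
    client_id: str               # identifier of the Internet-connected device
    timestamp: str               # date-stamp and time-stamp of the request
    destination_ip: str          # IP address of the destination
    url: str                     # URL that was accessed or navigated to
    categories: List[str]        # e.g., ["Webmail"], ["Social Network"], ["News"]
    bytes_downloaded: int        # bytes downloaded from the visited server
    malware_name: Optional[str]  # detected virus/malware name, or None for "-"
    connection_id: str           # identifier of the connection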
Predictor Unit120may include, and may utilize, one or more other components whose operations are further discussed herein; for example, a Clustering Unit152configured to create or detect or construct clusters of data-items or data-points (e.g., that are classified as Anomalous; or, that are classified as Non-Anomalous); a Data Encoding and Dimensions Reduction Unit151, configured to perform data encoding and/or dimensions reduction with regard to datasets and/or features; a Recurrent Neural Network (RNN) Unit153; a Natural Language Processing (NLP) Unit154; and/or other suitable units or components. Optionally, a Model Re-Training Unit130operates to utilize the latest collected data (e.g., collected in the past M minutes or in the past H hours) to re-train the ML/DL model(s) used by the ML/DL Unit141of Predictor Unit120. The re-training is performed periodically; for example, every 12 or 18 or 24 or 36 or 48 hours, and/or at time-intervals that correspond to a volume of analyzed traffic (e.g., corresponding to a pre-defined number N of analyzed packets or payload-items; such as, every 500,000 packets). The Model Re-Training Unit130uses its own Anomaly Detector131(e.g., similar to Anomaly Detector121) and its own Behavior Analysis Unit132(e.g., similar to Behavior Analysis Unit122), in order to generate or construct an updated model. The updated model(s), or in some situations a replacement model, is then provided by the Model Re-Training Unit130to the ML/DL Unit141of Predictor Unit120, to enable dynamic updating of the operational functionality of the ML/DL Unit141of Predictor Unit120. Some embodiments may thus detect fraudulent and/or malicious activity on (or of, or with regard to, or associated with) particular Internet-connected devices/users/accounts; by using the ML/DL unit (or other suitable AI engine) to detect traffic anomalies and/or behavioral anomalies using non-labeled multi-variate time series. Since the method uses non-labeled data, and/or in order to resolve problems of unsupervised learning, the method may include auto-encoding of raw datasets. The anomaly detection may operate based on a rule that a majority of user traffic is "normal" (e.g., legitimate, non-fraudulent, non-malicious), and those outliers or patterns that are away from the "normal" data clusters are indicators of a malicious/fraudulent anomaly. Having a temporal series of behaviors, the method may use a Recurrent Neural Network (RNN) to predict the expected behavior and compare it with the actual behavior in order to detect variations or anomalies in the behavior. As described, anomalies indicate a threat or a risk, and are then used for triggering traffic-related policy enforcement, as well as activation of fraud mitigation operations and/or malicious activity mitigation operations. The models used are dynamically updated and adapted to the changing environment (e.g., new behavior due to new interests of users; new traffic patterns due to introduction of new applications) by continuously or periodically re-training the models. Reference is made toFIG.3, which is a schematic illustration demonstrating a flow of operation of the Predictor Unit, in accordance with some embodiments. The predictor unit analyzes dataset records, and generates predictions or estimations with regard to traffic anomalies and with regard to abnormal/normal behaviors of end-user devices. As indicated in block301, data pre-processing is performed.
For example, datasets generated by the Data Collector and Mediator are passed through the Anomaly Detector which performs anomaly detection, to classify every window of traffic as either (I) "anomalous traffic" (or "abnormal traffic", or "irregular traffic"), or (II) "non-anomalous traffic" (or "normal traffic" or "non-abnormal traffic" or "regular traffic" or "non-irregular traffic"). For example, raw datasets are grouped or aggregated, for each particular user and for time-windows or time-intervals of T1 seconds (e.g., T1 being 10 or 15 or 20 or 30 seconds, or other pre-defined value). The output of this block may include, for example, an 8×T1 time series, where 8 is the number of metrics calculated per one second. In some embodiments, such metrics may include some or all of the following: (a) the day of the week (e.g., on a scale of 1 to 7); (b) the second of the day (e.g., in the range of 1 to 86,400); (c) the number of requests (e.g., HTTP requests, or HTTPS requests, or both HTTP and HTTPS requests) that were made within that particular second; (d) the number of bytes that were downloaded during that particular second; (e) the navigation time (e.g., the web browsing time; or the time-length that the device spends accessing a particular website or web-page or URL; or the time-length between requesting access to a website and then requesting access to another website, by the same end-user device); (f) a list of triple items, wherein each triple item includes, or is formatted as, (f1) Category index number (e.g., corresponding to News, or Games, or Social Network, or Streaming Video, or the like), and (f2) Content Type index number (e.g., corresponding to HTML content, or application content, or the like), and (f3) the number of occurrences of this pair of Category index number and Content Type index number during this particular second. In some embodiments, an anomaly is characterized by (or by taking into account) the amount and/or frequency of the requests made, and/or the categories of visited sites, and/or the distribution of requests along the time-windows (each time-window is of T1 seconds); and therefore, three additional features may be extracted and analyzed, totaling 11 features (with the previous 8 features): (I) the number of requests made; (II) the top N categories of sites that were visited or accessed during the time-window (e.g., the top 3 or the top 5 categories, such as News, Social Networks, Gaming, Streaming Videos, Search Engines, or Electronic Commerce websites); (III) the number of visits that belong to those N top categories and that are known to be visits to destinations that are associated with fraudulent or malicious activity (e.g., malware sites, domains associated with malware, phishing sites, domains associated with phishing, botnet sites, domains associated with a botnet, and possibly (in some embodiments) also anti-virus sites and/or non-categorized sites).
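The following is a demonstrative, non-limiting Python sketch (not part of the original disclosure) of the per-second metrics enumerated above; the record attribute names (category_index, content_type_index, navigation_time) mirror the illustrative DatasetRecord fields and are assumptions rather than the source's data model.

# Demonstrative sketch only; record attribute names are illustrative.
from collections import Counter

def per_second_metrics(records, day_of_week, second_of_day):
    """records: dataset records observed during one particular second."""
    requests = len(records)
    bytes_downloaded = sum(r.bytes_downloaded for r in records)
    navigation_time = sum(getattr(r, "navigation_time", 0.0) for r in records)

    # Triple items: (category index, content-type index, occurrences in this second).
    triples = Counter((r.category_index, r.content_type_index) for r in records)
    triple_list = [(cat, ct, count) for (cat, ct), count in triples.items()]

    return {
        "day_of_week": day_of_week,        # 1..7
        "second_of_day": second_of_day,    # 1..86,400
        "requests": requests,
        "bytes_downloaded": bytes_downloaded,
        "navigation_time": navigation_time,
        "triples": triple_list,
    }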
In some embodiments, additional features or group(s) of features may be extracted or monitored or analyzed, particularly for behavior analysis; for example: (a) Index of the window of traffic, which may then be used to match this window with its corresponding recurrent plot calculated in Anomalies Detection; (b) Day of the week; (c) Second of the day at which this window of traffic begins; (d) the number of requests generated in that time-window; (e) the variation in the number of requests with respect to the immediately previous window of traffic, particularly in order to detect deviation between consecutive time-windows; (f) a list of occurrences for every category (for example, to avoid getting a sparse vector of categories, some embodiments may define 120 possible categories that a site can be related to, which are then reduced to 20 categories using a transformation rule, such as CategoryIndex % 20); (g) the number of different domains (or, the number of different destination IP addresses) that were accessed during the time-window; (h) the variation in the number of domains (or destination IP addresses) from the immediately previous time-window to the current time-window, in order to detect variability between consecutive time-windows; (i) a list of the number of occurrences of the N destination domains (or, destination IP addresses) that are the most-visited during the time-window. Other suitable features may be extracted, monitored and/or analyzed. As indicated in block302, an Anomaly Detection analysis is performed to analyze a time series of these 11 features (or other suitable features), applying a scheme of sliding windows with width of T2 seconds (for example, T2 may be 600 or 900 or 1,200 seconds, or other suitable value), and with a stride of T3 seconds (e.g., T3 may be 400 or 450 or 500 seconds), to capture or to detect temporal dependencies between consecutive time-windows. Since the features have been aggregated for time-intervals of T1 seconds, every window will contain T2/T1 datasets. Anomalous traffic is handled by blocks303to307; whereas, Non-Anomalous traffic is handled by blocks308to314. Referring now to the Anomalous traffic: as indicated in block303, a Dataset_A0 is constructed, containing features for the traffic windows that have been classified as Anomalous traffic. As indicated in block304, a Data Encoder and Dimensions Reduction Unit may perform data encoding and dimensions reduction of the anomalous traffic datasets; optionally utilizing an Adam algorithm for encoder training, or other suitable adaptive moment estimation algorithm. As indicated in block305, a Dataset_A1 is thus constructed, being a dimension-reduced dataset of anomalous traffic. As indicated in block306, a clustering process is performed (Clustering A), running on the Dataset_A1, in order to classify this dataset and construct a set of clusters for anomalous traffic; optionally utilizing a Hierarchical Navigable Small World (HNSW) algorithm or other suitable clustering method.
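The following is a demonstrative, non-limiting Python sketch (not part of the original disclosure) of the sliding-window scheme of block302described above; the concrete values T1=30, T2=900, and T3=450 are merely examples chosen from the ranges mentioned in the text.

# Demonstrative sketch only; T1, T2, T3 values are illustrative examples.
T1, T2, T3 = 30, 900, 450

def sliding_windows(feature_rows, t1=T1, t2=T2, t3=T3):
    """feature_rows: per-T1-second feature vectors, in time order."""
    rows_per_window = t2 // t1   # every window contains T2/T1 datasets (900/30 = 30)
    stride_rows = t3 // t1       # window starts are T3 seconds apart (450/30 = 15 rows)
    windows = []
    for start in range(0, len(feature_rows) - rows_per_window + 1, stride_rows):
        windows.append(feature_rows[start:start + rows_per_window])
    return windows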
As indicated in block307, anomaly clusters (for example, M such anomaly clusters) are generated, representing anomalies recognized in the traffic; for example: a cluster A1, representing traffic anomalies that are related to visiting websites (or domains) that are known to be associated with malware; a cluster A2, representing traffic anomalies that are related to visiting websites (or domains) that are known to be associated with phishing attacks; a cluster A3, representing traffic anomalies that are related to visiting websites (or domains) whose content is non-categorized or is unknown; a cluster A4, representing traffic anomalies that are related to activity that involved a large number of HTTP requests and/or HTTPS requests; and so forth, with a total of M such clusters of anomalous traffic. Referring now to the Non-Anomalous traffic per block302: as indicated in block308, a Dataset_B0 is constructed, containing features for the traffic windows that have been classified as Non-Anomalous traffic (or as "regular" or "normal" traffic). As indicated in block309, a Data Encoder and Dimensions Reduction Unit may perform data encoding and dimensions reduction of the non-anomalous traffic datasets; optionally utilizing an Adam algorithm for encoder training, or other suitable adaptive moment estimation algorithm. As indicated in block310, a Dataset_B1 is thus constructed, being a dimension-reduced dataset of non-anomalous traffic. As indicated in block311, a clustering process is performed (Clustering B), running on the Dataset_B1, in order to classify this dataset and to construct a set of clusters for non-anomalous traffic; optionally utilizing a Hierarchical Navigable Small World (HNSW) algorithm or other suitable clustering method. As indicated in block312, non-anomalous clusters (for example, N such non-anomalous clusters) are generated, representing behavior (e.g., human user behavior as exhibited through the browsing or navigation operations, and/or device behavior of the Internet-connected device as exhibited through its operations and network requests) that characterizes the majority of traffic that is associated with a particular type of destination or online venue. For example: a cluster B1, representing human user behavior and/or device behavior that are exhibited in conjunction with traffic that is associated with visiting or accessing Social Network websites or destinations; a cluster B2, representing human user behavior and/or device behavior that are exhibited in conjunction with traffic that is associated with visiting or accessing Gaming websites or destinations; a cluster B3, representing human user behavior and/or device behavior that are exhibited in conjunction with traffic that is associated with visiting or accessing News websites or destinations; a cluster B4, representing human user behavior and/or device behavior that are exhibited in conjunction with traffic that is associated with visiting or accessing Ecommerce websites or destinations; a cluster B5, representing human user behavior and/or device behavior that are exhibited in conjunction with traffic that is associated with visiting or accessing Search Engines; and so forth, with a total of N such clusters that are related to non-anomalous traffic. As indicated in block313, these clusters are fed into a Recurrent Neural Network (RNN), which predicts the next behavior that is expected to be observed.
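The following is a demonstrative, non-limiting sketch (not part of the original disclosure) of the RNN of block313and the comparison of block314, written with an assumed PyTorch API: sequences of non-anomalous behavior-cluster identifiers (B1 through BN) are used to predict the next behavior, and a mismatch against the actually observed behavior signals a behavioral variation; the model architecture and layer sizes are illustrative assumptions.

# Demonstrative sketch only; assumes PyTorch, with illustrative layer sizes.
import torch
import torch.nn as nn

class NextBehaviorRNN(nn.Module):
    def __init__(self, n_clusters, embed_dim=16, hidden_dim=32):
        super().__init__()
        self.embed = nn.Embedding(n_clusters, embed_dim)
        self.rnn = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, n_clusters)

    def forward(self, cluster_sequence):           # shape: (batch, sequence length)
        hidden, _ = self.rnn(self.embed(cluster_sequence))
        return self.out(hidden[:, -1, :])          # logits for the next cluster id

def behavior_deviates(model, history, observed_next):
    """history: list of past behavior-cluster ids; observed_next: the actual next id."""
    with torch.no_grad():
        logits = model(torch.tensor([history]))
        predicted = int(logits.argmax(dim=-1))
    return predicted != observed_next              # True indicates behavioral variation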
If the RNN-generated prediction matches the actual next behavior that is exhibited by the user/the device, then there is no behavioral variation; otherwise, behavioral variation is detected. The behaviors may be handled or processed, optionally, by utilizing a Natural Language Processing (NLP) unit; for example, the system considers the series of behaviors as a sequence of "words" that forms a language, and behaviors of anomalous traffic are regarded as banned words of the language; hence the RNN learns the language during training; the training set includes only sequences of behaviors that are categorized as non-anomalous traffic, and does not include any anomalies ("banned words"). As indicated in block314, a predicted behavior analysis/comparison unit compares the behavior as predicted by the RNN, with the next actual behavior that is exhibited, in order to verify whether the user's behavior or the device's behavior has deviated. Returning now toFIG.1, in accordance with some embodiments, once the system is deployed and running, the DL model is periodically re-trained and updated; for example, once per week, or once per month, or once per 60 days, or the like, using the latest traffic data. The Re-Training Unit130may include and/or may utilize an Auto-Encoder (or Autoencoder) Unit133, such as a Neural Network (NN) or a Convolutional Neural Network (CNN) that is trained to learn efficient (e.g., reduced-dimension) representations of features. For example, for every time series, the method implements a convolution to smoothen the data, and then generates a distances vector, which is then converted into a square matrix or a recurrent plot. In a demonstrative non-limiting example, the Auto-Encoder Unit133receives as input a time series vector of 60 elements; performs a convolution to smoothen the data, and generates a vector of 56 elements; then generates a distances vector, and then a vector of 1,596 elements, which is transformed into a matrix of 56 by 56 elements or a recurrent plot of 56 by 56. The resulting matrix may be visualized as an image, or as having data corresponding to a visual image. For example, the data may be represented in (or converted to) a three-channel format, similar to Red Green Blue (RGB) values or channels of an image. In a demonstrative example, the 11 features that were described above, or some of them, may be converted into the following three channels: (a) a first channel being the Requests Channel, indicating the total number of requests; (b) a second channel being the Frequent Categories channel, formed by the N (for example, five) most visited categories of sites or destinations; (c) a third channel being the Suspicious Categories channel, formed by two models running simultaneously for the same feature set which includes: (c1) Model-1, representing malware, phishing, antivirus, botnet, no-category; (c2) Model-2, representing malware, phishing, antivirus, botnet. The result is calculated as a Boolean arithmetic function: [TRUE, FALSE]=B (Model-1, Model-2), wherein B is defined per specific use-case (for example, in some embodiments B is an AND Boolean operator; or, in other embodiments, B is an OR Boolean operator). For demonstrative purposes, reference is made toFIGS.4A and4B, which are schematic illustrations of visualizations of data representations and their transformation, in accordance with some demonstrative embodiments.
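The following is a demonstrative, non-limiting numpy sketch (not part of the original disclosure) of the recurrent-plot construction described in the example above: a 60-element series is smoothed by a convolution to 56 elements, the 56x57/2 = 1,596 pairwise distances are computed, and the result is arranged as a symmetric 56 by 56 matrix; the 5-element averaging kernel and the absolute-difference distance are illustrative assumptions.

# Demonstrative sketch only; kernel choice and distance measure are illustrative.
import numpy as np

def recurrent_plot(series):
    """series: a time series vector of 60 elements."""
    assert len(series) == 60
    kernel = np.ones(5) / 5.0                              # simple smoothing kernel
    smoothed = np.convolve(series, kernel, mode="valid")   # 60 -> 56 elements

    n = len(smoothed)                                      # 56
    plot = np.abs(smoothed[:, None] - smoothed[None, :])   # 56 by 56 distance matrix
    # The upper triangle (with the diagonal) holds the 1,596-element distances vector.
    distances_vector = plot[np.triu_indices(n)]
    return plot, distances_vector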
As shown inFIG.4A, a set of initial data which is represented by a graph401, is transformed into a two-dimensional matrix or image402, showing particular patterns therein. Similarly, as shown inFIG.4B, another set of initial data which is represented by a graph411, is transformed into a two-dimensional matrix or image412, showing particular patterns therein. Turning now to the operation of the Auto-Encoder Unit, the image data, or the recurrent plots, are fed to the Auto-Encoder Unit; which is a Deep Learning (DL) unit that uses a DL model, implemented as a NN or CNN for which the input and output are expected to be as similar as possible. The Auto-Encoder Unit compresses or encodes the input data into a code with a dimension reduction; and then tries to reconstruct in the output, from that compressed code, the original input. Accordingly, the Auto-Encoder Unit133may include three parts: an encoder (or, a dimension-reducing encoder); a code having dimension reduction; and a decoder to decode that code (and, to check whether the decoded output is sufficiently similar to the fed input). This is further demonstrated inFIG.5, which is a schematic illustration demonstrating an Auto-Encoder Unit500, in accordance with some demonstrative embodiments. For example, an Encoder511receives input data501, and encodes it into a reduced-dimension(s) code512. Then, a Decoder513decodes that code512to generate output data502, which, if the encoding was efficient and accurate, should be identical or almost-identical to the input data501, or sufficiently similar to the input data501(e.g., beyond a pre-defined threshold value of similarity). In accordance with some embodiments, the Auto-Encoder Unit calculates an error between (i) the input data (the input image), and (ii) the output data (the output image that was decoded based on the reduced-dimension/latent-space representation). If such error is greater than or equal to a pre-defined threshold value, then the corresponding traffic window (having traffic of T2 seconds) is considered Anomalous traffic; otherwise, it is considered Non-Anomalous traffic (or "normal" or "regular" traffic). In some embodiments, a system comprises: (a) a Data Collector and Mediator Unit, to monitor network traffic, and to generate datasets of network traffic; wherein each dataset includes network traffic that was monitored within a time-slot having a particular fixed time-length; (b) a Predictor Unit, comprising: a Features Extractor unit, to extract a plurality of features from said datasets; a Machine Learning (ML) unit, to run said features through an ML model and to classify a particular traffic-portion as being either (I) an anomalous traffic-portion that is associated with fraudulent or malicious activity, or (II) a non-anomalous traffic-portion that is not associated with fraudulent or malicious activity; wherein the ML unit operates on both (i) anomalies in traffic patterns, and (ii) anomalies of user behavior or device behavior; (c) a fraud and malicious activity mitigation unit, configured to trigger activation of one or more pre-defined mitigation operations with regard to traffic-portions that were classified by the ML unit as being anomalous traffic-portions that are associated with fraudulent or malicious activity. In some embodiments, the ML unit performs classification of said particular traffic-portion as anomalous or as non-anomalous, using ML analysis of a non-labeled multivariate time series.
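The following is a demonstrative, non-limiting sketch (not part of the original disclosure) of the encoder/code/decoder structure and the reconstruction-error test described above, written with an assumed PyTorch API; the fully-connected layer sizes, the code dimension, and the mean-squared-error measure are illustrative assumptions.

# Demonstrative sketch only; assumes PyTorch, with illustrative layer sizes.
import torch
import torch.nn as nn

class RecurrentPlotAutoencoder(nn.Module):
    def __init__(self, side=56, code_dim=32):
        super().__init__()
        flat = side * side
        self.encoder = nn.Sequential(nn.Flatten(), nn.Linear(flat, 256),
                                     nn.ReLU(), nn.Linear(256, code_dim))
        self.decoder = nn.Sequential(nn.Linear(code_dim, 256), nn.ReLU(),
                                     nn.Linear(256, flat))

    def forward(self, x):                       # x: (batch, 56, 56) recurrent plots
        code = self.encoder(x)                  # reduced-dimension representation
        return self.decoder(code).view_as(x)    # attempted reconstruction of the input

def is_anomalous_window(model, plot, error_threshold):
    """plot: a single 56 by 56 recurrent plot tensor for one traffic window."""
    with torch.no_grad():
        reconstruction = model(plot.unsqueeze(0))
        error = torch.mean((reconstruction - plot.unsqueeze(0)) ** 2).item()
    return error >= error_threshold             # large reconstruction error means anomalous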
In some embodiments, the ML unit is configured to perform: a first ML-based analysis for anomaly detection in patterns of network traffic that was monitored within a particular time-slot, and also, a second, parallel, ML-based analysis for anomaly detection in Internet browsing or Internet navigation patterns that are exhibited by users or devices within said particular time-slot. In some embodiments, the system further comprises: a Recurrent Neural Network (RNN) unit, which is associated with said ML unit, and which is configured to detect a variation that is greater than a pre-defined variation-threshold, between (I) an RNN-generated prediction of expected behavior of said users or devices within a next time-slot, and (II) data of actual behavior of said users or devices within said next time-slot. In some embodiments, the Predictor Unit is a hybrid unit which comprises: a Traffic Patterns anomaly detector unit, configured to run a first ML model through a first ML unit on a first plurality of extracted features that correspond to characteristics of said network traffic, and to detect an anomaly in a traffic pattern of a particular dataset of a particular time-slot; a Machine Behavior anomaly detector unit, configured to run a second ML model through a second ML unit on a second plurality of extracted features that correspond to characteristics of behavior of Internet-connected devices that are associated with the traffic of said particular dataset, and to detect an anomaly in machine behavior of said particular dataset of said particular time-slot. In some embodiments, the Predictor Unit is a hybrid unit which comprises: a Traffic Patterns anomaly detector unit, configured to run a first ML model through a first ML unit on a first plurality of extracted features that correspond to characteristics of said network traffic, and to detect an anomaly in a traffic pattern of a particular dataset of a particular time-slot; a User Behavior anomaly detector unit, configured to run a second ML model through a second ML unit on a second plurality of extracted features that correspond to characteristics of Internet navigation patterns of users who utilized Internet-connected devices during said particular time-slot, and to detect an anomaly in Internet navigation patterns of said particular dataset of said particular time-slot. In some embodiments, the ML unit performs classification of said particular traffic-portion as anomalous or as non-anomalous, using ML analysis that is based at least on the following extracted features, for a particular time-slot: (I) Internet Protocol (IP) addresses of destinations that were accessed during said particular time-slot, and (II) URLs of destinations that were accessed during said particular time-slot, and (III) a total number of bytes that were downloaded during said particular time-slot. In some embodiments, each destination that was accessed during said particular time-slot is classified as belonging to one or more website categories out of a pre-defined list of website categories; wherein the ML unit performs said classification using ML analysis that is further based on: the N most-frequent categories that were accessed during said particular time-slot, wherein N is a pre-defined integer.
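For the RNN-based variation detection recited above, the following is a minimal sketch in which an LSTM over a vocabulary of behavior "words" predicts the next behavior, and the actually observed behavior is flagged when the model assigns it a low probability. The vocabulary, network sizes, and probability threshold are hypothetical, the training step is omitted, and the pre-defined variation-threshold could be defined differently in practice.

```python
import torch
import torch.nn as nn

class NextBehaviorRNN(nn.Module):
    """LSTM that predicts the next behavior token from a sequence of past behaviors."""
    def __init__(self, vocab_size, embed_dim=16, hidden_dim=32):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, vocab_size)

    def forward(self, token_ids):
        hidden, _ = self.lstm(self.embed(token_ids))
        return self.head(hidden[:, -1, :])        # logits for the next behavior

def behavior_deviates(model, history, actual_next, probability_threshold=0.05):
    """Return True if the actually observed next behavior was assigned a low probability."""
    model.eval()
    with torch.no_grad():
        logits = model(torch.tensor([history]))
        probs = torch.softmax(logits, dim=-1)[0]
    return probs[actual_next].item() < probability_threshold

if __name__ == "__main__":
    # Hypothetical behavior vocabulary: 0=browse, 1=login, 2=download, 3=upload, ...
    model = NextBehaviorRNN(vocab_size=8)         # training on non-anomalous sequences omitted
    history = [0, 1, 0, 0, 2]
    print(behavior_deviates(model, history, actual_next=3))
```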
In some embodiments, the system further comprises: a Datasets Generator Unit, to group together monitored network traffic into datasets, wherein each dataset corresponds to a particular time-slot and to a particular Internet-connected device; wherein the Predictor Unit is configured to detect anomalous traffic based on ML analysis that takes into account at least the following extracted features: (I) a number of requests made within said particular time-slot to access Internet destinations; (II) the N most-visited categories of websites that were accessed during said particular time-slot, wherein N is a pre-defined integer; (III) a number of visits that occurred within said particular time-slot to websites that are known to be associated with fraudulent or malicious activities. In some embodiments, the ML unit performs classification of said particular traffic-portion as anomalous or as non-anomalous, using ML analysis that is based at least on the following extracted features, for a particular time-slot: a variation between (I) a number of requests made within said particular time-slot to access Internet destinations, and (II) a number of requests made within an immediately-preceding time-slot to access Internet destinations. In some embodiments, the ML unit performs classification of said particular traffic-portion as anomalous or as non-anomalous, using ML analysis that is based at least on the following extracted features, for a particular time-slot: a variation between (I) a number of IP addresses that were accessed within said particular time-slot, and (II) a number of IP addresses that were accessed within an immediately-preceding time-slot. In some embodiments, the system further comprises: a clustering unit, to cluster together datasets of monitored network traffic into a plurality of discrete dataset-clusters; wherein each dataset-cluster comprises datasets of monitored traffic that were detected to correspond to one particular type of traffic anomaly. In some embodiments, the system further comprises: a clustering unit, to cluster together datasets of monitored network traffic into a plurality of discrete dataset-clusters; wherein each dataset-cluster comprises datasets of monitored traffic that were detected to correspond to one particular type of behavioral anomaly. In some embodiments, the system further comprises: a Recurrent Neural Network (RNN) unit, to receive as input said dataset-cluster, and to construct an RNN-generated prediction of expected behavior of said users or devices within a next time-slot; wherein said ML unit detects anomalous behavior based on a variation of said expected behavior from actual behavior that is observed within a next time-slot. In some embodiments, the system further comprises: a Machine Learning Re-Training Unit, to periodically perform re-training of the ML model used by the ML unit; an Auto-Encoder Unit comprising a Convolutional Neural Network (CNN), to apply a convolution to smoothen data of each time series, and to generate a distances vector, and to generate a square matrix corresponding to a recurrent plot image. In some embodiments, the Auto-Encoder Unit generates said recurrent plot image by converting data into a three-channel format that corresponds to a Red Green Blue (RGB) image format.
In some embodiments, the three-channel format comprises: a first channel which is a Requests Channel, indicating a total number of Internet access requests performed within the particular time-slot; a second channel which is a Frequent Categories channel, indicating the N most visited categories of sites or destinations that were accessed during said particular time-slot, wherein N is a pre-defined integer; a third channel which is a Suspicious Categories channel, indicating whether an accessed Internet destination is (i) categorized as associated with fraudulent or malicious activity, or (ii) not categorized as associated or as unassociated with fraudulent or malicious activity. In some embodiments, said one or more pre-defined mitigation operations comprise one or more of: traffic blocking, traffic discarding, traffic quarantining, traffic re-routing, traffic steering, traffic delaying, firewall re-configuring, traffic bandwidth limiting, packet modification, packet dropping, packet discarding, packet replacement, traffic-based charging operation. In some embodiments, a method comprises: (a) monitoring network traffic, and generating datasets of network traffic; wherein each dataset includes network traffic that was monitored within a time-slot having a particular fixed time-length; (b) performing a Features Extraction process to extract a plurality of features from said datasets; in a Machine Learning (ML) unit, running said features through a ML model and classifying a particular traffic-portion as being either (I) an anomalous traffic-portion that is associated with fraudulent or malicious activity, or (II) a non-anomalous traffic-portion that is not-associated with fraudulent or malicious activity; wherein the ML unit operates on both (i) anomalies in traffic patterns, and (ii) anomalies of user behavior or device behavior; (c) triggering activation of one or more pre-defined fraud and malicious activity mitigation operations with regard to traffic-portions that were classified by the ML unit as being anomalous traffic-portions that are associated with fraudulent or malicious activity; wherein the classifying of said particular traffic-portion as anomalous or as non-anomalous is performed using ML analysis of a non-labeled multivariate time series; wherein the method is implemented by utilizing at least a hardware processor. Some embodiments comprise a non-transitory storage medium having stored thereon instructions that, when executed by one or more hardware processors, cause the one or more hardware processors to perform a method as described above. In accordance with some embodiments, calculations, operations and/or determinations may be performed locally within a single device, or may be performed by or across multiple devices, or may be performed partially locally and partially remotely (e.g., at a remote server) by optionally utilizing a communication channel to exchange raw data and/or processed data and/or processing results. Although portions of the discussion herein relate, for demonstrative purposes, to wired links and/or wired communications, some embodiments are not limited in this regard, but rather, may utilize wired communication and/or wireless communication; may include one or more wired and/or wireless links; may utilize one or more components of wired communication and/or wireless communication; and/or may utilize one or more methods or protocols or standards of wireless communication.
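As a concrete illustration of the three-channel format described above, the following sketch assembles the Requests, Frequent Categories, and Suspicious Categories channels for a single time-slot, with the third channel computed as a Boolean combination B(Model-1, Model-2). The function name, the dictionary-based input, and the example category counts are illustrative assumptions.

```python
# Suspicious-category sets for the two models described above.
MODEL_1 = {"malware", "phishing", "antivirus", "botnet", "no-category"}
MODEL_2 = {"malware", "phishing", "antivirus", "botnet"}

def three_channel_record(requests_count, visited_categories, operator="AND"):
    """Build the (Requests, Frequent Categories, Suspicious Categories) channels
    for one time-slot. `visited_categories` maps category name -> visit count."""
    # Channel 1: total number of requests in the time-slot.
    requests_channel = requests_count
    # Channel 2: the N (here five) most visited categories.
    frequent_channel = sorted(visited_categories, key=visited_categories.get, reverse=True)[:5]
    # Channel 3: Boolean combination B(Model-1, Model-2) of the two suspicious-category models.
    hit_1 = any(cat in MODEL_1 for cat in visited_categories)
    hit_2 = any(cat in MODEL_2 for cat in visited_categories)
    suspicious_channel = (hit_1 and hit_2) if operator == "AND" else (hit_1 or hit_2)
    return requests_channel, frequent_channel, suspicious_channel

if __name__ == "__main__":
    slot = {"news": 40, "social": 25, "shopping": 10, "phishing": 2, "no-category": 1}
    print(three_channel_record(requests_count=78, visited_categories=slot))
```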
Some embodiments may be implemented by using a special-purpose machine or a specific-purpose device that is not a generic computer, or by using a non-generic computer or a non-general computer or machine. Such system or device may utilize or may comprise one or more components or units or modules that are not part of a “generic computer” and that are not part of a “general purpose computer”, for example, cellular transceivers, cellular transmitter, cellular receiver, GPS unit, location-determining unit, accelerometer(s), gyroscope(s), device-orientation detectors or sensors, device-positioning detectors or sensors, or the like. Some embodiments may be implemented as, or by utilizing, an automated method or automated process, or a machine-implemented method or process, or as a semi-automated or partially-automated method or process, or as a set of steps or operations which may be executed or performed by a computer or machine or system or other device. Some embodiments may be implemented by using code or program code or machine-readable instructions or machine-readable code, which may be stored on a non-transitory storage medium or non-transitory storage article (e.g., a CD-ROM, a DVD-ROM, a physical memory unit, a physical storage unit), such that the program or code or instructions, when executed by a processor or a machine or a computer, cause such processor or machine or computer to perform a method or process as described herein. Such code or instructions may be or may comprise, for example, one or more of: software, a software module, an application, a program, a subroutine, instructions, an instruction set, computing code, words, values, symbols, strings, variables, source code, compiled code, interpreted code, executable code, static code, dynamic code; including (but not limited to) code or instructions in high-level programming language, low-level programming language, object-oriented programming language, visual programming language, compiled programming language, interpreted programming language, C, C++, C#, Java, JavaScript, SQL, Ruby on Rails, Go, Cobol, Fortran, ActionScript, AJAX, XML, JSON, Lisp, Eiffel, Verilog, Hardware Description Language (HDL), BASIC, Visual BASIC, Matlab, Pascal, HTML, HTML5, CSS, Perl, Python, PHP, machine language, machine code, assembly language, or the like. Discussions herein utilizing terms such as, for example, “processing”, “computing”, “calculating”, “determining”, “establishing”, “analyzing”, “checking”, “detecting”, “measuring”, or the like, may refer to operation(s) and/or process(es) of a processor, a computer, a computing platform, a computing system, or other electronic device or computing device, that may automatically and/or autonomously manipulate and/or transform data represented as physical (e.g., electronic) quantities within registers and/or accumulators and/or memory units and/or storage units into other data or that may perform other suitable operations. 
Some embodiments may perform steps or operations such as, for example, “determining”, “identifying”, “comparing”, “checking”, “querying”, “searching”, “matching”, and/or “analyzing”, by utilizing, for example: a pre-defined threshold value to which one or more parameter values may be compared; a comparison between (i) sensed or measured or calculated value(s), and (ii) pre-defined or dynamically-generated threshold value(s) and/or range values and/or upper limit value and/or lower limit value and/or maximum value and/or minimum value; a comparison or matching between sensed or measured or calculated data, and one or more values as stored in a look-up table or a legend table or a legend list or a database of possible values or ranges; a comparison or matching or searching process which searches for matches and/or identical results and/or similar results among multiple values or limits that are stored in a database or look-up table; utilization of one or more equations, formula, weighted formula, and/or other calculation in order to determine similarity or a match between or among parameters or values; utilization of comparator units, lookup tables, threshold values, conditions, conditioning logic, Boolean operator(s) and/or other suitable components and/or operations. The terms “plurality” and “a plurality”, as used herein, include, for example, “multiple” or “two or more”. For example, “a plurality of items” includes two or more items. References to “one embodiment”, “an embodiment”, “demonstrative embodiment”, “various embodiments”, “some embodiments”, and/or similar terms, may indicate that the embodiment(s) so described may optionally include a particular feature, structure, or characteristic, but not every embodiment necessarily includes the particular feature, structure, or characteristic. Furthermore, repeated use of the phrase “in one embodiment” does not necessarily refer to the same embodiment, although it may. Similarly, repeated use of the phrase “in some embodiments” does not necessarily refer to the same set or group of embodiments, although it may. As used herein, and unless otherwise specified, the utilization of ordinal adjectives such as “first”, “second”, “third”, “fourth”, and so forth, to describe an item or an object, merely indicates that different instances of such like items or objects are being referred to; and does not intend to imply as if the items or objects so described must be in a particular given sequence, either temporally, spatially, in ranking, or in any other ordering manner. 
Some embodiments may be used in, or in conjunction with, various devices and systems, for example, a Personal Computer (PC), a desktop computer, a mobile computer, a laptop computer, a notebook computer, a tablet computer, a server computer, a handheld computer, a handheld device, a Personal Digital Assistant (PDA) device, a handheld PDA device, a tablet, an on-board device, an off-board device, a hybrid device, a vehicular device, a non-vehicular device, a mobile or portable device, a consumer device, a non-mobile or non-portable device, an appliance, a wireless communication station, a wireless communication device, a wireless Access Point (AP), a wired or wireless router or gateway or switch or hub, a wired or wireless modem, a video device, an audio device, an audio-video (A/V) device, a wired or wireless network, a wireless area network, a Wireless Video Area Network (WVAN), a Local Area Network (LAN), a Wireless LAN (WLAN), a Personal Area Network (PAN), a Wireless PAN (WPAN), or the like. Some embodiments may be used in conjunction with one-way and/or two-way radio communication systems, cellular radio-telephone communication systems, a mobile phone, a cellular telephone, a wireless telephone, a Personal Communication Systems (PCS) device, a PDA or handheld device which incorporates wireless communication capabilities, a mobile or portable Global Positioning System (GPS) device, a device which incorporates a GPS receiver or transceiver or chip, a device which incorporates an RFID element or chip, a Multiple Input Multiple Output (MIMO) transceiver or device, a Single Input Multiple Output (SIMO) transceiver or device, a Multiple Input Single Output (MISO) transceiver or device, a device having one or more internal antennas and/or external antennas, Digital Video Broadcast (DVB) devices or systems, multi-standard radio devices or systems, a wired or wireless handheld device, e.g., a Smartphone, a Wireless Application Protocol (WAP) device, or the like. Some embodiments may comprise, or may be implemented by using, an “app” or application which may be downloaded or obtained from an “app store” or “applications store”, for free or for a fee, or which may be pre-installed on a computing device or electronic device, or which may be otherwise transported to and/or installed on such computing device or electronic device. Functions, operations, components and/or features described herein with reference to one or more embodiments, may be combined with, or may be utilized in combination with, one or more other functions, operations, components and/or features described herein with reference to one or more other embodiments. Some embodiments may thus comprise any possible or suitable combinations, re-arrangements, assembly, re-assembly, or other utilization of some or all of the modules or functions or components that are described herein, even if they are discussed in different locations or different chapters of the above discussion, or even if they are shown across different drawings or multiple drawings. While certain features of some demonstrative embodiments have been illustrated and described herein, various modifications, substitutions, changes, and equivalents may occur to those skilled in the art. Accordingly, the claims are intended to cover all such modifications, substitutions, changes, and equivalents.
11943246 | The features and advantages of the present invention will become more apparent from the detailed description set forth below when taken in conjunction with the drawings, in which like reference characters identify corresponding elements throughout. In the drawings, like reference numbers generally indicate identical, functionally similar, and/or structurally similar elements. The drawing in which an element first appears is indicated by the leftmost digit(s) in the corresponding reference number. DETAILED DESCRIPTION I. Introduction The present specification and accompanying drawings disclose one or more embodiments that incorporate the features of the present invention. The scope of the present invention is not limited to the disclosed embodiments. The disclosed embodiments merely exemplify the present invention, and modified versions of the disclosed embodiments are also encompassed by the present invention. Embodiments of the present invention are defined by the claims appended hereto. References in the specification to “one embodiment,” “an embodiment,” “an example embodiment,” etc., indicate that the embodiment described may include a particular feature, structure, or characteristic, but every embodiment may not necessarily include the particular feature, structure, or characteristic. Moreover, such phrases are not necessarily referring to the same embodiment. Further, when a particular feature, structure, or characteristic is described in connection with an example embodiment, it is submitted that it is within the knowledge of one skilled in the art to effect such feature, structure, or characteristic in connection with other embodiments whether or not explicitly described. In the discussion, unless otherwise stated, adjectives such as “substantially” and “about” modifying a condition or relationship characteristic of a feature or features of an example embodiment of the disclosure, are understood to mean that the condition or characteristic is defined to within tolerances that are acceptable for operation of the embodiment for an application for which it is intended. Numerous exemplary embodiments are described as follows. It is noted that any section/subsection headings provided herein are not intended to be limiting. Embodiments are described throughout this document, and any type of embodiment may be included under any section/subsection. Furthermore, embodiments disclosed in any section/subsection may be combined with any other embodiments described in the same section/subsection and/or a different section/subsection in any manner. II. Example Implementations Modern cloud-computing platforms include a vast number of network entities that generate large volumes of network traffic. In some instances, cloud-computing platforms may comprise thousands, or even millions, of entities. Due to such a large number of entities, and therefore a large volume of network traffic, the important tasks of monitoring network traffic and maintaining network security have increasingly become challenging to carry out. For instance, monitoring network traffic by logging and analyzing each packet is not practical on many cloud-computing platforms, given the large volume of network traffic. One solution to address this problem is to obtain sampled network data, such as network flow information in accordance with the Internet Protocol Flow Information Export (IPFIX) protocol. 
Even where such packet sampling is implemented, however, analyzing the sampled network data to reconstruct network usage is not straightforward. For example, a network or security analyst may need to construct an ad-hoc solution, such as a machine-learning classifier, for each type of network usage (e.g., port scanning) that is trained using labelled data. Based on the ad-hoc solution, a specific type of usage can be inferred from the sampled data. However, a network or security analyst typically must expend significant time and resources in analyzing sampled network data to obtain meaningful insights as described above. Furthermore, such a solution requires labelled data or expert knowledge such that network usage patterns must be defined or modelled in advance. As a result, network usage patterns typically are not comprehensive, leaving an unknown amount of network activity unexplained, potentially resulting in network attacks being missed or security vulnerabilities being left open. Embodiments described herein address these and other issues by providing a system for reconstructing network activity. In the system, a network activity monitor is configured to monitor network activity of a plurality of network entities across a network. A feature determiner determines sets of features for each of the network entities based on the network activity monitoring, such as features relating to the type, frequency, and/or volume of network activity. A vertex determiner determines a number of vertices that describes the sets of features in a multidimensional space, such as a convex hull. Each of the vertices may be assigned a particular usage pattern that describes the type of network usage. When at least some features for a particular network entity are obtained, the network entity may be represented as a weighted combination of the assigned usage patterns. Such a representation may be used for a number of purposes, including but not limited to network analytics, anomaly detection, alert generation, etc. Reconstructing network activity in this manner has numerous advantages, including improving the security of a network and the entities coupled thereto. For example, because a network entity may be represented as a weighted combination of usage patterns or archetypes, it may be determined that the network entity is performing in a manner that is unintended, such as engaging in file transfers, web crawling, etc. In other examples, a weighted combination for a particular entity may include port scanning activity, which may indicate a potential or actual network attack. In yet other examples, if a particular network entity unexpectedly has a reduced amount of certain types of normal usage patterns (e.g., web server activities), it may be determined that a different entity has redirected traffic away from the particular network entity, suggesting that a man-in-the-middle (MITM) attack, or other similar attack, may be occurring. Each type of abnormal or malicious activity may be detected through reconstructing network activity in accordance with implementations, thereby reducing the likelihood of compromising the network, as well as the computers coupled thereto. Implementations described herein may also provide further improvements to the performance of a network, for instance, by enabling more accurate reconstruction and monitoring of entities coupled to a network.
In some examples, based on the network activity reconstruction, network analytics may be performed to determine that network loads may be more appropriately balanced, thereby improving the overall network performance. Furthermore, network activity may be reconstructed in a manner that does not require deploying network monitoring agents at various nodes on a network, thereby reducing the number of resources needed to more accurately model a network. Example implementations are described as follows that are directed to a system for reconstructing network activity. For instance,FIG.1shows a block diagram of an example computing system100, according to an example embodiment. As shown inFIG.1, system100includes a computing device102, a network entity110, a network entity112, and a network entity114, one or more of which may be communicatively coupled by one or more networks or subnetworks (subnets). For instance, any of network entity110, network entity112, and network entity114inFIG.1may be communicatively coupled to any other network entity via network106and/or subnets108A-108N. Network entities116comprise the set of network entities of system100, including but not limited to network entity110, network entity112, and network entity114, as well as one or more other network entities not expressly illustrated inFIG.1coupled to any one or more of network106and/or subnets108A-108N. As shown inFIG.1, computing device102includes an activity reconstruction system104. As described in greater detail below, activity reconstruction system104may be configured to reconstruct latent network activity for a particular one of network entities116. System100is further described as follows. Network106and subnets108A-108N may each include one or more of any of a local area network (LAN), a wide area network (WAN), a personal area network (PAN), a combination of communication networks, such as the Internet, and/or a virtual network. Computing device102may be communicatively coupled to any one of network entities116via network106and/or subnets108A-108N. In an implementation, computing device102, network106, subnets108A-108N, and any one of network entities116may communicate via one or more application programming interfaces (API), and/or according to other interfaces and/or techniques. Computing device102, network106, subnets108A-108N, and network entities116may each include at least one network interface that enables communications with each other. Examples of such a network interface, wired or wireless, include an IEEE 802.11 wireless LAN (WLAN) wireless interface, a Worldwide Interoperability for Microwave Access (Wi-MAX) interface, an Ethernet interface, a Universal Serial Bus (USB) interface, a cellular network interface, a Bluetooth™ interface, a near field communication (NFC) interface, etc. Further examples of network interfaces are described elsewhere herein. Network entities116may comprise any node of network106and/or subnets108A-108N. For instance, network entities116may include any device or machine (physical or virtual) coupled to any of network106or subnets108A-108N. In one example embodiment, network106and/or subnets108A-108N may collectively comprise a network of an organization (including but not limited to a company, business, or cloud-based subscription), and network entities116may include a node (e.g., a physical device or machine, or a virtual node) coupled to the network. 
In some further example embodiments, network106and/or subnets108A-108N may comprise a virtual or cloud-based network, and network entities116may comprise one or more virtual machines or nodes of the virtual or cloud-based network. In some other examples, any of network entities116may comprise a desktop computer, a portable computer, a smartphone, a tablet, a wearable computing device (e.g., a smart watch, a smart headset), a mixed and/or virtual reality device (e.g., Microsoft HoloLens™), or any other processing device. In some other example implementations, network106and/or subnets108A-108N may collectively comprise a cloud-computing network and network entities116may be nodes coupled to the cloud-computing network. Network entities116are not limited to processing devices in implementations, and may include other resources on a network, such as storage devices (e.g., physical storage devices, local storage devices, cloud-based storages, hard disk drives, solid state drives, random access memory (RAM) devices, etc.), databases, etc. Note that the variable “N” is appended to various reference numerals for illustrated components to indicate that the number of such components is variable, with any value of 2 and greater. Note that for each distinct component/reference numeral, the variable “N” has a corresponding value, which may be different for the value of “N” for other components/reference numerals. The value of “N” for any particular component/reference numeral may be less than 10, in the10s, in the hundreds, in the thousands, or even greater, depending on the particular implementation. It is noted and understood that implementations are not limited to the illustrative arrangement shown inFIG.1. Rather, an organization may comprise any number of networks, virtual networks, subnets, machines or virtual machines (or other resources) coupled in any manner. For instance, a subnet can comprise one or more additional subnets (not shown), entities can be coupled to a plurality of subnets or coupled to network106without a subnet, etc. Furthermore, network entities116and computing device102may be co-located, may be implemented on a single computing device or virtual machine, or may be implemented on or distributed across one or more additional computing devices or virtual machines not expressly illustrated inFIG.1. In some other example embodiments, computing device102may be implemented on one or more servers. For instance, such servers may be part of a particular organization or company associated with network106and/or subnets108A-108N, or may comprise cloud-based servers configured to provide network analysis and/or monitoring services for a plurality of organizations. Computing devices102may include one or more of such server devices and/or other computing devices, co-located or located remote from each other. Furthermore, althoughFIG.1depicts a single computing device102, it is understood that implementations may comprise any number of computing devices. An example computing device that may incorporate the functionality of computing device102is described below in reference toFIG.8. As described below in greater detail, activity reconstruction system104is configured to reconstruct the latent network activity for a particular network entity that is a weighted combination of different network usage patterns. In examples, activity reconstruction system104may reconstruct latent network activity for a particular network entity using monitored network activity. 
Monitored network activity may include, but is not limited to, a high dimensional dataset of sampled network activity that may be obtained in accordance with the IPFIX protocol, NetFlow Packet transport protocol, or any other manner of sampling network activity across a network. In some examples, a combination of different monitoring techniques may be implemented. Sampled network activity may comprise, for instance, metadata relating to packets transmitted across the network that may indicate an amount or volume of network activity between two entities. In some examples, sampled network data (e.g., in accordance with the IPFIX protocol) may include packet metadata for 1 in 4000 packets transmitted across the network, though implementations are not intended to be limited to this sampling amount. Based on such monitoring, activity reconstruction system104may generate feature sets relating to each monitored entity (e.g., through aggregation or the like) and generate a model of the network activity that may be used to reconstruct latent network activity for a particular one of network entities116. For instance, the model may be comprised of a number of vertices that describe the feature sets of each of network entities116in a multidimensional space, such as a convex hull. In example implementations, the number of vertices may be determined using an archetypal analysis configured to detect different usage archetypes in the ingested data based on a degree of variance between the sets of features and the number of vertices. Each of the vertices may be assigned a particular usage pattern or archetype. In other words, a convex hull that captures the ingested data (e.g., features relating to monitored network activity) may be inferred, with the usage patterns being represented by the vertices of the convex hull. Usage patterns or archetypes may include types or categories of network activity that are commonly observed on the network, such as a port scanning activity, a web crawler or indexer, a web server, a connection initiator (e.g., activity resembling a new connection), a login activity, a remote desktop protocol (RDP) activity, a denial of service attack, and/or a file transfer activity. This list is illustrative in nature, and usage patterns appreciated to those skilled in the art are also contemplated. As network features relating to a particular usage pattern (e.g. web server, crawler, port scanning, etc.) may be correlated with each other, vertices of a convex hull may be determined in a manner that captures various types of network activity associated with different usages. Activity reconstruction system104may be configured to obtain features corresponding to particular network entity and reconstruct network activity for the particular entity that is a weighted combination of a plurality of the assigned usage patterns. Stated differently, the reconstruction of network activity enables a determination of how much a particular network entity's network activity is attributed to each of the different usage patterns or archetypes. In this manner, latent network activity for a particular one of network entities116may be determined based on sampled network data. Using such a reconstruction, enhanced network analytics may be performed that aid in improving the security of a network, anomaly detection, and/or other analytics described herein. 
Furthermore, as described in greater detail later, activity reconstruction system104may reconstruct network activity in an unsupervised manner in some implementations, enabling network monitoring and analytics to be carried out without requiring the generation of ad-hoc solutions for each specific type of usage pattern or labeled data. Activity reconstruction system104may operate in various ways to reconstruct latent network activity for a network entity. For instance, activity reconstruction system104may operate according toFIG.2.FIG.2shows a flowchart200of a method for reconstructing network activity, according to an example embodiment. For illustrative purposes, flowchart200and activity reconstruction system104are described as follows with respect toFIG.3.FIG.3shows a block diagram of a system300for reconstructing latent network activity, according to an example embodiment. As shown inFIG.3, system300includes one example implementation of activity reconstruction system104. Activity reconstruction system104includes a network activity monitor302that is configured to obtain network activity data330relating to a plurality of network entities on a network, a feature determiner304configured to generate a plurality of feature sets306based on the monitored network activity, a vertex determiner308, a usage pattern assignor314, a network activity reconstructor316, a network analyzer322, an anomaly detector324, a network modifier326, and an alert generator328. As shown inFIG.3, vertex determiner308and usage pattern assignor314may be configured to determine a number of vertices310corresponding to feature sets306, where each vertex is assigned a different one of assigned usage patterns312. Network activity reconstructor316may obtain at least some of the feature sets for a particular entity, and generate a network entity representation318for the particular entity that comprises a weighted combination320of the assigned usage patterns312. Flowchart200and system300are described in further detail as follows. Flowchart200ofFIG.2begins with step202. In step202, network activity of a plurality of network entities is monitored. For instance, with reference toFIG.3, network activity monitor302may be configured to obtain network activity data330that includes data identifying network activity that is occurring, or has occurred, on one or more networks. For instance, network activity data330may include data that comprises any representation of network activity for a plurality of network entities116on network106and/or subnets108A-108N. In some example implementations, network activity data330may include packets (or fields extracted therefrom) or other metadata that identify or summarize network activity of network entities116. For example, network activity data330may comprise metadata that indicates a sampling of network activity, such as a sampling of activity in accordance with the IPFIX protocol. In some implementations, the sampling of network activity may comprise a predetermined sample of network activity at a predetermined sampling rate (e.g., 1 in 4000 packets). Examples are not limited to any particular manner of sampling network activity for network entities116, however, and may include other techniques known and appreciated to those skilled in the art to monitor activity of a plurality of entities communicatively coupled to a network. In examples, network activity data330may include any information or metadata relating to network traffic between various network entities116.
For instance, network activity data330may indicate, for a particular network entity, an amount of network traffic (e.g., a volume measured in bits, bytes, or any other unit, and/or a number of transmitted or received packets) to or from another entity or entities. Network activity data330may also indicate, for a particular network entity, a port or port numbers associated with traffic to or from the network entity, a type of connection, an Internet Protocol (IP) address of the particular network entity and/or another network entity involved in a communication, an indication of whether a file is being transferred or downloaded, etc. In some other examples, network activity data330may indicate one or more Transmission Control Protocol (TCP) flags associated with network traffic of a network entity that may be configured to indicate a connection state or other related information, including but not limited to any combination of a synchronization flag (SYN), an acknowledgement flag (ACK), a finished flag (FIN), an urgent flag (URG), a push flag (PSH), a reset flag (RST), and/or any other flags. Network activity monitor302may obtain network activity data330at any suitable or predetermined interval. For instance, network activity monitor302may monitor network activity for any one or more of network entities116each hour, every six hours, each day, etc. In some other implementations, network activity monitor302may obtain network activity data330in an ongoing or real-time fashion. Network activity monitor302may store monitored activity in one or more storage devices (not shown) that may be local to computing device102, and/or in one or more storage devices that may be remotely located from computing device102. In step204, a set of features is obtained for each network entity based on the monitoring. For instance, with reference toFIG.3, feature determiner304may obtain the monitored network activity from network activity monitor302, and using the monitored network activity, obtain one or more feature sets306for each network entity of network entities116. In examples, feature sets306may include, for each network entity, one or more features relating to the monitored network activity of the network entity. For instance, feature sets306may include one or more features for each entity that is determined by aggregating certain types of network activity over a period of time (e.g., one hour, six hours, one day, etc.). Examples of features that may be included in feature sets306include a quantity of packets, a volume, a quantity of unique addresses (e.g., IP addresses), and/or a quantity of ports (e.g., unique ports, rare ports, ports not in use, etc.) based on the monitored traffic. In some further examples, features may also comprise an aggregation of traffic associated with a particular flag or combination of flags that may be included in the monitored traffic, such as of one or more SYN, ACK, FIN, URG, PSH, and/or RST flags. For instance, feature set306may include, for each network entity, a quantity of packets with SYN flags, a quantity of packets with ACK flags, etc. over a certain time period (e.g., one day). These examples are illustrative only, and feature determiner304may determine any number of N features based on the monitored network traffic using any combination of the observed traffic and over any one or more time periods. 
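A minimal sketch of this kind of feature determination is shown below: sampled flow records are aggregated, per source entity and per time window, into counts of packets, bytes, unique peers, unique ports, and flag-specific packets. The record field names and the particular feature list are assumptions for illustration; an actual deployment would derive them from IPFIX or similar flow exports.

```python
from collections import defaultdict

def aggregate_features(flow_records):
    """Aggregate sampled flow metadata into one feature vector per network entity.

    Each record is a dict with hypothetical fields: 'src', 'dst', 'bytes',
    'packets', 'dst_port', and 'tcp_flags' (a set such as {'SYN', 'ACK'}).
    """
    features = defaultdict(lambda: {
        "packets": 0, "bytes": 0, "peers": set(), "ports": set(),
        "syn_packets": 0, "ack_packets": 0, "rst_packets": 0,
    })
    for rec in flow_records:
        entity = features[rec["src"]]
        entity["packets"] += rec["packets"]
        entity["bytes"] += rec["bytes"]
        entity["peers"].add(rec["dst"])
        entity["ports"].add(rec["dst_port"])
        for flag in ("SYN", "ACK", "RST"):
            if flag in rec["tcp_flags"]:
                entity[flag.lower() + "_packets"] += rec["packets"]
    # Convert set-valued features into counts of unique peers/ports.
    return {
        src: [f["packets"], f["bytes"], len(f["peers"]), len(f["ports"]),
              f["syn_packets"], f["ack_packets"], f["rst_packets"]]
        for src, f in features.items()
    }

if __name__ == "__main__":
    sample = [
        {"src": "10.0.0.5", "dst": "10.0.0.9", "bytes": 1200, "packets": 3,
         "dst_port": 443, "tcp_flags": {"SYN"}},
        {"src": "10.0.0.5", "dst": "10.0.0.7", "bytes": 800, "packets": 2,
         "dst_port": 22, "tcp_flags": {"SYN", "ACK"}},
    ]
    print(aggregate_features(sample))
```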
In examples, therefore, feature determiner304may determine, based on the monitored traffic, a plurality of features for each of network entities116that may correspond to various types of network activity, such as traffic between certain addresses, physical machines, virtual machines, servers, etc. As the number of features generated increases, the accuracy of the network activity reconstruction (described later) may be enhanced. Thus, feature determiner304may be configured to generate features in various combinations and/or permutations such that the monitored traffic may be represented in a large number of dimensions. In step206, a number of vertices to describe the sets of features in a multidimensional space is determined. For instance, with continued reference toFIG.3, vertex determiner308may be configured to determine a number of vertices310that describes feature sets306in a multidimensional space. In some implementations, vertex determiner308may implement an algorithm to determine archetypes that adequately describe the variance between feature sets306. In other words, vertices310may be determined in a manner such that feature sets306may be adequately represented by a minimum number of usage patterns or archetypes in the ingested data. In example implementations, vertices310may be determined in a manner such that the vertices define a convex hull that describes feature sets306in a multidimensional space. For instance, a convex hull may comprise any geometric shape or polygon, wherein vertices of the convex hull wrap or encapsulate feature sets306. In some examples, the geometric shape or polygon may comprise a convex shape, such as a shape where all interior angles are less than 180 degrees. In some implementations, the vertices may comprise classes or categories of network activity. For example, the vertices may represent archetypes or usage patterns derived from the plurality of feature sets306. By determining vertices that represent usage patterns of the monitored network activity (e.g., via feature sets306), any point within the convex hull, such as a feature set of a particular network entity, may be represented as a weighted combination of a plurality of vertices, as described later. Vertex determiner308may implement one or more algorithms known and appreciated to those skilled in the art, including but not limited to Principal Convex Hull Analysis (PCHA). For instance, using a PCHA algorithm, vertex determiner308may determine an appropriate number of vertices310and estimate a principal convex hull (PCH) of feature sets306. For instance, vertex determiner308may use one or more appropriate archetypal analysis algorithms, such as PCHA, to identify a plurality of K vertices, where K is a positive integer representing a suitable number of archetypes based on feature sets306. Using an appropriate algorithm, vertex determiner308may thereby infer a convex hull, designated A, that comprises an N×K matrix, where N is the number of features in feature sets306for each network entity and K is the number of vertices310. It is noted and understood, however, that vertex determiner308is not limited to implementing a PCHA algorithm, but may implement one or more other archetypal analysis algorithms for identifying archetypes in feature sets306for describing the feature sets in a multidimensional space.
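As a low-dimensional illustration of determining hull vertices from feature sets, the sketch below computes an exact convex hull of two-dimensional toy data with SciPy and reads out its vertices as candidate archetypes. This is not the PCHA algorithm itself; exact hull computation is generally impractical in high-dimensional feature spaces, which is where an archetypal-analysis method such as PCHA would instead select a small number K of vertices.

```python
import numpy as np
from scipy.spatial import ConvexHull

rng = np.random.default_rng(1)
# Toy stand-in for feature sets 306: 200 entities described by 2 features.
features = rng.dirichlet(alpha=[1.0, 1.0, 1.0], size=200)[:, :2]

hull = ConvexHull(features)
vertices = features[hull.vertices]   # candidate archetypes on the hull boundary
A = vertices.T                       # N x K matrix of vertices, as described above
print(f"{len(hull.vertices)} vertices define the hull; A has shape {A.shape}")
```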
For instance, vertex determiner308may comprise any algorithm or process for mapping feature sets (e.g., monitored network activity in this example) in a multidimensional space, such that any individual feature set (of a particular network entity) may be represented as a combination or mixture of different archetypes. Vertex determiner308may determine vertices310that define a convex hull in various ways. For instance, vertex determiner308may define a convex hull by implementing one or more algorithms, such as a gift-wrapping algorithm (also referred to as a Jarvis march algorithm), a quick hull algorithm, a divide and conquer algorithm, a monotone chain algorithm, an incremental convex hull algorithm, Chan's algorithm, a Graham scan algorithm, or any other algorithms known and appreciated to those skilled in the relevant arts. It is also noted that implementations are not limited to vertex determiner308being configured to define a convex hull, but rather vertex determiner308may determine vertices310to define any other type of hull or shape that reasonably encapsulates feature sets306. For instance, another hull or shape may be defined that encapsulates most of feature sets306, while not encapsulating feature sets that may be deemed outliers. As a result, a hull may be defined in a manner that encloses most of the ingested data, while not being distorted by outliers in the ingested data. In step208, a different usage pattern is assigned to each of the vertices. For instance, with reference toFIG.3, usage pattern assignor314may be configured to assign, to each of vertices310, a different one of assigned usage patterns312. Assigned usage patterns312may comprise, for example, network usage patterns that correspond to the archetypes of feature sets306(i.e., of the monitored network activity). In example implementations, assigned usage patterns312may represent classes or categories of network activity based on the ingested network traffic. Non-limiting examples of such usage patterns include a port scanning activity, a web crawler or indexer activity, a web server activity, a connection activity (e.g., a connection initiator), a login activity, a remote desktop protocol (RDP) activity, a denial of service attack, a file transfer activity, etc. This list is illustrative only, and may include any other class or category of network activity that may be observed on network106and/or subnets108A-108N. In examples, usage pattern assignor314may infer a usage pattern for each of vertices310in various ways, including through a user input and/or an automatic assignment. In some implementations, inferred usage patterns need not be defined in advance, but rather may be inferred based on the archetypes identified from feature sets306. Additional details regarding the assignment of usage patterns to vertices will be explained in greater detail below with respect toFIG.4. In step210, at least some of the features in the set of features for a particular network entity is obtained. For instance, with reference toFIG.3, network activity reconstructor316may be configured to obtain at least some of the features in feature set306for a particular network entity among network entities116. Network activity reconstructor316may obtain such features for the particular network entity for which latent network activity is to be reconstructed. For example, network activity reconstructor316may obtain the features for a particular computing device, virtual machine, server, etc. on network106and/or subnets108A-108N. 
In some implementations, network activity reconstructor316may obtain features for a plurality of network entities, such as where latent activity is to be reconstructed for each of the plurality of entities. In examples, the features obtained by network activity reconstructor316for a particular network entity may be the same features that vertex determiner308uses (in addition to feature sets for other network entities) to determine vertices310. In some implementations, network activity reconstructor316may obtain all of the features for the particular network entity in feature sets306, while in other implementations network activity reconstructor316may obtain only a subset of the features in feature sets306for the particular network entity. It is also noted that while network activity reconstructor316may obtain at least some features for a particular network entity that were also obtained by vertex determiner308to determine vertices310, implementations also include network activity reconstructor316obtaining at least some features from a different network entity (e.g., a network entity whose features were not used by vertex determiner308). In other words, the particular network entity need not comprise an entity that was initially monitored to determine vertices310in the multidimensional space, but instead may comprise a different network entity that was not part of the initial set of monitored entities. For instance, if a particular network entity was newly added to, or otherwise appears on, network106and/or subnets108A-108N after vertices310are determined, features associated with the newly added network entity may still be obtained by network activity reconstructor316to reconstruct latent network activity for the newly added network entity in a similar manner as described herein by representing the newly added network entity as a weighted combination of the assigned usage types or archetypes.
By representing the particular network entity as weighted combination320that is a mixture of different assigned usage patterns312, latent network activity for the particular network entity may be reconstructed in a more granular fashion (e.g., by reconstructing individual network usages and their weights). In one non-limiting illustration, vertices310may comprise three vertices in a multidimensional space. In this example, each vertex may be assigned a different usage pattern, such as a file transfer activity, a connection activity, and a login activity. As noted earlier, such vertices (along with their assigned usage patterns) may define a convex hull, such as a hull in the shape of a triangle in this particular illustration. Each of the network entities for which network activity was monitored may be represented as a point within the triangle based on the features associated with the individual entities. Network activity reconstructor316may be configured to identify, for any of the features (or sets of features) for any of the individual network entities, a weighted combination of each of the assigned usage patterns that represent the latent network activity of the particular entity. In this illustration, for instance, weighted combination320may comprise weights that indicate that a particular network entity is represented by 50 file transfers, 100 new connections, and 10 login attempts. Alternatively, weighted combination320may indicate that a particular network entity is a combination of 30% (or a weight of 0.3) of a first assigned usage pattern, 45% (a weight of 0.45) of a second usage pattern, and 25% (a weight of 0.25) of a third assigned usage pattern. In this manner, latent network activity may be reconstructed for the entity. It is understood that this example is not intended to be limiting, and vertices310(and assigned usage patterns312) may comprise any number of vertices (and assigned usage patterns312) based on the monitored network activity. Network activity reconstructor316may operate in various ways to determine weighted combination320for a particular network entity. In one example, network activity reconstructor316may determine weighted combination320by solving a non-negative least squares (NNLS) problem that identifies the weights, or relative contribution, of each of the usage patterns for a given vector in the multidimensional space. For instance, a non-negative least squares problem may be represented as follows: argmin_w(∥Aw−V∥), where w is a K-dimensional vector (with K representing the number of vertices310), A is a convex hull with a matrix dimension of N×K (with N representing the number of features, or dimensions), V is a vector (i.e., a vector corresponding to a set of features for a particular network entity) of length N (the number of features, or dimensions), and w represents weights associated with each of the vertices of the hull. By solving for the non-negative least squares, the respective weighted combination of each of the assigned usage patterns312may be determined such that a particular network entity (represented by a vector V in a multidimensional feature space) may be represented as a mixture (e.g., a weighted sum) of various usage patterns or archetypes. In other words, each weight may resemble a respective portion of a particular usage pattern that is attributed to a particular network entity.
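A minimal sketch of this NNLS step follows, using SciPy's non-negative least squares solver on a toy archetype matrix; the numeric values and the optional normalization of the weights into proportions are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import nnls

# Columns of A are the K archetypes (vertices 310) in an N-dimensional feature space.
# These numbers are purely illustrative: rows might be (file transfers, new
# connections, login attempts) and columns the three usage patterns above.
A = np.array([
    [100.0,  5.0,  0.0],   # file-transfer-like feature
    [ 10.0, 80.0,  5.0],   # connection-initiation-like feature
    [  0.0,  5.0, 40.0],   # login-like feature
])
# V is the observed feature vector for one network entity.
V = np.array([55.0, 45.0, 12.0])

w, residual = nnls(A, V)        # solves argmin_w ||A w - V|| subject to w >= 0
proportions = w / w.sum()       # optional: express weights as relative contributions
print("weights:", np.round(w, 3))
print("proportions:", np.round(proportions, 3), "residual:", round(residual, 3))
```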
As a result, if each vertex of vertices310is multiplied by each respective weight for a particular network entity, the resulting combination may be the representation of the vector V in the multidimensional space (i.e., representing the feature set for the particular network entity in the space). Network activity reconstructor316may determine weighted combination320for a particular network entity in various ways, including but not limited to implementing the PCHA algorithm described above, which may be configured to solve for the respective weights of each usage pattern given a feature set for a particular network entity. However, any other manner known and appreciated to those skilled in the relevant arts may be implemented in network activity reconstructor316to identify a weighted combination of assigned usage patterns312for a particular network entity, identifying coefficients of each of the assigned usage patterns, etc. In this manner, network activity reconstructor316may reconstruct the latent network activity based on identified usage patterns or archetypes for each network entity across network106and/or subnets108A-108N based on each network entity's respective feature sets. Furthermore, because such a reconstruction may be carried out using sampled network data (e.g., samples of network activity obtained in accordance with an IPFIX protocol, or the like), network activity may be reconstructed without deploying agents across a network to monitor network activity, thereby conserving resources while enabling a more accurate reconstruction of network activity. Furthermore, while alternative approaches may be utilized to attempt to identify latent network activity, such as clustering, such approaches do not enable identifying a particular network entity as a weighted combination of assigned usage patterns as described herein. For instance, while alternative approaches, such as clustering (e.g., k-means or the like), enable a comparison of similar items in a dimensional space using a distance metric or the like, clustering approaches do not adequately enable representing network entities as mixtures of different usages. Furthermore, clustering approaches often fail to sufficiently take into account network-related features, which are typically continuous (e.g., representing volumes of traffic). In contrast, techniques described herein adequately take into account such continuous network features when modeling the network features in a multidimensional space, and allow particular network entities to be represented as specific weighted combinations of different assigned usage patterns, or archetypes, which may enable a more realistic and accurate reconstruction of latent network activity (e.g., by identifying a particular network entity as a weighted combination of a file server and a web server). As described above, weighted combination320for a particular network entity may enable various network-related actions. For example,FIG.4shows a flowchart400of a method for performing actions on a network based on a representation of a network entity, according to an example embodiment. In an implementation, the method of flowchart400may be implemented by network analyzer322, anomaly detector324, network modifier326, and/or alert generator328.FIG.4is described with continued reference toFIG.3. Other structural and operational implementations will be apparent to persons skilled in the relevant art(s) based on the following discussion regarding flowchart400and system300ofFIG.3.
Flowchart400begins with step402. In step402, analytics for the network are performed based on a representation of a particular network entity. For instance, with reference toFIG.3, network analyzer322may be configured to perform one or more network analytics for any one or more of network106and/or subnets108A-108M based on weighted combination320for a particular network entity. Network analytics may include any analysis, such as a statistical analysis, an aggregation of network usage over an extended period of time, a comparison of reconstructed network activity with other similar or dissimilar network entities, or any other analysis based on weighted combination320. In some examples, network analytics may include one or more analytics for general monitoring of network entities116and/or networks to which each of the network entities are coupled, such as monitoring that may be utilized by a network or systems administrator. In some other examples, network analyzer322may perform analytics for monitoring a plurality of network entities, such as a cloud-computing network, to ensure proper functioning of the network. In yet other examples, network analyzer322may be configured to reconstruct latent network usage that may be used as features for one or more machine-learning algorithms, such as algorithms that may enable further analysis of network106and/or subnets108A-108N, and/or classify network entities116based on weighted combination320. These examples are not intended to be limiting, and network analytics may include any other types of network analysis that may be performed by reconstructing latent network usage for one or more of network entities116. In step404, a network anomaly is detected based on the representation of the network entity. For instance, with reference toFIG.3, anomaly detector324may detect a network anomaly in network106and/or subnets108A-108N based on weighted combination320for a particular network entity. For example, anomaly detector324may determine, based on analytics performed by network analyzer322, that one or more of network entities116, network106, and/or subnets108A-108N is not performing optimally, or may be subject to a potential security threat or an actual security attack. In one example, one or more of network entities116may be subject to a man-in-the-middle attack, or other types of Domain Name System (DNS) spoofing attacks, where network traffic is unintentionally or unexpectedly being routed away from a particular network entity. Such an attack may be identified, for instance, by determining that a particular network entity (e.g., a web server) does not comprise an expected weight associated with certain web server usage patterns. This example is illustrative only, and any other types of network anomalies may be detected based on weighted combination320. In step406, an aspect of the network may be altered based on the detected anomaly. For instance, with reference toFIG.3, network modifier326may be configured to alter an aspect of one or more of network106, subnets108A-108N, and/or network entities116based on a detected anomaly. For example, network modifier326may perform any one or more actions that may minimize or remediate the detected anomaly, as appreciated by those skilled in the relevant arts. Such actions may be implemented automatically and/or through a suitable user interface (e.g., in response to an input from a network or systems administrator).
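As one concrete illustration of the weight-based check described for step404, the following Python sketch flags an entity whose reconstructed mixture of usage patterns drifts from an expected baseline; the profile values and the 0.3 threshold are assumptions made for this example only.

import numpy as np

def detect_weight_anomaly(observed_weights, expected_weights, threshold=0.3):
    # Return True if the entity's weight mixture drifts too far from its baseline.
    observed = np.asarray(observed_weights, dtype=float)
    expected = np.asarray(expected_weights, dtype=float)
    # L1 distance between normalized mixtures; other distance measures could be used.
    drift = np.abs(observed / observed.sum() - expected / expected.sum()).sum()
    return drift > threshold

# Expected mixture for a web server: mostly the "web server" archetype.
expected = [0.80, 0.15, 0.05]   # [web server, file transfer, login activity]
observed = [0.20, 0.10, 0.70]   # logins suddenly dominate the reconstruction
print(detect_weight_anomaly(observed, expected))  # True -> raise an alert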
For instance, in step410, network traffic to or from a node of the network may be blocked based on a detected anomaly. For example, if a particular one of network entities116is determined to be subject to an attack, network traffic to or from the particular network entity may be blocked. Network traffic may be blocked in its entirety, or the network connection may even be disconnected in some examples (e.g., until further safeguards or protective measures may be implemented). In some other examples, network traffic may be blocked in a more granular fashion, such as filtering certain types of network traffic, filtering communications over certain ports, and/or filtering certain IP addresses. In this manner, network modifier326may be configured to alter an aspect of the network or network entities116to allow certain network activity to take place, while preventing other traffic that may be responsible for the detected anomaly. In some further implementations, such as in an enterprise solution in which a network or systems administrator may remotely configure or have access to configuration or network settings of one of network entities116(e.g., such as remotely installing or configuring an anti-virus solution, network configuration settings, remote configuration of a network entity through a Mobile Device Management (MDM) solution, etc.), remediation actions may also include one or more remotely initiated actions. For instance, in such implementations, remediation actions may include, but are not limited to, initiating a scan, such as a virus or malware scan, of a particular network entity (such as a computing device on the network), installing remediation software (e.g., anti-virus or anti-malware packages), blocking traffic, filtering traffic, or performing any other installation or configuration of the remotely located entity such that a network anomaly may be remediated. In some other examples, a remediation action may comprise adding a machine to a blacklist (e.g., to prevent the network entity from communicating over a network or a subnetwork over certain ports, all ports, etc.). These actions are illustrative only, and remediation may include any other type of remediation action not expressly stated. Furthermore, any one or more of such illustrative actions could be performed automatically or be performed manually (e.g., with the aid of a network administrator who may identify such actions through a suitable interface). Furthermore, it is also noted and understood that a node for which network traffic is blocked and/or filtered may or may not be the same as the particular network entity for which anomaly detector324detected a network anomaly. For example, if it has been determined that a network anomaly is the result of an attack from a particular IP address located internal or external to a network that has resulted in an anomaly for a particular network entity (e.g., a web server), network traffic to or from the IP address may be blocked in its entirety (or a subset of network traffic therefrom). In step408, a notification corresponding to the detected anomaly is generated. For instance, with reference toFIG.3, alert generator328may be configured to generate a notification corresponding to the anomaly detected by anomaly detector324. The notification may comprise any type of alert, such as an audio alert, a visual alert, an email, a short message service (SMS) message, a multimedia messaging service (MMS) message, a haptic alert, or any other type of alert that may be presented.
For instance, such an alert may be presented in a suitable user interface accessible by a network or systems administrator responsible for overseeing and/or managing network106and/or subnets108A-108N (and network entities116coupled thereto), indicating that anomaly detector324has detected a network anomaly. In some implementations, the alert may identify the particular network anomaly detected, the type of detected anomaly, the network entity (or entities) affected by the anomaly, a location of the detected anomaly (e.g., based on a geographic location or a network topology), an IP address of an attacker, or any other information that may be associated with the detected anomaly. Furthermore, alert generator328may generate a notification as an alternative to, or in conjunction with, altering an aspect of the network described with reference to step406of flowchart400. For instance, in addition to generating a notification that an anomaly was detected, alert generator328may also be configured to indicate in the notification that one or more actions have been performed to alter an aspect of the network (e.g., blocking or filtering a node of the network) based on the detected anomaly. Furthermore, a notification may also be configured to identify any one or more network entities for which additional monitoring and/or advanced network analytics should be performed, network entities for which a remediation action should be taken, or any other related network entities that otherwise may warrant further investigation based on network analytics and/or a detected network anomaly. As described above, usage pattern assignor314may be configured to assign different usage patterns to vertices310in various ways. For example,FIG.5shows a flowchart500of a method for assigning a different usage pattern to each of a number of vertices in a multidimensional space, according to an example embodiment. In an implementation, the method of flowchart500may be implemented by usage pattern assignor314.FIG.5is described with continued reference toFIG.3. Other structural and operational implementations will be apparent to persons skilled in the relevant art(s) based on the following discussion regarding flowchart500and system300ofFIG.3. Flowchart500begins with step502. In step502, a usage pattern is assigned to a vertex based on a user input. For instance, with reference toFIG.3, usage pattern assignor314may be configured to assign a different one of assigned usage patterns312to one or more of vertices310based on an input received through a user interface, such as an input from a network or systems administrator who may have specialized knowledge or other domain knowledge regarding the network106, subnets108A-108N, and/or network entities116. For example, usage pattern assignor314may be configured to identify vertices310and enable a selection and/or identification of an appropriate usage pattern to be assigned to each different one of vertices310. In some instances, usage pattern assignor314may present additional information corresponding to each of the vertices, such as a listing or summary of network activity (e.g., feature sets) related to each of the vertices to enable a user to interpret the vertices. The user interface may comprise a graphical user interface (GUI) or the like of a network management and/or monitoring platform in some instances. In other implementations, the input may be received via a configuration file, a command-line interface or command language interpreter (CLI), a voice input, etc.
In step504, a usage pattern is assigned to a vertex automatically. For instance, with reference toFIG.3, usage pattern assignor314may be configured to assign a different one of assigned usage patterns312to one or more of vertices310automatically. For example, usage pattern assignor314may assign usage patterns automatically based on labeled data and/or correlation, such as labels that associate certain types of monitored network activity with certain types of network usage patterns. For instance, usage pattern assignor314may determine, based on a distance measure, a correlation between one or more labels in a set of labeled data and one or more vertices of the multidimensional space. As an example, if certain types of nodes (e.g., nodes labeled as web servers) are close to a particular vertex, usage pattern assignor314may automatically infer that the vertex represents a web server usage pattern, and assign such a usage pattern accordingly. Implementations are not limited to these illustrative examples, and may include any other manner for automatically assigning usage patterns to one or more of vertices310. As described above, vertex determiner308may determine a number of vertices310to describe feature sets306in various ways. For instance,FIG.6shows a flowchart600of a method for determining a number of vertices based on a degree of variance, according to an example embodiment. In an implementation, the method of flowchart600may be implemented by vertex determiner308.FIG.6is described with continued reference toFIG.3. Other structural and operational implementations will be apparent to persons skilled in the relevant art(s) based on the following discussion regarding flowchart600and system300ofFIG.3. Flowchart600begins with step602. In step602, a number of vertices is determined based, at least in part, on a degree of variance between the sets of features and the number of vertices. For instance, with reference toFIG.3, vertex determiner308may be configured to determine a number of vertices310suitable for describing feature sets306based on a degree of variance between feature sets306and a number of vertices. A degree of variance, as used herein, may refer to a degree or a percentage by which variances in ingested feature sets may be explained. For example, while an increase in the number of vertices typically leads to an increase in a degree of variance, the increase may not be linear in some implementations. Rather, increasing the number of vertices beyond a certain point may result in a marginal increase in the degree of variance. In examples, therefore, vertex determiner308may be configured to determine a minimum number of vertices310that adequately explains feature sets306. Vertex determiner308may determine a number of vertices based on a degree of variance in various ways. One such example includes implementation of a PCHA algorithm and/or an elbow or knee criterion for determining a minimum number of vertices310sufficient to explain feature sets306, through which the appropriate number of vertices K may be selected in a manner that adequately explains the variance of feature sets306, and where the ability to explain additional variances in the data reduces substantially with each additional vertex beyond the selected number. In other words, vertex determiner308may select an optimal number of vertices that may enable sufficient explanation of variance between feature sets until the point where additional vertices offer relatively little additional gain.
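The following Python sketch illustrates the elbow/knee selection described for step602, assuming an explained-variance value has already been computed for each candidate number of vertices K (e.g., by fitting archetypes via PCHA); the example curve and the 2% cutoff are illustrative assumptions rather than values from any described embodiment.

def choose_num_vertices(explained_variance_by_k, min_gain=0.02):
    # explained_variance_by_k: dict mapping K -> fraction of variance explained.
    ks = sorted(explained_variance_by_k)
    for prev_k, next_k in zip(ks, ks[1:]):
        gain = explained_variance_by_k[next_k] - explained_variance_by_k[prev_k]
        if gain < min_gain:
            # The next vertex explains little additional variance; stop at prev_k.
            return prev_k
    return ks[-1]

# Example: the explained-variance curve flattens out after K = 5.
curve = {2: 0.55, 3: 0.71, 4: 0.80, 5: 0.85, 6: 0.86, 7: 0.865}
print(choose_num_vertices(curve))  # 5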
By selecting a minimum number of vertices K that adequately explains the variance of feature sets306, a convex hull (or any other type of hull) may be defined in a manner that includes a polygon with a minimum number of sides. Furthermore, since the number of vertices310is based on a degree of variance between feature sets306and the number of vertices, any appropriate number of vertices310(and therefore usage patterns) may be selected, ranging from a few vertices to a relatively large number of vertices based on a desired variance. As a result, any number of usage patterns may be assigned, even large numbers, enabling latent network activity reconstruction to be determined with greater granularity. Although example embodiments are described herein as implementing an elbow or knee method, other techniques may also be implemented for determining an appropriate number of vertices given a set of features. For instance, vertex determiner308may implement other techniques, such as a silhouette method (or an average silhouette method), X-means clustering, an information criterion approach, an information-theoretic approach, cross-validation, a kernel matrix analysis, a gap statistic method, or any other techniques appreciated by those skilled in the relevant arts. In some example implementations, network activity reconstructor316may be configured to obtain previously assigned usage patterns for a network. For example,FIG.7shows a flowchart700of a method for obtaining a set of usage patterns for a network, according to an example embodiment. In an implementation, the method of flowchart700may be implemented by network activity reconstructor316.FIG.7is described with continued reference toFIG.3. Other structural and operational implementations will be apparent to persons skilled in the relevant art(s) based on the following discussion regarding flowchart700and system300ofFIG.3. Flowchart700begins with step702. In step702, a set of usage patterns for a network is obtained that describes sets of features for each of a plurality of network entities, each usage pattern in the set of usage patterns corresponding to a different vertex in a multidimensional space. For example, with reference toFIG.7, network activity reconstructor316may be configured to obtain assigned usage patterns312for a network, such as network106, and/or subnets108A-108N. In this example, each usage pattern in the set of assigned usage patterns312may correspond to a different vertex (e.g., one of vertices310) in a multidimensional space. The determination of vertices310in a multidimensional space may be carried out in a similar manner as described herein, or in any other suitable manner. In this manner, a computing device (such as a management console) may be configured to obtain assigned usage patterns312that describe feature sets306for a given network, and may utilize the obtained usage patterns to reconstruct network activity for a particular network entity in a similar manner as described previously. In other words, example embodiments include enabling one computing platform, such as a server or a set of servers, to identify vertices310and associate usage patterns with each of the vertices, while a separate console may obtain the assigned usage patterns for the network to reconstruct network usage for any one or more of network entities116. III.
Example Mobile and Stationary Device Embodiments Computing device102, activity reconstruction system104, network entities116, network activity monitor302, feature determiner304, vertex determiner308, usage pattern assignor314, network activity reconstructor316, network analyzer322, anomaly detector324, network modifier326, alert generator328, flowchart200, flowchart400, flowchart500, flowchart600, and/or flowchart700may be implemented in hardware, or hardware combined with software and/or firmware, such as being implemented as computer program code/instructions stored in a physical/hardware-based computer readable storage medium and configured to be executed in one or more processors, or being implemented as hardware logic/electrical circuitry (e.g., electrical circuits comprised of transistors, logic gates, operational amplifiers, one or more application specific integrated circuits (ASICs), one or more field programmable gate arrays (FPGAs)). For example, one or more of computing device102, activity reconstruction system104, network entities116, network activity monitor302, feature determiner304, vertex determiner308, usage pattern assignor314, network activity reconstructor316, network analyzer322, anomaly detector324, network modifier326, alert generator328, flowchart200, flowchart400, flowchart500, flowchart600, and/or flowchart700may be implemented separately or together in a SoC. The SoC may include an integrated circuit chip that includes one or more of a processor (e.g., a central processing unit (CPU), microcontroller, microprocessor, digital signal processor (DSP), etc.), memory, one or more communication interfaces, and/or further circuits, and may optionally execute received program code and/or include embedded firmware to perform functions. FIG.8depicts an exemplary implementation of a computing device800in which example embodiments may be implemented. For example, any of computing device102, activity reconstruction system104, network entities116, network activity monitor302, feature determiner304, vertex determiner308, usage pattern assignor314, network activity reconstructor316, network analyzer322, anomaly detector324, network modifier326, and/or alert generator328may be implemented in one or more computing devices similar to computing device800in stationary or mobile computer embodiments, including one or more features of computing device800and/or alternative features. The description of computing device800provided herein is provided for purposes of illustration, and is not intended to be limiting. Example embodiments may be implemented in further types of computer systems, as would be known to persons skilled in the relevant art(s). As shown inFIG.8, computing device800includes one or more processors, referred to as processor circuit802, a system memory804, and a bus806that couples various system components including system memory804to processor circuit802. Processor circuit802is an electrical and/or optical circuit implemented in one or more physical hardware electrical circuit device elements and/or integrated circuit devices (semiconductor material chips or dies) as a central processing unit (CPU), a microcontroller, a microprocessor, and/or other physical hardware processor circuit. Processor circuit802may execute program code stored in a computer readable medium, such as program code of operating system830, application programs832, other programs834, etc. 
Bus806represents one or more of any of several types of bus structures, including a memory bus or memory controller, a peripheral bus, an accelerated graphics port, and a processor or local bus using any of a variety of bus architectures. System memory804includes read only memory (ROM)808and random-access memory (RAM)810. A basic input/output system812(BIOS) is stored in ROM808. Computing device800also has one or more of the following drives: a hard disk drive814for reading from and writing to a hard disk, a magnetic disk drive816for reading from or writing to a removable magnetic disk818, and an optical disk drive820for reading from or writing to a removable optical disk822such as a CD ROM, DVD ROM, or other optical media. Hard disk drive814, magnetic disk drive816, and optical disk drive820are connected to bus806by a hard disk drive interface824, a magnetic disk drive interface826, and an optical drive interface828, respectively. The drives and their associated computer-readable media provide nonvolatile storage of computer-readable instructions, data structures, program modules and other data for the computer. Although a hard disk, a removable magnetic disk and a removable optical disk are described, other types of hardware-based computer-readable storage media can be used to store data, such as flash memory cards, digital video disks, RAMs, ROMs, and other hardware storage media. A number of program modules may be stored on the hard disk, magnetic disk, optical disk, ROM, or RAM. These programs include operating system830, one or more application programs832, other programs834, and program data836. Application programs832or other programs834may include, for example, computer program logic (e.g., computer program code or instructions) for implementing computing device102, activity reconstruction system104, network entities116, network activity monitor302, feature determiner304, vertex determiner308, usage pattern assignor314, network activity reconstructor316, network analyzer322, anomaly detector324, network modifier326, alert generator328, flowchart200, flowchart400, flowchart500, flowchart600, and/or flowchart700(including any suitable step of flowcharts200,400,500,600, or700) and/or further example embodiments described herein. A user may enter commands and information into the computing device800through input devices such as keyboard838and pointing device840. Other input devices (not shown) may include a microphone, joystick, game pad, satellite dish, scanner, a touch screen and/or touch pad, a voice recognition system to receive voice input, a gesture recognition system to receive gesture input, or the like. These and other input devices are often connected to processor circuit802through a serial port interface842that is coupled to bus806, but may be connected by other interfaces, such as a parallel port, game port, or a universal serial bus (USB). A display screen844is also connected to bus806via an interface, such as a video adapter846. Display screen844may be external to, or incorporated in computing device800. Display screen844may display information, as well as being a user interface for receiving user commands and/or other information (e.g., by touch, finger gestures, virtual keyboard, etc.). In addition to display screen844, computing device800may include other peripheral output devices (not shown) such as speakers and printers. 
Computing device800is connected to a network848(e.g., the Internet) through an adaptor or network interface850, a modem852, or other means for establishing communications over the network. Modem852, which may be internal or external, may be connected to bus806via serial port interface842, as shown inFIG.8, or may be connected to bus806using another interface type, including a parallel interface. As used herein, the terms “computer program medium,” “computer-readable medium,” and “computer-readable storage medium” are used to refer to physical hardware media such as the hard disk associated with hard disk drive814, removable magnetic disk818, removable optical disk822, other physical hardware media such as RAMs, ROMs, flash memory cards, digital video disks, zip disks, MEMs, nanotechnology-based storage devices, and further types of physical/tangible hardware storage media. Such computer-readable storage media are distinguished from and non-overlapping with communication media (do not include communication media). Communication media embodies computer-readable instructions, data structures, program modules or other data in a modulated data signal such as a carrier wave. The term “modulated data signal” means a signal that has one or more of its characteristics set or changed in such a manner as to encode information in the signal. By way of example, and not limitation, communication media includes wireless media such as acoustic, RF, infrared and other wireless media, as well as wired media. Example embodiments are also directed to such communication media that are separate and non-overlapping with embodiments directed to computer-readable storage media. As noted above, computer programs and modules (including application programs832and other programs834) may be stored on the hard disk, magnetic disk, optical disk, ROM, RAM, or other hardware storage medium. Such computer programs may also be received via network interface850, serial port interface842, or any other interface type. Such computer programs, when executed or loaded by an application, enable computing device800to implement features of example embodiments described herein. Accordingly, such computer programs represent controllers of the computing device800. Example embodiments are also directed to computer program products comprising computer code or instructions stored on any computer-readable medium. Such computer program products include hard disk drives, optical disk drives, memory device packages, portable memory sticks, memory cards, and other types of physical storage hardware. IV. Example Embodiments A system for reconstructing network activity is described herein. 
The system includes: one or more processors; and one or more memory devices that store program code configured to be executed by the one or more processors, the program code comprising: a network activity monitor configured to monitor network activity of a plurality of network entities; a feature determiner configured to obtain a set of features for each network entity in the plurality of network entities based on the monitoring; a vertex determiner configured to determine a number of vertices to describe the sets of features in a multidimensional space; a usage pattern assignor configured to assign a different usage pattern to each of the vertices; a network activity reconstructor configured to obtain at least some of the features in the set of features for a particular network entity, and to represent the particular network entity as a weighted combination of the usage patterns based on the at least some of the features in the set of features for the particular network entity; and a network modifier configured to alter at least one aspect of the network based at least on the representation of the particular network entity. In one implementation of the foregoing system, at least one feature in the set of features for each network entity in the plurality of network entities is determined by aggregating a certain type of network activity over a period of time. In another implementation of the foregoing system, the usage pattern assignor is configured to assign the different usage patterns to the vertices based on user input. In another implementation of the foregoing system, the usage pattern assignor is configured to automatically assign the different usage patterns to the vertices. In another implementation of the foregoing system, the vertices define a convex hull that describes the sets of features in the multidimensional space. In another implementation of the foregoing system, the different usage patterns assigned to the vertices include one or more of: a port scanning activity; a web crawler or indexer; a web server; a connection initiator; a login activity; a remote desktop protocol activity; a denial of service attack; or a file transfer activity. In another implementation of the foregoing system, the number of vertices is determined based, at least in part, on a degree of variance between the sets of features and the number of vertices. In another implementation of the foregoing system, the network modifier is configured to alter the at least one aspect of the network by at least one of: blocking network traffic to or from a node of the network; or filtering network traffic to or from the node. A computer-readable memory is described herein. 
The computer-readable memory has program code recorded thereon that when executed by at least one processor causes the at least one processor to perform a method comprising: monitoring network activity of a plurality of network entities; obtaining a set of features for each network entity in the plurality of network entities based on the monitoring; determining a number of vertices to describe the sets of features in a multidimensional space; assigning a different usage pattern to each of the vertices; obtaining at least some of the features in the set of features for a particular network entity; representing the particular network entity as a weighted combination of the usage patterns based on the at least some of the features in the set of features for the particular network entity; and detecting a network anomaly based at least on the representation of the particular network entity. In one implementation of the foregoing computer-readable memory, the assigning the different usage patterns to the vertices comprises automatically assigning the different usage patterns to the vertices. In another implementation of the foregoing computer-readable memory, the vertices define a convex hull that describes the sets of features in the multidimensional space. In another implementation of the foregoing computer-readable memory, the different usage patterns assigned to the vertices includes one or more of: a port scanning activity; a web crawler or indexer; a web server; a connection initiator; a login activity; a remote desktop protocol activity; a denial of service attack; or a file transfer activity. In another implementation of the foregoing computer-readable memory, the number of vertices is determined based, at least in part, on a degree of variance between the sets of features and the number of vertices. In another implementation of the foregoing computer-readable memory, the method further comprises: performing an action based, at least in part, on the detected network anomaly, the action including one or more of: blocking network traffic to or from a node of the network; filtering network traffic to or from the node; or generating a notification corresponding to the detected anomaly. A method of reconstructing network activity is described herein. The method includes: obtaining a set of usage patterns for a network that describes sets of features for each of a plurality of network entities, each usage pattern in the set of usage patterns corresponding to a different vertex in a multidimensional space; obtaining at least some of the features in the set of features for a particular network entity; and representing the particular network entity as a weighted combination of the usage patterns based on the at least some of the features in the set of features for the particular network entity. In one implementation of the foregoing method, the method further comprises: performing analytics for the network based at least on the representation of the particular network entity. In another implementation of the foregoing method, the method further comprises: detecting a network anomaly based at least on the performed analytics. In another implementation of the foregoing method, the method further comprises: performing an action based at least on the detected network anomaly, the action including one or more of: altering at least one aspect of the network; or generating a notification corresponding to the detected anomaly. 
In another implementation of the foregoing method, the set of usage patterns defines a convex hull that describes the sets of features in the multidimensional space. In another implementation of the foregoing method, the set of usage patterns includes one or more of: a port scanning activity; a web crawler or indexer; a web server; a connection initiator; a login activity; a remote desktop protocol activity; a denial of service attack; or a file transfer activity. V. Conclusion While various embodiments of the present invention have been described above, it should be understood that they have been presented by way of example only, and not limitation. It will be understood by those skilled in the relevant art(s) that various changes in form and details may be made therein without departing from the spirit and scope of the invention as defined in the appended claims. Accordingly, the breadth and scope of the present invention should not be limited by any of the above-described exemplary embodiments, but should be defined only in accordance with the following claims and their equivalents.
In the drawings, like reference numbers generally indicate identical, functionally similar, and/or structurally similar elements. DETAILED DESCRIPTION The following description in conjunction with the above-referenced drawings sets forth a variety of embodiments for exemplary purposes, which are in no way intended to limit the scope of the described methods or systems. Those having skill in the relevant art may modify the described methods and systems in various ways without departing from the broadest scope of the described methods and systems. Thus, the scope of the methods and systems described herein should not be limited by any of the exemplary embodiments and should be defined in accordance with the accompanying claims and their equivalents. Prior to describing the disclosed methods and systems, a detailed explanation of exemplary encryption-based malicious ransomware attacks will be described with reference toFIGS.1A and2. In some implementations of malicious software programs or malware, sometimes referred to as ransomware, the programs may attempt to encrypt one or more user and/or system files on a victim computing device and delete the original plaintext versions of the files. Ransomware typically generates or displays a ransom note demanding payment in exchange for a decryption key for one or more files. As brute force decryption may require significant time and resources, many victims end up paying in order to regain access to their files. In other, potentially worse implementations, no ransom note is provided and files may simply be encrypted with no easy (albeit expensive) way to obtain the encryption key. In a similar implementation, ransomware typically communicates with a host server, which may be referred to as a command and control server. The host server may provide decryption keys to victim computers, upon payment of ransom. However, if the command and control server is shut down or blocked or otherwise inaccessible while the malware is still infecting computing devices, victims may have no ability to pay ransom requirements and/or obtain decryption keys. "Malicious" encryption, as used herein, may refer to any act of encryption of user or system data not directed by or desired by a user of a device, and may be done for the purpose of obtaining a ransom, to annoy or frustrate, for corporate espionage or sabotage, or any other such purpose. Although referred to as "malicious", explicit malice need not be present; rather, the encryption may be simply performed against the user's interests, with no easy decryption method. FIG.1Ais a block diagram of a prior art system potentially vulnerable to ransomware attacks. A computing device10, sometimes referred to as a client computing device, victim computing device, or other similar term, may comprise a laptop computer, desktop computer, portable computer, tablet computer, wearable computer, server, workstation, appliance, or any other such device or devices. In some implementations, computing device10may comprise a virtual computing device executed by one or more physical computing devices, e.g. via a hypervisor. Computing device10may be any type and form of computing device having or maintaining data storage, including without limitation personal computers of users, servers shared by groups of users, or equipment having integrated computing devices that are not typically considered computers (e.g. medical equipment, self-driving cars, automation systems, automatic teller machines, smart televisions, etc.).
Computing device10may communicate with a command and control computer20, sometimes referred to as a host computer, attacker computer, C&C server, or by other similar terms. C&C computer20may comprise a laptop computer, desktop computer, portable computer, tablet computer, wearable computer, server, workstation, appliance, or any other such device or devices. In some implementations, C&C computer20may comprise a virtual computing device executed by one or more physical computing devices, e.g. via a hypervisor. C&C computer20may be any type and form of computing device capable of exchanging keys with a victim computer10via a network30. Network30may comprise any type and form of network or networks, including a Local Area Network (LAN), Wide Area Network (WAN) or the Internet. Network30may comprise a variety of connections including, but not limited to, standard telephone lines, LAN or WAN links (e.g., Ethernet, T1, T3, 56 kb, X.25), broadband connections (e.g., ISDN, Frame Relay, ATM, etc.), wireless connections, (802.11a/b/g/n/ac, Bluetooth), cellular connections, satellite connections, or some combination of any or all of the above. A network30may comprise one or more intermediary devices, including switches, hubs, firewalls, gateways, access points, or other such devices. In some implementations, network30may be homogeneous, such as a plurality of wired links of the same type, while in other implementations, network30may be heterogeneous (e.g. a cellular link to a wired link to a satellite gateway to a wireless gateway, etc.). Victim computer10(and C&C computer20, though not illustrated) may comprise one or more processors100, one or more input/output interface(s)102, one or more network interfaces104, one or more storage devices106, and/or one or more memory units108. Processor100may comprise logic circuitry that responds to and processes instructions fetched from memory108and/or storage106. The processor100may comprise a microprocessor unit, such as: a general purpose microprocessor as sold by Intel Corporation of Santa Clara, Calif. or Advanced Micro Devices of Sunnyvale, Calif.; a specialty microprocessor as sold by Qualcomm Inc. of San Diego, Calif. or Samsung Group of Samsung Town, Seoul, South Korea; or any other processor capable of executing programmed instructions and otherwise operating as described herein, or a combination of two or more single- or multi-core processors. Processor100may execute one or more items of executable logic, such as an attack agent110and/or security agent124, discussed in more detail below. Processor100may also execute an operating system, applications, and/or other executable logic. Client device200and/or C&C computer20may comprise one or more input or output interfaces102, such as a keyboard, mouse, touchpad, touchscreen, microphone, camera, accelerometer, position tracker, joystick, or other such input devices; or a screen, audio speaker, video output, haptic output, or other such output device. Input and output interfaces102may be used to view and interact with a user interface, such as one provided by a security agent124or an operating system of the device. A network interface104may comprise a wired interface such as an Ethernet interface of any speed including 10BASET, 100BASET, Gigabit Ethernet, or any other such speed, a universal serial bus (USB) interface, a power line interface, a serial interface, or any other type of wired interface. 
In other implementations, network interface104may comprise a wireless interface, such as a Bluetooth, Wireless USB, 802.11 (WiFi) interface, or cellular interface, or any other type and form of wireless interface. In some implementations, a network interface104may include both a wired and wireless interface, to provide additional flexibility, reliability, or bandwidth. Network interface104may include one or more layers of a network stack, including transport layer functions, Internet layer functions, physical layer functions and interfaces, or any other such functions or interfaces. Network interface104may communicate via a network30and/or one or more other networks, including a local area network (LAN) behind a gateway or network address translator (NAT) or other such device. Accordingly, the network interface104may have an IP address that is available to a wide area network (WAN, such as the Internet) or may be behind an appliance and have an IP address directly available only to the local network. Network30may be a LAN, a WAN, a cellular network, a terrestrial or satellite network, or any combination of these or other networks. Network interface104may be used to establish a connection to C&C computer20, which may include a similar network interface104(not illustrated). Client device200and/or C&C computer20may include a storage device106, such as a hard drive, flash drive, hybrid drive, or any other type and form of data storage. Client device200and/or C&C computer20may also include a memory device108, including random access memory (RAM), dynamic RAM (DRAM), cache memory, or any other such form of data storage. Memory108and storage106may store different types of data, including system files such as operating system executable files, libraries, a registry, user interface components, or other such data provided as part of or generated by an operating system of device20. In some implementations, storage106may be considered non-volatile or long term storage, while memory108may be considered volatile or short term memory. For example, storage106may comprise a hard drive, while memory108may comprise RAM. Thus, memory108may be differentiated from storage106based on intended usage or functionality, or durability of stored data during unpowered periods. Storage106may store user data files120and system data files122. User data files120may comprise any type and form of user data files, including documents, photos, videos, audio recordings, spreadsheets, contact lists, calendars, mail archives, or any other type and form of user-specific data. Such user-specific data files120may be in any one of a variety of file formats, such as .doc, .txt, .jpeg, .pdf, and the like. System data files122may comprise any type and form of files or data other than the user data as described above, such as one or more system or application files, including executable applications, application specific libraries, templates, user interface components, settings or preferences files, application assets such as graphics or media, or any other type and form of application-related files in a variety of file formats. System data files122may include applications such as productivity or “office” applications, video games, web browsers including plug-ins or extensions, graphics or audio applications, as well as other system-specific files such as libraries, registry entries, etc., in a variety of file formats. 
In some implementations, ransomware may attack one or more of both user data files120and system data files122; in other implementations, it may target one or more user data files120exclusively, rather than system data files122, with the intent of encrypting user files but still allowing the device to run applications. In some implementations, this may reduce detectability, as many operating systems provide notifications of attempts to modify system files. Although shown internal to computing device10, in many implementations, storage106may be internal, external, or a combination of internal and external, including external hard drives or flash drives, network drives, or other such storage devices. Storage106may store an attack agent110, sometimes referred to as malware, ransomware, a virus, a Trojan (also referred to as a Trojan horse or Trojan virus), a worm, malicious code, or by similar terms. Attack agent110may comprise an application, service, server, daemon, or other executable logic for encrypting user data files120and/or system data files122. In many implementations, attack agent110may communicate with a command and control computer20, such as to retrieve a public key112, or to transmit a symmetric key116. Specifically, an attack agent110may be configured to generate or otherwise obtain a set of cryptographic keys, such as a symmetric key116. In many implementations, symmetric keys may be utilized for speed of encryption, as symmetric key encryption is significantly faster than public key encryption. Additionally, symmetric key encryption is not prone to message expansion (e.g. a message encrypted via symmetric key encryption may be the same size as the original unencrypted message, while the same message encrypted via public key encryption may be many times the original size of the unencrypted message). In other implementations, the attack agent110may be configured to generate a public112and private114key pair. In still other implementations, the attack agent110may be configured to generate both a public112and private114key pair and a symmetric encryption key. The attack agent110may read one or more files on the victim computer10, such as user data files120and/or system data files122, and encrypt them (e.g. via the symmetric key116). The resulting encrypted cipher text may be stored locally (e.g. in storage106), and the attack agent110may delete the clear text or original unencrypted versions of the files. During encryption, the encryption key (e.g. symmetric key116) may be stored locally, such as in storage106. In some implementations, the encryption key may not be stored in non-volatile storage106, but may be instead held in virtual memory, memory108(e.g. RAM), or even in a buffer118of a network interface104. The symmetric key116may be transmitted to a command and control server20for storage and eventual retrieval upon the occurrence of a subsequent event (for example, once a victim provides a ransom payment), or upon the passage of time. In some implementations, the symmetric key116may be stored locally and may be encrypted itself, via a public encryption key112, either as part of a public/private key pair generated locally by the attack agent110, or retrieved from C&C server20as illustrated. Once transmitted to the C&C server20, the encryption key116and/or public key112may be deleted from the victim computer10. FIG.2is a flow chart of an implementation of a malicious encryption attack200. 
Many variations of these attacks exist, and the illustrated implementation is provided merely for discussion purposes. In other implementations, one or more of the features shown may or may not be included, and other features or steps may be added. At step202, a victim computer10may receive and/or execute an attack agent or similar malware. The malware may be received via any means, such as downloaded via the Internet, provided in an infected file or application (e.g. a Trojan), received via a local network from another infected machine, etc. As discussed above, in some implementations, attack agent110may request generation of a public key from a command and control server20. If attack agent110does not generate a local key at step204, then at step206, it may transmit the request for the public key of the C&C server. The C&C server may receive the request at step208and reply with its public key at step210. In other implementations in which attack agent110does generate a local key, at step205, it may generate a private and public key pair. The key pair may be generated via any suitable means, such as via a random number generator or pseudo-random number generator and key generation algorithm. At step212, attack agent110may receive the public key, either locally generated or from the C&C server. At step214, attack agent110may generate a symmetric key from the received public key. The symmetric key may be generated via any suitable means, such as a pseudo-random number generator, true random number generator, retrieval of a random number or seed, or other such sources of entropy. Multiple sources of entropy may be used in some implementations, with random outputs concatenated, XOR-ed, or otherwise combined. At step216, attack agent110may encrypt one or more files on or associated with victim computer10, utilizing the symmetric key generated at step214. The encryption algorithm may be any suitable symmetric key encryption algorithm, such as the AES block cipher. As described above the encrypted files may include one or more user data files120, system data files122, and/or any other combination of these or other files. At step218, the original plaintext or clear text files may be deleted. In some implementations, a ransom note may be left (e.g. as a “read me” file or similar unencrypted file), or may be displayed by attack agent110(e.g. as a pop-up notification or similar display on screen). The note may include instructions on payment and how to retrieve a decryption key. Simultaneously with encryption, or after encryption and/or deletion have completed, at step220attack agent110may encrypt the symmetric key with the public key (either locally generated or retrieved). The encrypted key may be transmitted to C&C server20at step222, and received at step224for storage. In some implementations, C&C server20may decrypt the symmetric key (e.g. via its local private key), while in other implementations, C&C server20may simply store the encrypted symmetric key. At step226, attack agent110may delete the symmetric key and public key from local memory and/or storage. Accordingly, after the attack, the victim computer10may include only the encrypted files, ransom note, and, in implementations in which attack agent110generates the public/private key pair, the private key. 
Once a ransom has been paid or upon the occurrence of some other event or the passage of time, attack agent110may either retrieve the decrypted symmetric key from C&C server20(in implementations in which the public/private key pair are not locally generated), or may retrieve the encrypted symmetric key from the C&C server20and decrypt the symmetric key using the locally stored private key. Attack agent110may then decrypt the encrypted files with the symmetric key. The systems and methods of the disclosure will now be described with reference toFIGS.1B, and3-5. As opposed to the systems and methods to address malicious encryption as previously discussed, those of the disclosure may monitor data writes to disk, memory108, or network transmission buffers118for strings that may represent encryption keys or moduli, by applying one or more techniques to decode and parse the string to either identify or extract the keys, or rule out the string as containing an encryption key or modulus (e.g. at steps205,212, or214of the implementation shown inFIG.2, rather than steps202, or216or218, as in other implementations of encryption mitigation discussed above). FIG.1Bis a block diagram of an embodiment of an improved system, providing detection and mitigation of malicious encryption. In some implementations, a victim computer10may execute a security agent124. Security agent124may comprise an application, service, server, daemon, routine, or other executable logic for detecting and mitigating malicious encryption attacks. Security agent124may include functionality for testing compression ratios of data, testing whether numbers are prime or composite, performing test factorization of numbers, or other such features, discussed in more detail below. In some implementations, security agent124may include a file monitor126or write interceptor or hook to monitor data writes to memory or storage. Security agent124may identify data being written to any location, including a storage device106, memory units108(as well as virtual memory within storage106that is treated as a memory unit108), and, in some implementations, buffers118of a network stack. This latter feature may be useful for identifying encryption keys transmitted to or received from C&C server20. In some implementations, as security agent124utilizes functionality for monitoring file reads and writes, security agent124may be integrated within other software such as backup software or file sharing and synchronization software in (or executed by) victim computer10. For example, in some implementations, backup software or synchronization software may monitor a file system for new or modified files. Upon detection, in addition to performing various backup or synchronization routines, security agent124may attempt to detect whether the new or modified file includes a key or modulus. Security agent124may perform compression routines, discussed below in more detail in connection withFIGS.3A and3C. In some implementations in which security agent124is integrated with or communicates with backup or synchronization software, the backup or synchronization software may perform compression of detected new or modified data, e.g. to reduce storage or network bandwidth requirements. Thus, in some such implementations, security agent124may receive the results of this compression for analysis at steps318-320discussed below in connection withFIGS.3A and3C; and may, in some implementations, skip steps316,360, and/or362, also discussed below. 
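As an illustration of the write-monitoring role of file monitor126, the following Python sketch watches a directory for newly written or modified files and hands each candidate to further key-detection checks. It assumes the third-party watchdog package is available; the directory path and the analyze_candidate routine are placeholders for this example and not part of any described embodiment.

import time
from watchdog.observers import Observer
from watchdog.events import FileSystemEventHandler

def analyze_candidate(path):
    # Placeholder: the size check, decode test, and entropy test would run here.
    print(f"analyzing possible key material in {path}")

class WriteMonitor(FileSystemEventHandler):
    def on_created(self, event):
        if not event.is_directory:
            analyze_candidate(event.src_path)

    def on_modified(self, event):
        if not event.is_directory:
            analyze_candidate(event.src_path)

if __name__ == "__main__":
    observer = Observer()
    observer.schedule(WriteMonitor(), path="/home/user/documents", recursive=True)
    observer.start()
    try:
        while True:
            time.sleep(1)
    except KeyboardInterrupt:
        observer.stop()
    observer.join()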
Security agent124may be configured to detect when a key pair, such as an RSA key pair, is generated or received. This may be done via the unique structure and arithmetic properties of key pairs. Although primarily discussed in terms of RSA encryption, these systems and methods apply to any encryption system with consistent formats for keys or keys for which a key validation routine may be applied (e.g. a function that determines whether a numeric string is a valid key). In RSA cryptography, for example, a cipher text c may be generated via the function m^e (mod n), where m is the original message (in this case, a given user data file120or system data file122), e is the public exponent and n is the public modulus. Together, the exponent and modulus form the public key. Anyone with these values may thereby encrypt their message resulting in cipher text c. Only the holder of the private exponent d may perform the reverse calculation (c^d=(m^e)^d=m (mod n)) to recover the original message m. Keeping the private exponent d secret while satisfying the equations requires that n=pq for large primes p and q. These values must also be kept secret since anyone who knew them could easily calculate the private exponent d. This process of RSA key generation is quite time-consuming and is therefore infrequently performed, a fact which may be exploited by the security agent. FIG.3Ais a flow chart of an implementation of a method for detection of malicious encryption through detection of key generation or receipt. In the description to follow (as well as in the description of other flow charts or methods below), reference is made to one or more "steps." Each of the described steps may be carried out by use of a corresponding "module" of computer programming instructions executed by the system as described with reference toFIG.1B or5. These modules of computer code may be standalone modules (for example, coded as a library function that is statically or dynamically linked to the source tree of an overarching program), or may be integrated into the source tree of an overarching program, or may be embodied as a mix of these two methodologies (for example, subfunctions of one or more of the steps set forth below may be embodied as separate library functions). Although shown in a given order, in many implementations, one or more steps may be executed in a different order than shown, and/or may be executed in parallel. In some implementations, one or more steps may be executed by different devices operating in conjunction; for example, various steps may be distributed amongst several devices in communication with each other or with a controlling or master device. At step302, security agent124may monitor data written to storage106or memory108and/or received in buffer118of a network stack to find strings that look like keys, such as RSA keys. Recall that ransomware must store and communicate RSA keys in order to send the right components to the attacker. In modern usage, advances in computing power mean the RSA modulus n is generally 2048 or 4096 bits in length; however, other lengths may be utilized. Furthermore, even if an RSA modulus n is padded by additional data (e.g. header information, ransom instructions, etc.), portions of the data of predetermined lengths may be analyzed separately or in combination (e.g. via a sliding window or similar algorithm).
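For readers unfamiliar with these relationships, the following is a tiny worked example of the RSA arithmetic above in Python, using deliberately small textbook primes that would never be secure in practice; the specific numbers are illustrative only.

p, q = 61, 53
n = p * q                      # public modulus n = pq = 3233
phi = (p - 1) * (q - 1)        # 3120
e = 17                         # public exponent, coprime to phi
d = pow(e, -1, phi)            # private exponent: modular inverse of e, here 2753

m = 65                         # original "message" m
c = pow(m, e, n)               # encryption: c = m^e (mod n) -> 2790
recovered = pow(c, d, n)       # decryption: c^d = (m^e)^d = m (mod n)
assert recovered == m
print(n, e, d, c, recovered)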
Security agent124may be configured to provide an alert to a user or take other mitigation steps in response to an output from file monitor126monitoring the file system and memory for new data of exactly these lengths or in similar sizes to these lengths. As this approach may lead to a number of false positives, security agent124utilizes one or more techniques to validate that a string represents a plausible key modulus, with techniques applied in order of processing or memory requirements. At step304, file monitor126or security agent124may determine if the detected data matches a predetermined size range or threshold. In this case, file monitor126or security agent124may detect if the incoming write data is 2048 or 4096 bits in length, or any other such predetermined length in some implementations (e.g. 1024 bits, 8192 bits, etc.). For example, few files or data items on a modern computer are exactly 2048 or 4096 bits in length, so security agent124may identify these files or strings as suspect. If the string is one of these exact lengths, security agent124will issue an alert notification at step314. If the length of the string is not one of these exact lengths, security agent124continues further processing at step300B. Alternatively, cryptographic public keys are often encoded in a format called Privacy Enhanced Mail (PEM). This format is described in RFC 1422, entitled "Privacy Enhancement for Internet Electronic Mail: Part II: Certificate-Based Key Management," published February 1993 by the Internet Engineering Task Force (IETF), which is incorporated herein by reference. In essence, the key is encoded in a format called Abstract Syntax Notation One (ASN.1) which is then encoded using the Base-64 representation commonly used to transfer binary data using only printable ASCII. These encoding layers slightly enlarge the file to between 800 and 2000 bytes. Thus, in some implementations, file monitor126or security agent124may determine if the detected file or data to be written is between 800 and 2000 bytes in length (though other values may be used in other systems, based on common padding or encoding of keys). Thus, in some implementations, file monitor126may cause security agent124to conduct decode test300A only if incoming data to be written is within a specified range in length, such as 800 to 2000 bytes in length. In similar implementations, file monitor126may utilize other file length or byte ranges as may indicate key encryption, in accordance with other encryption methodologies. If the data is within one or both of the sizes/ranges set forth above, in a first technique300A, security agent124may attempt to decode the data as if it were a key. For example, security agent124may include a decode module308that includes the openssl program and/or similar decryption programs to perform decoding of base-64 and ASN.1 encoding on incoming or detected data strings. In one implementation, such programs or tools may be applied to the data string as follows, with infile being a new file written to disk or data string written to memory108or a buffer118:

If 800 < sizeof(infile) < 2000 bytes:
    openssl rsa -in infile -out pubout
End
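The length screening of step304and the 800-2000 byte PEM range above might be expressed as follows; this is only a sketch, with the constants taken from the text and the function name matches_size_heuristic being hypothetical.

EXACT_KEY_BITS = {1024, 2048, 4096, 8192}   # exact modulus lengths from step 304
PEM_RANGE = (800, 2000)                     # byte range for PEM-encoded keys

def matches_size_heuristic(data: bytes) -> bool:
    """Return True if a written blob is worth testing as a possible key or modulus."""
    bits = len(data) * 8
    if bits in EXACT_KEY_BITS:
        return True
    low, high = PEM_RANGE
    return low < len(data) < high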
If decode module308successfully decodes Base-64 and ASN.1 encoding layers of the incoming or detected string, it may generate the following result:

-----BEGIN RSA PUBLIC KEY-----
MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEAvdeddaqcqOvBNzxPWjgJf6MzWsfPnpTxW6T79LtFxYIb71yljxnvZ9z1AM09TPNgyay3rk3oIkBrOMdwpHmeyc9qH5QffPLKwHSVodcoAPwC87A5KC86XnrLzvITEqNaV5jn0VxiPJcMWAjVbeutmYF/Uf+xiYU5daHCmnkF3FX+n04HV4jtvHcNhi5p/UrMkgBhcWFIMkZqjHrRGV/MB5BOAUeySa7NlobfrmSrw0ASIuX2w3g8hpbJoY1dKd6rjr7Y6+fUyWEFMTMjwl1z8xf0vW+LTuylW5PNFgvzXyjR78lOX/hOsGj93I5v1NXttHBjPc1EP3BYHkj/I4qPwQIDAQAB
-----END RSA PUBLIC KEY-----

At step310, security agent124may parse the output generated by decode module308. If the output includes predetermined strings (such as the "begin RSA public key" or "end RSA public key" strings or similar predetermined strings) or otherwise matches a key format at step312, then at step314, security agent124may generate or trigger an alert. Of course, the use here of openssl is merely an illustrative example. Libraries exist for most modern programming languages. Referring briefly toFIG.3B, additional details of an implementation of decode test300A are illustrated. In one implementation, security agent124may decode the data string at step350. The decoding, such as decoding from base-64 and ASN.1 encoding, may be performed by security agent124or by a third party application or routine called or executed by security agent124. In some implementations, the decoder may not perform verification on the output, and/or may not provide an exception or error code at step352. If it does not, then in some implementations, at step354, the output data may be parsed for a key identifier or compared to a key format. If the identifier is found at step356, then the string likely represents a key and an alert may be generated at step314. If not, then the method may proceed with step316(described below). In some implementations, the decoder step350may generate alerts or exceptions for non-matching strings. For example, the decoder step350may include execution of C# RSACryptoServiceProvider, to verify the presence of a key as follows:

AsymmetricCipherKeyPair keyPair;
using (var r = File.OpenText(@"infile"))
    keyPair = (AsymmetricCipherKeyPair) new PemReader(r).ReadObject();

If this code generates an exception, then the string may not be a key and the method may proceed with step306. Otherwise, the string may be a key and the incoming write data or detected data may be parsed at step310, and an alert may be generated at step314. Of course, not all ransomware variants perform this type of encoding. Therefore, if openssl or RSACryptoServiceProvider or similar tools cannot decode the data string, then the method300may proceed to apply additional key detection tests. Returning toFIG.3Aand continuing, an entropy test300B may be used to examine the entropy or information content of the data string. Most files created by users follow one of several well-known probability distributions: English text for instance is known to have only an information content of roughly 1.3 bits per byte. This is why simple compression algorithms like Lempel-Ziv and deflate are able to function. Cryptographic key values by contrast have an extremely high information content (e.g. 7 bits per byte): the application of a compression algorithm should not appreciably reduce the size of the file or string containing the modulus. The entropy test300B may be used to reduce false positives even further.
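As a rough, library-free analogue of decode test300A, the following Python sketch checks for PEM armor and valid Base-64 content using only the standard library; it is illustrative only and does not parse the inner ASN.1 structure as openssl or a PEM reader would, and the function name is hypothetical.

import base64
import re

PEM_RE = re.compile(
    rb"-----BEGIN (RSA )?PUBLIC KEY-----(.*?)-----END (RSA )?PUBLIC KEY-----",
    re.DOTALL,
)

def decode_test(data: bytes) -> bool:
    """Does the blob look like a PEM-armored public key?

    Returns True (alert) when PEM markers are present and the body is valid
    Base-64; returns False when the blob cannot be interpreted this way, in
    which case the later tests should still be applied.
    """
    match = PEM_RE.search(data)
    if not match:
        return False
    body = b"".join(match.group(2).split())   # strip newlines and whitespace
    try:
        der = base64.b64decode(body, validate=True)
    except ValueError:
        return False
    return len(der) > 0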
For example, using the gzip tool, the string may be compressed at step316using a lossless compression tool, such as Lempel-Ziv (LZ) compression:

If 250 < sizeof(infile) < 800 bytes:
    gzip -c infile > infile.gz
End

Then, the sizes of the uncompressed file and the compressed file may be compared at step318, for example by using standard C# libraries or other such system tools:

long length = new System.IO.FileInfo(infile).Length;
long gzlength = new System.IO.FileInfo(infile.gz).Length;
long ratio = length / gzlength;
if (ratio > 1.1)
    do-not-alert
End

If the ratio exceeds a predetermined threshold at step320(per this example, 1.1), then the string is likely not a key and security agent124may allow the write to proceed at step306(and not trigger an alert at step314). Referring toFIG.3C, an illustration of an implementation of an entropy test300B is provided in more detail. As discussed above, in some implementations, a key may be padded so as to disguise the key, or additional data may be included with the key (e.g. ransom instructions or other information). Accordingly, different portions of the data may have different compression ratios. Some compression tools may report different compression ratios for different portions of the file, or, in the implementation illustrated, compression may be applied to various portions of the file to test compression ratios. For example, LZ compression uses a dictionary-based compression model in which a dictionary or table is constructed from strings of data in the file, and subsequent repeated strings are replaced with table entries or identifiers. The more such strings are repeated, the greater the file may be compressed. The generated table and identifiers may be Huffman encoded or similarly entropy encoded, with shorter indices or fewer bits used for identifiers of the most common repeated strings. The length of the replaced string and the length of the index or position within the Huffman tree may be used to determine which parts of the file will compress well and which parts will not. For example, in one implementation, the length of a replaced string may be divided by the length of its Huffman index, such that frequently repeated (e.g. lower indices) long strings have a higher ratio than short strings or moderate length strings that are less frequently repeated (e.g. longer indices). These ratios may be aggregated in one implementation to calculate compressibility of the segment. In another implementation, portions of the file may be of predetermined size, such as 100 or 1000 bytes, and security agent124may attempt to compress each portion in sequence. In a further implementation, these portions may be compressed via a sliding window algorithm (e.g. first attempting to compress bytes 0-1000, then bytes 200-1200, etc.). In still another implementation, routines executed by security agent124or an operating system may include an abort sequence or result if compressibility is below a predetermined threshold. For example, in some implementations, a compression routine may attempt to compress some or all of the data. If, during compression, the routine determines that the compression ratio is poor (e.g. detected repeated strings for an LZ dictionary are below a predetermined threshold length or frequency, or a post- to pre-compression size comparison of the portion is higher than a predetermined threshold ratio), then the routine may abort, generate an error, and/or insert an indicator in the output data (e.g.
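A compact analogue of the gzip-based check above, using Python's zlib (DEFLATE) rather than the external gzip tool; the 1.1 threshold is the example value from the text and the function name is hypothetical.

import zlib

def entropy_test(data: bytes, threshold: float = 1.1) -> bool:
    """Rough analogue of entropy test 300B using lossless compression.

    Returns True ("do not alert": the data compresses well, so it is probably
    not key material) when original_size / compressed_size exceeds the
    threshold, mirroring the gzip-based ratio check described above.
    """
    compressed = zlib.compress(data)
    ratio = len(data) / len(compressed)
    return ratio > threshold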
a flag in a header indicating that compression failed or was aborted). At step360, a first portion of the data may be compressed. At step318, a compression ratio of the average number of bits post-compression per byte of data prior to compression (expressed as "compressed bits/original byte") may be calculated. If the ratio is greater than a predetermined threshold (e.g. 25%, 50%, 10%, etc.), then security agent124may determine if additional portions of the data need to be compared. If additional portions of the data need comparison, then steps360-320may be repeated iteratively. If the first data portion, and any successive compressed data portions, all have a compressed bits/original byte ratio above the given threshold, then the method may proceed with step306to permit the write operation. As discussed above, if a compression ratio for a portion of the data is not greater than the threshold, then the portion of the data may have high entropy, indicating it is likely not, for example, English text, but may instead be an encryption key (English text having a theoretical compression ratio of up to 83%, although in practice, compression algorithms do not achieve such efficiency because of the required computation times). Thus, if the compression ratio is not greater than the threshold—i.e. if the portion of the data is not significantly compressible—then the portion of the file may be an encryption key, and the method may proceed at step322for additional confirmation of key generation if required or desired. Entropy test300B is a heuristic approximation to the Shannon entropy, a histogram estimator of the differential entropy of a probability distribution expressed as H(X) = −Σ_{i=0}^{N−1} p_i log_2(p_i), where p_i is the probability of character number i appearing in the stream of N characters of the message. It will be readily apparent to one skilled in the art that other methods of entropy estimation are possible, such as explicit computation of the empirical Shannon entropy (as opposed to the estimation technique described above). If the foregoing tests have not concluded that the data string is not a key or is a false positive or "do not alert" value, then security agent124may proceed to more computationally intensive checks. Specifically, returning toFIG.3A, if the entropy test300B does not rule out the string being a key, then a primality test300C may be applied to determine whether the data is an RSA modulus. Simple tests exist to determine if a given number is prime, even though these tests do not actually determine the prime factorization. For example, the Miller-Rabin test or Fermat's Little Theorem can prove that a number is composite. Recall that the RSA modulus is a composite n=pq. If security agent124finds a file containing a large composite integer, it may trigger an alert; and conversely, if the string is prime, then it is definitely not a modulus. The data string may be converted to a numeric representation n, as a large integer, and compared to a test range at step322:

If n < 2^1023 or n > 2^4097:
    do-not-alert
End

If the value is not within this range, then the data string would represent a modulus too small to resist factorization or may be needlessly large; that is, the tested data string is not an RSA modulus and processing may proceed at step306.
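For reference, the explicit empirical Shannon entropy mentioned above can be computed directly; the following sketch returns bits per byte, with values near 8 indicating random-looking (possibly key) material, and the function name is hypothetical.

import math
from collections import Counter

def shannon_entropy(data: bytes) -> float:
    """Empirical Shannon entropy H(X) = -sum(p_i * log2(p_i)) in bits per byte.

    Values near 8 suggest random-looking (possibly key) material; typical
    English text carries far fewer bits of information per byte.
    """
    if not data:
        return 0.0
    counts = Counter(data)
    total = len(data)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())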
If n is contained in this range, the tested data string may then be tested by security agent124to determine if n is prime at step324(if additional computational analysis is desired) or may generate an alert at step314(if additional computational steps are not necessary or desired). If n is prime, the tested data string is not an RSA modulus and processing may proceed at step306to permit the write operation. FIG.3Dillustrates an implementation of a primality test300C in more detail. Consider the character set used in infile as discussed above. To convert the data to a numeric representation, in some implementations, base-64 decoding may be required. Security agent124may determine, at step370and on a bytewise basis, whether the data appears to be ASCII-encoded. If the data in each tested byte is less than or equal to 127, then the data may be ASCII characters, which each have values of 127 or less. Conversely if the data is random or an encoding of a large integer, then it may have bytes greater than 127. If each byte is less than or equal to 127, then at step372, security agent124may apply Base-64 decoding to the input data as described above. The resulting decoded representation may be used as the value n, as a potential RSA modulus. From Fermat's Little Theorem, recall that if a^(n−1) ≠ 1 (mod n), with 1 < a < (n−1), then n is not prime (otherwise, it might be prime). Therefore, the Fermat primality test proceeds by choosing prime values a_i < n and iteratively testing. At step374, security agent124may select a starting value a_i, which may be random or may be a predefined value (e.g. 2) and may be in the range 1 < a_i < n. At step376, security agent124may select a starting prime (e.g. 2). At step378, security agent124may test for a Fermat witness, for example, calculating x = a_i^(n−1) (mod n). If the result does not equal "1", then at step380, security agent124determines that the key n is composite and the method may proceed to step326for optional additional key detection processes, or (if such additional processes are not utilized) may proceed to generating an alert at step314. If the result of the foregoing calculation is "1", security agent124determines that the key n may be prime, and the method may repeat steps376-380for additional prime numbers selected at random or in increasing order at step376. For example, in one implementation, steps376-382may be repeated iteratively for each of a set of prime numbers, such as the first 128 prime numbers (e.g. a sequence of successive primes 3, 5, 7, 11, 13, etc.). In other implementations, other numbers of iterations may be used. Each iteration reduces the likelihood of a false positive—the possibility of a false positive is roughly 1 in 2^n, given n number of iterations. Given a potential 128-bit AES key, then n=128 may be reasonable. For longer keys or in other implementations, a lesser or greater number of iterations may be performed. Thus, in one embodiment, security agent124may perform the following:

Let a = 2
Repeat 128 times:
    x = a^(n−1) (mod n)
    if (x ≠ 1) exit;
    if (x = 1) let a = next prime
do-not-alert
End

If the security agent exits this routine without detecting a composite number, then n is likely prime and the process proceeds to step306to permit the write operation. If steps374-380result in identifying a composite number, then the process proceeds to step326(if additional computational analysis is desired) or may generate an alert at step314(if additional computational steps are not necessary or desired).
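A minimal sketch of the Fermat-style compositeness check of primality test300C, using Python's built-in modular exponentiation; the helper names are hypothetical and the base-selection strategy (the first rounds primes) follows the example in the text.

def small_primes(count):
    """Return the first `count` primes by trial division (adequate for ~128 primes)."""
    primes = []
    candidate = 2
    while len(primes) < count:
        if all(candidate % p for p in primes):
            primes.append(candidate)
        candidate += 1
    return primes

def fermat_composite(n: int, rounds: int = 128) -> bool:
    """Fermat test: True if a witness proves n composite (consistent with n being
    an RSA modulus), False if n is probably prime (and so not a modulus)."""
    for a in small_primes(rounds):
        if a >= n - 1:
            break
        if pow(a, n - 1, n) != 1:
            return True   # Fermat witness found: n is definitely composite
    return False          # no witness found: n is probably prime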
Returning toFIG.3A, having determined that n is composite, security agent124may attempt to determine at least a partial prime factorization. In the parlance of number theory, most integers are "smooth" or composed as the product of many small primes and prime powers. For example, if a 2048-bit integer is simply chosen at random, the result will almost certainly be divisible by many small primes and not be of the special form used in RSA n=pq. While factorization in general is believed to be a hard problem, this is only true in the worst case n=pq with p and q of comparable size. A false positive—that is, a number encountered at random by security agent124—would almost certainly have many smaller factors. Fortunately, there are several integer-factorization algorithms that are especially suited to this case when n contains small factors. Thus, at step326, security agent124may attempt to perform a partial factorization of the composite value n utilizing one or more of such integer-factorization algorithms. At step328, security agent124may determine if the composite number has small factors or factors smaller than a predetermined threshold (e.g. less than 2^8, or 256, in one implementation, or any other such threshold). If so, then the number is likely not a modulus, and the method may proceed at step306to permit the write operation. This may reduce the number of false positives generated by security agent124. FIG.3Eis a flow chart for one implementation of a factorization test300D. Other factorizations may be applied without departing from the scope of these methods and systems. Pollard's "(p-1)" method is similarly based on Fermat's Little Theorem as discussed above, but stated slightly differently. First, in the parlance of number theory, suppose n has a prime factor p. Then, a^(k(p−1)) = 1 (mod p), with 1 < a < (p−1). Given any value x = 1 (mod p): gcd((x−1), n) ≠ 1. However, we do not know the value of p, just that it exists. Accordingly, at step390, security agent124may determine the greatest common denominator (gcd) between the numeric representation of the data n and a test value of 2^m − 1 (mod n), with m being the product of the primes up to a predetermined threshold (e.g. up to 2^32 in some implementations). Security agent124may determine whether the data has small factors (step328) by determining if the greatest common denominator is greater than 1 (at step392), and if it is less than the data value n (at step394). If both are true, then the data has a "small" prime factor within the predetermined range of primes up to the threshold, and is likely not a modulus. Accordingly, the method may proceed at step306to permit the write operation. If the data is a composite number that does not have small prime factors, then it may be an RSA modulus, and at step314, security agent124may generate an alert. Accordingly, security agent124may perform the following:

Let m = ∏ (primes less than 2^32)
g = gcd(n, 2^m − 1 (mod n))
if 1 < g < n
    do-not-alert
End

It will be readily apparent to one skilled in the art that there are many different factoring algorithms and though the (p−1) method is illustrated here, other such algorithms may be used. These various factoring algorithms all have running time or time of execution proportional to the smallest prime in the decomposition. Since most numbers chosen at random are smooth, these factorization methods should quickly produce at least one factor and thereby eliminate a false positive for security agent124.
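A sketch of the Pollard (p−1)-style small-factor check of factorization test300D; the smoothness bound here is far smaller than the 2^32 example in the text, purely to keep the illustration fast, and the function names are hypothetical.

from math import gcd

def small_primes_below(bound: int):
    """Sieve of Eratosthenes: all primes below `bound`."""
    sieve = bytearray([1]) * bound
    sieve[0:2] = b"\x00\x00"
    for i in range(2, int(bound ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i::i] = bytearray(len(sieve[i * i::i]))
    return [i for i, is_prime in enumerate(sieve) if is_prime]

def has_small_factor(n: int, bound: int = 100_000) -> bool:
    """Pollard (p-1)-style check: True when gcd reveals a small prime factor.

    Computes g = gcd(2^m - 1 mod n, n) with m built from prime powers up to
    `bound`; a nontrivial g means n has a small factor and is likely not an
    RSA modulus (mirroring the "do-not-alert" branch in the text).
    """
    a = 2
    for p in small_primes_below(bound):
        # fold the largest prime power p^k <= bound into the exponent, modularly
        pk = p
        while pk * p <= bound:
            pk *= p
        a = pow(a, pk, n)
    g = gcd(a - 1, n)
    return 1 < g < n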
If these methods "succeed," the security agent permits the write operation at step306, and does not send an alert at step314. In some implementations, one or more of tests300A-300D may be skipped to reduce delay or processing load on the computing device. In one implementation, a whitelist of applications and/or servers or remote computing devices may be utilized, such that data generated by said applications or received from said servers or remote computing devices may bypass one or more of tests300A-300D; the risk may be reduced to the point where (for example) running tests300A and300B and detecting no key may be sufficient. In some implementations, one or more of tests300A-300D may be performed remotely, e.g. on a cloud server or other distributed computing resource. This may reduce processor requirements on victim computer10, which may be particularly useful for mobile devices and/or Internet-of-Things appliances or devices with limited processing capability. Once an alert is triggered at step314, security agent124may take one or more actions to contain or mitigate the presumed attack.FIG.4is a flow chart of an implementation of one method for mitigation of malicious encryption as conducted by security agent124. As discussed above, security agent124may perform the method if none of the previously discussed factorization methods have succeeded in identifying a small prime factor of the incoming data to be written, and the primality tests show that the number is composite. As previously mentioned, such data is very likely to be an artifact of a key generation, suggesting that a ransomware attack is underway. At step402, security agent124may prevent or throttle subsequent attempts to delete data such as the user's files on disk, and/or may prevent further write commands to the disk or memory of the computing device. For example, security agent124may prevent encryption threads from finishing, may prevent deletion from proceeding, or may throttle these threads to a minimum processing speed while notifying the user. In some implementations, security agent124may generate an alert message or notification for the user, and/or may send a notification message over a network to a system administrator. In some implementations, at step406, security agent124may begin replicating data to a read-only location either on local disk or over a network. Once written, this data may be locked against further writes until allowed by an administrator or the user, preventing further attack and encryption of data. In addition, at step404, security agent124may store a snapshot of the contents of a memory unit (e.g. RAM), as well as any in-use and disused areas of virtual memory and/or buffers118. This step has important benefits: RAM and virtual memory contents may enable a future forensic investigation. If the ransomware generated the RSA key pair locally, it is possible that the corresponding private key may still be found either in RAM or virtual memory. If found, the private key would enable the subsequent decryption of any encrypted files. At step408, security agent124may apply all the foregoing techniques discussed above with reference toFIGS.3A-3Eto find keys in memory, disused areas of virtual memory, and deleted files on disk. Security agent124may determine if a key is detected at step410, and if so, at step412, may decrypt any encrypted files. If a key is not found, then in some implementations, a crowd-sourced approach may be utilized to identify the key, via factoring by collision.
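Before turning to the crowd-sourced approach, the following hypothetical sketch shows how tests300A-300D might be chained on the agent side, including the whitelist short-circuit described above; it reuses the helper functions from the earlier sketches (matches_size_heuristic, decode_test, entropy_test, fermat_composite, has_small_factor) and is not intended as a complete implementation.

def analyze_candidate(path: str, data: bytes, whitelisted: bool = False) -> bool:
    """Chain the detection tests; return True when an alert should be raised."""
    if not matches_size_heuristic(data):
        return False
    if decode_test(data):
        return True                       # PEM-armored key found (decode test)
    if entropy_test(data):
        return False                      # compresses well: probably not a key
    if whitelisted:
        return False                      # whitelisted source: skip costlier checks
    # A fuller implementation would Base-64 decode ASCII data first (step 372).
    n = int.from_bytes(data, "big")
    if not (2 ** 1023 < n < 2 ** 4097):
        return False                      # too small or too large for an RSA modulus
    if not fermat_composite(n):
        return False                      # probably prime: not a modulus
    if has_small_factor(n):
        return False                      # smooth number: not an RSA-style modulus
    return True                           # large composite with no small factors: alert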
As discussed above, the Pollard (p−1) factoring method described with reference toFIG.3Eutilizes computation of greatest common denominators. A nontrivial gcd p of two integers n and m represents a factor of both of them. The factoring method discussed above in connection withFIG.3Eblindly computes a large number hoping for a collision. Thus, at step414, having detected an RSA modulus, security agent124may transmit the modulus n to a security server, which may similarly receive and store detected RSA moduli from a plurality of infected victim computing devices. Referring briefly back toFIG.1B, a security server40may comprise any type and form of computing device, including a physical server, a cluster of servers, a virtual server executed by one or more physical servers, a server farm or cloud, or any other similar devices. Security server40may include one or more of a processor100, I/O interface102, network interface104, storage106, and memory108. In some implementations, security server40may execute a factorization host45. Factorization host45may comprise an application, server, service, daemon, routine, or other executable logic for receiving moduli from security agents124of victim computers10, storing the moduli in a database50in a storage device (not illustrated), and for calculating factorization to identify collisions. Specifically, and returning toFIG.4, at step416, responsive to the receipt of a new modulus, factorization host45may compute the pairwise greatest common denominators of all known moduli in moduli database50:

For each modulus m in the database:
    Let g = gcd(n, m)
    if 1 < g < min(n, m) factor-found
End

If a factor is found at step418, then the key may be determined. Specifically, because we expect victim computers to be poor sources of random numbers, it is likely that we will see repeated factors and therefore be able to factor the moduli n and m. Therefore, the factorization host may transmit the value g to security agent124. At step420, security agent124may directly compute the private key, and at step412, decrypt any encrypted files. As noted above in the Background Section, if the victim can simply restore from backup422, then no files are lost; however, if backups are not frequent enough, some data may be lost. Thus, in some implementations, computing devices may transmit potential moduli to a server executing a factorization host45. Factorization host45may receive a plurality of moduli from a plurality of devices, which may be less than or equal in number to the number of moduli (e.g. some devices may provide multiple moduli). Factorization host45may store received moduli in a moduli database50. For each received modulus, factorization host45may determine whether a gcd exists with each other received modulus that is less than either of the moduli values (i.e. 1 < g < min(n, m) as shown above). If so, the factorization host45may transmit the determined gcd to the security agent or agents124that provided the corresponding moduli for use in decrypting encrypted data. Additionally, in some implementations, the factorization host may perform other factorization algorithms that may be too computationally expensive for a user agent or computing device. For example, the factorization host may harness many computers and computational resources like graphics cards, ASICs, FPGAs, etc. in applying factorization algorithms to newly received moduli.
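The pairwise-GCD collision search of step416, and the recovery of a private exponent from a shared factor, might look like the following sketch; the function names and the choice to return the cofactor alongside the shared prime are illustrative assumptions.

from math import gcd

def find_shared_factor(new_modulus, known_moduli):
    """Pairwise-GCD collision search a factorization host might run on a new modulus.

    Returns (p, q) when a previously seen modulus shares a prime p with the new
    one (q being the cofactor of the new modulus), or None if no collision exists.
    """
    for m in known_moduli:
        g = gcd(new_modulus, m)
        if 1 < g < min(new_modulus, m):
            return g, new_modulus // g
    return None

def recover_private_exponent(n, e, p):
    """Given a shared prime factor p of n and the public exponent e, derive d."""
    q = n // p
    return pow(e, -1, (p - 1) * (q - 1))   # modular inverse (Python 3.8+)

In practice the host would persist every received modulus and repeat this search on each new submission, so that a collision between any two victims' moduli yields both factorizations at once.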
It will be readily apparent to one skilled in the art that the monitoring agents may be similarly applied on the user's local PC, to monitor the network stack of the PC, and also on data coming into our cloud backup service. FIG.5is a block diagram of an exemplary computing device useful for practicing the methods and systems described herein. The various devices10,20,40may be deployed as and/or executed on any type and form of computing device, such as a computer, network device or appliance capable of communicating on any type and form of network and performing the operations described herein. The computing device may comprise a laptop computer, desktop computer, virtual machine executed by a physical computer, tablet computer, such as an iPad tablet sold by Apple Inc. or Android-based tablet such as those sold by Samsung Group, smart phone or PDA such as an iPhone-brand/iOS-based smart phone sold by Apple Inc., Android-based smart phone such as a Samsung Galaxy or HTC Droid smart phone, or any other type and form of computing device capable of operating as described herein.FIG.5depicts a block diagram of a computing device500useful for practicing an embodiment of the user devices100or device of an online storage or backup provider. A computing device500may include a central processing unit501; a main memory unit502; a visual display device524; one or more input/output devices530a-530b (generally referred to using reference numeral530), such as a keyboard526, which may be a virtual keyboard or a physical keyboard, and/or a pointing device527, such as a mouse, touchpad, or capacitive or resistive single- or multi-touch input device; and a cache memory in communication with the central processing unit501. The central processing unit501is any logic circuitry that responds to and processes instructions fetched from the main memory unit502and/or storage528. The central processing unit may be provided by a microprocessor unit, such as: general purpose microprocessors sold by Intel Corporation of Santa Clara, Calif. or Advanced Micro Devices of Sunnyvale, Calif.; specialty microprocessors sold by Qualcomm Inc. of San Diego, Calif. or Samsung Group of Samsung Town, Seoul, South Korea; or any other single- or multi-core processor, or any other processor capable of executing programmed instructions and otherwise operating as described herein, or a combination of two or more single- or multi-core processors. Main memory unit502may be one or more memory chips capable of storing data and allowing any storage location to be directly accessed by the microprocessor501, such as random access memory (RAM) of any type. In some embodiments, main memory unit502may include cache memory or other types of memory. The computing device500may support any suitable installation device516, such as a floppy disk drive, a CD-ROM drive, a CD-R/RW drive, a DVD-ROM drive, tape drives of various formats, USB/Flash devices, a hard-drive or any other device suitable for installing software and programs such as a security agent124or portion thereof. The computing device500may further comprise a storage device528, such as one or more hard disk drives or redundant arrays of independent disks, for storing an operating system and other related software, and for storing application software programs such as any program related to the security agent124. 
Furthermore, the computing device500may include a network interface518to interface to a Local Area Network (LAN), Wide Area Network (WAN) or the Internet through a variety of connections including, but not limited to, standard telephone lines, LAN or WAN links (e.g., Ethernet, T1, T3, 56 kb, X.25), broadband connections (e.g., ISDN, Frame Relay, ATM), wireless connections, (802.11a/b/g/n/ac, Bluetooth), cellular connections, or some combination of any or all of the above. The network interface518may comprise a built-in network adapter, network interface card, PCMCIA network card, card bus network adapter, wireless network adapter, USB network adapter, cellular modem or any other device suitable for interfacing the computing device500to any type of network capable of communication and performing the operations described herein. A wide variety of I/O devices530a-530nmay be present in the computing device500. Input devices include keyboards, mice, trackpads, trackballs, microphones, drawing tablets, and single- or multi-touch screens. Output devices include video displays, speakers, headphones, inkjet printers, laser printers, and dye-sublimation printers. The I/O devices530may be controlled by an I/O controller523as shown inFIG.5. The I/O controller may control one or more I/O devices such as a keyboard526and a pointing device527, e.g., a mouse, optical pen, or multi-touch screen. Furthermore, an I/O device may also provide storage528and/or an installation medium516for the computing device500. The computing device500may provide USB connections to receive handheld USB storage devices such as the USB Flash Drive line of devices manufactured by Twintech Industry, Inc. of Los Alamitos, Calif. The computing device500may comprise or be connected to multiple display devices524a-524n, which each may be of the same or different type and/or form. As such, any of the I/O devices530a-530nand/or the I/O controller523may comprise any type and/or form of suitable hardware, software embodied on a tangible medium, or combination of hardware and software to support, enable or provide for the connection and use of multiple display devices524a-524nby the computing device500. For example, the computing device500may include any type and/or form of video adapter, video card, driver, and/or library to interface, communicate, connect or otherwise use the display devices524a-524n. A video adapter may comprise multiple connectors to interface to multiple display devices524a-524n. The computing device500may include multiple video adapters, with each video adapter connected to one or more of the display devices524a-524n. Any portion of the operating system of the computing device500may be configured for using multiple displays524a-524n. Additionally, one or more of the display devices524a-524nmay be provided by one or more other computing devices, such as computing devices500aand500b connected to the computing device500, for example, via a network. These embodiments may include any type of software embodied on a tangible medium designed and constructed to use another computer's display device as a second display device524afor the computing device500. One ordinarily skilled in the art will recognize and appreciate the various ways and embodiments that a computing device500may be configured to have multiple display devices524a-524n. A computing device500of the sort depicted inFIG.5typically operates under the control of an operating system, such as any of the versions of the Microsoft®. 
Windows operating systems, the different releases of the Unix and Linux operating systems, any version of the Mac OS® for Macintosh computers, any embedded operating system, any real-time operating system, any open source operating system, any proprietary operating system, any operating systems for mobile computing devices, or any other operating system capable of running on the computing device and performing the operations described herein. The computing device500may have different processors, operating systems, and input devices consistent with the device. For example, in one embodiment, the computer500is an Apple iPhone or Motorola Droid smart phone, or an Apple iPad or Samsung Galaxy Tab tablet computer, incorporating multi-input touch screens. Moreover, the computing device500may be any workstation, desktop computer, laptop or notebook computer, server, handheld computer, mobile telephone, any other computer, or other form of computing or telecommunications device that is capable of communication and that has sufficient processor power and memory capacity to perform the operations described herein. Thus, in one aspect, the present application is directed to a method for detecting an encryption key for malicious encryption. The method includes detecting, by a security agent executed by a computing device, writing of a first item of data to memory of the computing device. The method also includes compressing, by the security agent, a first portion of the first item of data. The method further includes calculating, by the security agent, a ratio of a size of the first portion of the first item of data to a size of the compressed first portion of the first item of data. The method also includes determining that the ratio does not exceed a predetermined threshold; and responsive to the determination that the ratio does not exceed the predetermined threshold, identifying the first item of data as comprising an encryption key. The method also includes, responsive to identifying the first item of data as comprising an encryption key, generating an alert, by the security agent, indicating a likely malicious encryption attempt. In some implementations, the method includes compressing a second portion of the first item of data, different from the first portion; determining that a compression ratio of the second portion of the first item of data exceeds the predetermined threshold; and compressing the first portion of the first item of data, responsive to the determination that the compression ratio of the second portion of the first item of data exceeds the predetermined threshold. In some implementations, the method includes determining, by the security agent, that a numeric representation of the first portion of the first item of data is a composite integer; and generating the alert is performed responsive to the determination that the numeric representation of the first portion of the first item of data is a composite integer. In a further implementation, the method includes generating the numeric representation of the first portion of the first item of data via a base-64 decoding. In another further implementation, determining that the numeric representation of the first item of data is a composite integer further comprises: iteratively, for each of a predetermined set of prime numbers, determining that a result of a primality test based on said prime number, the first item of data, and a random number is negative. 
In yet another further implementation, the method includes determining, by the security agent, that the numeric representation of the first portion of the first item of data lacks factors within a predetermined range; and generating the alert is performed responsive to the determination that the numeric representation of the first portion of the first item of data lacks factors within the predetermined range. In some implementations, the method includes determining, by the security agent, that the first item of data meets a predetermined size requirement; responsive to the determination that the first item of data meets the predetermined size requirement, decoding the first item of data according to a predetermined encryption key encoding system; and identifying, by the security agent, an absence of an encryption key in the decoded first item of data. Compressing the first portion of the first item of data is performed responsive to identification of the absence of the encryption key in the decoded first item of data. In some implementations, the method includes detecting writing of the first item of data to a transmission buffer of a network interface of the computing device. In some implementations, the method includes detecting writing of the first item of data to a storage device of the computing device. In some implementations, the method includes determining that a length of the data is within a predetermined range of bytes. In some implementations, the method includes generating a snapshot of system memory, by the security agent; and searching the snapshot for an encryption key. In some implementations, the method includes identifying, by the security agent, an encryption modulus; and transmitting, by the security agent, the encryption modulus to a remote server, receipt of the encryption modulus triggering the remote server to calculate, for each of one or more previously received encryption moduli in a database, a greatest common denominator. The method also includes receiving, by the security agent from the remote server, a selected greatest common denominator; and calculating, by the security agent from the selected greatest common denominator and the encryption modulus, an encryption key. In another aspect, the present disclosure is directed to a method for detecting an encryption key for malicious encryption. The method includes detecting, by a security agent executed by a computing device, writing of a first item of data to memory of the computing device. The method also includes identifying, by the security agent, that the first item of data meets a predetermined size requirement. The method also includes, responsive to the identification that the first item of data meets the predetermined size requirement, decoding the first item of data according to a predetermined encryption key encoding system. The method also includes determining, by the security agent, whether the decoded first item of data includes an encryption key. The method also includes, responsive to the decoded first item of data either (i) including an encryption key, or (ii) not including an encryption key, respectively either (i) generating an alert, by the security agent, indicating a likely malicious encryption attempt, or (ii) performing at least one further key detection procedure on the first item of data. 
In some implementations, the method includes, responsive to the decoded first item of data not including an encryption key: compressing, by the security agent, a first portion of the first item of data; calculating, by the security agent, a ratio of a size of the first portion of the first item of data to a size of the compressed first portion of the first item of data; determining that the ratio does not exceed a predetermined threshold; and responsive to the determination that the ratio does not exceed the predetermined threshold, generating the alert, by the security agent, indicating a likely malicious encryption attempt. In some implementations, the method includes determining that the first item of data meets the predetermined size requirement by determining that a length of the data is within a predetermined range of bytes. In another aspect, the present disclosure is directed to a system for detecting an encryption key for malicious encryption. The system includes a memory unit of a computing device, the memory unit storing a first item of data; and a security agent, executed by a processor of a computing device, configured to detect writing of the first item of data to the memory unit. The security agent is also configured to identify that the first item of data meets a predetermined size requirement. The security agent is also configured to, responsive to the identification that the first item of data meets the predetermined size requirement, decode the first item of data according to a predetermined encryption key encoding system. The security agent is also configured to determine whether the decoded first item of data includes an encryption key. The security agent is also configured to, responsive to the decoded first item of data either (i) including an encryption key, or (ii) not including an encryption key, respectively either (i) generate an alert indicating a likely malicious encryption attempt, or (ii) perform at least one further key detection procedure on the first item of data. In some implementations, the security agent is also configured to, responsive to the decoded first item of data not including an encryption key, compress a first portion of the first item of data. The security agent is also configured to calculate a ratio of a size of the first portion of the first item of data to a size of the compressed first portion of the first item of data. The security agent is also configured to determine that the ratio does not exceed a predetermined threshold; and responsive to the determination that the ratio does not exceed the predetermined threshold, generate the alert indicating a likely malicious encryption attempt. In some implementations, the security agent is also configured to determine that the first item of data meets the predetermined size requirement by determining that a length of the data is within a predetermined range of bytes. In another aspect, the present disclosure is directed to a method for detecting an encryption key for malicious encryption. The method includes receiving, by a factorization host executed by a first computing device, from a second computing device, an encryption modulus. The method also includes calculating, by the factorization host, a greatest common denominator between the received encryption modulus and an additional encryption modulus, received from a third computing device and stored in a moduli database of the first computing device. 
The method also includes determining that the calculated greatest common denominator between the received encryption modulus and the additional encryption modulus is less than the minimum of either of the received encryption modulus and the additional encryption modulus. The method also includes transmitting, by the first computing device, the calculated greatest common denominator to the second computing device, receipt of the greatest common denominator triggering the second computing device to perform decryption of at least one file using the calculated greatest common denominator. In some implementations, the method includes receiving a plurality of encryption moduli from a plurality of additional computing devices, by the factorization host; and storing the received plurality of encryption moduli in the moduli database, by the factorization host. In such implementations, calculating the greatest common denominator between the received encryption modulus and the additional encryption modulus further comprises calculating a plurality of greatest common denominators between the received encryption modulus and each of the plurality of encryption moduli stored in the moduli database. In another aspect, the present disclosure is directed to a system for detecting an encryption key for malicious encryption. The system includes a memory unit of a computing device, the memory unit storing a first item of data; and a security agent, executed by a processor of a computing device, configured to detect writing of the first item of data to the memory unit. The security agent is further configured to, responsive to the detection: compress a first portion of the first item of data; calculate a ratio of a size of the first portion of the first item of data to a size of the compressed first portion of the first item of data; determine that the ratio does not exceed a predetermined threshold; responsive to the determination that the ratio does not exceed the predetermined threshold, identify the first item of data as comprising an encryption key; and responsive to identifying the first item of data as comprising an encryption key, generate an alert indicating a likely malicious encryption attempt. In some implementations, the security agent is further configured to: determine that a numeric representation of the first portion of the first item of data is a composite integer, and generate the alert, responsive to the determination that the numeric representation of the first portion of the first item of data is a composite integer. In a further implementation, the security agent is further configured to, iteratively, for each of a predetermined set of prime numbers, determine that a result of a primality test based on said prime number, the first item of data, and a random number is negative. In another further implementation, the security agent is further configured to: determine that the numeric representation of the first portion of the first item of data lacks factors within a predetermined range; and generate the alert, responsive to the determination that the numeric representation of the first portion of the first item of data lacks factors within the predetermined range. 
In still another further implementation, the security agent is further configured to: determine that the first item of data meets a predetermined size requirement; responsive to the determination that the first item of data meets the predetermined size requirement, decode the first item of data according to a predetermined encryption key encoding system; identify an absence of an encryption key in the decoded first item of data; and compress the first portion of the first item of data, responsive to identification of the absence of the encryption key in the decoded first item of data. In some implementations, the security agent is further configured to detect writing of the first item of data to a transmission buffer of a network interface of the computing device. In some implementations, the security agent is further configured to, responsive to generation of the alert, generate a snapshot of system memory, and search the snapshot for an encryption key. In some implementations, the security agent is further configured to identify an encryption modulus. The system includes a network interface of the computing device configured to: transmit the encryption modulus to a remote server, receipt of the encryption modulus triggering the remote server to calculate, for each of one or more previously received encryption moduli in a database, a greatest common denominator; and receive, from the remote server, a selected greatest common denominator. The security agent is further configured to calculate, from the selected greatest common denominator and the encryption modulus, an encryption key. It should be understood that the systems described above may provide multiple ones of any or each of those components and these components may be provided on either a standalone machine or, in some embodiments, on multiple machines in a distributed system. The systems and methods described above may be implemented as a method, apparatus or article of manufacture using programming and/or engineering techniques to produce software embodied on a tangible medium, firmware, hardware, or any combination thereof. In addition, the systems and methods described above may be provided as one or more computer-readable programs embodied on or in one or more articles of manufacture. The term “article of manufacture” as used herein is intended to encompass code or logic accessible from and embedded in one or more computer-readable devices, firmware, programmable logic, memory devices (e.g., EEPROMs, ROMs, PROMs, RAMs, SRAMs, etc.), hardware (e.g., integrated circuit chip, Field Programmable Gate Array (FPGA), Application Specific Integrated Circuit (ASIC), etc.), electronic devices, a computer readable non-volatile storage unit (e.g., CD-ROM, floppy disk, hard disk drive, etc.). The article of manufacture may be accessible from a file server providing access to the computer-readable programs via a network transmission line, wireless transmission media, signals propagating through space, radio waves, infrared signals, etc. The article of manufacture may be a flash memory card or a magnetic tape. The article of manufacture includes hardware logic as well as software or programmable code embedded in a computer readable medium that is executed by a processor. In general, the computer-readable programs may be implemented in any programming language, such as LISP, PERL, C, C++, C#, PROLOG, or in any byte code language such as JAVA. The software programs may be stored on or in one or more articles of manufacture as object code. 
| 75,609 |
11943248 | DETAILED DESCRIPTION Reference will now be made in detail to various embodiments of the subject matter described herein, examples of which are illustrated in the accompanying drawings. Wherever possible, the same reference numbers will be used throughout the drawings to refer to the same or like parts. FIG.1is a diagram illustrating an example environment100for network security testing. Referring toFIG.1, test environment100may include test system102and one or more device(s) and/or system(s) under test (SUT)116. Test system102may represent any suitable entity or entities (e.g., one or more computing platforms, nodes, or devices) associated with testing SUT116(e.g., one or more security devices). For example, test system102may generate and send traffic to SUT116and/or receive traffic from SUT116and may analyze one or more aspects associated with SUT116. SUT116may be any suitable entity or entities (e.g., devices, systems, or platforms) for receiving, processing, forwarding, and/or sending one or more messages. In some embodiments, SUT116may include one or more security devices, such as a firewall device or an intrusion protection system (IPS). For example, SUT116may include a security device (e.g., firewall or an IPS) that inspects traffic that traverses the security device (e.g., Internet protocol (IP) packets and/or network communications). In this example, SUT116may include functionality for inspecting various communications for suspicious and/or malicious data (e.g., packets that may include harmful or malicious payloads) and may include functionality for performing mitigation actions to avoid or mitigate the impact of any suspicious and/or malicious data found. In some embodiments, test system102may include a stand-alone tool, a testing device, or software executing on one or more processor(s). In some embodiments, test system102may be a single device or node or may be distributed across multiple devices or nodes. In some embodiments, test system102may include one or more modules for performing various test related functions. For example, test system102may include a traffic generator for generating test traffic and/or monitor taps located in various locations (e.g., surrounding SUT116) for monitoring communications. In some embodiments, test system102may be configured to communicate and/or interact with private network112or portions thereof, e.g., SUT116. Private network112may represent a network that includes a client device114and SUT116. In some embodiments, private network112may be a live network usable for providing services or data to end-users. In some embodiments, private network112may be a non-live or test network usable for testing purposes. Test system102may include a test controller (TC)104, a test agent106, a data storage108, and one or more taps110. TC104may be any suitable entity or entities (e.g., software executing on a processor, a field-programmable gateway array (FPGA), and/or an application-specific integrated circuit (ASIC), or a combination of software, an FPGA, and/or an ASIC) for performing one or more aspects associated with testing SUT116and/or various aspects thereof. In some embodiments, TC104may be implemented using one or more processors and/or memory. For example, TC104may utilize one or more processors (e.g., executing software stored in memory) to generate test packets for a number of message streams (e.g., flows or sessions). 
In another example, TC104may also utilize one or more processors to perform or initiate (e.g., via test agent106) various tests and/or analyses involving test packets and/or related responses from SUT116. In this example, TC104may send instructions to test agent106that can control (e.g., pause, restart, or stop) a test session. In some embodiments, TC104may include one or more communications interfaces (e.g., one or more receive port modules and one or more transmit port modules) for interacting with users, modules, and/or nodes. For example, port modules may include network interface cards (NICs) or other suitable hardware, firmware, and/or software for receiving or transmitting data via ports (e.g., physical or logical communication end points). In some embodiments, TC104may use one or more communications interfaces for receiving various messages and one or more communications interfaces for sending various messages. Example messages include IP messages, Ethernet frames, Ethernet messages, PDUs, datagrams, UDP messages, TCP messages, IP version 4 (v4) messages, IP version 6 (v6) messages, stream control transmission protocol (SCTP) messages, real-time transport protocol (RTP) messages, or reliable data protocol (RDP) messages, messages using a tunneling protocol, and/or other TSN related messages. In some embodiments, TC104may include or provide a communications interface for communicating with a user (e.g., a test operator). In such embodiments, a user of TC104may be any entity (e.g., an automated system or a device or system controlled or controllable by a human user) for selecting and/or configuring various aspects associated with testing and/or generating testing related metrics. For example, various user interfaces (e.g., an application programming interface (API) and a graphical user interface (GUI)) may be provided for providing configuration information, such as tests to be performed, types of metrics or statistics to be generated, attack vector data portions to be used, and/or other settings. In some embodiments, one or more user interfaces at TC104and test system102for testing SUT116and/or for providing configuration information may support automation e.g., via one or more programming languages (e.g., python, PHP, etc.), a representation state transfer (REST) API, a command line, and/or a web based GUI. For example, a user or test operator may use a web browser to interact with a web based GUI at TC104for programming or configuring one or more aspects for testing SUT116. Test agent106may be any suitable entity or entities (e.g., software executing on a processor, an ASIC, an FPGA, or a combination of software, an ASIC, and/or an FPGA) for performing one or more aspects associated with testing SUT116and/or various aspects thereof. In some embodiments, test agent106may be implemented at client device114(e.g., a computer or a mobile device) using one or more processors and/or memory. For example, test agent106may utilize one or more processors (e.g., executing software stored in memory) to generate test packets for a number of message streams (e.g., flows or sessions). In some embodiments, test agent106may communicate with test system102and/or other related entities (e.g., TC104) to receive test configuration information usable to set up and/or execute one or more test sessions. For example, test configuration information may include a script for generating and sending particular traffic and/or flows to the test participants. 
In this example, test agent106may configure, generate, and/or send, via client device114, test traffic based on the test configuration information. In some embodiments, TC104and/or test agent106may include functionality for accessing data storage108or other memory. Data storage108may be any suitable entity or entities (e.g., a storage device, memory, a non-transitory computer readable medium, or a storage system) for maintaining or storing information related to testing. For example, data storage108may store message capture related information, e.g., time delta information, timestamp related data, and/or other information. In this example, message capture related information may be usable to determine, derive, or compute one or more test related statistics, such time variation metrics for indicating scheduling fidelity. In some embodiments, data storage108may also contain information usable for generating statistics and/or metrics associated with one or more aspects of SUT116. For example, data storage108may contain metrics associated with one or more performance aspects of SUT116during one or more test scenarios. In this example, data storage108may maintain a particular set of computed metrics for a first test session or message stream and may maintain another set of computed metrics for a second test session or a different message stream. In some embodiments, data storage108and/or memory may be located at test system102, another node, or distributed across multiple platforms or devices. Taps110may be any suitable entities (e.g., a monitoring device, software executing on a processor, etc.) for monitoring and/or copying data that traversing a physical link, a virtual link, or a node (e.g., SUT116). For example, tap A110may be a network tap associated with a link or node that copies messages or portions thereof. In another example, tap A110may be monitoring software executing on a network node or switch located between client device114and SUT116. In some embodiments, taps110may be configured to identify and copy relevant messages or data therein based on known network address information (e.g., source address and/or destination address information) or other identifying information. In this example, taps110may store copied data (e.g., in data storage108) and/or may forward copied data to relevant entities (e.g., TC104). In some embodiments, TC104may analyze communications and/or data obtained by one or more of taps110or other test related entities and, using the data, may determine whether SUT116has detected and mitigated a suspicious and/or malicious behavior (e.g., an attack vector data portion or related traffic) or may determine whether the suspicious and/or malicious behavior was permitted to impact client device114or test agent106. It will be appreciated thatFIG.1is for illustrative purposes and that various nodes, their locations, and/or their functions described above in relation toFIG.1may be changed, altered, added, or removed. For example, some nodes and/or functions may be combined into a single entity. FIG.2is a diagram illustrating an example test system102for network security testing. In some embodiments, test system102may include functionality for emulating various nodes and/or related services. 
For example, test system102or a module thereof may be configured to emulate a DNS server, a web server, an SQL server, and/or an application server and may generate various types of network traffic (e.g., hypertext transfer protocol (HTTP) messages, HTTP secure (HTTPS) messages, SQL messages, representational state transfer (REST) API messages, session initiation protocol (SIP) messages, IP packets, and/or messages) that traverse SUT116. By emulating nodes and/or services outside of private network112, test system102and/or TC104can test security and/or mitigation aspects of SUT116more efficiently and comprehensively. For example, SUT116may maintain state about various communication sessions associated with client device114and may be configured to use different sets of rules for handling traffic depending on the originator of the traffic. In this example, by emulating a particular server or related application/service, test system102and/or TC104can test site-, application-, or domain-specific rules at SUT116, e.g., by determining whether SUT116detects and mitigates suspicious and/or malicious traffic (e.g., attack vector data portions) that appears to be from a given site or domain. In another example, SUT116may be configured to use more lenient rules for outgoing traffic (e.g., traffic from client device114or from nodes internal to private network112) than for incoming traffic (e.g., traffic from the web or Internet). In this example, by emulating a node or service outside of private network112, test system102and/or TC104can test rules at SUT116for traffic from sources external to private network112, e.g., by determining whether SUT116detects and mitigates suspicious and/or malicious traffic (e.g., attack vector data portions) that appears to be from external sources. Referring toFIG.2, test system102may include an emulated domain name service (DNS) server (EDNS)200and an emulated server (ES)202. EDNS200may include any suitable entity or entities (e.g., software executing on at least one processor) for emulating one or more servers usable for providing DNS resolution and/or related functions. For example, client device114(or test agent106) may request an IP address associated with ‘Twitter.com’ by sending a DNS request to EDNS200. In this example, EDNS200may respond to a DNS request with a DNS response containing an IP address that appears (e.g., to SUT116) to be associated with the domain name ‘Twitter.com’, but actually is associated with another emulated node (e.g., ES202). ES202may include any suitable entity or entities (e.g., software executing on at least one processor) for emulating one or more servers for providing an application, a mobile application, a web application or service, and/or other related functionality. For example, ES202may emulate an application server, a SQL server, a web server, or another system for providing video over IP (VOIP) service, database access, content delivery, or other services. In some embodiments, ES202may include functionality for providing a web site, media content, and/or a social network related application. In some embodiments, ES202may act or appear to be related to a particular domain name, website, or related application/service, e.g., Facebook, iTunes, Google, Twitter, Instagram, etc. For example, ES202may communicate with client device114using HTTP, HTTPS, SIP, and/or other protocols. 
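As a non-limiting illustration of this emulation, the following Python sketch (not part of the disclosed embodiments; the names ES_ADDRESS, EMULATED_DOMAINS, ATTACK_VECTORS, and EmulatedWebServer are hypothetical) shows how an emulated DNS resolver might answer any configured test domain with the emulated server's address, and how an emulated server might embed a configurable attack vector data portion in an HTTP response, using only the Python standard library:

# Minimal sketch of EDNS 200 / ES 202 behavior; assumed names and payloads, not the patent's code.
from http.server import BaseHTTPRequestHandler, HTTPServer

ES_ADDRESS = "203.0.113.10"          # hypothetical IP address of the emulated server (ES 202)
EMULATED_DOMAINS = {"facebook.com", "twitter.com", "youtube.com"}

# Hypothetical attack vector data portions keyed by an identifier from test configuration.
ATTACK_VECTORS = {
    "test-payload": b"$SIMULATED-MALICIOUS-PAYLOAD$",
    "script-injection": b"<script>/* simulated malicious content */</script>",
}

def edns_resolve(domain):
    """Emulated DNS resolution: any emulated test domain maps to the emulated server's address."""
    return ES_ADDRESS if domain.lower().rstrip(".") in EMULATED_DOMAINS else None

class EmulatedWebServer(BaseHTTPRequestHandler):
    """Emulated server that embeds a configured attack vector data portion in each response."""
    attack_vector_id = "test-payload"   # would be selected via test configuration information in practice

    def do_GET(self):
        body = b"<html><body>emulated page</body></html>" + ATTACK_VECTORS[self.attack_vector_id]
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    print(edns_resolve("facebook.com"))            # returns ES_ADDRESS, as in steps 2-3 described below
    HTTPServer(("0.0.0.0", 8080), EmulatedWebServer).serve_forever()

In this sketch, the resolver and the emulated server could run on the test system side, so that traffic generated by the client device appears to originate from, and be destined to, nodes outside the private network.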
In some embodiments, ES202may be configured to send attack vector data portions (e.g., URLs, software, media content, malformed packets, or other data) that could impact (e.g., hijack, infect, harm, or crash) or otherwise compromise a requesting entity (e.g., client device114) or traversed entity (e.g., SUT116). For example, TC104and/or a test operator may configure ES202to send malicious data in HTTP responses, HTTPS responses, SIP responses, REST API responses, IP packets, and/or other communications via various protocols during a test session. In this example, when client device114requests a webpage or related content, ES202may send an attack vector data portion in a payload portion of one or more response messages. In some embodiments, TC104and/or another test related entity may generate various messages transactions between client device114(and/or test agent106) and ES202for testing how SUT116responds to various suspicious and/or malicious traffic (e.g., attack vector data portions and/or related messages) traversing SUT116. For example, during one or more test sessions, ES202may be configured to emulate various domains or sites. In this example, ES202may send various types of suspicious and/or malicious traffic toward SUT116. Continuing with this example, TC104and/or another test related entity may utilize test related information obtained by taps110and/or other test related entities for determining whether SUT116has detected and mitigated a suspicious and/or malicious behavior (e.g., an attack vector data portion or related traffic) or for determining whether the suspicious and/or malicious behavior was permitted to impact client device114or test agent106. Referring to example communications depicted inFIG.2, test system102may be configured to test the performance of SUT116deployed to protect private network112. In some embodiments, TC104and test agent106may be configured to orchestrate the execution of a network security test case that has been provisioned by a test operator. For example, a test case may include instructions for generating test traffic by test agent106that resides at client device114in private network112protected by SUT116. In step 1, TC104may provide test configuration information to test agent106and may also signal or communicate with test agent106to initiate a test session. For example, TC104may trigger test agent106(e.g., via client device114) to generate and send test traffic, e.g., a DNS request associated with a destination domain or URL. In some embodiments, DNS settings in client device114and/or test agent106may be pre-configured to send DNS requests to EDNS200associated with test system102. In some embodiments, DNS settings in a router in private network112may be pre-configured to send DNS requests to EDNS200associated with test system102. In step 2, a DNS request may be sent from client device114to EDNS200via SUT116. For example, test agent106may trigger or otherwise instruct client device114to send a DNS request to EDNS200. In this example, the DNS request may request an IP address associated with a domain (e.g., ‘Facebook.com’). In step 3, a DNS response may be sent from EDNS200to client device114via SUT116. For example, in response to receiving a DNS request, EDNS200may generate a DNS response containing an IP address that corresponds to ES202. In step 4, an application or service request or other message may be sent from client device114to ES202via SUT116. 
For example, test agent106may instruct client device114to establish a session or otherwise communicate with ES202using an IP address received from EDNS200. In this example, during the session, test agent106may instruct client device114to request web related content or other data from ES202. In another example, test agent106may instruct client device114to initiate a VOIP call or other session with ES202. In some embodiments, an application or service request may include an HTTP request, an HTTPS request, a SIP request, an SQL related request, a REST API request, or another message for requesting a service, an application, or related information. In step 5, an application or service response or other message containing an attack vector data portion or other suspicious and/or malicious data may be sent from ES202to client device114via SUT116. For example, ES202may receive test configuration information from TC104indicating one or more suspicious and/or malicious behaviors to perform. In this example, a suspicious and/or malicious behavior may involve sending an attack vector data portion in an HTTP response to client device114via SUT116. In another example, a suspicious and/or malicious behavior may involve sending an attack vector data portion in a VOIP related response (e.g., a SIP 200 OK message) to client device114via SUT116. Example attack vector data portions may include data or URLs to data (e.g., media, viruses, botnet code, related URLs, etc.) that can impact (e.g., harm or compromise) a receiving node. In some embodiments, an application or service response may include an HTTP response, an HTTPS response, a SIP response, an SQL related response, a REST API response, or another message for providing a service, an application, or related information. It will be appreciated thatFIG.2is for illustrative purposes and that various nodes, their locations, and/or their functions described above in relation toFIG.2may be changed, altered, added, or removed. For example, some nodes and/or functions may be separated into multiple entities. FIG.3is a diagram illustrating example communications for gathering communications from taps110. In some embodiments, test system102or related entities (e.g., TC104and taps110) may obtain and store measurements and/or test related data (e.g., statistics, derived metrics, captured messages, etc.) for a given test session, customer, and/or time period. In some embodiments, test related data may be sent to, provided to, and/or stored at a storage device or platform, e.g., data storage108. In some embodiments, taps110may be configured to identify and copy relevant messages or data therein based on known network address information (e.g., source address and/or destination address information) or other identifying information. In this example, taps110may store copied data (e.g., in data storage108) and/or may forward copied data to relevant entities (e.g., TC104). In some embodiments, TC104may analyze communications and/or data obtained by one or more of taps110and, using the data, may determine whether SUT116has detected and mitigated a suspicious and/or malicious behavior (e.g., an attack vector data portion or related traffic) or may determine whether the suspicious and/or malicious behavior was permitted to impact client device114or test agent106. In some embodiments, one or more taps110may be located adjacent to or in-line with SUT116.
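Before walking through the individual tap messages, the following Python sketch (hypothetical helper and field names; not the disclosed implementation) indicates the kind of comparison TC104could perform on message copies captured before and after SUT116in order to classify SUT behavior and derive a simple performance metric:

# Sketch of a pre/post-SUT comparison; function and field names are illustrative assumptions.

def classify_sut_behavior(pre_sut_payload, post_sut_payload, attack_vector):
    """Compare copies of the same response captured before and after the SUT.

    pre_sut_payload:  bytes copied on the ES side (before SUT processing)
    post_sut_payload: bytes copied on the client side (after SUT processing),
                      or None if the SUT dropped the message entirely
    attack_vector:    the attack vector data portion embedded by the emulated server
    """
    if post_sut_payload is None:
        return "blocked"            # SUT dropped the malicious response
    if attack_vector in pre_sut_payload and attack_vector not in post_sut_payload:
        return "sanitized"          # SUT stripped or rewrote the attack vector data portion
    if attack_vector in post_sut_payload:
        return "permitted"          # suspicious/malicious data reached the client side
    return "not_observed"           # attack vector never seen; inconclusive

def mitigation_score(observations):
    """Toy performance metric: fraction of attack vectors that were blocked or sanitized."""
    mitigated = sum(1 for o in observations if o in ("blocked", "sanitized"))
    return mitigated / len(observations) if observations else 0.0

# Example: two attack vectors mitigated and one permitted yields a score of about 0.67.
results = [
    classify_sut_behavior(b"page" + b"$PAYLOAD$", None, b"$PAYLOAD$"),
    classify_sut_behavior(b"page" + b"$PAYLOAD$", b"page", b"$PAYLOAD$"),
    classify_sut_behavior(b"page" + b"$PAYLOAD$", b"page" + b"$PAYLOAD$", b"$PAYLOAD$"),
]
print(results, mitigation_score(results))

The classification labels and the scoring rule above are assumptions for illustration; an actual test system could weight behaviors differently or compare them against preconfigured acceptable actions, as described below.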
In some embodiments, taps110may be configured to monitor communications that traverse SUT116and/or communications between ES202and client device114or test agent106. Referring toFIG.3, tap A110may be located between client device114and SUT116and tap B110may be located between SUT116and ES202. Depending on the originating entity or directional flow of a message, one tap may copy messages prior to processing by SUT116and another tap may copy the messages after processing by SUT116. In step301, test related data may be sent from tap A110to TC104. For example, prior to being processed by SUT116, at least some portions of an application or service request from client device114may be copied by tap A110and sent to TC104for analysis. In step302, test related data may be sent from tap B110to TC104. For example, after being processed by SUT116, at least some portions of an application or service request from client device114may be copied by tap B110and sent to TC104for analysis. In step303, test related data may be sent from tap B110to TC104. For example, prior to being processed by SUT116, at least some portions of an application or service response from ES202may be copied by tap B110and sent to TC104for analysis. In step304, test related data may be sent from tap A110to TC104. For example, after being processed by SUT116, at least some portions of an application or service response from ES202may be copied by tap A110and sent to TC104for analysis. In step305, data analysis may be performed using collected data. For example, TC104or another related entity may use test related information (e.g., copied messages and/or related reports) to determine whether SUT116is functioning correctly (e.g., as designed or expected). In some embodiments, TC104may use test related information (e.g., message data before and after processing by SUT116) to identify various actions performed by SUT116. In some embodiments, TC104may determine, using copied data from taps110, whether actions performed by SUT116are sufficient based on a performance metric, test score, or other criteria, e.g., based on a set of preconfigured or known acceptable actions stored in data storage108. In step306, feedback may be provided to one or more relevant entities (e.g., a test operator or a related UI). For example, test system102or TC104may generate a test report for a test session with various performance statistics. In this example, one or more performance statistics may be based on SUT behaviors identified during the test session and may be computed with knowledge of expected or "acceptable" SUT behavior as defined or determined by test system102, e.g., based on test configuration information, operator instructions, and/or user preferences. In this example, the test report may be stored in memory (e.g., data storage108) and/or may be provided to a user, e.g., via a GUI. It will be appreciated thatFIG.3is for illustrative purposes and that different and/or additional messages and/or actions may be used for gathering test related data, test analysis, and/or related actions. It will also be appreciated that various messages and/or actions described herein with regard toFIG.3may occur in a different order or sequence. FIG.4is a diagram illustrating example communications for gathering communications from various testing elements.
In some embodiments, test agent106and/or other test related entities (e.g., ES202or EDNS200) may be configured to monitor and store and/or report test related information associated with communications between test agent106and ES202. In some embodiments, TC104may be configured to obtain and analyze test related information from various sources. For example, TC104may be configured to use information obtained by test agent106and ES202in lieu of or in addition to test related information obtained using one or more tap(s)110. In this example, TC104may use the obtained test related information for determining whether SUT116has detected and mitigated a suspicious and/or malicious behavior (e.g., an attack vector data portion or related traffic) or for determining whether the suspicious and/or malicious behavior was permitted to impact client device114or test agent106. In some embodiments, TC104may receive test related information at various times during a test session from test related entities. For example, test agent106and ES202may provide reports or data during the test session and/or may provide data and/or reports (e.g., a condensed summary) at the end of a test session. Referring toFIG.4, test agent106may be located at client device114and SUT116may be located between test agent106and ES202. For example, for one directional flow (e.g., egress from network112), test agent106may send data about messages prior to processing by SUT116and ES202may send data about messages after processing by SUT116and, for another directional flow (e.g., ingress to network112), ES202may send data about messages prior to processing by SUT116and test agent106may send data about messages after processing by SUT116. In step401, test related data may be sent from test agent106to TC104. For example, prior to being processed by SUT116, at least some portions of an application or service request from client device114may be copied by test agent106and sent to TC104for analysis. In step402, test related data may be sent from ES202to TC104. For example, after being processed by SUT116, at least some portions of an application or service request from client device114may be copied by ES202and sent to TC104for analysis. In step403, test related data may be sent from ES202to TC104. For example, prior to being processed by SUT116, at least some portions of an application or service response from ES202may be copied by ES202and sent to TC104for analysis. In step404, test related data may be sent from test agent106to TC104. For example, after being processed by SUT116, at least some portions of an application or service response from ES202may be copied by test agent106and sent to TC104for analysis. In step405, data analysis may be performed using collected data. For example, TC104or another related entity may use test related information (e.g., copied messages and/or related reports) to determine whether SUT116is functioning correctly (e.g., as designed or expected). In some embodiments, TC104may use test related information (e.g., message data before and after processing by SUT116) to identify various actions performed by SUT116. In some embodiments, SUT116may determine, using copied data from taps110, whether actions performed by SUT116are sufficient based on a performance metric, test score, or other criteria, e.g., based on a set of preconfigured or known acceptable actions stored in data storage108. In step406, feedback may be provided to one or more relevant entities (e.g., a test operator or a related UI). 
For example, test system102or TC104may generate a test report for a test session with various performance statistics based on identified SUT behaviors during the test session and expected or "acceptable" SUT behavior as defined or determined by test system102, e.g., based on test configuration information, user instructions, and/or user preferences. In this example, the test report may be stored in memory (e.g., data storage108) and/or may be provided to a user, e.g., via a GUI. It will be appreciated thatFIG.4is for illustrative purposes and that different and/or additional messages and/or actions may be used for gathering test related data, test analysis, and/or related actions. For example, test agent106and/or ES202may provide additional information (e.g., a report containing computed test statistics) at the end of a test session. It will also be appreciated that various messages and/or actions described herein with regard toFIG.4may occur in a different order or sequence. FIG.5is a diagram illustrating an example process500for network security testing. In some embodiments, example process500, or portions thereof, may be performed by or at test system102, TC104, test agent106, and/or another node or module. In some embodiments, example process500may include steps502,504,506,508, and/or510. Referring to process500, in step502, a DNS request requesting an IP address associated with a domain name may be received at EDNS200from client device114. For example, test agent106may be configured to generate a DNS request to be sent from client device114to EDNS200via SUT116. In some embodiments, client device114may include test agent106, wherein test agent106may communicate with TC104to receive test configuration information. For example, test agent106may receive test configuration information from TC104. In this example, test agent106may use the test configuration information when determining when a test session is to be initiated and/or what test traffic to generate or use during the test session. In some embodiments, test configuration information may include information about a test session, address information for interacting with the emulated DNS server, one or more URLs to be requested by the client device, and/or a domain name. For example, TC104may send test configuration information to test agent106and/or other entities for testing SUT116. In this example, the test configuration information may include data for testing traffic that appears to come from a social media platform (e.g., Facebook). Continuing with this example, the test configuration information may indicate a domain name (e.g., 'Facebook.com') associated with the social media platform to be tested and/or various test traffic that is to traverse SUT116(e.g., from or to client device114). In step504, a DNS response including an IP address associated with ES202may be sent from EDNS200to client device114via SUT116. In some embodiments, ES202may be configured to act as or emulate a real server associated with a requested domain name. For example, ES202may appear to SUT116and/or an end-user like a web server associated with Facebook.com or YouTube.com. In some embodiments, ES202may include an application server, a web server, an SQL server, or a VOIP server. In step506, a service request may be received, using the IP address, at ES202from client device114. In some embodiments, a service request may include an HTTP request, an HTTPS request, a SIP request, or a REST API request.
For example, test agent106may be configured to generate an HTTP request to be sent from client device114to ES202via SUT116. In this example, the HTTP request may be for requesting a webpage and/or media content. In another example, test agent106may be configured to generate a SIP INVITE request to be sent from client device114to ES202via SUT116. In this example, the SIP INVITE request may be for requesting a call or communication session (e.g., SIP dialog) to be established. In another example, test agent106may be configured to generate a REST API request to be sent from client device114to ES202via SUT116. In this example, the REST API request may be for requesting data from an SQL database. In step508, a service response including at least one attack vector data portion may be sent from ES202to client device114. In some embodiments, a service response may include an HTTP response, an HTTPS response, a SIP response, or a REST API response. For example, ES202may be configured to generate an HTTP response to be sent from ES202to client device114via SUT116. In this example, the HTTP response may include an attack vector data portion selected from data storage108, where the selected attack vector data portion may be based on test configuration information from TC104. In another example, ES202may be configured to generate a SIP response to be sent from ES202to client device114via SUT116. In this example, the SIP response may include an attack vector data portion selected from data storage108. In another example, ES202may be configured to generate a REST API response to be sent from ES202to client device114via SUT116. In this example, the REST API response may include an attack vector data portion selected from data storage108. In some embodiments, an emulated server may select an attack vector data portion that is sent to client device114from a data store (e.g., data storage108) containing attack vector data portions. In some embodiments, the selection of an attack vector data portion by ES202may be based on test configuration information received from TC104. In some embodiments, an attack vector data portion may include a URL, a payload, or content that can compromise security or performance of the client device. For example, ES202may send service responses with malicious payloads to client device114. In this example, a malicious payload may appear to be a web page or media content but may include ransomware code or a computer virus. In step510, a performance metric associated with SUT116(e.g., a firewall or a security device) may be determined by TC104using data obtained by at least one test related entity. In some embodiments, SUT116may inspect communications between client device114and ES202. In some embodiments, SUT116may include a security device, a firewall, and/or an IPS. In some embodiments, determining a performance metric may include analyzing data obtained by at least one test related entity to determine whether a mitigation action was taken by SUT116in response to an attack vector data portion. In some embodiments, analyzing data may include determining whether a mitigation action taken by SUT116was appropriate based on a viable mitigation action known to test system102(e.g., TC104). For example, test system102or a related entity (e.g., TC104) may be aware of one or more viable mitigation actions for various attack vector data portions.
In this example, test system102or a related entity (e.g., TC104) may use this knowledge when analyzing or evaluating mitigation actions taken by SUT116during a test session. In some embodiments, at least one test related entity that sends data to test system102or a related entity (e.g., TC104) may include client device114, test agent106, SUT116, ES202, EDNS200, and/or one or more taps110(e.g., a physical communications tap or a virtual communications tap). It will be appreciated that example process500is for illustrative purposes and that different and/or additional actions may be used. It will also be appreciated that various actions described herein may occur in a different order or sequence. It should be noted that test system102, TC104, and/or functionality described herein may constitute a special purpose computing device. Further, test system102, TC104, and/or functionality described herein can improve the technological field of testing network nodes by providing mechanisms for providing and/or performing network security testing using an emulated server. For example, a test system that is capable of emulating a web server and/or other external nodes can test various security rules of SUT116(e.g., site- and/or domain-specific rules and/or external network rules) related to inspecting and mitigating impact of suspicious and/or malicious behaviors (e.g., traffic with attack vector data portions) from different sources, e.g., nodes external to private network112. In this example, the test system may test how SUT116responds to various types of malicious traffic appearing to come from a Facebook domain, a Cloud domain, a Gmail domain, and/or other domains or content providers that SUT116may be specifically configured (e.g., using content- or domain-specific rules) to handle. The subject matter described herein for network security testing using at least one emulated server improves the functionality of test platforms and/or test tools by emulating external nodes, applications, and/or services (e.g., EDNS200and ES202) thereby allowing a test system to efficiently and effectively test how a security device (e.g., a firewall and/or an IPS) associated with inspecting network communications responds to various types of suspicious and/or malicious traffic that appears to be from external nodes. It should also be noted that a computing platform that implements subject matter described herein may comprise a special purpose computing device (e.g., a network device test device) usable to test a security device that inspects network communications. It will be understood that various details of the subject matter described herein may be changed without departing from the scope of the subject matter described herein. Furthermore, the foregoing description is for the purpose of illustration only, and not for the purpose of limitation, as the subject matter described herein is defined by the claims as set forth hereinafter.
11943249 | DESCRIPTION OF EMBODIMENTS The embodiments of the present disclosure will be described in detail below with reference to examples thereof as illustrated in the accompanying drawings, throughout which same or similar elements are denoted by same or similar reference numerals. The embodiments described below with reference to the drawings are illustrative only, and are intended to explain, rather than limiting, the present disclosure. A cyberspace coordinate system creation method and apparatus based on an AS according to an embodiment of the present disclosure will be described below with reference to the accompanying drawings. The cyberspace coordinate system creation method based on the AS according to an embodiment of the present disclosure will be described first with reference to the figures. FIG.1is a flowchart illustrating a cyberspace coordinate system creation method based on an AS according to an embodiment of the present disclosure. As illustrated inFIG.1, the cyberspace coordinate system creation method based on the AS includes the following steps. At step S101, a cyberspace coordinate system is determined. Further, in an embodiment of the present disclosure, the cyberspace coordinate system is a two-dimensional coordinate system. Determining the cyberspace coordinate system includes mapping, based on a predetermined algorithm, a one-dimensional AS Number (ASN) to a two-dimensional coordinate space. Mapping, based on the predetermined algorithm, the one-dimensional ASN to the two-dimensional coordinate space includes performing dimension ascending mapping on the ASN by using a Hilbert mapping algorithm, and determining that coordinates of the cyberspace coordinate system collectively represent an AS attribute of cyberspace, which is similar to expressing national information by latitudes and longitudes in a geospatial model. Specifically, it is determined that the cyberspace coordinate system adopts the ASN as a basic vector, and maps the one-dimensional ASN to the two-dimensional coordinate system based on the predetermined algorithm. At step S102, a framework for a three-dimensional cyberspace coordinate system is constructed. Further, in an embodiment of the present disclosure, the method includes: orthogonalizing a time sequence of Internet Protocol (IP) address allocation under the AS determined as a third-dimension basic vector to a two-dimensional AS coordinate space, and analyzing and mapping an IP address of a key attribute of cyberspace. Specifically, the third-dimension basic vector is determined. The time sequence of the IP address allocation under the AS is orthogonalized to the two-dimensional AS coordinate space to construct a framework for the cyberspace coordinate system. The IP address of the key attribute of cyberspace is analyzed and mapped. Further, the framework for the three-dimensional cyberspace coordinate system is a three-dimensional coordinate system that includes a third-dimension coordinate axis perpendicular to a basic vector of a two-dimensional AS coordinate space. Orthogonalizing the time sequence of the IP address allocation under the AS determined as the third-dimension basic vector to the AS coordinate space includes: a third coordinate axis perpendicular to the two-dimensional coordinate system representing the time sequence of the IP address allocation under the AS, and a positive direction representing sequence increment. 
Analyzing and mapping the IP address of the key attribute of cyberspace includes: modeling cyberspace by defining a three-dimensional coordinate system space, and locating any cyberspace resource element based on an IP address of a key identifier of communication, in which a Z-axis mapping algorithm is described as sorting, based on allocation time, all IP addresses under the jurisdiction of a certain AS in ascending order, and sorting IP addresses under the same allocation time from small to large in decimal, a serial number is mapped to a third-dimension coordinate system, and the IP address can be analyzed and mapped by locating a coordinate (x, y, z) based on the above Hilbert algorithm and the Z-axis mapping algorithm; and analyzing and mapping the IP address by locating a coordinate based on a Hilbert algorithm and the Z-axis mapping algorithm. At step S103, a cyberspace map model is constructed based on the cyberspace coordinate system and the framework for the three-dimensional cyberspace coordinate system. Further, in an embodiment of the present disclosure, constructing the cyberspace map model based on the cyberspace coordinate system and the framework for the three-dimensional cyberspace coordinate system includes: determining that a hierarchical structure of a cyberspace is divided into three layers: the AS, a network, and an IP; and defining a sphere mode of the cyberspace. Specifically, the constructed cyberspace map model intuitively expresses a hierarchical network structure and an AS topological connection relationship. At step S104, an application scenario corresponding to the constructed cyberspace map model is designed, and visualization processing is performed on the application scenario. The cyberspace coordinate system based on the AS is designed to construct the cyberspace map model. The visualization expression of cyberspace is supported. Requirements such as multi-scale traversal in cyberspace, network topology visualization, and locating any object in cyberspace are supported. Hierarchical and scalable map characteristics are realized to meet visualization requirements of displaying the distribution of objects in cyberspace from different hierarchies. Also, a concept of the topological thematic map is introduced to visualize the network topology. Satisfying the requirements of the multi-scale traversal in cyberspace, object locating, and the network topology visualization includes determining that the hierarchical structure of cyberspace is divided into three layers: the AS, the network, and the IP. Compared with a geographic map model, the AS is analogous to countries, the network corresponds to provinces and cities, and the IP addresses are equivalent to house numbers of residential buildings. A hierarchical and multi-scale expression of resource elements of cyberspace, such as the network composition under the AS and an IP composition under the network, is performed based on a rectangular tree diagram, so as to meet visualization and locating requirements of different levels of network management persons. A sphere mode of cyberspace is defined to realize the network topology visualization. Take the AS topology as an example, on a basis of the number of IP addresses under the jurisdiction of the AS and the Border Gateway Protocol (BGP) data, the force-directed algorithm is used to map the AS to a new three-dimensional coordinate (X, Y, Z). 
The size of a sphere of the AS represents the number of IP addresses, and a flying line represents the topological connection relationship between ASes. The sphere mode of cyberspace is also suitable for an internal AS network topology and an IP topology, thereby realizing management over elements and links. Further, in an embodiment of the present disclosure, designing the application scenario corresponding to the constructed cyberspace map model, and performing the visualization processing on the application scenario includes: calculating, based on a mapping algorithm, coordinates of an attack source and a destination IP address in a three-dimensional cyberspace coordinate system based on the AS to visualize real-time attack scenarios, in which a flying line represents an attack direction, and a line thickness represents attack traffic. Specifically, designing map an application scenario and implementing visualization includes two scenarios. Scenario 1: a real-time attack scenario in cyberspace. Coordinates (X, Y, Z) of an attack source and a destination IP address in a three-dimensional cyberspace coordinate system based on the AS are calculated based on a mapping algorithm to visualize the real-time attack scenario. A flying line represents an attack direction. A line thickness represents attack traffic. The traffic characteristic of a network attack is observed by using the AS as a granularity to analyze an attack behavior, thereby assisting a security analyst in better understanding and defending against attacks. Scenario 2: a DDOS attack scenario in cyberspace. The coordinates (X, Y, Z) of the attack source and the destination IP address in the three-dimensional cyberspace coordinate system based on the AS are calculated based on the mapping algorithm. The line thickness represents attack traffic. Compared with a geographic map, the AS, the network, and IP address information of the attack source and a target can be obtained through different hierarchies of expansions, such that security problems may be quickly located, and vulnerability diagnosis and repair may be performed. In addition, drawing the above attack scenarios in the topological thematic map may visually display a topological path that an attack went through, thereby realizing topological traceability and discovery of the attack source, and guiding a change of connectivity of the Internet infrastructure when a network security attack is encountered. In the method, the cyberspace coordinate system architecture with the ASN as the basic vector is determined. Considering that cyberspace is a virtual information space, a free flow of massive information constitutes the instantaneous diversity of cyberspace. Selecting the basic vector that may constantly characterize the nature of cyberspace is very important for building the cyberspace map model. The AS, as a collection of networks and IP addresses under the control of a management agency, is equivalent to a country in geographic space, and is a basic unit of business exchanges and communications between domains in cyberspace. The Hilbert mapping algorithm is selected to realize visualization of dimension ascending of the one-dimensional ASN. Also, the framework for the cyberspace coordinate system with the time sequence of the IP address allocation under the AS determined as the third-dimension basic vector is constructed, so as to express the key attribute of cyberspace information communication (that is, IP address information). 
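As a non-limiting illustration, the following Python sketch outlines the (x, y, z) mapping idea. It uses the standard Hilbert distance-to-coordinate conversion as a stand-in for the ASN2xy/Rot formulation detailed below, so the exact coordinates it yields may differ from those in the later examples, and it assumes per-IP allocation timestamps are available; the AS4538 data shown is illustrative only.

# Hedged sketch of mapping an (ASN, IP) pair to (x, y, z); standard Hilbert routine, illustrative data.
import ipaddress

def hilbert_d2xy(order, d):
    """Map a one-dimensional index d onto a 2^order x 2^order Hilbert curve (standard algorithm)."""
    x = y = 0
    s, t = 1, d
    n = 1 << order                     # side length of the square grid
    while s < n:
        rx = 1 & (t // 2)
        ry = 1 & (t ^ rx)
        if ry == 0:                    # rotate the quadrant when needed
            if rx == 1:
                x, y = s - 1 - x, s - 1 - y
            x, y = y, x
        x += s * rx
        y += s * ry
        t //= 4
        s *= 2
    return x, y

def z_index(ip, allocations):
    """Z axis: position of ip among (ip, allocation_time) pairs for one AS,
    sorted by allocation time and then by the numeric value of the address."""
    ordered = sorted(allocations, key=lambda a: (a[1], int(ipaddress.ip_address(a[0]))))
    return [a[0] for a in ordered].index(ip) + 1

# Illustrative use for an address said to belong to AS4538 (timestamps are made up):
allocations = [("166.111.8.2", "2003-05-01 10:00:00"), ("101.4.0.1", "2001-01-01 09:00:00")]
x, y = hilbert_d2xy(8, 4538)           # order 8 covers the 16-bit ASN space [0, 65535]
z = z_index("166.111.8.2", allocations)
print((x, y, z))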
The cyberspace coordinate system creation method based on the AS of the present disclosure will be described below with reference to the figures and specific embodiments. As illustrated inFIG.2, the one-dimensional ASNs of a specified range are mapped to the two-dimensional coordinate space based on the Hilbert mapping algorithm. The AS, as the collection of networks and IP addresses under the control of the management agency, is equivalent to a country in the geographic space, and is the basic unit of business exchanges and communications between domains in cyberspace. The Hilbert mapping algorithm, as a dimension ascending algorithm, may map the one-dimensional ASNs of the specified range to a two-dimensional coordinate system space, which helps guarantee the proximity of ASNs, and construct a basic two-dimensional plane of the cyberspace coordinate system. The Hilbert mapping algorithm will be briefly introduced. An order represents a degree of expansion of Hilbert mapping and a representable range. A detailed algorithm of mapping the ASN to a two-dimensional coordinate (X, Y) is as follows.

Algorithm 1: ASN2xy(ASN, ASNB, n), where ASN is a decimal representation of the ASN, ASNB is a binary representation of the ASN, ASNB = (h[2n−1] h[2n−2] ... h[1] h[0])_2, and n represents an order of a Hilbert curve.
1: Data = {0, 1, 2, ..., n−1}
2: <v0, v1> = Rot(ASNB, n, 0)
3: for each num k ∈ Data do
4:   x[k] = (v[0,k] × (~h[2k])) ⊕ v[1,k] ⊕ h[2k+1]
5:   y[k] = (v[0,k] + h[2k]) ⊕ v[1,k] ⊕ h[2k+1]
6: Hout = <x, y> = <(x[n−1] x[n−2] ... x[0])_2, (y[n−1] y[n−2] ... y[0])_2>
8: return Hout

Algorithm 2: Rot(ASNB, n, k), where ASNB represents the binary representation of the ASN, ASNB = (h[2n−1] h[2n−2] ... h[1] h[0])_2, n represents the order of the Hilbert curve, and k represents an initial order.
1: if k == n
2:   v[0,n−1] = 0, v[1,n−1] = 0
3: else
4:   <v0, v1> = Rot(ASNB, n, k+1)
5:   v[0,k] = v[0,k+1] ⊕ h[2k] ⊕ ~h[2k+1]
6:   v[1,k] = v[1,k+1] ⊕ ((~h[2k]) × (~h[2k+1]))
7: return <v[0,n−1] v[0,n−2] ... v[0,0], v[1,n−1] v[1,n−2] ... v[1,0]>

The specified range of the ASNs is [0, 2^(2n)−1], and a corresponding order of the Hilbert mapping algorithm is n.FIG.2respectively illustrates two-dimensional coordinate spaces obtained by mapping the ASNs of the specified ranges of [0,3], [0, 15], [0,63], and [0,255]. The above algorithm is applied to the entire ASN space [0,65535] for dimension ascending mapping, and a corresponding Hilbert order n is equal to 8, such that the two-dimensional cyberspace coordinate system based on the AS may be determined. FIG.3is a schematic diagram showing a framework of a cyberspace three-dimensional coordinate system based on an AS according to an embodiment of the present disclosure. As illustrated inFIG.3, on the basis of the above two-dimensional coordinate system, the time sequence of the IP address allocation under the AS is determined as the third-dimension basic vector, and is orthogonalized to the two-dimensional AS coordinate space. A positive direction represents sequence increment. Specifically, the Z-axis mapping algorithm is defined as follows. S1: Suppose that there are n IP addresses, {IP1, IP2, IP3, IP4, IP5, IP6, . . . , IPn}, under the jurisdiction of the AS. Also, corresponding allocation time {T1, T2, T3, T4, T5, T6, . . . , Tn} accurate to seconds is collected. S2: All IP addresses under the AS are sorted in ascending order in accordance with the allocation time. Under the same allocation time, a new IP address sequence {NIP1, NIP2, NIP3, NIP4, NIP5, NIP6, . . . , NIPn} is obtained by sorting the IP addresses based on decimal values of the IP address from small to large.
A serial number is mapped to the third-dimension coordinate system. As illustrated inFIG.3, Z=10,000 represents that a certain IP address is located at a 10,000-th position in the new IP address sequence sorted based on the allocation time. FIG.4is a flowchart illustrating a cyberspace coordinate system creation method based on an AS according to a specific embodiment of the present disclosure. As illustrated inFIG.4, the cyberspace coordinate system creation method based on the AS according to an embodiment includes the following steps. (1) It is determined that the cyberspace coordinate system uses the ASN as the basic vector. The one-dimensional ASN is mapped to the two-dimensional coordinate system based on the predetermined algorithm. In an embodiment, the cyberspace coordinate system is the two-dimensional coordinate space obtained based on the above Hilbert mapping algorithm, and the ASN is determined as the basic vector to represent an AS element of cyberspace. Specifically, considering that cyberspace is a virtual information space, the free flow of massive information constitutes the instantaneous diversity of cyberspace. Selecting the basic vector that may constantly characterize the nature of cyberspace is very important for constructing the cyberspace map model. The AS, as the collection of networks and IP addresses under the control of the management agency, is equivalent to a country in the geographic space, and is the basic unit of business exchanges and communications between domains in cyberspace. The two-dimensional coordinate system of cyberspace based on the AS is constructed based on the above Hilbert mapping algorithm to determine a unified and constant backboard, thereby showing AS elements of cyberspace. (2) The framework for the three-dimensional cyberspace coordinate system is constructed. The time sequence of IP address allocation under the AS determined as the third-dimension basic vector is orthogonalized to the two-dimensional AS coordinate space. The IP address of the key attribute of cyberspace is analyzed and mapped. Specifically, since the two-dimensional coordinate system is limited to only expressing AS elements, the IP address, as a unique fingerprint allocated when a device is connected to the network in cyberspace, provides a location and a network interface identifier of a host in cyberspace, and is the key identifier of all cyberspace resource elements. Therefore, in the present disclosure, the time sequence of the IP address allocation under the AS is added on a basis of the two-dimensional coordinate system as a third-dimension vector orthogonal to the AS to construct the framework for the three-dimensional cyberspace coordinate system, thereby expressing IP information. For example, a certain IP address 166.111.8.2 belongs to AS4538. A plane coordinate (24, 76) is obtained based on an ASN2xy algorithm. All IP addresses under the jurisdiction of AS4538 are sorted in a chronological order based on the above Z-axis mapping algorithm, and a sequence number 8902 of the allocation time of 166.11.8.2 is determined as a Z-axis representation of the IP address. Therefore, in the three-dimensional coordinate system, (24, 76, 8902) will be the coordinates of the IP address, and indicates the corresponding networked device on the Internet, thereby expressing resource information of cyberspace. Specifically, compared with the cyberspace coordinate system based on IP addresses, the present disclosure may better solve essential problems of cyberspace. 
For example, the IP address space is too large to be fully expressed; it is difficult to find a good visualization solution for 2^32 (i.e., about 4 billion) addresses; and the discontinuous allocation of IP address segments results in scattered IP addresses under the same AS. Taking AS4538 as an example, IP address segments under AS4538 include 101.4.0.0/14, 101.5.0.0/16, 101.77.0.0/16, 111.186.0.0/15, 114.212.0.0/16, etc. In the visualization process, the same AS is scattered in various positions of an IP address coordinate system, and thus the expression result is unsatisfactory. (3) The cyberspace map model is constructed to support the visualization expression of cyberspace, and meet requirements such as the multi-scale traversal in cyberspace, the object locating, and the network topology visualization. Specifically, although the cyberspace coordinate system based on the AS may intuitively express an IP granularity, it is insufficient to show a hierarchical presentation of details of cyberspace. The requirements of the multi-scale traversal in cyberspace and locating various Internet objects cannot be met through the cyberspace coordinate system. Constructing the cyberspace map model with the concept of the geographic map model needs to satisfy map characteristics of scalability and hierarchy in consideration of different visualization requirements of different users for elements distribution in cyberspace, resource information, connection relations, and the like. Also, the cyberspace is the second largest space parallel to the geographic space. Since there are corresponding geographic maps and thematic maps in the geographic space that may express mountains, rivers, and city streets in a scalable manner, the cyberspace map model also needs to design some thematic map modules to achieve a multi-dimensional display of specific network details. The present disclosure takes the topological thematic map as an example for designing and explanation.
For example, China Education and Research Network (Cernet) includes a backbone network, and campus networks of more than 100 colleges and universities including Tsinghua University, Peking University, Wuhan University, Zhengzhou University, and Hunan University. Each campus network may also be divided into different regional Local Area Networks (LANs), such as the LAN of CERNET under the campus network of Tsinghua University. IP resources are used as leaf nodes to fill the network nodes, thereby constructing a cyberspace resource tree. The rectangular tree diagram, as a chart structure to realize an intuitive visualization of the hierarchical structure, uses a rectangle to represent nodes in a tree-shaped hierarchical structure. A hierarchical relationship between parent and child nodes is expressed by metaphors of mutual nesting between the rectangles. (4) Application scenarios of the map are designed and visualized. The present disclosure provides a map device that displays cyberspace in multiple dimensions. Specifically, it is designed that the application scenarios of the map include the real-time attack scenario and the DDOS attack scenario in cyberspace. The above cyberspace map models respectively define concepts such as a cyberspace coordinate system based on the AS, the hierarchical structure of the cyberspace map, the topological thematic map, etc., on which basis statistics results of data are visualized, and security scenarios of cyberspace are presented in different dimensions. Scenario 1: the real-time attack scenario in cyberspace. Real-time attack data of the global Internet in a honeypot is collected. Coordinates (X, Y, Z) of the attack source and the destination IP address in the three-dimensional cyberspace coordinate system based on the AS are calculated based on the above mapping algorithm to visualize the real-time attack scenario. The flying line represents the attack direction, and the line thickness represents the attack traffic. The traffic characteristic of the network attack is observed by using the AS as a granularity to analyze the attack behavior, thereby assisting the security analyst in better understanding and defending against attacks. Scenario 2: the DDOS attack scenario in cyberspace. On a basis of data of a DDOS attack on a Tsinghua server, the coordinates (X, Y, Z) of the attack source and the destination IP address in the three-dimensional cyberspace coordinate system based on the AS are calculated based on the above mapping algorithm. The line thickness represents the attack traffic. Compared with the geographic map, obtaining information of the AS, the network, and the IP address of the attack source and the target through expansions at different hierarchies may quickly locate security problems and perform vulnerability diagnosis and repair. In addition, drawing the above attack scenarios in the topological thematic map may visually display the topological path that the attack went through, thereby realizing the topological traceability and discovery of the attack source, and guiding the change of the connectivity of the Internet infrastructure when the network security attack is encountered. FIGS.5A and5Bare a schematic diagram showing a hierarchical and multi-scale expression of resource elements of cyberspace based on a rectangular tree diagram in a cyberspace map model. Specifically, AS4538 in a cyberspace map based on the AS is clicked to visualize the resource composition of the AS node. All large networks under AS4538 are visualized. 
Here, the large networks only include the Cernet network. The size of the rectangle represents that a number of IP addresses of the Cernet network is 17,170,688. A further expansion is performed to visualize the distribution of small campus networks under the Cernet network node, as illustrated inFIG.5A. Nodes of the campus network are orthogonal to each other and do not overlap. The number of IP addresses is represented by the size of the rectangle, and the label presents specific network information, e.g., a range of IP addresses of a campus network GUANGZTC-CN of the South China Normal University is 202.192.32.0-202.192.47.0, which contains 3,840 IP addresses. A campus network administrator may click on the campus network to enter an IP hierarchy and visualize IP resource nodes under nodes of the campus network. For example,FIG.5Billustrates IP address information under GUANGZTC-CN. Depending on degrees of attention of different resources received from a management person, weights are assigned to resources to which different IP addresses belong, e.g., a weight of a server is 3, a weight of a host is 1, and a weight of a printer is 2. The size of the rectangle is used for distinguishing. Through hierarchical, scalable and intuitive expression, fine-grained resource information in cyberspace may be presented to meet visualization requirements of management persons at different levels. Specifically, the above cyberspace map model only realizes multi-scale visualization and locating of elements in the network resource. Considering that topological connection is an important attribute of cyberspace, the topological connection is often used to express a connection relationship of cyberspace units for realizing management over elements and links. It is necessary to introduce the concept of the topological thematic map to realize the visualization of the topology and display a structure of the topological connection. The research is mainly carried out from the AS hierarchy, the network hierarchy, and the IP hierarchy. Each cyberspace unit may be composed of cyberspace subunits. Division and dimension descending are performed on cyberspace to achieve multi-level and fine-grained cognition of cyberspace. Specifically, topology visualization is supported. When the topological thematic map is designed to achieve multi-dimensional display of specific network details, the topological connection relationship for a certain AS may be directly represented by flying lines in the cyberspace coordinate system based on the AS. The topological connection relationship of the certain AS may be drawn based on Border Gateway Protocol (BGP) data. However, considering that the topological connection relationship between global network units is relatively complicated, the drawing of a global network topology may cause severe crossing between flying lines, resulting in a poor visualization effect. In this case, an additional cyberspace sphere mode may be designed as the topological thematic map. A new mapping algorithm is adopted to remap the AS to the three-dimensional space. It is expected to find a new arrangement that minimizes crossing between the flying lines. The force-directed algorithm, as a classic graph layout algorithm, calculates the position to which the combined force of gravity and repulsion moves each node, and is applied here to the visualization of the AS topological thematic map to produce a more reasonable arrangement.
Further, the AS nodes may be used as a set of nodes in the force-directed algorithm. A BGP connection between ASes is used as an edge. The algorithm is specifically implemented as follows.

Algorithm 3: TopologyMap(N, V, M, S, ks)
Input: N represents a set of AS nodes, V represents a set of topological connections between ASes, M represents a number of iterations of the algorithm, S represents a size of a canvas, and ks represents a constant number for calculating gravity
1:  for each node n ∈ N // set an initial position of a random node
2:    coordinate(n) = [random(x), random(y), random(z)]
3:  while i < M do // M iterations
4:    for each node n1 ∈ N // calculate a displacement caused by a Coulomb repulsion between two points
5:      for each node n2 ∈ N
6:        k = (S/N.size())^2
7:        distance(n1, n2) += k/∥n1 − n2∥^2
8:    for each edge v ∈ V // n1, n2 are two end points of v; calculate a displacement caused by Hooke's gravity
9:      distance(n1, n2) −= ks(∥n2 − n1∥)
10:   for each d ∈ distance // recalculate a new position of the node based on an offset displacement
11:     coordinate(n1) is updated by d
12:     coordinate(n2) is updated by d

FIG.6illustrates a topological thematic map that intuitively expresses an AS topological connection relationship based on a force-directed algorithm in a cyberspace model. The AS is a global routing strategy unit. A traffic relationship of the AS defines a high-level global Internet topology. As illustrated inFIG.6, taking the AS topology as an example, on a basis of the number of IP addresses and the BGP data under the jurisdiction of the AS, the above force-directed algorithm is used to map the AS to the new three-dimensional coordinate (X, Y, Z). The size of a sphere of the AS represents the number of IP addresses under the AS. When a specific AS is selected, the topological connection directly connected to the specific AS will be displayed. The flying line represents the topological connection relationship. The line thickness represents traffic information. In addition, the topological thematic map is also suitable for the internal network topologies (e.g., IP topology) under the AS, thereby guiding a network management person to change the connectivity of the Internet infrastructure when a security attack is encountered, assisting the network management person in checking hardware configuration, determining a position to add a new route, and finding bottlenecks and faults in the network. In the present disclosure, by taking natural characteristics of the network, e.g., the AS and the IP address, the framework for the cyberspace coordinate system is constructed, the cyberspace map model is established, and the visualization of multi-dimensional information in cyberspace based on the unified and constant backboard is realized. The AS topology, the IP address composition, the network resource element information, the hierarchical structure, and the like are included, thereby intuitively and effectively expressing cyberspace. Compared with a method for creating an architecture of the cyberspace coordinate system on a basis of the IP addresses and logical ports, an essential problem of visualization may be better solved, i.e., the poor visualization effect caused by the discontinuous allocation of IP segments under an AS and a network, and the hierarchical structure of the network and the AS topological connection relationship may be expressed intuitively. In addition, in the present disclosure, a map device that displays cyberspace in multiple dimensions is provided.
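For illustration, a minimal Python sketch of the force-directed layout described in Algorithm 3 is given below. The node set, the edge set, the canvas size S, the gravity constant ks, and the iteration count M follow the notation of Algorithm 3; the random initialization range and the displacement update details are simplifying assumptions rather than the exact implementation of the present disclosure.

import random
import math

def topology_map(nodes, edges, iterations=50, canvas=1000.0, ks=0.01):
    # Set an initial random position for every AS node (Algorithm 3, lines 1-2).
    pos = {n: [random.uniform(0, canvas) for _ in range(3)] for n in nodes}
    k = (canvas / len(nodes)) ** 2  # repulsion constant (line 6)

    for _ in range(iterations):  # M iterations (line 3)
        disp = {n: [0.0, 0.0, 0.0] for n in nodes}

        # Coulomb repulsion between every pair of nodes (lines 4-7).
        for n1 in nodes:
            for n2 in nodes:
                if n1 == n2:
                    continue
                delta = [pos[n1][i] - pos[n2][i] for i in range(3)]
                dist = max(math.sqrt(sum(d * d for d in delta)), 1e-9)
                for i in range(3):
                    disp[n1][i] += (delta[i] / dist) * (k / dist ** 2)

        # Hooke's attraction along each BGP edge (lines 8-9).
        for n1, n2 in edges:
            delta = [pos[n1][i] - pos[n2][i] for i in range(3)]
            dist = max(math.sqrt(sum(d * d for d in delta)), 1e-9)
            for i in range(3):
                pull = (delta[i] / dist) * ks * dist
                disp[n1][i] -= pull
                disp[n2][i] += pull

        # Recalculate node positions from the accumulated displacement (lines 10-12).
        for n in nodes:
            for i in range(3):
                pos[n][i] += disp[n][i]
    return pos

# Example with a few AS nodes connected by BGP edges.
layout = topology_map(["AS4538", "AS4134", "AS7497"],
                      [("AS4538", "AS4134"), ("AS4538", "AS7497")])

In practice a layout of this kind would also apply a cooling schedule or step-size limit to stabilize the positions, which Algorithm 3 leaves implicit.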
The map device is applied to visualization scenarios such as network security attacks and network management, so as to fill gaps in spatial theoretical models in cyberspace research and promote the development of cyberspace security and surveying and mapping. In the present disclosure, a map creation device for multi-dimensional display of cyberspace of the cyberspace coordinate system based on the AS is designed. FIG.7illustrates a block diagram showing a structure of a map creation device. The device includes a determination module1, a creation module2, a topological thematic map module3, a real-time attack scenario module4, and a DDOS attack scenario module5. The determination module1is configured to determine the cyberspace coordinate system based on the AS, and realize the analysis and mapping of the IP addresses of the cyberspace elements. The creation module2is configured to create the cyberspace map model, determine that the hierarchical structure of cyberspace is divided into three hierarchies: the AS, the network, and the IP, and express, based on the rectangular tree diagram, the resource elements of cyberspace in a hierarchical and scalable manner. The topological thematic map module3is configured to define the sphere mode of cyberspace to display the topological thematic map, and use the force-directed algorithm to map the AS to the new three-dimensional coordinate (X, Y, Z), so as to realize management over the cyberspace elements and links. The real-time attack scenario module4is configured to visualize the real-time attack scenario in the cyberspace map model, which may assist the security researchers in better understanding and defending against attacks. The DDOS attack scenario module5is configured to visualize the DDOS attack scenario in the cyberspace map model, which may quickly locate security issues and perform vulnerability diagnosis and repair. Further, the cyberspace coordinate system is a three-dimensional coordinate system. Further, the determining module1is specifically configured to: construct the three-dimensional cyberspace coordinate system, map the AS to the two-dimensional coordinate space based on the above predetermined algorithm, and determine orthogonalizing the time sequence of IP address allocation under the AS determined as the third-dimension basic vector to the two-dimensional AS coordinate space. The coordinate (x, y, z) of the IP address is located based on the Hilbert algorithm and the Z-axis mapping algorithm to realize the analysis and mapping of the IP address of the key attribute. Specifically,FIG.8illustrates mapping of the IP address of a key attribute of cyberspace in a three-dimensional coordinate system. By defining the three-dimensional coordinate system space to model the cyberspace, it is possible to locate any cyberspace resource element based on the IP address of the key identifier of communication.FIG.8illustrates an analysis and mapping of a global IP address space (232). Firstly, the ASN to which the IP addresses belong is located. Secondly, the coordinate (X, Y, Z) is located based on the above ASN2xy algorithm and the Z-axis mapping algorithm. The AS is used as the basic vector to express the IP address element of cyberspace. Further, the cyberspace map model is built on the cyberspace coordinate system based on the AS. 
The creation module2is specifically configured to construct the cyberspace map model, and support the requirements of the multi-scale traversal in cyberspace, the network topology visualization, and the object locating in cyberspace. The hierarchical structure refers to the geographic map model. The AS is analogous to countries, the network corresponds to provinces and cities, and the IP addresses are equivalent to house numbers of residential buildings. The hierarchical and scalable expression of the resource elements, such as the network composition under the AS and the IP composition under the network, of cyberspace is performed based on the rectangular tree diagram, so as to meet the visualization requirements of different levels of management persons. Further, the topological thematic map module3is specifically configured to realize the topology visualization of different hierarchical structures. Taking the AS hierarchy as an example, on a basis of the number of IP addresses and the BGP data under the jurisdiction of the AS, the force-directed algorithm is used to map the AS to the new three-dimensional coordinate (X, Y, Z). The size of a sphere of the AS represents the number of IP addresses under the AS, and the flying line represents the topological connection relationship. A network management person is guided to change the connectivity of the Internet infrastructure, check the hardware configuration, determine the position where new routers should be added, and find bottlenecks and faults in the network when a security attack is encountered. The real-time attack scenario module4is specifically configured to visualize the real-time attack scenario in the network map model, including the cyberspace coordinate system based on the AS and the topological thematic map. By observing traffic characteristics of the network attack at the AS granularity and analyzing attack behaviors, topological traceability and discovery of the attack source are realized to help the security analyst better understand and defend against attacks. FIGS.9A and9Billustrate a schematic diagram showing a real-time attack scenario module of a cyberspace map. As illustrated inFIG.9A, the global real-time attack scenario is visualized on the cyberspace coordinate system based on the AS. The coordinates (X, Y, Z) of the attack source and the destination IP address in the three-dimensional cyberspace coordinate system based on the AS are obtained based on the above mapping algorithm. The flying line represents the attack direction. The line thickness represents the attack traffic. An attack data analysis is attached. It may be observed that the IP is stacked on a corresponding AS. There are skyscrapers (large AS) and small thatched houses (small AS). A large AS launches attacks from a number of IP addresses and has large firepower. Here, the line thickness represents the firepower of an attack. Also, the large AS receives attacks from many other ASes because it presents a large target. A small AS (small thatched house) has weak firepower but suffers fewer attacks. Therefore, it is convenient to observe traffic characteristics of the network attack by using the AS as the granularity and analyze the attack behavior.FIG.9Billustrates drawing the real-time attack scenario in the AS topological thematic map. Some large ASes at the center are often both initiators of real-time attacks and attack targets of other ASes.
In addition, clicking on an AS may display the topological connections between the ASes, thereby realizing the topological traceability and discovery of the attack source. The DDOS attack scenario module5is specifically configured to visualize the DDOS attack scenario in the network map model. Compared with the geographic map, the AS, the network, and the IP address information of the attack source and the target may be obtained through different hierarchies of expansions, thereby quickly locating security issues, and performing vulnerability diagnosis and repair. Also, shielding data packets transmitted from an attack source IP address is also an effective defense against a DDOS behavior. FIGS.10A and10Bare schematic diagrams showing a Distributed Denial of Service (DDOS) attack scenario module of a cyberspace map according to an embodiment of the present disclosure. As illustrated inFIG.10A, the DDOS attack scenario on the server of the Tsinghua University is visualized on the cyberspace coordinate system based on the AS. Coordinates (X, Y, Z) of DDOS puppet hosts and the destination IP addresses in the three-dimensional cyberspace coordinate system based on the AS are calculated. The flying line represents the attack direction. The line thickness represents the attack traffic. The attack data analysis is attached. Compared with the geographic map, which may only be able to refine network resources in the geographic space, the present disclosure realizes that clicking on the AS where the puppet host is located can expand, based on the rectangular tree diagram, the cyberspace resource element where the puppet host is located hierarchy by hierarchy, thereby displaying the network information, the IP segment information, and the IP information of the attack source in a fine-grained manner, and shielding the data packets transmitted from the IP address segment of the attack source for temporary protection.FIG.10Billustrates drawing the DDOS attack scenario on the AS topological thematic map. By viewing topological connection relationships of many puppet hosts, a possible location of an attacker is analyzed based on cross information, and the attacker may be traced and discovered. In addition, understanding the topology of a target host may guide the management persons to change the connectivity of the Internet infrastructure in the event of a security attack. In the cyberspace coordinate system creation method based on the AS according to the present disclosure, the three-dimensional cyberspace coordinate system is constructed by determining that the ASN is orthogonal to the time sequence of the IP address allocation under the AS, so as to realize precise locating and description of the IP address of a unique identifier in cyberspace. Compared with the cyberspace coordinate system based on the IP addresses, essential problems of the cyberspace may be better solved, e.g., a large space occupied by the IP addresses, and an unsatisfying expression effect caused by discontinuous allocation of the IP segments under the AS and the network. On the basis of the above description, constructing the cyberspace map model with the concept of the geographic map model supports the multi-scale traversal in cyberspace, the network topology visualization, and the object locating in cyberspace. 
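To illustrate the hierarchical, scalable expression described above, the following Python sketch models the AS, network, and IP hierarchy as a simple weighted tree, where the total weight of a node (the number or management weight of the IP resources it contains) would drive the rectangle size in the rectangular tree diagram. The AS number, the network names, and the weights of 3, 1, and 2 come from the description above; the individual device IP addresses are made-up examples.

class ResourceNode:
    """One node of the cyberspace resource tree (AS, network, or IP resource)."""
    def __init__(self, name, weight=0):
        self.name = name
        self.weight = weight      # e.g., IP count or management weight of a leaf
        self.children = []

    def add(self, child):
        self.children.append(child)
        return child

    def total_weight(self):
        # The rectangle area in the tree diagram is proportional to this value.
        if not self.children:
            return self.weight
        return sum(c.total_weight() for c in self.children)

# Hypothetical example: an AS containing one network with a few weighted IP resources.
as_node = ResourceNode("AS4538")
net = as_node.add(ResourceNode("CERNET"))
campus = net.add(ResourceNode("GUANGZTC-CN"))
campus.add(ResourceNode("202.192.32.10 (server)", weight=3))
campus.add(ResourceNode("202.192.32.11 (host)", weight=1))
campus.add(ResourceNode("202.192.32.12 (printer)", weight=2))

print(as_node.total_weight())  # 6: the relative area the AS rectangle would receive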
Visualization requirements of the distribution and the resource information of different cyberspace objects (the AS, the network, and the IP) of different users are taken into consideration, thereby realizing characteristics of scalability and hierarchy of the map. Also, the thematic map module is designed to realize the multi-dimensional display of details of network topology, so as to meet different visualization requirements of management persons. In addition, the present disclosure also completes the design of the real-time attack and DDOS scenarios and implements the visualization of the security scenarios of map applications. Intuitive renderings are convenient for the management persons to analyze traffic and realize the topological traceability and discovery of the attack source. Compared with conventional research based on mature theoretical models, the present disclosure provides a unified backboard for the visualization of multi-dimensional information in cyberspace, thereby filling in the gaps in the theoretical model of cyberspace maps. A cyberspace coordinate system creation apparatus based on an AS according to an embodiment of the present disclosure will be described below with reference to the accompanying drawings. FIG.11is a block diagram showing a structure of a cyberspace coordinate system creation apparatus based on an AS according to an embodiment of the present disclosure. As illustrated inFIG.11, the cyberspace coordinate system creation apparatus based on the AS includes a determination module100, a first construction module200, a second construction module300, and a design visualization module400. The determination module100is configured to determine a cyberspace coordinate system. The first construction module200is configured to construct a framework for a three-dimensional cyberspace coordinate system. The second construction module300is configured to construct a cyberspace map model based on the cyberspace coordinate system and the framework for the three-dimensional cyberspace coordinate system. The design visualization module400is configured to design an application scenario corresponding to a constructed cyberspace map model, and perform visualization processing on the application scenarios. The creation apparatus can realize the visualization of multi-dimensional information of cyberspace based on a unified and constant backboard, e.g., the AS topology, the IP address composition, the network resource element information, the hierarchical structure, and the like, and is suitable for visualization of a number of security attacks on the cyberspace and network management scenarios. Further, in an embodiment of the present disclosure, the cyberspace coordinate system is a two-dimensional coordinate system. The determination module is configured to map, based on a predetermined algorithm, a one-dimensional ASN to a two-dimensional coordinate space. Mapping, based on the predetermined algorithm, the one-dimensional ASN to the two-dimensional coordinate space includes performing dimension ascending mapping on the ASN by using a Hilbert mapping algorithm, and determining that coordinates of the cyberspace coordinate system collectively represent an AS attribute of cyberspace. 
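As an illustration of this dimension-ascending step, the sketch below uses the standard Hilbert-curve distance-to-coordinate conversion to turn a one-dimensional ASN into an (x, y) position. The curve order (here 8, covering 16-bit ASNs) and the helper name asn_to_xy are illustrative assumptions and are not necessarily the exact ASN2xy algorithm of the present disclosure.

def hilbert_d2xy(order, d):
    # Convert a distance d along a Hilbert curve of the given order into (x, y).
    x = y = 0
    t = d
    s = 1
    side = 1 << order
    while s < side:
        rx = 1 & (t // 2)
        ry = 1 & (t ^ rx)
        if ry == 0:                 # rotate or flip the quadrant when needed
            if rx == 1:
                x = s - 1 - x
                y = s - 1 - y
            x, y = y, x
        x += s * rx
        y += s * ry
        t //= 4
        s *= 2
    return x, y

def asn_to_xy(asn, order=8):
    # Dimension-ascending mapping of a one-dimensional ASN onto the 2D plane.
    return hilbert_d2xy(order, asn)

print(asn_to_xy(4538))   # e.g., the 2D position assigned to AS4538

Because the Hilbert curve preserves locality, ASNs that are close in the one-dimensional numbering remain close on the plane, which keeps the resulting backboard stable and readable.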
Further, in an embodiment of the present disclosure, the first construction module is configured to orthogonalize a time sequence of IP address allocation under the AS determined as a third-dimension basic vector to the two-dimensional AS coordinate space, and analyze and map an IP address of a key attribute of cyberspace. Further, in an embodiment of the present disclosure, the second construction module is configured to: determine that a hierarchical structure of a cyberspace is divided into three layers: the AS, a network, and an IP; and define a sphere mode of cyberspace. It should be noted that the above explanation of the embodiments of the cyberspace coordinate system creation method based on the AS is also applicable to the cyberspace coordinate system creation apparatus based on the AS according to the embodiments, and thus repeated description is omitted here. The cyberspace coordinate system creation apparatus based on the AS according to an embodiment of the present disclosure has specific advantages over conventional geographic coordinate system, the topological coordinate system, and the cyberspace coordinate system based on the IP address. The creation apparatus may realize the visualization of multi-dimensional information of cyberspace based on a unified and constant backboard, e.g., the AS topology, the IP address composition, the network resource element information, the hierarchical structure, and the like, and is suitable for visualization of a number of security attacks on the cyberspace and network management scenarios. In addition, terms such as “first” and “second” are used herein for purposes of description and are not intended to indicate or imply relative importance, or imply a number of indicated technical features. Thus, the feature defined with “first” and “second” may include one or more this feature explicitly or implicitly. In the description of the present disclosure, “a plurality of” means at least two, for example, two or three, unless specified otherwise. Reference throughout this specification to “an embodiment,” “some embodiments,” “an example,” “a specific example,” or “some examples,” means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present disclosure. The appearances of the above phrases in various places throughout this specification are not necessarily referring to the same embodiment or example of the present disclosure. Furthermore, the described particular features, structures, materials, or characteristics may be combined in any suitable manner in one or more embodiments or examples. In addition, different embodiments or examples and features of different embodiments or examples described in the specification may be combined by those skilled in the art without mutual contradiction. Although embodiments of the present disclosure have been shown and described above, it should be understood that above embodiments are just explanatory, and cannot be construed to limit the present disclosure. For those skilled in the art, changes, alternatives, and modifications can be made to the embodiments without departing from the scope of the present disclosure. | 48,262 |
11943250 | DESCRIPTION OF EMBODIMENTS An embodiment of a testing device according to this application will be described in more detail below in conjunction with drawings. Note that this embodiment is not intended to limit the present invention. [Configuration of First Embodiment] First, a configuration of a network having a testing device according to a first embodiment will be described with reference toFIG.1.FIG.1illustrates an example configuration of a network having a testing device according to the first embodiment. As shown inFIG.1, a network1includes a testing device10and a to-be-tested system20. In addition, the to-be-tested system20includes a network device21, a security device22, and a server23. The systems and devices in the network1are connected by, for example, any type of communication network including wired or wireless local area network (LAN) and virtual private network (VPN). The testing device10includes a test packet transmitting/receiving unit121, a monitoring unit122, a management unit123, and a storage unit13. The test packet transmitting/receiving unit121transmits a test packet for security resistance test to the devices included in the to-be-tested system20. The test packet transmitting/receiving unit121also receives a packet transmitted from the to-be-tested system20in response to the test packet. The monitoring unit122monitors the load situation of the devices in the to-be-tested system20. In addition, the management unit123performs setting and acquires and analyzes information related to the test packet transmitting/receiving unit121and the monitoring unit122. For example, in the example inFIG.1, the testing device10performs the test packet transmitting/receiving unit121and monitoring unit122according to the setting by the management unit123. Note that for example, a plurality of testing devices10may be distributed so that the test packet transmitting/receiving unit121, the monitoring unit122, and the management unit123are performed distributedly by the testing devices. Here, withFIG.2, the testing device10will be described.FIG.2illustrates an example configuration of the testing device according to the first embodiment. As shown inFIG.2, the testing device10includes an interface unit11, a control unit12, and a storage unit13. The interface unit11is an interface for communication control with other devices. For example, the interface unit11transmits and receives a packet to/from other devices through a network. In addition, the interface unit11is, for example, a network interface card such as a LAN card. The interface unit11includes a test packet interface111, a monitoring interface112, and a management interface113. The test packet interface111transmits and receives a packet when performing the test packet transmitting/receiving function. In addition, the monitoring interface112transmits and receives a packet when performing the monitoring unit122in the testing device10. In addition, the management interface113transmits and receives a packet when performing the management unit123in the testing device10. The control unit12controls the entire testing device10. For example, the control unit12is an electronic circuit such as the central processing unit (CPU), micro processing unit (MPU), and graphical processing unit (GPU) or an integrated circuit such as the application specific integrated circuit (ASIC) and field programmable gate array (FPGA). The control unit12includes a test packet transmitting/receiving unit121, a monitoring unit122, and a management unit123. 
Note that the monitoring unit122is an example of a surveillance unit. A test scenario unit124establishes HTTP and HTTPS sessions with the to-be-tested system20such as a Web server according to a scenario written in script, etc. and then generates a test packet for the to-be-tested system20. The test scenario unit124also generates a test packet based on the cookie received from the server23to transmit a test packet carrying session information such as log-in information. The test scenario unit124performs test packets other than GET and POST Flood, including attack tests of creating and deleting a plurality of accounts for the server23, frequent log in and out from the accounts, and frequent searches, and also an attack test, such as Slow READ, of changing the TCP header on the carried session. A response unit125receives a response request corresponding to TCP authentication, HTTP authentication, and challenge response authentication performed by the security device22, identifies the received response request, and makes a response that adapts to the identified response request, in other words, a response by which the security device22authenticates the attack packet to be valid. An address distribution unit126distributes source IP addresses of the test packets to be transmitted according to a preset IP address list. By way of example, the address distribution unit126allocates, to the TCP SYN packet transmitted as the test packet, different source IP addresses according to the IP address list, and in the subsequent same TCP connection, uses the same source IP address, thus communicating using different source IP addresses in a plurality of TCP connections. In addition, if the address distribution unit126is notified of a packet filtering threshold of the to-be-tested system from the monitoring unit122, the address distribution unit126controls the number of source IP addresses and adjusts the test packet transmission per source IP address not to correspond to the packet filtering threshold of the to-be-tested system. A transmission unit127transmits a test packet for increasing processing load to the server23protected by the security device22, the security device22performing authentication of a packet transmitted to the to-be-protected device. When the transmission unit127transmits a test packet, if the security device22has a packet discard function with a packet signature, the transmission unit127sets packet information such as a user agent to be the same as that of a general browser in order to prevent the test packet from being determined as not the general browser and discarded according to the packet information such as the user agent. By way of example, a packet transmitting function of the general browser may be used. The monitoring unit122monitors situations of packet filtering and processing load of the security device22or server23to which an attack packet authenticated valid by the security device22is transmitted. As monitoring of the packet filtering situation, the monitoring unit122monitors the number of test packets, the byte amount, and the number of sessions per unit of time per source IP address, and the response packet from the to-be-tested system. Then the monitoring unit122knows the source IP address that comes to receive no response packet even if it is transmitting a test packet, although other source IP address test packets receive a response packet. 
As the packet filtering threshold of the to-be-tested system, the monitoring unit122records the number of test packets, the byte amount, the number of sessions, and the time stamp that are transmitted at the time immediately before the relevant source IP address comes to receive no response packet. The monitoring unit122then notifies the control unit12of those values. The storage unit13stores various types of information used in performing the control unit. For example, the storage unit13is a semiconductor memory device such as random access memory (RAM) and flash memory or a storage device such as a hard disk and an optical disk, etc. The testing device10may perform a packet load test on the devices included in the to-be-tested system20. Here, the packet load test by the testing device10will be described with reference to the packet load test on the security device22and server23by way of example. In transmitting the packet to the server23, the to-be-tested system20allows the security device22to pass the normal browser communication and block out an attack packet by a bot or an attack tool. For example, if the security device22senses transmission of a packet to the server23, the device22makes an authentication request for the relevant packet. For example, the TCP authentication, HTTP authentication, and challenge response authentication are requested. The security device22also monitors the number of packets, the byte amount, the number of sessions per unit of time per source IP address, etc. If they exceed a predetermined threshold, the security device22registers the relevant source IP address in a blacklist. This is based on that the source of the packet is a general browser operated by a person, the operator makes a response that adapts to the response request, and the number of packets and byte amount per unit of time transmitted by a general browser operated by a person does not correspond to a predetermined threshold. In addition, simple packet transmission such as SYN Flood and GET Flood for testing the processing load on the server23may only measure the processing load of a part of the server processing that addresses the denial-of-service attack. Thus, with the conventional attack tool that is intended for the packet load test, it has been difficult to perform the packet load test that measures the processing load at each stage of the server23and security device22. First, with reference toFIG.3, a multi-stage protect function will be described.FIG.3illustrates the multi-stage protect function. As shown inFIG.3, in transmitting the packet to the server23, the security device22needs to perform limitation of the number of source packets and authentication at a plurality of stages. The security device22performs, for example, the TCP authentication, HTTP authentication, challenge response authentication, limitation of the number of source packets per unit of time, limitation of the number of source bytes per unit of time, and limitation of the number of sessions per unit of time. For example, if the security device22senses the transmission of the packet to the server23, the device22monitors, for the relevant packet, the number of packets and the number of sessions, etc. per source IP address. If the source of the packet clears the threshold based on the number of packets and the number of sessions, etc. transmitted by the general browser operated by a person, then the security device22may allow the packet to pass the function of limiting the number of source packets. 
For example, if the threshold to be passed is set as 6 packets/sec or below and 6 sessions/sec or below, the security device22determines that the source IP address meeting the threshold to be passed is communication from the general browser and passes it. Meanwhile, if the transmitted packet is intended for an SYN Flood attack by a spoofed source, the security device22discards the relevant packet at the stage of TCP authentication. Therefore, even if the packet is transmitted by the attack tool intended for the packet load test on the server23, the security device22senses, at a predetermined stage, that the transmission of the relevant packet is the attack and discards the relevant packet. Additionally, even if there is an attack tool that may respond to the TCP authentication, HTTP authentication, and challenge/response, the attack tool may be determined to be the attack according to the limitation of the number of packets, the limitation of the number of bytes, and the limitation of the number of sessions per unit of time by the source packet limitation. Thus, the relevant source IP address may be registered in a blacklist and the packet may be discarded. Thus, with the conventional attack tool that is intended for the packet load test, it has been difficult to perform the packet load test on the server23and security device22. In contrast, the testing device according to the first embodiment may allow for the packet load test on the server23and security device22. Here, with reference toFIG.4, a description is given of operations when the testing device10performs the packet load test on the server23or security device22. FIG.4is a sequence diagram for illustrating the packet load test by the testing device according to the first embodiment. First, the testing device10sets the attack packet and monitoring (step S101). In so doing, the testing device10sets to transmit test packets in which, for example, a large amount of test packets log into the server after the HTTP connection and a large amount of searches are performed. In addition, the testing device10sets monitoring that performs, for example, response confirmation to ping and traceback of the example server23or HTTP response confirmation. In addition, as monitoring of the packet filtering situation, the testing device10monitors the number of test packets, the byte amount, and the number of sessions per unit of time per source IP address, and the response packet from the to-be-tested system. The testing device10then knows the source IP address that comes to receive no response packet even if it is transmitting a test packet, although other source IP address test packets receive a response packet. The testing device10sets to record, as the packet filtering threshold of the to-be-tested system, the source IP address that comes to receive no response packet, the number of test packets, the byte amount, the number of sessions, and the time stamp that are transmitted at the time immediately before the relevant source IP address comes to receive no response packet, then notify the control unit12of those values. Then, the transmission unit127in the testing device10transmits the test packet from the test packet interface111. In so doing, first, the transmission unit127transmits a TCP SYN packet to the IP address 10.0.0.1 of the server23to establish TCP connection with the server23(step S102). 
In response, the security device22makes a TCP authentication response request to determine whether the SYN packet transmitted to the server23is the attack packet (step S103). Note that if the TCP connection is established, an SYN/ACK packet is transmitted to the source of the SYN packet. Here, it is known that even if an invalid packet is transmitted in response to the SYN packet, for example, the attack tool does not make a response that adapts to the invalid packet and transmits the SYN packet again. Thus, for the TCP authentication, the security device22transmits to the testing device10, invalid packets such as, for example, an SYN/ACK packet with a cookie, an SYN/ACK packet including an invalid ACK sequence number, an ACK packet, and an RST packet. Then, if a response is returned that adapts to the transmitted invalid packet, the security device22allows the SYN packet to pass the TCP authentication. Here, the response unit125in the testing device10makes a response to the security device22that adapts to the TCP authentication response request (step S104). For example, if an SYN/ACK packet with a cookie is transmitted in response to the SYN packet, the response unit125identifies that the relevant packet is an SYN/ACK packet with a cookie. Then, the response unit125transmits to the security device22an ACK packet with a sequence number that is set based on the contents of the relevant cookie. Note that it is considered that an attack tool intended for the SYN Flood attack makes no response even if the security device22transmits an SYN/ACK packet with a cookie. The testing device10may thus establish the TCP connection with the server23and prevent the test packet transmitted by the transmission unit127from being discarded at the stage of TCP authentication. Then, the testing device10may perform the packet load test on the security device22and server23in the authentication at the stage before the TCP authentication. If the TCP connection is established, the transmission unit127transmits an HTTP request packet to the server23(step S105). The security device22makes an HTTP authentication response request to determine whether the HTTP request packet transmitted to the server23is the test packet (step S106). Here, the response unit125makes a response to the security device22that adapts to the HTTP authentication (step S107). For example, the response unit125identifies that the response from the security device22is a redirect response. Then, the response unit125transmits an HTTP request packet to a redirect destination that is specified by a value such as a uniform resource identifier (URI) indicated by a Location header in the redirect response. Note that it is considered that an attack tool that does not make a response adapting to the redirect response does not refer to the Location header or transmit the HTTP request packet to the redirect destination. Additionally, in order to determine whether the HTTP request packet transmitted to the server23is the attack packet, the security device22makes an HTTP authentication response request using an HTTP cookie or JavaScript (registered trademark) (step S108). In the HTTP authentication using the HTTP cookie or JavaScript, the security device22requests, for example, the testing device10to perform processing of reading the contents in the cookie and returning the read result using the program written in JavaScript.
Then, if the performed result of the relevant program is returned in a predetermined time, the security device22allows the HTTP request packet to pass the HTTP authentication. Here, the response unit125makes a response to the security device22that adapts to the HTTP authentication using an HTTP cookie or JavaScript (step S109). For example, the response unit125identifies that data transmitted from the security device22is a run command in JavaScript. Then, the response unit125notifies the security device22of the contents in the cookie obtained as a result of performing the program written in JavaScript. Note that it is considered that an attack tool that does not make a response adapting to the HTTP authentication using the JavaScript and cookie makes no response to the HTTP authentication using the HTTP cookie or JavaScript. The testing device10may thus pass the HTTP authentication, thus preventing the attack packet transmitted by the transmission unit127from being discarded at the stage of HTTP authentication. Then, the testing device10may perform the packet load test on the security device22and server23in the authentication at a stage before the HTTP authentication. Additionally, if the HTTP authentication is performed, the transmission unit127transmits a HTTP request packet to the server23(step S110). In order to determine whether the HTTP request packet transmitted to the server23is the attack packet, the security device22makes a challenge response authentication response request (step S111). When performing the challenge response authentication, the security device22requests, for example, the testing device10bto perform a mouse movement on a predetermined path or the Completely Automated Public Turing test to tell Computers and Humans Apart (CAPTCHA). When a response is returned that adapts to the mouse movement or the CAPTCHA, the security device22allows the HTTP request packet to pass the authentication by the challenge response authentication. Here, the response unit125makes a response to the security device22that adapts to the challenge response authentication (step S112). For example, the response unit125identifies that the security device22indicates a mouse movement path. Then, the response unit125reads the path indicated as the mouse movement path and transmits to the security device22the same signal as that generated when a mouse is moved along the read path. The response unit125also identifies that the security device22indicates the CAPTCHA. Then, the response unit125transmits to the security device22text data converted from the CAPTCHA by an image-to-text service or OCR, etc. Note that it is considered that an attack tool that does not make a response adapting to the challenge response authentication makes no response to the challenge response authentication by the mouse movement or CAPTCHA. The testing device10may thus pass the challenge response authentication, thus preventing the test packet transmitted by the transmission unit127from being discarded at the stage of challenge response authentication. The testing device10transmits the test packet to the server23in the to-be-tested system20(step S113). The testing device10also receives the response packet from the to-be-tested system20(step S114). The testing device10may thus perform the packet load test on the server23. 
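As an illustration of how a test client can satisfy the redirect-based and cookie-based HTTP authentication described in steps S106 to S109, the Python sketch below follows the Location header of a redirect response and re-sends the request with the cookies that the security device has set. The target URL and the use of the requests library are assumptions for illustration; the actual response unit125may be implemented differently, and the JavaScript-execution and CAPTCHA cases are not covered here.

from urllib.parse import urljoin
import requests

def fetch_through_http_auth(url, max_redirects=5):
    session = requests.Session()          # keeps cookies set by the security device
    response = session.get(url, allow_redirects=False)

    hops = 0
    while response.status_code in (301, 302, 303, 307, 308) and hops < max_redirects:
        # The security device answers with a redirect; a legitimate browser (and the
        # response unit) re-requests the URI given in the Location header.
        next_url = urljoin(response.url, response.headers["Location"])
        response = session.get(next_url, allow_redirects=False)
        hops += 1

    # Cookies issued during the authentication exchange are echoed back automatically
    # by the session on subsequent requests, which is what the cookie check expects.
    return session, response

# Example with a hypothetical target:
# session, resp = fetch_through_http_auth("http://server.example/login")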
Increase of the source IP addresses of the test packets that may be transmitted from the single testing device10may allow for simulating the denial-of-service attack from a plurality of attackers and transmitting the test packets from a plurality of IP addresses without preparing multiple control units in the testing device, thus reducing the test resources. If, for example, the test packet to be transmitted is the TCP SYN packet, the address distribution unit126allocates source IP addresses different from those of the previous TCP SYN packets sequentially according to the IP address list set by the management unit123and allocates the same source IP addresses to the relevant TCP connections. This may thus allow for transmitting the test packets from a plurality of source IP addresses while maintaining the IP address consistency of the TCP connection. Then, on the test packet transmitted from the testing device10and the response packet from the to-be-tested system20to the test packet, the monitoring unit122monitors and analyzes the situation of the packet filtering of the to-be-tested system20for the test packet (step S115). As monitoring of the packet filtering situation, the monitoring unit122monitors the number of test packets, the byte amount, and the number of sessions per unit of time per source IP address, and the response packet from the to-be-tested system. The monitoring unit122then knows the source IP address that comes to receive no response packet even if it is transmitting a test packet, although other source IP address test packets receive a response packet. The monitoring unit122sets to record, as the packet filtering threshold of the to-be-tested system20, the source IP address that comes to receive no response packet, the number of test packets, the byte amount, the number of sessions, and the time stamp that are transmitted at the time immediately before the relevant source IP address comes to receive no response packet, then notify the control unit12of those values. If the address distribution unit126in the control unit12is notified of the packet filtering threshold of the to-be-tested system20from the monitoring unit122, the address distribution unit126controls the number of source IP addresses and adjusts the test packet transmission per source IP address not to correspond to the packet filtering threshold of the to-be-tested system. For example, the address distribution unit126stops, for a certain amount of time, transmission from the source IP address that comes to receive no response packet and is determined to be packet filtered. The address distribution unit126then transmits a test packet from a new source IP address that has not been packet filtered yet. The address distribution unit126also performs packet transmission per source IP address in a limited range that does not correspond to the packet filtering. The testing device10may thus pass the source packet limitation as shown inFIG.3, thus preventing the test packet transmitted by the transmission unit127from being discarded at the stage of source packet limitation. The testing device10may thus perform the packet load test on the subject server23. Simple packet transmission such as SYN Flood and GET Flood as the packet load test on the server23may only measure the processing load of a part of the server processing that addresses the denial-of-service attack.
Then, the test scenario unit124establishes the HTTP and HTTPS sessions with the to-be-tested system20such as a Web server according to a scenario written in script, etc. The test scenario unit124then generates a test packet based on the cookie received from the server23to transmit the test packet carrying session information such as log-in information to the server23. The test scenario unit124performs test packets other than GET and POST Flood, including attack tests of creating and deleting a plurality of accounts for the server23, frequent log in and out from the accounts, and frequent searches, and also an attack test, such as Slow READ, of changing the TCP header on the carried session. This may allow for measuring, on the server23, simple server processing load such as HTTP GET packet processing load and HTTP POST packet processing load as well as performing load test for processing load such as server23log-in information encryption and decryption processing load, search processing load, and database processing load. Meanwhile, the monitoring unit122makes a monitoring response request to the server23. For example, the monitoring unit122makes response confirmation to ping or traceback of the server23or HTTP response confirmation according to the setting by the testing device10. Then, the server23responds to the monitoring response request while processing the attack packet. Then, the monitoring unit122outputs the monitoring results from the monitoring interface112. Additionally, the testing device10analyzes the monitoring results and instructs the testing device10to change the scenario as necessary. Specifically, the testing device10analyzes the response time and response contents of the server23while taking correlation between the received monitoring results and test traffic, which is a type or amount of the attack packet. The testing device10records and analyzes, in a time series, the response time change and response message of the server23, the test traffic contents when no response is received, and the test traffic contents when the response is restored, etc. and understands the function of high processing load. As the scenario change, for example, the management unit123changes the amount of test packets transmitted by the transmission unit127depending on the situation of the processing load of the security device22or server23. Specifically, if the processing load of the security device22or server23is at a predetermined level or more, the management unit123increases the amount of test packets transmitted to the security device22or server23by the transmission unit127. Then, the management unit123understands the function of high processing load and changes the scenario of the test traffic. The management unit123then extracts the test traffic condition at which the function of high processing load has the maximum load, according to the response time change and response message of the server23when the scenario is changed, the test traffic contents when no response is received, and the test traffic contents when the response is restored. Note that the testing device10may test and analyse a plurality of to-be-tested instruments including other than the server23and understand the instrument of high processing load among the to-be-tested instruments. For example, as the testing device10increases the amount of log-in attack packets, the processing load of the server23increases and the HTTP response time increases. 
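A minimal sketch of such a scenario-driven measurement loop is shown below: it repeatedly logs in and searches against the to-be-tested server while increasing the request amount, and records the response time and status code of each exchange so that the amount at which error responses or timeouts appear can be identified. The endpoint paths, the credentials, and the use of the requests library are illustrative assumptions, not details of the present disclosure.

import time
import requests

def login_search_round(base_url, amount, timeout=5.0):
    # One round of the scenario: 'amount' login-and-search exchanges.
    results = []
    for i in range(amount):
        session = requests.Session()
        started = time.monotonic()
        try:
            session.post(base_url + "/login",
                         data={"user": f"test{i}", "password": "secret"},
                         timeout=timeout)
            response = session.get(base_url + "/search",
                                   params={"q": "load-test"}, timeout=timeout)
            status = response.status_code
        except requests.RequestException:
            status = None                      # no response within the timeout
        results.append((time.monotonic() - started, status))
    return results

def run_scenario(base_url, amounts=(10, 50, 100)):
    # Record, per amount, the average response time and the number of errors,
    # mirroring the response-time-versus-test-traffic analysis described above.
    log = []
    for amount in amounts:
        rounds = login_search_round(base_url, amount)
        times = [t for t, s in rounds if s is not None]
        avg = sum(times) / len(times) if times else float("inf")
        errors = sum(1 for _, s in rounds if s is None or s >= 400)
        log.append({"amount": amount, "avg_response_s": avg, "errors": errors})
    return log

# Example with a hypothetical target: run_scenario("http://server.example")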
Then, the testing device10records the amount of attack packets when the server23makes the HTTP 404 error response in which the server is connected but cannot display the web page and the amount of attack packets when the server23cannot respond. The testing device10may thus understand the resistance of the server23against the log-in attack. In addition, as the test packet increases in its amount, the security device22may detect the attack and discard the relevant attack packet, thus stopping the increase of the processing load of the server23. In so doing, the testing device10understands, from the monitoring results, that increasing the attack packets to the server23does not increase the processing load of the server23. In this case, the testing device10may test if the processing load increases by transmitting, by processing of the address distribution unit126, the test packets from different source IP addresses in a range that does not correspond to the packet filtering threshold. Additionally, the denial-of-service attack packet, etc, may be transmitted to the server23from not only the single testing device10but a plurality of testing devices according to the scenario. Thus, a countermeasure for a large amount of attacks per source IP address and the countermeasure effectiveness of cache, etc. may be studied and further monitored to understand the denial-of-service limitation, bottleneck, and test traffic pattern at that time, etc. This may determine whether the responses of the server23monitored by a plurality of testing devices are different due to the filter setting to the testing device10by the network device21, security device22, or server23itself, or the load of the server23. Note that the testing device10may quit the authentication on the way and perform the load test on the processing of the security device22at any authentication stage. For example, the testing device10may make a response that adapts to the TCP authentication response request by the security device22, and then does not make a response that adapts to the HTTP authentication response request by the security device22. The testing device10may thus perform the load test on the processing of the security device22at the HTTP authentication stage. Likewise, the testing device10may perform the load test on the security device22at each authentication stage to identify the authentication stage that is the bottleneck. [Effects of First Embodiment] The test scenario unit124in the testing device10establishes the HTTP and HTTPS sessions with the to-be-tested device such as a Web server and then generates a test packet for performing log in and search, etc. to the to-be-tested device according to the scenario. The address distribution unit126distributes source IP addresses of test packets to be transmitted according to a preset IP address list, uses the same source IP address in the same connection, and changes the packet amount per source IP address to avoid the packet filtering depending on the packet filtering situation of the security device22and server23. According to the packet generated by the test scenario unit124and the source IP address setting by the address distribution unit126, the transmission unit127transmits a test packet that increases processing load. 
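The following Python sketch illustrates the kind of bookkeeping the address distribution unit126performs: it assigns one source address per new connection from a preset list, keeps using that address for the rest of the connection, and skips addresses whose per-second packet count would reach an observed filtering threshold. The address list, the threshold value, and the class name are illustrative assumptions.

import time
from collections import defaultdict
from itertools import cycle

class AddressDistributor:
    def __init__(self, ip_list, packets_per_second_limit=6):
        self._pool = cycle(ip_list)                 # preset source IP address list
        self._by_connection = {}                    # connection id -> source IP
        self._sent = defaultdict(list)              # source IP -> send timestamps
        self._limit = packets_per_second_limit      # observed filtering threshold

    def source_for(self, connection_id):
        # The same TCP connection always keeps the source IP it was given first.
        if connection_id not in self._by_connection:
            self._by_connection[connection_id] = next(self._pool)
        return self._by_connection[connection_id]

    def may_send(self, source_ip):
        # Stay below the per-source packet rate that triggered packet filtering.
        now = time.monotonic()
        recent = [t for t in self._sent[source_ip] if now - t < 1.0]
        self._sent[source_ip] = recent
        return len(recent) < self._limit

    def record_send(self, source_ip):
        self._sent[source_ip].append(time.monotonic())

# Example: two test connections drawing from a small, hypothetical address pool.
dist = AddressDistributor(["198.51.100.1", "198.51.100.2"])
ip = dist.source_for("conn-1")
if dist.may_send(ip):
    dist.record_send(ip)     # the transmission unit would emit the packet here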
In addition, the response unit125receives a response request corresponding to the authentication performed by the security device22, identifies the received response request, and makes a response that adapts to the identified response request, in other words, a response by which the security device22authenticates the attack packet to be valid. In addition, the monitoring unit122monitors the situation of the processing load of the server or the authentication function of high processing load of the security device22to which the attack packet authenticated to be valid by the security device22is transmitted. As described above, the testing device10according to the first embodiment may pass authentication by making a response that adapts to the response request corresponding to the authentication, avoid the packet filtering per source IP address, and then test the security resistance by applying load to decryption processing of the to-be-tested instrument and a plurality of points including database and the like. In addition, testing a plurality of stages of authentication and a plurality of instruments may identify the bottleneck. In addition, every time the response unit125receives a response request corresponding to up to any stage of authentication among the stages of authentication performed stepwise by the security device22, the response unit125identifies the received response request and makes a response that adapts to the identified response request, in other words, a response by which the security system authenticates the test packet to be valid. The security device22may thus be tested at its any stage. The transmission unit127transmits to the server23such as a Web server, the test packet and a packet generated by operation of the Web browser. The test may thus be done in a situation close to the actual attack. The management unit123changes the amount of attack packets transmitted by transmission unit127depending on the situation of the processing load of the security device22or server23. This may allow for understanding of the operation of the to-be-tested instrument depending on the processing load. If the processing load of the security device22or server23is at a predetermined level or more, the management unit123changes the contents of the test packets transmitted to the security device22or server23by the transmission unit127. This may allow for understanding of the limitation of the processing load of the to-be-tested instrument. [Other Embodiments] If the server23is a server other than the Web server, such as a DNS server, or if the network device21or security device22is studied, the testing device10transmits the normal packet and a denial-of-service attack packet according to the protocol and application being served by the to-be-tested instrument. In so doing, the security device22may transmit a request of DNS authentication, etc. such as a TCP retransmission request, but the testing device10atransmits a packet according to the request. Thus, even if additional authentication is performed, the security resistance and bottleneck may be studied on the to-be-tested instrument. [System Configuration, etc.] In addition, the elements of the devices shown are ideational functions and may not be necessarily configured as physically shown. 
In other words, specific aspects of the distribution and integration of the devices are not limited to those as shown, and all or some of the devices may be configured by functionally or physically distributing or integrating them in any unit depending on various loads and utilization or the like. Additionally, for the processing functions performed by the devices, all or any part of the functions may be achieved by a CPU and a program analyzed and performed by the CPU or achieved as hardware with a wired logic. In addition, among the processing described in this embodiment, all or part of the processing described as being done automatically may be done manually or all or part of the processing described as being done manually may be done automatically in a well-known manner. In addition, information including the processing procedure, control procedure, specific names, and various types of data and parameters described in the above description and drawings may be arbitrarily changed unless otherwise described. [Program] In addition, a program written in a language executable by a computer may be created for processing performed by the testing device described in the above embodiment. For example, a program written in a language executable by a computer may be created for processing performed by the testing device according to the embodiment. In this case, the same effects as in the above embodiment may be provided by a computer executing the program. An example computer that performs a program will be described below. FIG.5illustrates a computer that performs a program. A computer1000includes, for example, a memory1010and a CPU1020. The computer1000also includes a hard disk drive interface1030, a disk drive interface1040, a serial port interface1050, a video adapter1060, and a network interface1070. These components are connected via a bus1080. The memory1010includes a read only memory (ROM)1011and a RAM1012. The ROM1011stores, for example, a boot program such as basic input output system (BIOS). The hard disk drive interface1030is connected to a hard disk drive1090. The disk drive interface1040is connected to a disk drive1100. For example, a removable storage medium such as a magnetic disk and an optical disk is inserted in the disk drive1100. The serial port interface1050is connected to, for example, a mouse1051and a keyboard1052. The video adapter1060is connected to, for example, a display1061. The hard disk drive1090stores, for example, an OS1091, an application program1092, a program module1093, and program data1094. In other words, a program defining the processing of the devices is implemented as the program module1093in which a computer executable code is described. The program module1093is stored in, for example, the hard disk drive1090. For example, the program module1093for performing the same processing as in the function configuration in the devices is stored in the hard disk drive1090. Note that the hard disk drive1090may be replaced with a solid state drive (SSD). In addition, data used in the processing of the above embodiment is stored as the program data1094in, for example, the memory1010and hard disk drive1090. Then, the CPU1020reads out the program module1093and program data1094stored in the memory1010and hard disk drive1090to the RAM1012as necessary and performs them. 
Note that the program module1093and program data1094are not limited to being stored in the hard disk drive1090, and may also be stored in, for example, a removable storage medium and read out by the CPU1020via the disk drive1100, etc. Alternatively, the program module1093and program data1094may be stored in other computers connected via a network such as a LAN or WAN. Then, the program module1093and program data1094may be read out by the CPU1020from other computers via the network interface1070. REFERENCE SIGNS LIST
1 Network
10 Testing device
11 Interface unit
12 Control unit
13 Storage unit
20 To-be-tested system
21 Network device
22 Security device
23 Server
111 Test packet interface
112 Monitoring interface
113 Management interface
121 Test packet transmitting/receiving unit
122 Monitoring unit
123 Management unit
124 Test scenario unit
125 Response unit
126 Address distribution unit
127 Transmission unit | 38,442
11943251 | DETAILED DESCRIPTION Exemplary embodiments are described with reference to the accompanying drawings. The figures are not necessarily drawn to scale. While examples and features of disclosed principles are described herein, modifications, adaptations, and other implementations are possible without departing from the spirit and scope of the disclosed embodiments. For example, while this detailed description provides a few examples, these implementations are provided as examples only and are not restrictive of the claim concepts that follow or any of the descriptions herein. Also, the words “comprising,” “having,” “containing,” and “including,” and other similar forms are intended to be equivalent in meaning and be open ended in that an item or items following any one of these words is not meant to be an exhaustive listing of such item or items or meant to be limited to only the listed item or items. It should also be noted that as used herein and in the appended claims, the singular forms “a,” “an,” and “the” include plural references unless the context clearly dictates otherwise. In the following description, various working examples are provided for illustrative purposes. However, it is to be understood that the present disclosure may be practiced without one or more of these details. It is intended that one or more aspects of any mechanism may be combined with one or more aspects of any other mechanism, and such combinations are within the scope of this disclosure. Various embodiments are described herein with reference to a system, method, device, or computer readable medium. It is intended that the disclosure of one is a disclosure of all. For example, it is to be understood that disclosure of a computer readable medium described herein also constitutes a disclosure of methods implemented by the computer readable medium, and systems and devices for implementing those methods, via, for example, at least one processor. It is to be understood that this form of disclosure is for ease of discussion only, and one or more aspects of one embodiment herein may be combined with one or more aspects of other embodiments herein, within the intended scope of this disclosure. Embodiments described herein may refer to a non-transitory computer readable medium containing instructions that, when executed by at least one processor, cause the at least one processor to perform a method. Non-transitory computer readable medium may include any medium capable of storing data in any memory in a way that may be read by any computing device with a processor to carry out methods or any other instructions stored in the memory. The non-transitory computer readable medium may be implemented as hardware, firmware, software, or any combination thereof. Moreover, the software may preferably be implemented as an application program tangibly embodied on a program storage unit or computer readable medium consisting of parts, or of certain devices and/or a combination of devices. The application program may be uploaded to, and executed by, a machine having any suitable architecture. Preferably, the machine may be implemented on a computer platform having hardware such as one or more central processing units (“CPUs”), a memory, and input/output interfaces. The computer platform may also include an operating system and microinstruction code.
The various processes and functions described in this disclosure may be either part of the microinstruction code or part of the application program, or any combination thereof, which may be executed by a CPU, whether or not such a computer or processor is explicitly shown. In addition, various other peripheral units may be connected to the computer platform such as an additional data storage unit and a printing unit. Furthermore, a non-transitory computer readable medium may be any computer readable medium except for a transitory propagating signal. Memory employed herein may include a Random Access Memory (RAM), a Read-Only Memory (ROM), a hard disk, an optical disk, a magnetic medium, a flash memory, other permanent, fixed, volatile or non-volatile memory, or any other mechanism capable of storing instructions. The memory may include one or more separate storage devices collocated or disbursed, capable of storing data structures, instructions, or any other data. The memory may further include a memory portion containing instructions for the processor to execute. The memory may also be used as a working scratch pad for the processors or as a temporary storage. Some embodiments may involve at least one processor. A processor may be any physical device or group of devices having electric circuitry that performs a logic operation on input or inputs. For example, the at least one processor may include one or more integrated circuits (IC), including application-specific integrated circuit (ASIC), microchips, microcontrollers, microprocessors, all or part of a central processing unit (CPU), graphics processing unit (GPU), digital signal processor (DSP), field-programmable gate array (FPGA), server, virtual server, or other circuits suitable for executing instructions or performing logic operations. The instructions executed by at least one processor may, for example, be pre-loaded into a memory integrated with or embedded into the controller or may be stored in a separate memory. In some embodiments, the at least one processor may include more than one processor. Each processor may have a similar construction, or the processors may be of differing constructions that are electrically connected or disconnected from each other. For example, the processors may be separate circuits or integrated in a single circuit. When more than one processor is used, the processors may be configured to operate independently or collaboratively. The processors may be coupled electrically, magnetically, optically, acoustically, mechanically or by other means that permit them to interact. Consistent with the present disclosure, disclosed embodiments may involve a network. A network may constitute any type of physical or wireless computer networking arrangement used to exchange data. For example, a network may be the Internet, a private data network, a virtual private network using a public network, a Wi-Fi network, a LAN or WAN network, and/or other suitable connections that may enable information exchange among various components of the system. In some embodiments, a network may include one or more physical links used to exchange data, such as Ethernet, coaxial cables, twisted pair cables, fiber optics, or any other suitable physical medium for exchanging data. A network may also include a public switched telephone network (“PSTN”) and/or a wireless cellular network. A network may be a secured network or unsecured network. 
In other embodiments, one or more components of the system may communicate directly through a dedicated communication network. Direct communications may use any suitable technologies, including, for example, BLUETOOTH™, BLUETOOTH LE™ (BLE), Wi-Fi, near field communications (NFC), or other suitable communication methods that provide a medium for exchanging data and/or information between separate entities. Certain embodiments disclosed herein may also include a computing device for cloud cybersecurity, the computing device may include processing circuitry communicatively connected to a network interface and to a memory, wherein the memory contains instructions to be executed. The computing devices may be devices such as mobile devices, desktops, laptops, tablets, or any other devices capable of processing data. Such computing devices may include a display such as an LED display, augmented reality (AR), virtual reality (VR) display. “Software” as used herein refers broadly to any type of instructions, whether referred to as software, firmware, middleware, microcode, hardware description language, or otherwise. Instructions may include code (e.g., in source code format, binary code format, executable code format, or any other suitable format of code). The instructions, when executed by the one or more processors, may cause the processing system to perform the various functions described in further detail herein. The one or more processors may be implemented with any combination of general-purpose microprocessors, microcontrollers, digital signal processors (DSPs), field programmable gate array (FPGAs), programmable logic devices (PLDs), controllers, state machines, gated logic, discrete hardware components, dedicated hardware finite state machines, or any other suitable entities that can perform calculations or other manipulations of information. Aspects of this disclosure may provide technical solutions to challenges associated with cloud cybersecurity. Disclosed embodiments include methods, systems, devices, and computer-readable media. For ease of discussion, a system is described below with the understanding that the disclosed details may equally apply to methods, devices, and computer-readable media. Embodiments of the present disclosure include technology referred to as “SideScanning.” In contrast to some existing systems and techniques, these embodiments may provide a distinct advantage because the technology does not necessarily require entering into each workload to inspect data. Rather, some embodiments use an out-of-band process to reach cloud workloads through a runtime storage layer, combining this with metadata gathered from API provided through a cloud service provider's system, thus providing visibility of cloud environments both at a low level and with context, without the requirement for an agent or network scanner. FIG.1is a schematic block diagram100illustrating an exemplary embodiment of a network including computerized systems, consistent with the disclosed embodiments. Diagram100includes user device102, network105, and cloud infrastructure106. Cloud infrastructure106includes scanning system101, databases103A-103D, virtual machines107A-107D, databases109A-109D, storage111A-111D, keystores113A-113D, and load balancer115. While particular numbers and arrangements of devices, systems, and connections, are depicted in exemplaryFIG.1, in some embodiments, each of the devices, systems, or connections may be omitted, duplicated, or modified. 
For example, in some embodiments, databases109A-109D may exist as only a single database; in other embodiments, cloud infrastructure106may exist as one or more distinct or combined infrastructures (e.g., operated by the same or different cloud services). In some embodiments, scanning system101and/or databases103A-103D may be part of cloud infrastructure106(and may be connected to the various other systems and devices in cloud infrastructure106); in other embodiments, scanning system101and/or databases103A-103D may be separate from cloud infrastructure106(e.g., connected to the systems and devices in cloud infrastructure106through network105). Scanning system101, in some embodiments, may include one or more computer systems. Each of the one or more computer systems may include memory storing instructions and at least one CPU configured to execute those instructions to perform operations as discussed herein. In some embodiments, the instructions cause the CPU to perform scanning operations. In some embodiments, scanning system101may perform a scanning operation on one or more workloads (e.g., systems, devices, resources, etc.) in cloud infrastructure106. User device102, in some embodiments, may include a mechanism operated by a user to control scanning system101. For example, in some embodiments, user device102may be any of a personal computer, a server, a thin client, a tablet, a personal digital assistant, a smartphone, a kiosk, or any other mechanism enabling data input. User device102may be operated to instantiate functionality, access data, or otherwise interact with scanning system101via network105, as described herein. Databases103A-103D include data stores for use by scanning system101. In some embodiments, one or more of databases103A-103D may be implemented as a NoSQL database, a relational database, a cloud database, a columnar database, a wide column database, a key-value database, an object-oriented database, a hierarchical database, or any other kind of database. In some embodiments, one or more of databases103A-103D may be implemented as flat file stores, data stores, or other non-database storage systems. In some embodiments, databases103A-103D may be implemented using one or more of ElasticCache, ElasticSearch, DocumentDb, DynamoDB, Neptune, RDS, Aurora, Redshift clusters, Kafka clusters, or EC2 instances. Network105may be implemented as one or more interconnected data networks. For example, network105may include one or more of any type of network (including infrastructure) that provides communications, exchanges information, and/or facilitates the exchange of information, such as the Internet, a Local Area Network, a near field communication (NFC) network, or other suitable connection(s) that enables the sending and receiving of information between the components of system100. Network105may be implemented using wireless connections, wired connections, or both. In some embodiments, one or more components of system100can communicate through network105. In some embodiments, one or more components of system100may communicate directly through one or more dedicated communication links. While particular devices and systems are shown as connected to network105, in some embodiments, more or fewer devices and systems may be connected to network105. Cloud infrastructure106may be implemented as a set of devices and systems offered by a single cloud service provider. 
For example, cloud infrastructure106may comprise devices and systems that are part of Amazon Web Services, Microsoft Azure, Google Cloud Platform, IBM Cloud, Alibaba Cloud, or any other cloud platform provider. In some embodiments, one or more of the devices and systems in cloud infrastructure may require authentication or other identity validation for access. For example, to access virtual machine107A, a user may be required to enter a password or provide a key. Systems (e.g., scanning system101or user device102) may administer or interact with cloud infrastructure106using a cloud service provider's system (not pictured). Virtual machines107A-107D may include one or more devices and systems that implement a virtualized/emulated version of a computer. A virtual machine may be implemented as an emulated version of a computer—including an operating system, memory, storage, graphics processing—such that it can be indistinguishable from a standard (non-virtual) machine to a running program. A computer system, referred to as a “host,” may operate virtual machines107A-107D, referred to as “guests,” by dividing the resources of the host between the virtual machines such that each virtual machine is isolated from one another. This means that in some embodiments, one virtual machine, and the operating system(s) and application(s) running thereon, is only able to access the resources that are allocated to that virtual machine and cannot access resources allocated to other virtual machines. For example, if a host has 32 gigabytes of random access memory (RAM), and is hosting three virtual machines, the host may segment 8 gigabytes of RAM to each virtual machine such that each virtual machine may only access data in that 8 gigabytes of RAM and not any of the other 24 gigabytes. Examples of commercial virtual machine software and services include VMWare Workstation, VMWare Server, VMWare ESXi, VirtualBox, Parallels Desktop, Parallels RAS, Amazon Machine Image, Amazon ECS, Kubernetes, Microsoft Hyper-V, and Xen. Databases109A-109D may include data stores for use by devices and systems in cloud infrastructure106. In some embodiments, one or more of databases109A-109D may be implemented as a NoSQL database, a relational database, a cloud database, a columnar database, a wide column database, a key-value database, an object-oriented database, a hierarchical database, or any other kind of database. In some embodiments, one or more of databases109A-109D may be implemented as flat file stores, data stores, or other non-database storage systems. In some embodiments, databases109A-109D may be implemented using one or more of ElasticCache, ElasticSearch, DocumentDb, DynamoDB, Neptune, RDS, Aurora, Redshift clusters, Kafka clusters, or EC2 instances. Databases109A-109D may store data usable by devices or systems in cloud infrastructure106. The data, in some embodiments, may include e-commerce data (e.g., shipments, orders, inventory), media data (e.g., pictures, movies, streaming data), financial data (e.g., banking data, investment data), or other data. Storage111A-111D may include storage systems for use by devices and systems in cloud infrastructure106. In some embodiments, one or more of storage111A-111D may be implemented as a hard drive, a RAID array, flash memory, optical storage, or any other kind of storage. Each of111A-111D may include one or more filesystems (e.g., Amazon Elastic File System, GlusterFS, Google File System, Hadoop Distributed File System, OpenZFS, S3, Elastic Block Storage). 
In some embodiments, systems and devices of cloud infrastructure106may use databases109A-109D to store data that is accessed frequently (where, for example, access is required within a few milliseconds), and may use storage111A-111D to store data that is accessed less frequently (where, for example, access is required within a few minutes or hours). Keystores113A-113D may include systems storing keys for accessing data and functionality. For example, to access certain data or systems, a system may require the use of passwords or keys in keystores113A-113D for authentication. The data and functionality that the keys grant access to may be part of cloud infrastructure106or may be separate from cloud infrastructure106. For example, keystores113A-113D may include systems that store public and private keys (e.g., for use via SSH), may store passwords (e.g., login information for websites or programs), may store tokens (e.g., one-time passcodes), or the like. In some embodiments, keystores113A-113D may be implemented as one or more of Amazon Web Services KMS, Azure Key Vault, or Google KMS. Load balancer115may include one or more systems that balance incoming requests between the different systems and devices of cloud infrastructure106. For example, load balancer115may be configured to determine usage (e.g., processor load, used storage capacity) of systems or devices in cloud infrastructure106to assist in determining where to route an incoming request from network105to store data, perform processing, or retrieve data. Load balancer115may be configured to receive an incoming request from user device102. Upon receipt of the request, load balancer115may consult a data store (part of or separate from load balancer115; not pictured) to determine usage or forecasted usage of various systems or devices in cloud infrastructure106, and may forward the request to the systems or devices having the lowest usage or forecasted usage. FIG.2Ais a schematic block diagram illustrating an exemplary embodiment of a process200for integration, scanning, assessment, and review, consistent with the disclosed embodiments. In some embodiments, the steps inFIG.2Aare executed using software and hardware of scanning system101. In some embodiments, the steps inFIG.2Amay be performed in an order other than those depicted inFIG.2A. In some embodiments, steps may also be omitted, repeated, or modified. In some embodiments, information gathered in one step ofFIG.2Amay be used to provide context or other information for use in another step ofFIG.2A. In other embodiments, the steps inFIG.2Amay be executed by other devices. Process200begins with step201. In step201, scanning system101may execute a process of integration. The integration process may be performed by scanning system101with cloud infrastructure106. In some embodiments, the integration process includes creating a connection between an account on scanning system101and an account on cloud infrastructure106. The process of integration in step201may, in some embodiments, be implemented as described below with respect toFIG.2B. In step203, scanning system101may execute a process of scanning/mapping. The scanning/mapping process may be performed by scanning system101with cloud infrastructure106. 
In some embodiments, the scanning and mapping process may include analyzing data relating to cloud infrastructure106using scanning system101, by reading cloud infrastructure106through the connection made in step201and generating a “map” (e.g., a data structure or data collection) representing systems and devices in cloud infrastructure106. The process of scanning and mapping in step203may, in some embodiments, be implemented as described below with respect toFIG.2C. In step205, scanning system101may execute a process of assessing. The assessing process may be performed by scanning system101. In some embodiments, the assessing process may include reviewing vulnerabilities, infrastructure, interconnections, data, and other information, using scanning system101. The process of assessing in step205may, in some embodiments, be implemented as described below with respect toFIG.2D. In step207, scanning system101may execute a process of analyzing/reporting. The analyzing/reporting may be performed by scanning system101. In some embodiments, the change reviewing process comprises scanning system101scanning cloud infrastructure106again to determine the differences between an earlier observed snapshot of cloud infrastructure106and the current state thereof. In some embodiments, scanning system101may analyze information (e.g., from steps203and/or205) to generate reports. The reports may list what each vulnerability is, where it is located, and its priority. In this way, security engineers and DevOps teams may be able to easily assess how to best allocate their time and attention. In some embodiments, analyzing/reporting in step207may include scanning system101combining conclusions from different environmental perspectives (e.g., metadata) into a single model. For example, scanning system101may map the running services on cloud infrastructure106and consider collected vulnerability data. Scanning system101, in some embodiments, may generate a visualization of the map for review, listing where the vulnerabilities are on a two-dimensional graphical representation of cloud infrastructure106. In some embodiments, scanning system101may indicate in the map whether an asset (e.g., a system or device) is Internet-facing and easily accessible to attackers, exposed only to internal assets, or private altogether (and possibly less critical). For example, based on the contextual map, scanning system101may perform a “forward” analysis of the specific asset under identification to identify at least one possible Internet-originating attack vector to the asset. Alternatively or additionally, scanning system101may perform a “backward” analysis of the specific asset to identify exposure risk to assets downstream of the specific asset, wherein the downstream exposure risk includes an identification of an exposed asset, an entry point to the exposed asset, and lateral movement risks associated with the exposed asset. Both backward and forward analyses can be performed recursively, for one or more hops. Scanning system101may present these analyses to a user via a graphical user interface or expose them via an API.
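As a non-limiting illustration of how such a forward analysis and exposure classification could be combined for prioritization, the following sketch traverses a toy asset map to decide whether a finding is Internet-reachable. The asset names, edges, and three-level scoring are hypothetical placeholders, not the implementation of scanning system101.

from collections import deque

# Hypothetical asset map: directed edges mean "can reach over the network".
# Node names are illustrative only.
EDGES = {
    "internet": ["load_balancer"],
    "load_balancer": ["web_service"],
    "web_service": ["internal_db"],
    "internal_db": [],
    "batch_worker": ["internal_db"],
}

def reachable_from(source: str) -> set[str]:
    """Forward analysis: every asset reachable from `source`, hop by hop."""
    seen, queue = set(), deque([source])
    while queue:
        node = queue.popleft()
        for nxt in EDGES.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

def score_vulnerability(asset: str, asset_running: bool = True) -> str:
    """Classify a finding on `asset` by its exposure: Internet-reachable ->
    high, internally reachable only -> medium, otherwise (or machine
    stopped) -> low."""
    if not asset_running:
        return "low"
    if asset in reachable_from("internet"):
        return "high"
    internal_sources = [n for n in EDGES if n != "internet"]
    if any(asset in reachable_from(src) for src in internal_sources):
        return "medium"
    return "low"

print(score_vulnerability("web_service"))   # high: reachable from the Internet
print(score_vulnerability("internal_db"))   # high: reachable via web_service in this toy map
print(score_vulnerability("batch_worker"))  # low: nothing routes to it here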
As one example, scanning system101may determine a vulnerability in a web service. In some embodiments, scanning system101may score the vulnerability as: high risk if the web service is connected to the Internet (e.g., has at least one public port forwarded through a firewall to the web service, is able to be accessed through a load balancer, is able to be accessed via reverse proxy); medium risk if the web service is only accessible internally (e.g., because of firewall configuration); and low risk if access to the web server is blocked by a configuration of cloud infrastructure106. As another example, if a machine is stopped (e.g., turned off, not running, not connected), a vulnerability on that machine may still exist, but it is less likely to be exploited because the machine is not running. This affects its risk score and other mitigating factors. In step209, scanning system101may execute a process of change reviewing. The change reviewing may be performed by scanning system101. In some embodiments, the change reviewing process may include scanning system101scanning cloud infrastructure106again to determine the differences between an earlier observed snapshot of cloud infrastructure106and the current state thereof. In some embodiments, scanning system101may monitor cloud change, or “delta,” logs (e.g., cloud event logs such as AWS CloudTrail, monitoring a network-related change in state, a trust-related change in state, or a disk configuration-related change in state in at least one of the primary asset group or a secondary asset group) and contextualize associated risks discovered within a customer cloud account. In some embodiments, scanning system101may generate a new map of assets based on a review of all systems or devices in cloud infrastructure106(e.g., as described above with respect to step203), and compare the new map to the map generated earlier in step203to determine which assets have changed. In some embodiments, this comparison may require comparing data, metadata, network connections, software configurations, firewall configurations, or any other aspects related to cloud infrastructure106, including, for example, identifying new assets that have been created or assets whose configuration has changed. After step209, as depicted inFIG.2A, process200may return to step205. Scanning system101may then perform step205again. In some embodiments, steps205and207may be performed only on the devices or systems that have experienced a change that was detected by step209. In some embodiments, process200may return to step203after step209. FIG.2Bis a schematic block diagram illustrating an exemplary embodiment of a process210for integration, consistent with the disclosed embodiments. In some embodiments, the steps inFIG.2Bare executed using software and hardware of scanning system101. Integration, in some embodiments, is a process for creating a trust relationship between scanning system101and cloud infrastructure106. In some embodiments, the steps inFIG.2Bmay be performed in an order other than those depicted inFIG.2B. In some embodiments, steps may also be omitted, repeated, or modified. In some embodiments, information gathered in one step ofFIG.2Bmay be used to provide context or other information for use in another step ofFIG.2B. In other embodiments, the steps inFIG.2Bmay be executed by other devices. Process210may begin with step211to initiate a connection to cloud infrastructure106. In step211, scanning system101may send a message to user device102instructing a user to authenticate, or log in, to a cloud service provider's system operating cloud infrastructure106.
For example, the user may use a username, password, one-time password, two-factor authentication, or any other authentication mechanism to gain access to a cloud service provider's system. Concurrently with or after the first message, scanning system101may send a second message to user device102, instructing the user to generate a role. The second message may include instructions for the user to follow to generate the role. In step213, a user may provide (e.g., via a keyboard at user device102) a role definition to the cloud service provider's system. In some embodiments, the role definition includes read-only permissions and permissions to read a block storage layer (containing block storage volumes). In some embodiments, scanning system101provides a role formation template (e.g., an Amazon Web Services CloudFormation Template) for use with cloud infrastructure106to create the necessary role. In step213, the user may utilize user device102, for example, by copying and pasting a URL of the template, downloading and uploading the template to the cloud service provider's system, or selecting the template from a list of templates. In step215, the cloud service provider's system may generate a string (e.g., a “key” or “resource name”) for use by scanning system101. In some embodiments, this string may be used to enable access by scanning system101to the workload of cloud infrastructure as permitted by the generated role. A user using user device102may copy the string and paste it into a user interface presented by scanning system101on user device102. Other aspects of transmitting this string to scanning system101are possible as well. In step217, scanning system101determines that it is able to access cloud infrastructure106. For example, scanning system101may attempt to perform a command such as authenticating to cloud infrastructure106using the string received in step215or attempting to perform a command using cloud infrastructure (e.g., requesting a listing of files stored on storage111A). Once scanning system101determines that it is able to access cloud infrastructure106, the process may return toFIG.2Aand step203. FIG.2Cis a schematic block diagram illustrating an exemplary embodiment of a process220for scanning/mapping, consistent with the disclosed embodiments. In some embodiments, the steps inFIG.2Care executed using software and hardware of scanning system101. In some embodiments, the steps inFIG.2Cmay be performed in an order other than those depicted inFIG.2C. In some embodiments, steps may also be omitted, repeated, or modified. In some embodiments, information gathered in one step ofFIG.2Cmay be used to provide context or other information for use in another step ofFIG.2C. In other embodiments, the steps inFIG.2Cmay be executed by other devices. Process220may begin with step221where scanning system101initiates a process to access keys, such as those keys stored in keystores113A-113D. Scanning system101may utilize the string (e.g., a key or resource name) received in process210to authenticate in order to retrieve keys stored in keystores113A-113D. Once authenticated, scanning system101may access one, some, or all keys stored in keystores113A-113D. In some embodiments, step221may comprise utilizing pre-established trust relationships, such as AWS trust policies, instead of creating a new relationship. In step223, scanning system101may generate a “snapshot” of devices and systems in cloud infrastructure106. 
In some embodiments, generating a snapshot may include reading devices and systems in cloud infrastructure, such as storage111A-111D, databases109A-109D, and virtual machines107A-107D, and copying the information read from those devices and systems to storage at scanning system101. Generating a snapshot, in some embodiments, may include recording a reference count to data blocks in one or more of storage111A-111D, databases109A-109D, and virtual machines107A-107D, and copying each of those blocks in a manner similar to a copy-on-write operation. In some embodiments, scanning system101may generate an Elastic Block Storage snapshot in step223. In some embodiments, scanning system101uses one or more keys retrieved from keystores113A-113D to encrypt “snapshots” of cloud infrastructure. In step225, scanning system101may apply “tags” to the snapshot. This process may include, in some embodiments, adding information to the snapshot to identify the snapshot as being associated with scanning system101. Scanning system101, in some embodiments, may be configured to delete only snapshots with the associated tags. In step227, scanning system101may generate a map of cloud infrastructure106. In some embodiments, this map may be in the form of a graph: a plurality of interconnected vectors connecting the plurality of systems and devices, based on the networking configuration. Scanning system101, or other devices, may traverse the map to identify vectors originating in the Internet and reaching the devices and systems in cloud infrastructure106. In some embodiments, generating the map may include enumerating Internet-accessible services that are capable of serving as an Internet proxy. In some embodiments, a user may control scanning system101(e.g., via user device102) to display the map on a graphical user interface. In some embodiments, generating the map may include enumerating properties of all assets, including: virtual machines107A-107D, databases109A-109D, storage111A-111D, keystores113A-113D, load balancer115, log files or databases, API gateway resources, API gateway REST APIs, Autoscaling groups, CloudTrail logs, CloudFront services, volumes, snapshots, VPCs, subnets, route tables, network ACLs, VPC endpoints, NAT gateways, ELB and ALB, ECR repositories, ECS clusters, services, and tasks, EKS, S3 bucket and Glacier storage, SNS topics, IAM roles, policies, groups, users, KMS keys, and Lambda functions. In some embodiments, generating the map in step227may further include analyzing devices or systems for a subset of risk related situations, including determinations of compromise situations (e.g., where an attacker has already gained access), imminent compromise situations (e.g., a known attack vector exists and can be used, such as a data store that is exposed to the public Internet without authentication), hazardous situations (e.g., a serious security implication, but no full attack vector exists), or informational situations (e.g., when storage111A-111D has a limited amount of free space, or an unexploitable vulnerability exists). Generating the map in step227may include recording information such as a region identifier, site identifier, datacenter identifier, physical address, network address, workload name, or any other identifier which may be acquired via an API provided through a cloud service provider's system. In step229, scanning system101may provide alerts, e.g., to user device102, indicating any situations found during the process of generating the map. For example, scanning system101may send one or more of an email, a popup alert, a text message, or other notification to user device102. After performing step229, the process may return toFIG.2Aand step205.
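A minimal sketch of how the snapshot creation and tagging of steps223and225could be realized against an AWS environment is shown below, assuming the boto3 SDK is available. The volume identifier, tag key and value, and region are placeholders, and the sketch is illustrative rather than a description of the claimed snapshot and tagging mechanism.

import boto3

# Illustrative only: the volume ID and tag values are placeholders, and the region
# would in practice come from the workload's queried location.
ec2 = boto3.client("ec2", region_name="us-east-1")

SCANNER_TAG = {"Key": "created-by", "Value": "side-scanner"}

def snapshot_and_tag(volume_id: str) -> str:
    """Create a point-in-time snapshot of a block storage volume and tag it
    so that only scanner-created snapshots are ever cleaned up later."""
    resp = ec2.create_snapshot(
        VolumeId=volume_id,
        Description="out-of-band scan snapshot",
        TagSpecifications=[{"ResourceType": "snapshot", "Tags": [SCANNER_TAG]}],
    )
    return resp["SnapshotId"]

def delete_scanner_snapshots() -> None:
    """Delete only snapshots carrying the scanner's tag."""
    paginator = ec2.get_paginator("describe_snapshots")
    for page in paginator.paginate(
        OwnerIds=["self"],
        Filters=[{"Name": f"tag:{SCANNER_TAG['Key']}", "Values": [SCANNER_TAG["Value"]]}],
    ):
        for snap in page["Snapshots"]:
            ec2.delete_snapshot(SnapshotId=snap["SnapshotId"])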
FIG.2Dis a schematic block diagram illustrating an exemplary embodiment of a process230for assessing, consistent with the disclosed embodiments. In some embodiments, the steps inFIG.2Dare executed using software and hardware of scanning system101. In other embodiments, the steps inFIG.2Dmay be executed by other devices. In some embodiments, scanning system101may execute the steps inFIG.2Dagainst a snapshot created in step223ofFIG.2Cand stored at scanning system101. In some embodiments, the steps inFIG.2Dmay be performed in an order other than those depicted inFIG.2D. In some embodiments, steps may also be omitted, repeated, or modified. In some embodiments, information gathered in one step ofFIG.2Dmay be used to provide context or other information for use in another step ofFIG.2D. Process230begins with step231. In step231, scanning system101may perform a step of vulnerability scanning. In some embodiments, step231comprises extracting everything in the snapshot, including operating system packages, installed software applications, libraries, and programming language libraries such as Java archives, Python packages, Go modules, Ruby gems, PHP packages, and Node.js modules, or other software applications. In some embodiments, step231may determine library versions, software versions, and other identifying characteristics of software and operating systems in the snapshot. Scanning system101may then try to match them to known vulnerabilities stored in a vulnerability database (e.g., one of databases103A-103D). The vulnerability database, in some embodiments, may include vulnerability data from: NVD, WPVulnDB, US-CERT, Node.js Security Working Group, OVAL-Red Hat, Oracle Linux, Debian, Ubuntu, SUSE, Ruby Advisory Database, JVN, Safety DB (Python), Alpine secdb, PHP Security Advisories Database, Amazon ALAS, RustSec Advisory Database, Red Hat Security Advisories, Microsoft MSRC, KB, Debian Security Bug Tracker, Kubernetes security announcements, Exploit Database, Drupal security advisories, JPCERT. The vulnerability database may also include other vulnerability data (including, e.g., manually-added vulnerability data or vulnerability sources not listed above). In step233, scanning system101may perform a step of configuration scanning. In step233, scanning system101may gather configuration information—such as a list of users of each system or device (e.g., virtual machines107A-107D), each system's or device's services, password hashes, and application-specific configurations for software/services such as Apache, Nginx, SSH, and other services. In some embodiments, scanning system101may perform a first analysis on all information collected in step233to remove sensitive information (e.g., social security numbers, passwords, birthdates) before proceeding to review the configuration information. In some embodiments, step233may comprise verifying adherence of the systems or devices in cloud infrastructure106to standards or benchmarks established by an external entity, such as the Center for Internet Security. In some embodiments, scanning system101may perform a benchmarking process to detect misconfigurations of any services based on the information gathered in step233. For example, scanning system101may determine the software version of each service and examine it against known vulnerabilities (e.g., stored in database103A).
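The version-matching idea can be illustrated with a short, self-contained sketch; the package names, versions, and advisory identifiers below are hard-coded examples rather than the contents of any actual vulnerability database used by scanning system101.

# Hypothetical, hard-coded vulnerability data; a real deployment would load
# advisories from sources such as NVD into one of databases103A-103D.
KNOWN_VULNERABILITIES = {
    ("openssl", "1.0.1f"): ["CVE-2014-0160"],
    ("log4j-core", "2.14.1"): ["CVE-2021-44228"],
}

def match_installed_packages(installed: dict[str, str]) -> list[tuple[str, str, str]]:
    """Match (package, version) pairs extracted from a snapshot against the
    vulnerability data and return (package, version, advisory) findings."""
    findings = []
    for name, version in installed.items():
        for advisory in KNOWN_VULNERABILITIES.get((name, version), []):
            findings.append((name, version, advisory))
    return findings

# Example packages as they might be enumerated from a mounted snapshot.
print(match_installed_packages({"openssl": "1.0.1f", "nginx": "1.18.0"}))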
In some embodiments, scanning system101may determine bugs or other configuration risks that might only be exploitable from internal machines, because such bugs and risks can facilitate an attacker's lateral movement. In some embodiments, scanning system101may evaluate network misconfigurations and their implications. Scanning system101may query devices and systems capable of routing traffic (e.g., load balancer115, routers, switches, firewalls, and proxies) using an API provided through a cloud service provider's system to determine network configurations, and may evaluate them against known problematic configurations or other configurations. In step235, scanning system101may perform a step of malware scanning. In some embodiments, scanning system101may perform malware scanning across all filesystems in the snapshot (e.g., gathered from virtual machines107A-107D or storage111A-111D). Scanning system101may use multiple malware scanning software solutions to perform a malware scan against the filesystems, including one sourced from another vendor, such as bucketAV, Trend Micro Cloud One, Sophos Cloud Optix, Crowdstrike Falcon CWP, or others. In some embodiments, malware scanning in step235comprises utilizing signatures, heuristics, or sandboxing capabilities to deduce whether there is an infection on the machine. In step237, scanning system101may perform a step of lateral movement scanning. An attacker who establishes a network foothold usually attempts to move laterally from one resource to another in search of rich targets such as valuable data. Stolen passwords and keys unlock access to servers, files, and privileged accounts. In some embodiments, scanning system101may gather keys from each scanned system or device (e.g., virtual machines107A-107D or storage111A-111D). In some embodiments, scanning system101searches for passwords, scripts, shell history, repositories, or other data that may contain passwords, cloud access keys, SSH keys, or other key/password/access information that provides unchecked access to important resources. In some embodiments, scanning system101searches for such keys/passwords/access information and calculates a “hash” (a mathematical fingerprint) of each string. Scanning system101then attempts to match the hashed strings to hashes of strings that are stored on different systems or devices. This may be used to detect potential lateral movement between assets. In step239, scanning system101may perform a step of key/password scanning. As one example situation, suppose there is a weak or unprotected password stored (in plain text) in storage111A. For example, if a personal email account has been compromised, its passwords may already be known in advance. Scanning system101may search the snapshot for similar usernames or login names, and, either using known dictionaries or the account owner's previously leaked passwords (stored in, e.g., database103A), may attempt to log in to one or more systems or devices in cloud infrastructure106, and may record the result thereof. In some embodiments, scanning system101may perform a “fuzzy search” on any usernames found in a password database (e.g., database103A) to determine existing targets for password testing. In some embodiments, the fuzzy search uses the Damerau-Levenshtein edit distance algorithm. As one example, scanning system101may determine that a leaked username includes the email address [email protected]. Scanning system101may try to match passwords from [email protected], [email protected], and other variations. Scanning system101may perform a similar process against leaked passwords (e.g., if a leaked password is “Victory@19,” scanning system101may attempt to log in using “Victory@20”).
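The following sketch shows one conventional formulation of that edit distance (the optimal-string-alignment variant) and how it might be used to flag near-matching usernames; the distance threshold of two edits and the sample names are illustrative assumptions, not parameters disclosed for scanning system101.

def damerau_levenshtein(a: str, b: str) -> int:
    """Optimal-string-alignment variant of the Damerau-Levenshtein distance:
    minimum number of insertions, deletions, substitutions, and adjacent
    transpositions needed to turn `a` into `b`."""
    rows, cols = len(a) + 1, len(b) + 1
    d = [[0] * cols for _ in range(rows)]
    for i in range(rows):
        d[i][0] = i
    for j in range(cols):
        d[0][j] = j
    for i in range(1, rows):
        for j in range(1, cols):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            d[i][j] = min(
                d[i - 1][j] + 1,         # deletion
                d[i][j - 1] + 1,         # insertion
                d[i - 1][j - 1] + cost,  # substitution
            )
            if i > 1 and j > 1 and a[i - 1] == b[j - 2] and a[i - 2] == b[j - 1]:
                d[i][j] = min(d[i][j], d[i - 2][j - 2] + 1)  # transposition
    return d[-1][-1]

def fuzzy_username_matches(leaked: str, candidates: list[str], max_distance: int = 2) -> list[str]:
    """Return candidate usernames within `max_distance` edits of a leaked name."""
    return [c for c in candidates if damerau_levenshtein(leaked, c) <= max_distance]

print(fuzzy_username_matches("jsmith", ["jsmith1", "jsmtih", "alice"]))
# ['jsmith1', 'jsmtih'] -- close variants are flagged as targets for password testing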
In step241, scanning system101may perform a step of sensitive information scanning. In some embodiments, scanning system101may search the snapshot for sensitive information, such as personally identifiable information (PII), Social Security numbers, healthcare information, or credit card numbers. In some embodiments, scanning system101may search data repository history as well. This is because it is not uncommon for an entire production environment repository to be cloned, with no one remembering that the copy contains sensitive information. In some situations, detecting sensitive data that is not secured is critical for adherence to data privacy regulations. To be certain that such alerts do not constitute false positives, in some embodiments, scanning system101may perform statistical scans against the data. For example, it is possible for a random number to resemble a Social Security number, yet it is extremely unlikely for the majority of a file with thousands of numbers to be valid Social Security numbers by pure chance. In step243, scanning system101may perform a step of container scanning. In some embodiments, scanning system101may apply one or more of the preceding steps ofFIG.2Dagainst containerized environments. In some embodiments, in order to do so, scanning system101reconstructs a container runtime layered file system (LFS) before recursively running one or more of steps231-241on the reconstructed file system. Scanning system101may read network information from the snapshot in order to determine which services within which containers are exposed externally and on which ports they are accessible. For example, scanning system101may identify a port on which a vulnerable application is accessible based on known software vulnerabilities for a version of a software application. Scanning system101may query network accessibility information via an API provided through a cloud service provider's system and may use it to identify specific vulnerabilities susceptible to attack. After performing step243on one or more containers, the process may return toFIG.2Aand step207(as discussed above). Foundational Techniques Aspects of this disclosure may include establishing a trusted relationship between a source account in a cloud environment and a scanner account. A trusted relationship, as used herein, may refer to a secure communication channel between at least two accounts (e.g., domains). The secure communications channel may include, for example, an administration link, a communication link, a connection or communication network configured with security protocols, or any other secure relationship between two networked entities. A trusted relationship between two accounts may enable user accounts and global groups to be used in a domain other than the domain where the accounts are defined. A source account, as used herein, may refer to a domain, a location on a network server, or anything used to access system resources. A cloud environment, as used herein, may refer to a platform implemented on, hosted on, and/or accessing servers that may be reached via the Internet or other shared network.
A scanner account, as used herein, may refer to any type of account associated with a scanner. A scanner, as used herein, may refer to a device for examining, reading, or monitoring electronically accessible information.FIG.1illustrates one example of a scanner, which is depicted in an exemplary manner as scanning system101. By way of example, establishing a trusted relationship between a source account in a cloud environment and a scanner account may include an integration process (creating a connection between an account on scanning system101and an account on cloud infrastructure106) discussed with reference toFIG.2A. By way of one example, scanning system101ofFIG.1may create a trusted relationship between a source account in a cloud environment and a scanner account. Aspects of this disclosure may include using the established trust relationship, utilizing at least one cloud provider API to identify workloads in the source account. A cloud provider API, as used herein, may refer to any application program interface that allows an end user to interact with a cloud provider's service. A cloud provider, may include any entity through whom information is accessible over the Internet or other shared network. Workloads, as used herein, may refer to systems, devices, or resources in a network (or available via a network) such as a cloud infrastructure. By way of example,FIG.1illustrates examples of a workload: virtual machines107A-107D, databases109A-109D, storage111A-111D, keystores113A-113D, and load balancer115. In some embodiments, scanning system101ofFIG.1may perform a scanning operation on workload (e.g., systems, devices, resources, etc.) in cloud infrastructure106. In some embodiments, scanning system101ofFIG.1may use an API to detect workloads such as virtual machines107A-107D, databases109A-109D, storage111A-111D, keystores113A-113D, and load balancer115in the source account. Aspects of this disclosure may involve using the at least one cloud provider API to query a geographical location of at least one of the identified workloads. Querying, as used herein, may refer to a request for data or information from a resource. A geographical location, as used herein, may refer to the data center where this workload is served, a position on Earth, a location in a network, an address of a server or other hardware, or any other information capable of identifying a locus, region, or position. Internet geolocation involves software capable of deducing a geographic position of a device connected to the Internet. For example, the device's IP address may be used to determine the country, city, or ZIP code, determining its geographical location. By way of example, scanning system101ofFIG.1may query any of workloads virtual machines107A-107D, databases109A-109D, storage111A-111D, keystores113A-113D of cloud infrastructure106for their geographical location using a cloud provider API. In some embodiments, the geographic location may include an identifier of a physical site. An identifier, as used herein, may refer to anything that allows for recognition of an object, location, or resource. A physical site, as used herein, may refer to an actual location. By way of example, scanning system101ofFIG.1may query any of workloads virtual machines107A-107D, databases109A-109D, storage111A-111D, keystores113A-113D of cloud infrastructure106for an identifier of their physical site (e.g., address, postal code, or any other physical descriptor) using a cloud provider API. 
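A minimal sketch of how such identification and location queries might look when the cloud provider is AWS and the boto3 SDK is available appears below. The role ARN, session name, and region list are hypothetical placeholders; the sketch merely illustrates assuming a role created during integration and reading each instance's placement, and is not the claimed implementation.

import boto3

# Illustrative values; the role ARN corresponds to the string obtained during
# integration, and the region list would normally be discovered dynamically.
ROLE_ARN = "arn:aws:iam::123456789012:role/side-scanner-readonly"
REGIONS = ["us-east-1", "eu-west-1"]

def scanner_session(role_arn: str) -> dict:
    """Use the established trust relationship: assume the read-only role in the
    source account and return temporary credentials for subsequent API calls."""
    sts = boto3.client("sts")
    creds = sts.assume_role(RoleArn=role_arn, RoleSessionName="side-scan")["Credentials"]
    return dict(
        aws_access_key_id=creds["AccessKeyId"],
        aws_secret_access_key=creds["SecretAccessKey"],
        aws_session_token=creds["SessionToken"],
    )

def list_workload_locations(credentials: dict) -> list[tuple[str, str]]:
    """Identify compute workloads and the location (region/availability zone)
    in which each one is served."""
    found = []
    for region in REGIONS:
        ec2 = boto3.client("ec2", region_name=region, **credentials)
        for reservation in ec2.describe_instances()["Reservations"]:
            for instance in reservation["Instances"]:
                found.append((instance["InstanceId"], instance["Placement"]["AvailabilityZone"]))
    return found

if __name__ == "__main__":
    print(list_workload_locations(scanner_session(ROLE_ARN)))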
In some embodiments, scanning system101ofFIG.1may use an out-of-band process to reach cloud workloads through a runtime storage layer, in combination with information gathered from cloud provider APIs. Aspects of this disclosure may include receive an identification of the geographic location. An identification of the geographic location, as used herein, may refer to a recognition, association, or classification of the geographic location. By way of example, scanning system101ofFIG.1may receive a geographical location, after querying for it, by interrogating one or more workloads, such as virtual machines107A-107D, databases109A-109D, storage111A-111D, or keystores113A-113D of cloud infrastructure106. For example, the scanning system101ofFIG.1may receive address, postal code, or any other physical descriptor of any workloads in cloud infrastructure106. In some embodiments, scanning system101ofFIG.1may receive a listing locating the workloads of cloud infrastructure106on a two-dimensional graphical representation. In some embodiments, the identification of the geographic location may include an identification of a data center, at least one of a data center name, an Internet Protocol (IP) address, a name of the cloud provider, or a unique identity. A data center, as used herein, may refer to a facility that centralizes an organization's shared IT operations and equipment for the purposes of storing, processing, and/or disseminating data and applications. A data center, for example, may include a dedicated space within a building, or a group of buildings used to house computer systems and associated components, such as telecommunications and storage systems. An identification of a data center, as used herein, may refer to a recognition, association, or classification of a data center. A data center name, as used herein, may refer to a series of letters or numbers, a word, or set of words or numbers, an address, or a code identifying the data center. An Internet Protocol (IP) address, as used herein, may refer to a unique string of characters that identifies a computer using the Internet Protocol to communicate over a network. A cloud provider, as used herein, may refer to a company that offers any components of cloud computing (e.g., infrastructure as a service (IaaS), software as a service (SaaS) or platform as a service (PaaS)). A unique identity, as used herein, may refer to a distinctive numeric or alphanumeric string that is associated with a single entity within a given system. By way of example, scanning system101ofFIG.1may query for and receive a geographical location such as a data center name/city and state, data center address, IP address/specific cloud identifier of the workload, name of cloud provider (e.g., Amazon Web Services), or a specific identifier of information of any of workloads such as virtual machines107A-107D, databases109A-109D, storage111A-111D, keystores113A-113D of cloud infrastructure106. In some embodiments, scanning system101ofFIG.1may receive a listing indicating locations of the workloads of cloud infrastructure106on a two-dimensional graphical representation. In some other embodiments, scanning system101may receive additional information (in addition to geographic location information) such as whether the workload is Internet-facing and easily accessible to attackers, or if it is private (and may be less critical). Aspects of this disclosure may include using the cloud provider APIs to access block storage volumes of the at least one workload. 
Block storage volumes, as used herein, may refer to data files maintained on Storage Area Networks (SANs) or in cloud-based storage environments. Each storage volume may act as an individual hard drive configured by a storage administrator. Accessing block storage volumes, as used herein, may refer to obtaining, examining, or retrieving data. As described with relation to process210ofFIG.2B(beginning with step211to initiate a connection to cloud infrastructure106), scanning system101may send a second message to user device102, instructing the user to generate a role. A role may refer to a type associated with a node that may be assigned or defined. The second message may include instructions for the user to follow to generate the role. In step213, a user may provide (e.g., via a keyboard at user device102) a role definition to the cloud service provider's system. In some embodiments, the role definition includes read-only permissions and permissions to read a block storage layer (containing block storage volumes). In some embodiments, scanning system101provides a role formation template (e.g., an Amazon Web Services CloudFormation Template) for use with cloud infrastructure106to create the necessary role. In step213, the user may utilize user device102, for example, by copying and pasting a URL of the template, downloading and uploading the template to the cloud service provider's system, or selecting the template from a list of templates. By way of example, scanning system101ofFIG.1may access the block storage volume of any of workloads virtual machines107A-107D, databases109A-109D, storage111A-111D, keystores113A-113D of cloud infrastructure106. Aspects of this disclosure may include determining a file-system of the at least one workload. A file-system, as used herein, may refer to a method and/or a data structure that the operating system uses to control how data is stored and retrieved. By way of example, scanning system101ofFIG.1may identify the type of file-system of any of workloads virtual machines107A-107D, databases109A-109D, storage111A-111D, keystores113A-113D of cloud infrastructure106. Some file systems may include NTFS, FAT32, ext3, exFAT, HFS, HFS+, HPFC, UFS, ZFS. Aspects of this disclosure may include mounting the block storage volumes on a scanner based on the determined file-system. Mounting the block storage volumes, as used herein, may refer to a process by which the operating system makes files and directories (such as block storage volumes) on a storage device available for users to access via the computer's file system. A scanner, as used herein, may refer to a device for examining, reading, or monitoring electronically accessible information. By way of example,FIG.1illustrates one example of a scanner, scanning system101ofFIG.1. In some embodiments, mounting may include selecting a driver corresponding to the determined file system. In some embodiments, to mount the block storage volumes on the scanner, may include creating a snapshot of the block storage volumes; and mounting the snapshot of the block storage volumes on the scanner. A snapshot of the block storage volumes, as used herein, may refer to a current state or earlier state of the block storage volumes. In some embodiments, the at least one processor may be further configured to encrypt the snapshot of the block storage volumes and mount the encrypted snapshot of the block storage volumes on the scanner. 
Encrypting a snapshot, as used herein, may refer to a process of encoding information (such as a snapshot). This process may convert the original representation of the information, known as plaintext, into an alternative form known as ciphertext. In some embodiments, the scanner may use a privileged account to log in and determine how secure each host is from an inside vantage point. While authenticated scans can successfully discover potential vulnerabilities, they may be limited to the extent they require a privileged account on each scanned host. Furthermore, scans use significant system resources during the test procedures and require opening ports that by themselves pose a security risk. Aspects of this disclosure may include activating a scanner at the geographic location. Activating a scanner, as used herein, may refer to starting up or initiating a device for examining, reading, or monitoring data. By way of example, scanning system101ofFIG.1may start up a scanner (a different scanner than scanning system101) at the geographic location identified by scanning system101. Aspects of this disclosure may include reconstructing from the block storage volumes a state of the workload. Reconstructing, as used herein, may refer to rebuilding or reforming a state of the workload. A state of the workload, as used herein, may refer to a condition of a workload at a certain time. By way of example, scanning system101ofFIG.1may rebuild the block storage volumes as an earlier version of any of workloads such as virtual machines107A-107D, databases109A-109D, storage111A-111D, keystores113A-113D of cloud infrastructure106. In some embodiments, the reconstructed state of the workload may include at least two of an indication of an installed application, a version of an installed application, an operating system configuration, an application configuration, a profile configuration, a log, or a database content. An indication of an installed application, as used herein, may refer to an identifier of a software. A version of an installed application, as used herein, may refer to a particular form, variant, edition, or revision of a piece or pieces of software. An operating system configuration, as used herein, may refer to a manner in which components are arranged to make up an operating system or a computer system. An application configuration, as used herein, may refer to a manner in which components are arranged to make up an application or a computer system. A profile configuration, as used herein, may refer to a manner in which components are arranged to make up a profile or a computer system. A log, as used herein, may refer to a detailed list of an application information, system performance, or user activities. Database content, as used herein, may refer to any collection of data, or information, that is specially organized for rapid search and retrieval by a computer. In some embodiments, scanning system101ofFIG.1may reconstruct any workload, such as virtual machines107A-107D, databases109A-109D, storage111A-111D, keystores113A-113D of cloud infrastructure106from the block storage volumes to an indication of an installed application, a version of an installed application, an operating system configuration, an application configuration, a profile configuration, a log, or a database content. Aspects of this disclosure may include assessing the reconstructed state of the workload to extract insights. 
Assessing the reconstructed state of the workload, as used herein, may refer to viewing, utilizing, or evaluating the reconstructed state of the workload. Insights, as used herein, may refer to information or knowledge attained based on an analysis of some data. Extracting insights, as used herein, may refer to pulling information or data about, for example, a vulnerability associated with the workload or a composition of installed applications associated with the workload. By way of example, scanning system101ofFIG.1may evaluate the reconstructed state of any of the workloads, such as virtual machines107A-107D, databases109A-109D, storage111A-111D, keystores113A-113D of cloud infrastructure106to pull valuable data and information. In some embodiments, insights may include at least one of a vulnerability associated with the workload or a composition of installed applications associated with the workload. A vulnerability, as used herein, may refer to any weakness within an organization's information systems, internal controls, or system processes that can be exploited. A composition of installed applications associated with the workload, as used herein, may refer to a structure or group of accessible computer programs. By way of example, scanning system101ofFIG.1may evaluate the reconstructed state of any of the workloads, such as virtual machines107A-107D, databases109A-109D, storage111A-111D, keystores113A-113D of cloud infrastructure106to pull valuable data and information regarding a vulnerability associated with the workload. In some embodiments, a scanner may be deployed at the geographical location. Deploying a scanner, as used herein, may refer to installing, activating, utilizing, or bringing a scanner into effective action. By way of example,FIG.1illustrates one example of a scanner, scanning system101ofFIG.1, that is deployed. In some embodiments, scanning system101ofFIG.1may update the reconstructed state of the workload based on at least one change to the block storage volumes. Updating the reconstructed state of the workload, as used herein, may refer to overhauling or refurbishing the reconstructed state of the workload. A change to the block storage volumes, as used herein, may refer to any shift, alteration, or modification to the block storage volumes.
FIG.3illustrates a block diagram of method300for insight extraction, consistent with disclosed embodiments. In some embodiments, the method may include ten (or more or fewer) steps:
Block302: Establish a trusted relationship between a source account in a cloud environment and a scanner account. In some embodiments, establishing a trusted relationship between a source account in a cloud environment and a scanner account may include an integration process (creating a connection between an account on scanning system101and an account on cloud infrastructure106) discussed with reference toFIG.2A. By way of one example, scanning system101ofFIG.1may create a trusted relationship between a source account in a cloud environment and a scanner account.
Block304: Using the established trusted relationship, utilize at least one cloud provider API to identify workloads in the source account. In some embodiments, scanning system101ofFIG.1may use an API to detect workloads such as virtual machines107A-107D, databases109A-109D, storage111A-111D, keystores113A-113D, and load balancer115in the source account.
Block306: Use the at least one cloud provider API to query a geographical location of at least one of the identified workloads.
In some embodiments, scanning system101ofFIG.1may query any of the workloads, such as virtual machines107A-107D, databases109A-109D, storage111A-111D, keystores113A-113D of cloud infrastructure106, for their geographical location using a cloud provider API.
Block308: Receive an identification of the geographic location. In some embodiments, scanning system101may indicate in the map additional information regarding the workload (e.g., a system or device). For example, scanning system101may provide: an identification of a data center (e.g., a name or number associated with the data center where the workload is located), at least one of a data center name (e.g., a name or address associated with the data center where the workload is located), Internet Protocol (IP) address, name of the cloud provider, or a unique identity (e.g., any other information related to the workload that may be used).
Block310: Use the cloud provider APIs to access block storage volumes of the at least one workload. In some embodiments, scanning system101ofFIG.1may access the block storage volume of workloads such as virtual machines107A-107D, databases109A-109D, storage111A-111D, keystores113A-113D of cloud infrastructure106. In some embodiments, accessing the block storage may be accomplished by generating snapshots.
Block312: Determine a file-system of the at least one workload. In some embodiments, scanning system101ofFIG.1may identify the type of file-system of any of the workloads (virtual machines107A-107D, databases109A-109D, storage111A-111D, keystores113A-113D of cloud infrastructure106).
Block314: Mount the block storage volumes on a scanner based on the determined file-system. In some embodiments, mounting the block storage volumes on the scanner may include instructing the operating system to logically map a directory structure to a physical storage device. A storage volume may be mounted after it is attached and formatted for use by a server's operating system. Mounting may include creating a snapshot of the block storage.
Block316: Activate a scanner at the geographic location. In some embodiments, scanning system101ofFIG.1may start up a scanner (a different scanner than scanning system101) at the geographic location identified by scanning system101.
Block318: Reconstruct from the block storage volumes a state of the workload. In some embodiments, scanning system101ofFIG.1may rebuild, from the block storage volumes, an earlier version of workloads such as virtual machines107A-107D, databases109A-109D, storage111A-111D, keystores113A-113D of cloud infrastructure106.
Block320: Assess the reconstructed state of the workload to extract insights. In some embodiments, scanning system101ofFIG.1may evaluate the reconstructed state of workloads such as virtual machines107A-107D, databases109A-109D, storage111A-111D, keystores113A-113D of cloud infrastructure106to derive valuable data and information.
Vulnerability Management Techniques
FIG.4depicts a cybersecurity system performing a side scanning function to protect against potential vulnerabilities. Referring toFIG.4, a processor400may be configured to identify in a cloud environment one or more block storage volumes401A-C communicatively connected to a cloud provider API where the block storage volumes401A-C may be contained. When a block storage volume, e.g.,401A is identified, the processor400may be configured to perform the function of identifying the software installed403A in or relating to block storage volume401A.
Upon identifying installed software403A, the processor may be further configured to identify the version of software405A operating in the installed software403A. Based on the installed software403A and the version of the software405A, the processor may be configured to identify a listing of known vulnerabilities407that may relate to the installed software403A and the version installed405A. When the listing of known vulnerabilities407is obtained, the system may list the known vulnerabilities409A for that software version411A, which can be communicated to the processor400of the end user. This information may be processed and displayed to an end user using processor400.
FIG.5of the disclosed embodiments describes a method of operating a cybersecurity system performing a side scanning function to protect against potential vulnerabilities. Referring toFIG.5, a processor unit500may be configured to perform the method. When initiated either manually or automatically, processor unit500may be communicatively connected with a cloud provider API501to initiate a cybersecurity function as disclosed above. Contained digitally on a memory device and accessed by a cloud provider API501is a series of block storage volumes503. Processor unit500may be capable of performing a vertical and horizontal scan for network access or security information of a block storage volume503to detect potential vulnerabilities to the block storage volume503or any processing unit500designed to access said block storage volume503. Based on the accessed block storage volume503, processor unit500with the disclosed embodiments of a cybersecurity system may perform an identification of the type of software installed505connected with block storage volume503. Following identification of the type of software installed505, the processing unit may then identify an installed version507of said installed software505, where a version may comprise a unique identifier based on a combination of letters, numbers, or similar unique identifiers. Upon recognition of an installed software version507of the installed software505, processing unit500with the disclosed embodiments of the disclosed scanning system101may provide a list of known vulnerabilities509for review by said scanning system101and/or its end user and maintainer of said block storage volumes503. Based on the list of known vulnerabilities509, the disclosed scanning system101or its end user may identify one or more ports of accessibility to said block storage volume503that may be accessed by a known, associated vulnerability513from listed vulnerabilities509to determine an avenue for potential vulnerability513to access and infiltrate block storage volume503. In some embodiments, a cyber security scanning system for a cloud environment101may include a processor500to operate said system. This processor500may include central processing units and other similar computer-enabling equipment for processing and executing commands based on the information inputted to said system. The processor500may be communicatively connected to a computer network or series of networks to accomplish said cyber security function. As an example embodiment, a processor unit500may be configured to use a cloud-provider API that may communicate with one or more specified computer-readable media across a digital network. This can be accomplished through internet protocols, internet control message protocols, transmission control protocol, or user datagram protocol.
Cloud provider API501may be one of several forms of middleware, interface, middle layer, or other systems of interfacing applications. A processor500may be one or more computer processing units, central processing unit, server, microcomputer, mainframe, and any other manifestation of digital computing. Further to one of several possible embodiments, a cloud-provider API501may be configured to access a block-storage volume503of a workload maintained in a cloud-storage environment. This may be accomplished through a system of computer-readable media communicatively connected. Said block-storage volume503may be contained on a Storage Area Network (SAN) or similar cloud-based memory storage environment. The block storage volume503may be contained in smaller storage volumes with an associated identifier unique to that portion of said block storage volume503. In some embodiments, the block-storage volume503of a workload may have multiple paths for the storage volume to be reaggregated and retrieved quickly. Among several embodiments, a scanning system101may comprise a system for identifying an installed software application in the accessed block-storage volume503. This identification of installed software may be accomplished by accessing installed software505files through signature verification, root license, or authorized user lists. The installed software505may be located and identified within applications such as file storage, database storage, and virtual machine file system volumes. The identification of said installed software application505may be processed, analyzed, and communicated to the scanning system101for processing, cataloging, and protection through encryption and various methods of layered cyber defense. Further, the scanning system101described herein may include functionality to analyze installed software applications to determine the associated software version. The software application version507may identify the software version based on unique version name, unique version number, and may be based on unique states of the currently installed computer software505. One of many embodiments disclosed above may include the scanning system101having the ability to access a data structure of known software vulnerabilities509for a plurality of versions of software applications. The known software vulnerabilities509may include, among others, missing data encryption, OS command injection, SQL injection, buffer overflow, missing authentication, missing authorization, unrestricted upload of dangerous file types, reliance on untrusted inputs in a security decision, cross-site scripting and forgery, download of codes without integrity checks, broken algorithms, URL redirection, path traversal, software bugs, weak passwords, and previously infected software. The scanning system101may be able to access and identify software vulnerabilities for mitigation, rectification, correction, and fortification. In one embodiment, a cybersecurity system may also perform scanning according to scanning system101by performing a lookup of the identified installed software version507in the data structure to identify known vulnerabilities509.
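By way of a non-limiting illustration, the following Python sketch shows one way a lookup of an identified installed software version could be performed against a data structure of known vulnerabilities. The entries shown are illustrative placeholders; an actual data structure would be far larger and would typically be aggregated from external sources, as discussed below.

    # A minimal, illustrative vulnerability lookup; real deployments would rely on a
    # much larger aggregated data structure of known software vulnerabilities.
    KNOWN_VULNERABILITIES = {
        ("openssl", "1.0.1f"): ["CVE-2014-0160"],      # entries are illustrative only
        ("apache2", "2.4.49"): ["CVE-2021-41773"],
    }

    def lookup_vulnerabilities(installed_applications):
        """Map each (software, version) pair to its list of known vulnerabilities."""
        findings = {}
        for name, version in installed_applications.items():
            cves = KNOWN_VULNERABILITIES.get((name.lower(), version))
            if cves:
                findings[(name, version)] = cves
        return findings

    # Example usage:
    # lookup_vulnerabilities({"openssl": "1.0.1f"})
    # -> {("openssl", "1.0.1f"): ["CVE-2014-0160"]}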
This function can be performed by the scanning system101according toFIG.1performing a query of the installed software505for unique version number or designator and comparing to, amongst many things, a set of likely or potential vulnerabilities to that software version for potential deficiencies or cybersecurity threats known or suspected to similar software types and versions. This query may be performed according to a predetermined set of values, to include previously identified unique version numbers or designators that may contain the known list of previously identified vulnerabilities. Among many embodiments, embodiments of the disclosed scanning system101may query the cloud provider API501to determine network accessibility information511related to the workload. In order to accomplish this query of the cloud provider API501, the scanning system101may involve an index of search results and display of said search results, followed by processing and grouping of search results. Network accessibility information511may include connection quality, alternative paths between nodes in a network, and the ability to avoid blockage in said networks. The workloads associated with this query may include applications, services, capabilities, and specific processes such as virtual machines, databases, containers, or Hadoop nodes, among others. If the system detects a vulnerable application513, one embodiment may identify one or more ports on which said vulnerable application is accessible. In one of several embodiments, the scanning system101may detect a vulnerable application in one or more computation processes. In another embodiment, the cybersecurity system may perform a network accessibility query in a separate process. Further, a disclosed embodiment may perform these separate functions in subsequent and sequential steps of the same process. A person having ordinary skill in the art would understand that an authorized user or an authorized scanning system101can perform these functions concurrently or subsequently while performing the same function as the disclosed embodiment. Upon using the gathered network accessibility information and the identified port to identify one or more vulnerabilities susceptible to attack from outside the workload, a disclosed embodiment would have the functionality to perform processes to gather, display, and mitigate a discovered vulnerability in order to minimize the likelihood and effectiveness of a cyber threat outside of and attempting to access a workload through a known or previously encountered type of cyber threat. This functionality may include collecting and organizing the vulnerabilities according to type or category of vulnerability, displaying the gathered data for an end user or maintainer, and implementing security features automatically or manually by a user or maintainer such as security patches, password or passcode changes and suggestions for users to do the same, and malicious code eradication. As one of several possible embodiments, the scanning system101may also, upon identification of one or more vulnerabilities, implement remedial actions via one or more processors.
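As a hedged illustration of the network accessibility query and port identification described above, and before turning to remedial actions, the following sketch (assuming the boto3 library, with Amazon EC2 security groups acting as the filtering entities) gathers the ports exposed to the internet and intersects them with the ports on which a vulnerable application listens. The instance identifier and the set of vulnerable ports are hypothetical inputs rather than elements required by the disclosed embodiments.

    import boto3

    ec2 = boto3.client("ec2")

    def internet_exposed_ports(instance_id):
        """Return ports reachable from 0.0.0.0/0 according to the instance's security groups."""
        exposed = set()
        inst = ec2.describe_instances(InstanceIds=[instance_id])
        instance = inst["Reservations"][0]["Instances"][0]
        group_ids = [g["GroupId"] for g in instance.get("SecurityGroups", [])]
        for group in ec2.describe_security_groups(GroupIds=group_ids)["SecurityGroups"]:
            for rule in group.get("IpPermissions", []):
                open_to_world = any(r.get("CidrIp") == "0.0.0.0/0"
                                    for r in rule.get("IpRanges", []))
                if open_to_world and rule.get("FromPort") is not None:
                    exposed.update(range(rule["FromPort"], rule["ToPort"] + 1))
        return exposed

    def externally_reachable(vulnerable_ports, instance_id):
        """Intersect the ports a vulnerable application listens on with internet-exposed ports."""
        return sorted(port for port in internet_exposed_ports(instance_id)
                      if port in vulnerable_ports)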
Remedial actions may include, among other things, notification to an end user of an identified threat, compensation through a revised security code to mitigate the potential threat, publication of the identified threat and vulnerability in a log or record of detected vulnerabilities, and communication of the sensed vulnerability and threat to a server operator or maintainer to fortify the protections of workloads existing on similar environments. In a similar embodiment to above, the remedial measure may include transmission of an alert to a device associated with an administrator. The alert may be, amongst others, written, auditory, and visual for processing and use by an administrator of said scanning system101. Said administrator may take action based on the received alert, to include eliminating the cyber threat through mitigation measures, change in cybersecurity posture, or removing the workload from the cybersecurity threat environment. In another embodiment, a query of the cloud provider API501to determine network accessibility information related to the workload may be performed by at least one processor500configured to examine data sources associated with the workload. The data examined may include user data, system processing data, accessibility data, clock cycles, storage input/output, or similar data sources. A query of the cloud provider API501may be automated or manually initiated. Based on the said query, the network accessibility information511related to the workload may change based on the data sources associated with said workload. In the above processor and similar embodiments, further configuration may include a process to determine network accessibility information511based on the examined data sources. The examined data sources may include various cloud-based workloads, internet protocols, transmission control protocols, or other methods and systems of memory and data storage. As an exemplary embodiment, network accessibility information511includes at least one of: data from an external data source, cloud provider information, or at least one network capture log. These embodiments of an external data source may include data from the operating environment of the cloud-based environment, an external operating system for a computer processing unit500, or other similar computer readable media. Cloud provider information may further include information that may identify the network accessibility information511vertically or horizontally to fully describe the associated workload. Network capture logs may be automated or manually updated to include possible vulnerabilities513and threats to the cloud-based storage medium. Further, the disclosed embodiments may include an installed software application505, with the at least one processor500configured to extract data from at least one of operating system packages, libraries, or program language libraries. This data may be extracted through a system query, random access algorithm, or similar automated process. Operating system packages may include systems operable on Microsoft, Apple, Linux, and similar operating systems. Libraries may consist of a series of files, folders, and databases of information stored on one of any indexed data repositories. A program language library may contain several exemplary programming languages including but not limited to Javascript, Swift, Scala, Go, Python, Elm, Ruby, C#, C++ and other similar sources of software code.
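As a hedged example of extracting data from program language libraries, the Python sketch below walks a mounted snapshot and collects the dependencies declared in requirements.txt and package.json manifests. The manifest names and the mount point are assumptions chosen for illustration; operating system packages and other library formats could be handled analogously.

    import json
    import os

    def extract_language_dependencies(mount_point="/mnt/scan_target"):
        """Collect dependencies declared in common language-library manifests."""
        dependencies = {}
        for root, _dirs, files in os.walk(mount_point):
            # Python requirements files.
            if "requirements.txt" in files:
                with open(os.path.join(root, "requirements.txt"), errors="ignore") as fh:
                    for line in fh:
                        line = line.strip()
                        if line and not line.startswith("#"):
                            dependencies.setdefault("python", []).append(line)
            # Node.js package manifests.
            if "package.json" in files:
                try:
                    with open(os.path.join(root, "package.json"), errors="ignore") as fh:
                        manifest = json.load(fh)
                    for name, version in manifest.get("dependencies", {}).items():
                        dependencies.setdefault("node", []).append(f"{name} {version}")
                except (json.JSONDecodeError, OSError):
                    continue
        return dependencies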
As another exemplary embodiment, a scanning system101may also include a processor500configured to identify installed software application505based on the extracted data. The processor500may perform only this function or this function among many to accomplish the layered cybersecurity defense described herein in this disclosed cloud-based security environment. The identification of installed software by said processor may include identifying the software by unique version number or designator, recognized source code, metadata associated with the installed software application files, or similar software-identifying information. One embodiment may include an additional function wherein the installed software application505that has been identified includes one or more scripts. These scripts may be processed through various computer readable languages to include Javascript, C#, C++ and other forms of computer code. One embodiment contemplated by the disclosed scanning system101may also include a data structure that includes aggregated vulnerability data509. This aggregated vulnerability data509may be compiled by an end user or maintainer from within the cloud-based environment of the current data structure as well as compilation from similar data sets and aggregation of common threats to data structures likely to experience similar vulnerabilities. This aggregation of vulnerability data509may be contained within the data structure and it may be collectively aggregated to provide for a more robust and layered cybersecurity defense posture. In an embodiment of the disclosed scanning system101, the aggregated vulnerability data509may include data from one or more third-party vendors. These vendors may include operators of the cloud-based server environment, providers of networking and internet communication, methods of layered authentication, and other similar providers of services directly related and in communication with the cloud-based cybersecurity system. As an additional exemplary embodiment, the aggregated vulnerability data509may include data collected by a scanner. This scanner may involve use of continuous or periodic monitoring of the workload. The scanner may perform security screenings of the various workloads vertically or horizontally to identify network identification information, port accessibility, and associated vulnerabilities. Any scan performed may be communicated to the scanning system101that may be responsible for performing and logging the results of the scan and may be able to initiate follow-on processes and protocols to protect the data contained in the workload that is the subject of scanning. An embodiment of the disclosed scanning system101may also include aggregated vulnerability data509that may include at least one of an advisory, an exploit, a security announcement, or a known bug. An advisory may include notification to a system maintainer or user of the potential vulnerability, may log notice of the advisory, and may recommend possible user or maintainer actions to potentially address said advisory. An exploit may further consist of an automated system response designed to take advantage of the sensed vulnerability data. The exploit can be further reflected in the aggregated vulnerability data and protocols can be written into the cybersecurity infrastructure to prevent said exploit from gaining access and permissions to unauthorized areas of the workload storage environment.
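By way of a non-limiting illustration, the sketch below shows one possible shape for aggregated vulnerability data: records from several feeds (for example, a third-party vendor advisory feed and data collected by a scanner) merged into a single structure keyed by software name and version. The feed format and the sample records are hypothetical.

    def aggregate_vulnerability_data(*feeds):
        """Merge vulnerability records from several sources into one keyed structure.

        Each feed is assumed to be an iterable of dicts such as
        {"software": "openssl", "version": "1.0.1f",
         "id": "CVE-2014-0160", "kind": "advisory"}.
        """
        aggregated = {}
        for feed in feeds:
            for record in feed:
                key = (record["software"].lower(), record["version"])
                aggregated.setdefault(key, []).append(
                    {"id": record["id"], "kind": record.get("kind", "unknown")})
        return aggregated

    # Illustrative feeds: one from a third-party vendor, one collected by a scanner.
    vendor_feed = [{"software": "openssl", "version": "1.0.1f",
                    "id": "CVE-2014-0160", "kind": "advisory"}]
    scanner_feed = [{"software": "apache2", "version": "2.4.49",
                     "id": "CVE-2021-41773", "kind": "known bug"}]
    data_structure = aggregate_vulnerability_data(vendor_feed, scanner_feed)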
A person having ordinary skill in the art would appreciate that the above described embodiments are among many potential embodiments, to include a method of operating a scanning system101similar to the one described above. A disclosed embodiment contemplates this method to be accomplished through manual user operation, automated computer processes, or similar manners of operation. These manners of operation and those contemplated similar to them would allow the scanning system101described and disclosed to execute its operations as the system above describes. One of several embodiments of the disclosed method of operating a scanning system101may include a cyber security scanning system for a cloud environment101that may include a processor500to operate said system. This processor may include central processing units and other similar computer-enabling equipment for processing and executing commands based on the information inputted to said system. The processor may be communicatively connected to a computer network or series of networks to accomplish said cyber security function. As an example embodiment, a processor unit500may be configured to use a cloud-provider API501that may communicate with one or more specified computer-readable media across a digital network. This can be accomplished through internet protocols, internet control message protocols, transmission control protocol, or user datagram protocol. Cloud provider API501may be one of several forms of middleware, interface, middle layer, or other systems of interfacing applications. A processor may be one or more computer processing units, central processing unit, server, microcomputer, mainframe, and any other manifestation of digital computing. Further to one of several possible embodiments, a cloud-provider API501may be configured to access a block-storage volume503of a workload maintained in a cloud-storage environment. This may be accomplished through a system of computer-readable media communicatively connected. Said block-storage volume503may be contained on a Storage Area Network (SAN) or similar cloud-based memory storage environment. The block storage volume503may be contained in smaller storage volumes with an associated identifier unique to that portion of said block storage volume503. In some embodiments, the block-storage volume503of a workload may have multiple paths for the storage volume to be reaggregated and retrieved quickly. Among several embodiments, a method of operating a scanning system101may comprise a system for identifying an installed software application505in the accessed block-storage volume503. This identification of installed software505may be accomplished by accessing installed software files through signature verification, root license, or authorized user lists. The installed software505may be located and identified within applications such as file storage, database storage, and virtual machine file system volumes. The identification of said installed software application505may be processed, analyzed, and communicated to the method of operating a scanning system101for processing, cataloging, and protection through encryption and various methods of layered cyber defense. Further, a method of operating a scanning system101described herein may include functionality to analyze installed software applications505to determine the associated software version507.
The software application version507may identify the software version based on unique version name, unique version number, and may be based on unique states of the currently installed computer software. One of many embodiments disclosed above may include the method of operating a scanning system101having the ability to access a data structure of known software vulnerabilities509for a plurality of versions of software applications507. The known software vulnerabilities509may include, among others, missing data encryption, OS command injection, SQL injection, buffer overflow, missing authentication, missing authorization, unrestricted upload of dangerous file types, reliance on untrusted inputs in a security decision, cross-site scripting and forgery, download of codes without integrity checks, broken algorithms, URL redirection, path traversal, software bugs, weak passwords, and previously infected software. The cyber security system may be able to access and identify software vulnerabilities509for mitigation, rectification, correction, and fortification. In one embodiment, a method of operating a scanning system101may also perform scanning according to scanning system101by performing a lookup of the identified installed software version507in the data structure to identify known vulnerabilities509. This function can be performed by the scanning system101according toFIG.1performing a query of the installed software505for unique version number507or designator and comparing to, amongst many things, a set of likely or potential vulnerabilities to that software version for potential deficiencies or cybersecurity threats known or suspected to similar software types and versions. This query may be performed according to a predetermined set of values, to include previously identified unique version numbers or designators that may contain the known list of previously identified vulnerabilities. Among many embodiments, embodiments of the disclosed method of operating a scanning system101may query the cloud provider API501to determine network accessibility information511related to the workload. In order to accomplish this query of the cloud provider API501, the scanning system101may involve an index of search results and display of said search results, followed by processing and grouping of search results. Network accessibility information511may include connection quality, alternative paths between nodes in a network, and the ability to avoid blockage in said networks. The workloads associated with this query may include applications, services, capabilities, and specific processes such as virtual machines, databases, containers, or Hadoop nodes, among others. If the system detects a vulnerable application, one embodiment may identify one or more ports on which said vulnerable application is accessible. In one of several embodiments, the scanning system101may detect a vulnerable application in one or more computation processes. In another embodiment, the scanning system101may perform a network accessibility query in a separate process. Further, a disclosed embodiment may perform these separate functions in subsequent and sequential steps of the same process. A person having ordinary skill in the art would understand that an authorized user or an authorized cybersecurity system may perform these functions concurrently or subsequently while performing the same function as the disclosed embodiment.
Upon using the gathered network accessibility information511and the identified port515to identify one or more vulnerabilities susceptible to attack from outside the workload, a disclosed embodiment would have the functionality to perform processes to gather, display, and mitigate a discovered vulnerability in order to minimize the likelihood and effectiveness of a cyber threat outside of and attempting to access a workload through a known or previously encountered type of cyber threat. This functionality may include collecting and organizing the vulnerabilities according to type or category of vulnerability, displaying the gathered data for an end user or maintainer, and implementing security features automatically or manually by a user or maintainer such as security patches, password or passcode changes and suggestions for users to do the same, and malicious code eradication. A person having ordinary skill in the art would similarly understand that the above disclosed system can be implemented using a suitable non-transitory computer-readable medium performing each of the disclosed functions. The disclosed embodiment may include a non-transitory computer readable medium with a scanning system101similar to the one described above on a non-transitory computer-readable medium. A disclosed embodiment contemplates this system to be accomplished through a medium that may contain a central processing unit, virtual machine, or a similar non-transitory medium. These manners of operation and those contemplated similar to them would allow the scanning system101described and disclosed to execute its operations as the system above describes. One of several embodiments of the disclosed non-transitory computer readable medium with a scanning system101may include a cyber security scanning system for a cloud environment101that may include a processor500to operate said system. This processor500may include central processing units and other similar computer-enabling equipment for processing and executing commands based on the information inputted to said system. The processor500may be communicatively connected to a computer network or series of networks to accomplish said cyber security function. As an example embodiment, a processor unit500may be configured to use a cloud-provider API501that may communicate with one or more specified computer-readable media across a digital network. This can be accomplished through internet protocols, internet control message protocols, transmission control protocol, or user datagram protocol. Cloud provider API501may be one of several forms of middleware, interface, middle layer, or other systems of interfacing applications. A processor500may be one or more computer processing units, central processing unit, server, microcomputer, mainframe, and any other manifestation of digital computing. Further to one of several possible embodiments, a cloud-provider API501may be configured to access a block-storage volume503of a workload maintained in a cloud-storage environment. This may be accomplished through a system of computer-readable media communicatively connected. Said block-storage volume503may be contained on a Storage Area Network (SAN) or similar cloud-based memory storage environment. The block storage volume503may be contained in smaller storage volumes with an associated identifier unique to that portion of said block storage volume503.
In some embodiments, the block-storage volume503of a workload may have multiple paths for the storage volume to be reaggregated and retrieved quickly. Among several embodiments, a non-transitory computer readable medium with a scanning system101may comprise a system for identifying an installed software application505in the accessed block-storage volume503. This identification of installed software505may be accomplished by accessing installed software files through signature verification, root license, or authorized user lists. The installed software505may be located and identified within applications such as file storage, database storage, and virtual machine file system volumes. The identification of said installed software application505may be processed, analyzed, and communicated to the method of operating a scanning system101for processing, cataloging, and protection through encryption and various methods of layered cyber defense. Further, a non-transitory computer readable medium with a scanning system101described herein may include functionality to analyze installed software applications505to determine the associated software version507. The software application version507may identify the software version based on unique version name, unique version number, and may be based on unique states of the currently installed computer software. One of many embodiments disclosed above may include the non-transitory computer readable medium with a scanning system101having the ability to access a data structure of known software vulnerabilities509for a plurality of versions of software applications507. The known software vulnerabilities509may include, among others, missing data encryption, OS command injection, SQL injection, buffer overflow, missing authentication, missing authorization, unrestricted upload of dangerous file types, reliance on untrusted inputs in a security decision, cross-site scripting and forgery, download of codes without integrity checks, broken algorithms, URL redirection, path traversal, software bugs, weak passwords, and previously infected software. The scanning system101may be able to access and identify software vulnerabilities for mitigation, rectification, correction, and fortification. In one embodiment, a non-transitory computer readable medium with a scanning system101may also perform scanning according to scanning system101by performing a lookup of the identified installed software version in the data structure to identify known vulnerabilities509. This function can be performed by the scanning system101according toFIG.1performing a query of the installed software for unique version number507or designator and comparing to, amongst many things, a set of likely or potential vulnerabilities509to that software version507for potential deficiencies or cybersecurity threats known or suspected to similar software types and versions. This query may be performed according to a predetermined set of values, to include previously identified unique version numbers or designators that may contain the known list of previously identified vulnerabilities509. Among many embodiments, embodiments of the disclosed non-transitory computer readable medium with a scanning system101may query the cloud provider API501to determine network accessibility information511related to the workload.
In order to accomplish this query of the cloud provider API501, the scanning system101may involve an index of search results and display of said search results, followed by processing and grouping of search results. Network accessibility information511may include connection quality, alternative paths between nodes in a network, and the ability to avoid blockage in said networks. The workloads associated with this query may include applications, services, capabilities, and specific processes such as virtual machines, databases, containers, or Hadoop nodes, among others. If the system detects a vulnerable application, one embodiment may identify one or more ports515on which said vulnerable application is accessible. In one of several embodiments, the scanning system101may detect a vulnerable application in one or more computation processes. In another embodiment, the scanning system101may perform a network accessibility query in a separate process. Further, a disclosed embodiment may perform these separate functions in subsequent and sequential steps of the same process. A person having ordinary skill in the art would understand that an authorized user or an authorized cybersecurity system can perform these functions concurrently or subsequently while performing the same function as the disclosed embodiment. Upon using the gathered network accessibility information511and the identified port515to identify one or more vulnerabilities susceptible to attack from outside the workload, a disclosed embodiment would have the functionality to perform processes to gather, display, and mitigate a discovered vulnerability in order to minimize the likelihood and effectiveness of a cyber threat outside of and attempting to access a workload through a known or previously encountered type of cyber threat. This functionality may include collecting and organizing the vulnerabilities according to type or category of vulnerability, displaying the gathered data for an end user or maintainer, and implementing security features automatically or manually by a user or maintainer such as security patches, password or passcode changes and suggestions for users to do the same, and malicious code eradication. Another disclosed embodiment may include a method of using a cloud provider API501, accessing block storage volume503of a workload maintained in a cloud storage environment. The method may include manual user operation, automated system operation, or systematic and random operating parameters for the system to provide its cybersecurity and similar security functions. The accessed block storage volume may be contained as a collection of block units organized together, may be a series of individual blocks that can be reorganized to form a new storage volume, and may have the ability to be disaggregated and reaggregated as necessary to accomplish its storage and cybersecurity functions. The disclosed method may further comprise a system that may analyze the identified installed software application505to determine an associated software version507. In analyzing an installed software application505, the disclosed embodiment may include a method of querying the installed software application to access a software version507that may include a series of letters, numbers, and other identifying characters to differentiate the version of software currently operative and its associated identifying characteristics and capabilities.
A method of the disclosed embodiment may also include accessing a data structure of known software vulnerabilities509for a plurality of versions of software applications507. The disclosed method may determine a data structure of known software vulnerabilities509of one among many operative versions of software installed in the monitored storage system as compared to historical data of similar storage systems. The disclosed method as described herein may further include performing a lookup of the identified installed software version507in the data structure to identify known vulnerabilities509. The lookup of identified installed software507may be previously indexed based on unique version identifier, and the indexed information may include known vulnerability information509, network accessibility information511, network protocols, and administrator identifier information. Based on the said indexed known vulnerabilities509, the scanning system101disclosed may take actions to mitigate a cybersecurity threat and reinforce cybersecurity defenses against the type of cybersecurity threat, as previously discussed. Further to the above, a disclosed method may also include querying the cloud provider API501to determine network accessibility information511related to the workload. A query of the cloud provider API501to determine network accessibility information511related to the workload may be automatically or manually initiated. Network accessibility information511similar to the above embodiment may include connection quality, alternative paths between nodes in a network, and the ability to avoid blockage in said networks. The workloads associated with this query may include applications, services, capabilities, and specific processes such as virtual machines, databases, containers, and Hadoop nodes, among others. A disclosed method of one embodiment also may provide a method of identifying at least one port515on which the vulnerable application is accessible. The identification of one port515on which the vulnerable application is accessible may be for the purpose of identifying a cyber weakness, and it may include the ability to change the port currently accessible to mitigate the said cyber weakness. Yet another disclosed embodiment of a disclosed method may include using the network accessibility information511and the identified at least one port515to identify one or more vulnerabilities513susceptible to attack from outside the workload. The vulnerability513identified as susceptible to attack from outside the workload may include, among others, missing data encryption, OS command injection, SQL injection, buffer overflow, missing authentication, missing authorization, unrestricted upload of dangerous file types, reliance on untrusted inputs in a security decision, cross-site scripting and forgery, download of codes without integrity checks, broken algorithms, URL redirection, path traversal, software bugs, weak passwords, and previously infected software. A method of the disclosed embodiment may further include implementing a remedial action in response to the identified one or more vulnerabilities513. Said remedial action may include, among other things, notification to an end user of an identified threat, compensation through a cybersecurity patch, publication of the identified threat and vulnerability in a log or record of detected vulnerabilities, and communication of the sensed vulnerability and threat to a server operator and maintainer to fortify the protections of workloads existing on similar environments.
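As one hedged illustration of such a remedial action, the Python sketch below transmits a written alert describing an identified vulnerability, using the boto3 library and an Amazon SNS topic as the delivery mechanism. The topic ARN, identifiers, and recommended action text are placeholders; e-mail, visual, or auditory alerts could be substituted.

    import json
    import boto3

    sns = boto3.client("sns")  # the topic ARN below is a placeholder

    def transmit_alert(vulnerability_id, workload_id,
                       topic_arn="arn:aws:sns:us-east-1:123456789012:security-alerts"):
        """Send a written alert describing an identified vulnerability to an administrator."""
        message = {
            "workload": workload_id,
            "vulnerability": vulnerability_id,
            "recommended_action": "apply the vendor security patch and rotate credentials",
        }
        sns.publish(TopicArn=topic_arn,
                    Subject=f"Vulnerability {vulnerability_id} detected",
                    Message=json.dumps(message))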
The disclosed method may also include wherein the remedial measure includes transmitting an alert to a device associated with an administrator. Said alert may be, amongst others, written, auditory, and visual for processing and use by an administrator of said cybersecurity system. Said alert may further be logged and catalogued for future identification of known threats and vulnerabilities to similar software application versions. A disclosed method of the present embodiment may also provide querying the cloud provider API501to determine the network accessibility information511related to the workload and examining data sources associated with the workload. A query of the cloud provider API501to determine network accessibility information511related to the workload may be automatically or manually initiated. Network accessibility information511similar to the above embodiment may include connection quality, alternative paths between nodes in a network, and the ability to avoid blockage in said networks. The workloads associated with this query may include applications, services, capabilities, and specific processes such as virtual machines, databases, containers, and Hadoop nodes, among others. The method of the above disclosed embodiment may further determine network accessibility information511by determining the network accessibility information511based on the examined data sources. The data examined may include user data, system processing data, accessibility data, clock cycles, storage input/output, and similar processes. A query of the cloud provider API501may be automated or manually initiated. Based on the said query, the network accessibility information511related to the workload may change based on the data sources associated with said workload. In the method of one disclosed embodiment, network accessibility information511may include at least one of data from an external data source, cloud provider information, or at least one network capture log. These embodiments may include data from the operating environment of the cloud-based environment, an external operating system for a computer processing unit, or other similar computer readable media. Network capture logs may be automated or manually updated to include possible vulnerabilities and threats to the cloud-based storage medium. A method of the disclosed embodiment may also include identifying the installed software application505by extracting data from at least one of OS packages, libraries, or program language libraries. This data may be extracted through a system query, random access algorithm, or similar automated process. Operating system packages may include systems operable on Microsoft, Apple, Linux, and similar operating systems. Libraries may consist of a series of files, folders, and databases of information stored on one of any indexed data repositories. A program language library may contain several exemplary programming languages including but not limited to Javascript, Swift, Scala, Go, Python, Elm, Ruby, C#, C++ and other similar sources of software code. Another method of the disclosed embodiment may identify the installed software application505based on the extracted data. The method of the processor500may perform only this function or this function among many to accomplish the layered cybersecurity defense described herein in this disclosed cloud-based security environment.
A further disclosed method of the present embodiment may include at least one processor500further configured to identify a version507of the installed software application505. The method of the processor may perform only this function or this function among many to accomplish the layered cybersecurity defense described herein in this disclosed cloud-based security environment.
Prioritization Techniques
FIG.6is a block diagram of method600for risk prioritization, consistent with disclosed embodiments. Aspects of this disclosure may include a cloud-based cybersecurity system for assessing internet exposure of a cloud-based workload with embodiments configured to access at least one cloud provider API to determine a plurality of entities capable of routing and/or filtering traffic in a virtual cloud environment associated with a target account containing the workload. Routing traffic, as used herein, may refer to the process of selecting a path for traffic (e.g., cells, blocks, frames, packets, calls, messages, or other units of data) in a network or between or across multiple networks. The cloud-based cybersecurity system, in some embodiments, may comprise at least one processor configured to assess internet exposure as described herein. A cloud provider API, as used herein, may refer to any application program interface that allows the end user to interact with a cloud provider's service. Workloads, as used herein, may refer to devices in a network. By way of example, workloads may include systems, devices, or resources in a cloud infrastructure. By way of example,FIG.1illustrates examples of a workload, such as virtual machines107A-107D, databases109A-109D, storage111A-111D, keystores113A-113D, or load balancer115. In some embodiments, scanning system101ofFIG.1may perform a scanning operation on any workload (e.g., systems, devices, resources, etc.) in cloud infrastructure106. In some embodiments, scanning system101ofFIG.1may use an API to detect workloads such as virtual machines107A-107D, databases109A-109D, storage111A-111D, keystores113A-113D, and load balancer115in the source account. In some embodiments, workloads may exist as virtual machines, while in other embodiments, workloads may exist as discrete, physical devices. For example, in some embodiments, a layer3routing system could be implemented as a physical device (e.g., a router or switch) or a virtual device (e.g., a routing system instantiated as a virtual device on a computer). Scanning system101may query devices and systems capable of routing and filtering traffic (e.g., load balancer115, routers, switches, firewalls, security groups, API Gateways, and proxies) using an API provided through a cloud service provider's system to determine network configurations, and may evaluate them against known problematic configurations or other configurations. These devices can be cloud native and/or unmanaged (e.g., an NGINX proxy). A cloud environment, as used herein, may refer to a platform implemented on, hosted on, and/or accessing servers that are accessed over the Internet. A scanner account, as used herein, may refer to any type of account associated with a scanner. A scanner, as used herein, may refer to a device for examining, reading, or monitoring something. By way of example,FIG.1illustrates one example of a scanner, scanning system101ofFIG.1. A target account containing the workload, as used herein, may refer to a location on a network server selected as an aim of connection, for example.
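As a non-limiting sketch of accessing at least one cloud provider API to determine entities capable of routing or filtering traffic, the Python example below (assuming the boto3 library and an AWS target account) enumerates load balancers, security groups, route tables, and internet gateways. The region is a placeholder, and the entity types listed are examples rather than an exhaustive inventory.

    import boto3

    def discover_routing_entities(region="us-east-1"):
        """Enumerate entities that can route or filter traffic in the target account."""
        ec2 = boto3.client("ec2", region_name=region)
        elbv2 = boto3.client("elbv2", region_name=region)

        entities = []
        for lb in elbv2.describe_load_balancers()["LoadBalancers"]:
            entities.append({"type": "load_balancer", "id": lb["LoadBalancerArn"]})
        for sg in ec2.describe_security_groups()["SecurityGroups"]:
            entities.append({"type": "security_group", "id": sg["GroupId"]})
        for rt in ec2.describe_route_tables()["RouteTables"]:
            entities.append({"type": "route_table", "id": rt["RouteTableId"]})
        for igw in ec2.describe_internet_gateways()["InternetGateways"]:
            entities.append({"type": "internet_gateway", "id": igw["InternetGatewayId"]})
        return entities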
In some embodiments, scanning system101ofFIG.1may determine entities capable of routing traffic. In some embodiments, the plurality of entities includes a virtual network appliance. A virtual network appliance, as used herein, may refer to a machine focused on virtualizing network functionality. In some embodiments, a typical network virtual appliance may include functionality directed to layers four through seven. That is, such appliances may implement functionality associated with OSI model layers4-7(Transport, Session, Presentation, and Application). Such functionality may include, for example, a firewall, WAN optimizer, application delivery controller, router, load balancer, IDS/IPS, proxy, or SD-WAN edge device. In some embodiments, scanning system101ofFIG.1may determine a virtual network appliance capable of routing traffic. In some embodiments, the virtual network appliance is at least one of a load balancer, a firewall, a proxy, or a router. A load balancer, as used herein, may refer to any device that acts as a reverse proxy and distributes network or application traffic across a number of servers. In some embodiments, load balancer115may comprise one or more systems that balance incoming requests between the different systems and devices of cloud infrastructure106. For example, load balancer115may be configured to determine usage (e.g., processor load, used storage capacity) of systems or devices in cloud infrastructure106to assist in determining where to route an incoming request from network105to store data, perform processing, or retrieve data. Load balancer115may be configured to receive an incoming request from user device102. Upon receipt of the request, load balancer115may consult a data store (part of or separate from load balancer115; not pictured) to determine usage or forecasted usage of various systems or devices in cloud infrastructure106, and may forward the request to the systems or devices having the lowest usage or forecasted usage. A firewall, as used herein, may refer to a device or system that monitors incoming and outgoing network traffic and permits or blocks data packets based on a set of security rules. A proxy, as used herein, may refer to a device or system that translates traffic between networks or protocols. In some embodiments, a proxy may be implemented as an intermediary device or system separating end-user devices from destination devices (e.g., web servers) by routing traffic through the proxy. Proxies (or proxy servers) may be configured to provide varying levels of functionality, security, and privacy depending on use case, needs, or company policy. A router, as used herein, may refer to a device or system that connects a local network to the internet. Aspects of this disclosure may include embodiments in which a cloud-based cybersecurity system is configured to query the at least one cloud provider API to determine at least one networking configuration of the entities. Querying, as used herein, may refer to a request for data or information from a resource. A cloud provider API, as used herein, may refer to any application program interface that allows an end user to interact with a cloud provider's service. A cloud provider may include any entity through whom information is accessible over the Internet or other shared network. At least one networking configuration of the entities, as used herein, may refer to a manner in which components are arranged to make up an operating system, networking system, or a computer system.
In some embodiments, scanning system101ofFIG.1may send a request to determine at least one networking configuration of the entities. In some embodiments, the networking configuration is at least one of a routing configuration, a proxy configuration, a load balancing configuration, a firewall configuration, or a VPN configuration. A routing configuration, as used herein, may refer to a manner in which components are arranged to make up a routing system. For example, a routing configuration may provide for dynamic or static "routes" between devices on a network or on networks. A proxy configuration, as used herein, may refer to a manner in which components are arranged to make up a proxy system. For example, a proxy configuration may provide for a configuration of how a client can route traffic through a central point (e.g., a proxy server), in order to relay multiple clients' requests from a single point on a network. A load balancing configuration, as used herein, may refer to a manner in which components are arranged to make up a load balancing system. For example, a load balancing configuration may define how resources such as uplinks or distributed applications may be utilized by devices on the same (or another) network, in order to balance usage of multiple redundant uplinks or distributed application nodes. A firewall configuration, as used herein, may refer to a manner in which components are arranged to make up a firewall system. For example, a firewall configuration may comprise rules that determine what traffic may be forwarded, which devices are permitted to communicate with one another, or what protocols may be used for communication between devices. A VPN (virtual private network) configuration, as used herein, may refer to a manner in which components are arranged to make up a VPN system. A VPN may refer to a type of network that extends a private network across a public network and enables users to send and receive data across shared or public networks as if their computing devices were directly connected to the private network. For example, a VPN configuration may provide rules or other information by which data is sent or received over a single, encrypted route, such that intermediate points cannot determine the content, source, or destination of the data. In some embodiments, scanning system101ofFIG.1may send a request to determine at least a routing configuration, a proxy configuration, a load balancing configuration, a firewall configuration, or a VPN configuration of the entities. Aspects of this disclosure may include embodiments in which a cloud-based cybersecurity system is configured to build a graph connecting the plurality of entities based on the networking configuration. A graph connecting the plurality of entities, as used herein, may refer to a diagram showing the relation between variable quantities (e.g., entities). In some embodiments, the graph may include any data structure that includes the connections from one entity to another. In some embodiments, scanning system101ofFIG.1may construct a graph or diagram linking the plurality of entities based on the networking configuration. In some embodiments, the graph includes a data structure sequentially connecting entities. A data structure sequentially connecting entities, as used herein, may refer to an organization, management, and/or storage format that enables efficient access and modification of data such as entities.
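The following Python sketch illustrates one possible way to build such a graph: a directed adjacency structure whose edges record the ports over which traffic may flow between entities. The entity names and port numbers are hypothetical and serve only to show the data structure.

    from collections import defaultdict

    def build_graph(edges):
        """Build a directed graph from (source, destination, ports) tuples.

        Each edge records the ports over which traffic may flow, giving the
        directional vectors described in this disclosure.
        """
        graph = defaultdict(list)
        for source, destination, ports in edges:
            graph[source].append({"to": destination, "ports": ports})
        return graph

    # Illustrative networking configuration: internet -> load balancer -> workload.
    graph = build_graph([
        ("internet", "load_balancer_1", [443]),
        ("load_balancer_1", "workload_a", [8443]),
        ("bastion_host", "workload_a", [22]),
    ])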
In some embodiments, a graph may include directional vectors indicating directions of dataflow. Directional vectors, as used herein, may refer to a quantity having direction as well as magnitude (indicating, for example, what kinds of traffic are allowed to flow, what ports are open, what protocols are allowed), especially as determining the position of one point in space relative to another. Directions of dataflow, as used herein, may refer to the movement of data through a system comprised of software, hardware or a combination of both along a particular course. In some embodiments, scanning system101ofFIG.1may construct a graph including a data structure connecting entities sequentially or a graph including directional vectors showing the direction of data flow. In some embodiments, to build the graph, the at least one processor may be configured to identify individual entities as nodes and connect the nodes. Nodes, as used herein, may refer to a device in a network (e.g., devices in cloud infrastructure100). Connecting nodes, as used herein, may refer to linking or bonding devices or data points in a larger network. In some embodiments, the graph includes port numbers associated with the workload. Port numbers, as used herein, may refer to a way to identify a specific process to which an internet or other network message is to be forwarded when it arrives at a server and/or which services are running on a server. In some embodiments, the graph includes a path from the at least one source to the workload. A path from the at least one source to the workload, as used herein, may refer to a string of characters used to uniquely identify a location in a directory structure. Aspects of this disclosure may include embodiments in which a cloud-based cybersecurity system is configured to access a data structure identifying services publicly accessible via the Internet and capable of serving as an internet proxy. Services capable of serving as an internet proxy, as used herein, may refer to a system or router that provides a gateway between users and the internet (e.g., network105). In some embodiments, scanning system101ofFIG.1may access a configuration detecting services over the internet and capable of serving as an internet proxy. Aspects of this disclosure may include embodiments configured to integrate the identified services into the graph. Integrating identified services into a graph, as used herein, may refer to the act of bringing together smaller components (the identified services) into a single system that functions as one (the graph). For example, scanning system101may integrate information on services available in each node into the data structure, associated with each node in the data structure. Aspects of this disclosure may include embodiments in which a cloud-based cybersecurity system is configured to traverse the graph to identify at least one source originating via the Internet and reaching the workload. Traversing the graph, as used herein, may refer to a device, such as scanning system101, accessing the systems represented by the nodes in the graph, to determine which systems are accessible and how. For example, scanning system101may analyze the graph to detect sources coming from the internet and reaching the workload, by accessing a first system, and traversing between systems in a depth-first, breadth-first, or other manner, until a path to an external network (e.g., the Internet) is reached. 
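By way of illustration only, the sketch below builds a small directed graph of the kind described above, with edges annotated by port numbers, and traverses it breadth-first to find paths that originate at the Internet and reach the workload. The node names and edge data are invented for the example.

```python
# Minimal sketch: a directed graph of routing entities and a breadth-first
# traversal from the Internet node to the workload.
from collections import deque

def build_graph(edges):
    """edges: iterable of (source, destination, port). Returns an adjacency map."""
    graph = {}
    for src, dst, port in edges:
        graph.setdefault(src, []).append((dst, port))
        graph.setdefault(dst, [])
    return graph

def paths_from_internet(graph, workload, internet_node="internet"):
    """Return every cycle-free path from the Internet node to the workload."""
    found, queue = [], deque([[internet_node]])
    while queue:
        path = queue.popleft()
        node = path[-1]
        if node == workload:
            found.append(path)
            continue
        for nxt, _port in graph.get(node, []):
            if nxt not in path:  # avoid revisiting nodes on the same path
                queue.append(path + [nxt])
    return found

# Toy example: internet -> load balancer (443) -> proxy (8080) -> workload (22)
example = build_graph([
    ("internet", "lb-1", 443),
    ("lb-1", "proxy-1", 8080),
    ("proxy-1", "vm-workload", 22),
])
print(paths_from_internet(example, "vm-workload"))
# [['internet', 'lb-1', 'proxy-1', 'vm-workload']]
```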
In some embodiments, the at least one source may be a potentially malicious source. In some embodiments, the source may include code in any part of a software system or script that is intended to cause undesired effects, security breaches or damage to a system. (In some embodiments, the Internet at large may be considered to be a potentially malicious source.) Aspects of this disclosure may include embodiments configured to output a risk notification associated with the workload. Outputting a risk notification, as used herein, may refer to sending out data such as a message indicating a possibility of danger. In some embodiments, scanning system101ofFIG.1may produce a notification indicating a risk that is related to the workload. Examples of such risk notifications may include, for example, an electronic mail message, a visual alert, an audio alert, or an audio-visual alert. In some embodiments, the risk notification includes one or more resolution recommendations. Resolution recommendations, as used herein, may refer to a suggestion or proposal as to the best course of action in view of the identified risk. FIG.6illustrates a block diagram of method600performed by a processor (e.g., a processor associated with scanning system101), consistent with disclosed embodiments. In some embodiments, the processor may operate on instructions stored in a non-transitory computer readable medium. In some embodiments, the method may include seven (or more or fewer) steps: Block602: Access at least one cloud provider API to determine a plurality of entities capable of routing traffic in a virtual cloud environment associated with a target account containing the workload. In some embodiments, scanning system101ofFIG.1may determine entities capable of routing traffic, consistent with the above-disclosed embodiments. Block604: Query the at least one cloud provider API to determine at least one networking configuration of the entities. In some embodiments, scanning system101ofFIG.1may send a request to determine at least one networking configuration of the entities, consistent with the above-disclosed embodiments. Block606: Build a graph connecting the plurality of entities based on the networking configuration. In some embodiments, scanning system101ofFIG.1may construct a graph or diagram linking the plurality of entities based on the networking configuration, consistent with the above-disclosed embodiments. Block608: Access a data structure identifying services publicly accessible via the Internet and capable of serving as an internet proxy. In some embodiments, scanning system101ofFIG.1may access a configuration detecting services over the internet and capable of serving as an internet proxy, consistent with the above-disclosed embodiments. Block610: Integrate the identified services into the graph. In some embodiments, scanning system101ofFIG.1may incorporate the services into the graph, consistent with the above-disclosed embodiments. Block612: Traverse the graph to identify at least one source originating via the Internet and reaching the workload. In some embodiments, scanning system101ofFIG.1may analyze the graph to detect sources coming from the internet and reaching the workload, consistent with the above-disclosed embodiments. Block614: Output a risk notification associated with the workload. In some embodiments, scanning system101ofFIG.1may produce a notification indicating a risk that is related to the workload, consistent with the above-disclosed embodiments. 
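As a further illustration only, blocks602-614may be strung together in the order just described. In the sketch below, the cloud_api, service_catalog, and notifier objects, their method names, and the helper functions are hypothetical placeholders for the components discussed above; the traversal helper corresponds to the earlier graph sketch.

```python
# Illustrative sketch only: method 600 expressed as a single driver function.
# All collaborator objects and helper functions are hypothetical placeholders.
def analyze_workload_exposure(cloud_api, service_catalog, notifier,
                              target_account, workload_id):
    entities = cloud_api.list_routing_entities(target_account)          # block 602
    configs = {}
    for entity_id in entities:
        configs[entity_id] = cloud_api.get_networking_configuration(entity_id)  # block 604
    graph = build_graph_from_configurations(configs)                    # block 606 (placeholder helper)
    proxies = service_catalog.publicly_accessible_proxies()             # block 608
    graph = integrate_services(graph, proxies)                          # block 610 (placeholder helper)
    paths = paths_from_internet(graph, workload_id)                     # block 612 (see earlier sketch)
    if paths:                                                           # block 614
        notifier.send(f"Workload {workload_id} is reachable from the Internet via {paths}")
    return paths
```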
Techniques for Malware Detection Using Secondary System Aspects of this disclosure may provide a technical solution to the challenging technical problem of malware detection on a primary system using a secondary system other than the primary system. In existing technologies, malware detection system may be run on a primary system that itself may be susceptible to malware. The malware may detect existence of the malware detection system and deceive it to avoid being detected. To resolve such a technical problem, a secondary system may be provided to host the malware detection system, which may isolate the malware detection system from potential exposure or influence from the malware that may reside on the primary system. The disclosed technical solutions may increase success rate of malware detection and thus enhance security level of the primary system. A cyber security system, as used herein, may refer to a system including any combination of software and hardware for enhancing security of a device, a platform, or another system in a network environment. By way of example, the cyber security may be implemented as a system including scanning system101inFIG.1. As another example, scanning system101may include the disclosed cyber security system as a subset. A cloud environment, as used herein, may refer to a computing environment running on a cloud. By way of example, the cloud environment may include or be part of cloud infrastructure106inFIG.1. Consistent with disclosed embodiments, at least one processor may be configured to utilize a cloud provider API to access a block storage volume of a workload maintained on a target account in a target system of a cloud storage environment. Utilizing, as used herein, may refer to an operation of using, deploying, enacting, enabling, activating, allocating, invoking, calling, or any operation of putting a thing into use in a computer environment. An API refers to an application programming interface herein. A cloud provider in this disclosure may refer to a cloud service provider (e.g., a service provider of cloud computing, cloud storage, or any services provided to users on demand over a network). A cloud provider API, as used herein, may refer to an API provided, prepared, enabled, activated, programmed, or written by a cloud provider. A block storage volume in this disclosure may refer to a storage volume in a block storage scheme. In some embodiments, a block storage volume may be connected, disconnected, or reconnected to a system (e.g., a physical computer, a virtual machine, a network interface, a cloud, or any combination of hardware or software modules) without interfering (e.g., shutting down, halting, rebooting, or any manner of interrupting) the operation status of the system or other running tasks on the system. For example, a block storage volume may be implemented as a virtual disk. In some embodiments, a block storage volume can be associated with an account of a user of the system, the system that it is connected to, or both. Block storage, as used herein, may refer to a data storage scheme or model in which data is saved to storage media in fixed-sized raw data chunks (referred to as “blocks”). The raw data blocks may contain no metadata. Each block may be associated with a unique address as metadata assigned to the block. Storage blocks may be controlled by an operating system (OS) and may be accessed by a protocol (e.g., iSCSI, Fibre Channel, or Fibre Channel over Ethernet). 
A storage volume, as used herein, may refer to an identifiable unit (e.g., a physical unit or a logical, virtual unit) of data storage. The storage volume may be mounted to a device via an operating system, configured with a specific file system (e.g., New Technology File System), and assigned a system-unique name or number that identifies the storage volume. The storage volume may represent a named, logical area of storage that enables users and applications to access data on the underlying device. By way of example, a storage volume may be a logical disk that represents a named, logical area of a physical storage device (e.g., a hard disk drive, solid-state drive, compact disc read-only memory, digital video disk, floppy disk, or any other type of storage device). In some embodiments, a logical storage volume may span multiple physical storage devices (e.g., hard disks) and appear as a single, contiguous storage area that works like a physical volume. Storage volumes may be flexibly configured, such as being expanded, contracted, mirrored, striped, or adapted to support multiple disks (e.g., redundant array of independent disks). A workload, as used herein, may refer to a specific application, service, capability, or a specific amount of work that can be run on a cloud resource, system, or infrastructure. By way of example, a workload may be a virtual machine, a database, a container, a Hadoop node, an application, a storage object, a load balancer, or an IAM (Identity and Access Management) configuration. A cloud storage environment in this disclosure may refer to a computing environment of a cloud computing model (referred to as "cloud storage") that stores data on the Internet through a cloud computing provider that manages and operates data storage as a service. A cloud storage environment may be managed and controlled by an operating system. The operating system may be susceptible to malware and security vulnerabilities, and may become a primary system (or referred to as a "target system" herein) that needs malware detection. By way of example, the cloud storage environment may include or be part of cloud infrastructure106inFIG.1. A target system of a cloud storage environment in this disclosure may refer to an operating system or a computing system with an installed operating system, which manages and controls the cloud storage environment or a subsystem of the target system. The target system may maintain one or more accounts for users of the cloud storage environment. A target account of the target system in this disclosure may refer to an account of a user of a cloud infrastructure (e.g., a cloud environment or a cloud storage environment). By way of example, to utilize the cloud provider API to access the block storage volume (e.g., existing in storage111A-111D inFIG.1) of the workload, the at least one processor (e.g., a processor in scanning system101ofFIG.1) may communicate with the cloud storage environment (e.g., cloud infrastructure106inFIG.1) via sending and receiving data (e.g., data packets) over a network (e.g., network105inFIG.1), and enable one or more APIs provided by the cloud provider for accessing the block storage volume. For example, the communication and API enabling may be performed through the process of integration as described in step201ofFIG.2. 
In some embodiments, the at least one processor may create a connection (e.g., by logging in) between a first account on scanning system101and a second account on cloud infrastructure106, then cause the second account to authorize access privileges to the first account via the cloud provider API for accessing storage111A-111D in cloud infrastructure106. Consistent with disclosed embodiments, the at least one processor may be configured to utilize a scanner at a location of the block storage volume and on a secondary system other than the target system. A scanner, as used herein, may refer to an application, a program, a service, a process, a thread, a function, or any executable codes for performing a scanning process on a computer system to obtain information (e.g., related to structures, vulnerabilities, security issues, or any information of the computer system). A location of the block storage volume in this disclosure may refer to a system (e.g., a physical computer, a virtual machine, a network interface, a cloud, a data center, or any combination of hardware or software modules) or an identifier associated with the system that the block storage volume is connected to. In some embodiments, the location of the block storage volume may include at least one of the target account, a secondary system account, a cloud provider account, or a third party account. By way of example, the secondary system account may be an account connected with the target account (e.g., connected by an integration process similar to step201inFIG.2A). The cloud provider account may be an account connected to at least one of the target account or the secondary system account. The third party account may be an account hold by a third party and connected to at least one of the target account, the secondary system account, or the cloud provider account. In some embodiments, the at least one processor may determine the location of the block storage volume based on at least one of the target account, a secondary system account, a cloud provider account, or a third party account. By way of example, to determine the location of the block storage volume, the at least one processor may consult a lookup table, a list, or a database that stores a relationship record between an account (e.g., a target account, a secondary system account, a cloud provider account, or a third party account) and a system (e.g., a physical computer, a virtual machine, a network interface, a cloud, or any combination of hardware or software modules) that the block storage volume is connected to. Based on the relationship record, the at least one processor may identify the system as the location using the account as a key. A secondary system, as used herein, may refer to a computer system running at an environment or conditions not affected by another system (referred to as a “primary system” or “target system” herein). In some embodiments, the secondary system may have an operating system different from an operating system of the target account. By way of example, the target account may have a WINDOWS® operating system, and the secondary system may have a LINUX® operating system. In some embodiments, the secondary system includes at least one of a virtual machine (e.g., any of virtual machines107A-107D inFIG.1), a container, or a serverless function. A container, as used herein, may refer to an OS-level virtualization instance assigned with resources by a non-virtual computer operating system (OS). 
A non-virtual computer operating system (OS) may create multiple isolated user space instances that may function like real computers from the point of view of programs running in them. A computer program running on a non-virtual operating system can access all available resources (e.g., connected devices, files, folders, network shares, CPU power, or any other software or hardware capabilities) of the computer. In contrast, a computer program running inside of a container can only access the resources assigned to the container. A serverless function, as used herein, may refer to a computer function hosted in a cloud environment that may allocate resources on demand to perform a function (e.g., scanning) for a user of the cloud environment. The user of the serverless function may be free from concerns of capacity planning, configuration, management, maintenance, fault tolerance, or scaling of containers, virtual machines, or physical servers. When a serverless function is not running, no computing resources may be allocated to it (e.g., using no provisioned server, thus named "serverless"). By way of example, a serverless function may be implemented as any combination of Lambda functions, event sources, and other computing resources. A serverless function may be invoked by the user of a cloud infrastructure (e.g., a cloud environment or a cloud storage environment). By way of example, to utilize the scanner at the location of the block storage volume (e.g., existing in storage111A-111D inFIG.1), the at least one processor (e.g., a processor in scanning system101ofFIG.1) may communicate with the secondary system (e.g., one or more of virtual machines107A-107D inFIG.1) other than the target system (e.g., the main system operating and managing storage111A-111D) and activate the secondary system for scanning. For example, to activate the secondary system, the at least one processor may transmit a scanner (e.g., in the form of a computer program or executable codes) to the secondary system, and the secondary system may allocate computing resources (e.g., CPU power, memory space, network ports, or any other software or hardware resources) in preparation for running the scanner. The scanner may be programmed to be stored at the location of the block storage volume or to start the scanning at the location of the block storage volume. For example, the block storage volume may be connected to a particular system in cloud infrastructure106inFIG.1, and the scanner may be stored at that system (e.g., on storage connected to the system). When the scanner begins scanning, the scanner may start the scanning at that system before scanning other systems. In some embodiments, to utilize the scanner, the at least one processor may suspend an operation of the scanner after the scan of the block storage volume. Suspending, as used herein, may refer to an operation of pausing, stopping, ceasing, freezing, halting, hanging, terminating, killing, or any manner of temporarily or permanently setting aside an ongoing process or computer program. By way of example, after the scanner completes scanning the block storage volume, the at least one processor may detect the completion by an active manner (e.g., by periodically polling the secondary system for a status of the scanning) or a passive manner (e.g., by receiving a message from the secondary system indicating the completion of the scanning). 
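By way of illustration only, the activation and later suspension of the scanner on the secondary system might proceed as in the sketch below; the secondary object and each of its methods are hypothetical placeholders rather than a real SDK.

```python
# Minimal sketch: transmit a scanner to the secondary system, scan the volume,
# then suspend the scanner and release its resources. All methods are hypothetical.
def run_scanner_on_secondary(secondary, scanner_code, volume_id):
    """Run one scan of the block storage volume on the secondary system."""
    secondary.upload(scanner_code)                # transfer the scanner program
    secondary.allocate_resources()                # CPU, memory, and ports for the scan
    try:
        findings = secondary.run_scan(volume_id)  # scanning starts at the volume's location
    finally:
        secondary.suspend_scanner()               # suspend after the scan and free resources
    return findings
```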
The at least one processor may then send data to the secondary system, the data including a signal or computer-executable codes for suspending the operation of the scanner. After receiving the data, the secondary system may suspend the scanner and release the computing resources previously allocated to the scanner. In some embodiments, to utilize the scanner, the at least one processor may modify a pre-utilized scanner at the location of the block storage volume based on information related to the target account to obtain a modified scanner, and then utilize the modified scanner. A pre-utilized scanner in this disclosure may refer to a scanner that has been previously utilized (e.g., on the same secondary system) and not destroyed. In some embodiments, the information related to the target account may include data or parameters indicating a status or a change of the target account, such as, for example, newly created files, deleted files, changed files, or any data indicating a state of the target account. For example, the pre-utilized scanner at the block storage volume may use a configuration file. The configuration file may include data or parameters indicating a previous status (e.g., list of existing files, public network ports, Internet-accessible data channels, or any operation state parameters) of the target account. The information related to the target account may include data indicating a current status of the target system. The current status of the target system may be different from the previous status of the target system (e.g., including changed files, changed public network ports, changed Internet-accessible data channels, or any changed operation state parameters). The at least one processor may modify the pre-utilized scanner by modifying the configuration file to reflect the current status of the target system. Then, the at least one processor may utilize the modified scanner. Consistent with disclosed embodiments, the at least one processor may be configured to scan the block storage volume for malicious code, using the secondary system. Scanning, as used herein, may refer to an operation of traversing, checking, inspecting, reading, examining, searching, or any manner of looking up. Malicious code, as used herein, may refer to an application, a computer program, a service, a process, a thread, a function, a script, or any type of executable codes that may be executed on a computer without authorization and with a malicious or harmful intent (e.g., manipulating the computer into dangerous behaviors, writing changes or add-ons to existing computer programs or files, stealing computer files, copying sensitive information without authorization, damaging the computer itself, disabling one or more functions of the computer, or attacking another computer using the computer as a disguise). For example, malicious code may include attack scripts, computer viruses, computer worms, Trojan horses, backdoors, or any other malicious executable content. In some embodiments, the malicious code may include a rootkit. A rootkit in this disclosure may refer to any type of computer software designed to enable access to a computer or an area of its software without authorization. 
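By way of illustration only, refreshing a pre-utilized scanner's configuration file so that it reflects the target account's current status might look like the sketch below; the JSON layout and field names are assumptions made for the example.

```python
# Minimal sketch: overwrite stale status fields in a scanner configuration file
# (files, public ports, Internet-accessible channels) with current values.
import json

def refresh_scanner_config(config_path, current_status):
    """Update the scanner's configuration file to reflect the current account state."""
    with open(config_path, "r", encoding="utf-8") as f:
        config = json.load(f)
    config.update({
        "known_files": current_status.get("files", []),
        "public_ports": current_status.get("public_ports", []),
        "internet_channels": current_status.get("internet_channels", []),
    })
    with open(config_path, "w", encoding="utf-8") as f:
        json.dump(config, f, indent=2)
```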
By way of example, to scan the block storage volume (e.g., existing in storage111A-111D inFIG.1) for malicious code, the secondary system (e.g., one or more of virtual machines107A-107D inFIG.1) may run the utilized scanner with allocated computing resources to look up databases (e.g., a blacklist) of known malicious files, look up databases storing patterns of malicious files and identify the patterns, look up databases of cryptographic hashes that identify malicious files, run potential malicious code in a sandbox (e.g., a confined and isolated environment for running programs without causing changes outside the confined and isolated environment), determine whether there are scanned files that are not in a database (e.g., a whitelist) of known good files, detect changes over time in the scanned environment, or any combination thereof. In some embodiments, scanning the block storage volume may include scanning disk-backed memory. A disk-backed memory in this disclosure may refer to a storage space implemented on a disk (e.g., a hard disk) and functioning like a volatile memory from the perspective of a running program. By way of example, scanning the block storage volume may include scanning files stored in the disk-backed memory. In some embodiments, the disk-backed memory may include at least one of a page file or a cache file. A page file, as used herein, may refer to a file containing one or more pages (also referred to as “memory pages” or “virtual pages”), each of the one or more pages being a fixed-length contiguous block of virtual memory and described by a single entry in a page table. A page file may be stored on a disk (e.g., a hard disk) for memory management of a virtual memory used by an operating system. A cache file, as used herein, may refer to a file storing data so that future requests for the stored data can be served faster. The data stored in a cache file may be the result of a computation or a copy of the data stored elsewhere. Consistent with disclosed embodiments, the at least one processor may be configured to identify malicious code based on the scan. By way of example, if the secondary system scans the block storage volume by consulting databases of known malicious files and finds a scanned file matches a record in the databases, the at least one processor may identify the matched scanned file as the malicious code. In another example, if the secondary system scans the block storage volume by consulting databases storing patterns of malicious files and identifies a pattern of a scanned file matches one or more of the stored patterns, the at least one processor may identify the scanned file as the malicious code. In yet another example, if the secondary system scan the block storage volume by consulting databases of cryptographic hashes that identify malicious files and finds that a cryptographic hash of a scanned file matches one or more of the cryptographic hashes that identify malicious files, the at least one processor may identify the scanned file as the malicious code. In yet another example, if the secondary system scans the block storage volume by running potential malicious code in a sandbox and finds that running a scanned file causes malicious or harmful changes or behavior (e.g., an attempt to modify a system setting, an attempt to access files outside of a set of directories, an attempt to perform a network mapping process), the at least one processor may identify the scanned file as the malicious code. 
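By way of illustration only, one of the scanning strategies described above, comparing cryptographic hashes against a blacklist of known malicious files, might be sketched as follows, assuming the block storage volume is already mounted read-only on the secondary system; the hash value shown is a placeholder rather than a real signature.

```python
# Minimal sketch: hash every regular file on a mounted volume and flag matches
# against a blacklist of known-bad digests.
import hashlib
from pathlib import Path

KNOWN_BAD_SHA256 = {"0" * 64}  # placeholder blacklist of malicious file hashes

def scan_mounted_volume(mount_point):
    """Return paths of files whose SHA-256 digest appears in the blacklist."""
    suspicious = []
    for path in Path(mount_point).rglob("*"):
        if not path.is_file():
            continue
        try:
            digest = hashlib.sha256(path.read_bytes()).hexdigest()
        except OSError:
            continue  # unreadable file; skip rather than abort the scan
        if digest in KNOWN_BAD_SHA256:
            suspicious.append(str(path))
    return suspicious
```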
In yet another example, if the secondary system scans the block storage volume to determine whether there are scanned files that are not in a database of known good files and finds that a scanned file is not in the database of known good files, the at least one processor may identify the scanned file as the malicious code. As yet another example, if the secondary system scans the block storage volume to detect changes over time in the scanned environment and finds that a scanned file causes a malicious or harmful change in the scanned environment over time, the at least one processor may identify the scanned file as the malicious code. It should be noted that the identification of the malicious code based on the scan may be implemented in various manners and is not limited to the examples described herein. Consistent with disclosed embodiments, the at least one processor may be configured to output, from the secondary system, a notification of a presence of malicious code in the target system. In some embodiments, the notification may be a pop-up window, a pop-up dialog, a flashing symbol, an email, a text message, a mobile application notice, a phone call, a warning sound, a tactile feedback (e.g., a vibration), or any type of communication. By way of example, the secondary system may trigger a communication channel between the secondary system and the target system and feed one or more parameters to the communication channel, in which the one or more parameters represent the presence of the malicious code. The secondary system may then send data representing the notification via the triggered communication channel including the one or more parameters to the target system. After receiving the data, the target system may parse the data and identify the one or more parameters. The target system may then display the notification (e.g., as a pop-up window on a screen) that presents the presence of the malicious code. Consistent with disclosed embodiments, the at least one processor may be further configured to identify a source of the malicious code. The notification of the presence of malicious code outputted from the secondary system in the target system may include information related to the identified source of the malicious code. In some embodiments, the source of the malicious code may include identity of a developer or distributor of the malicious code, the landing location (e.g., a network port or a vulnerable access point) where the malicious code is imported to the target system, source codes of the malicious code, or any information or data indicating the origination of the malicious code. The information related to the identified source of the malicious code may include a text, a picture, a symbol, a video, a link, a voice, or any type of data for presenting and describing the identified source of the malicious code. By way of example,FIG.7is a block diagram illustrating an exemplary process700of cyber security scanning for a cloud environment, consistent with the disclosed embodiments. While the block diagram may be described below in connection with certain implementation embodiments presented in other figures, those implementations are provided for illustrative purposes only, and are not intended to serve as a limitation on the block diagram. For example, the steps inFIG.7may be executed in any order, steps may be duplicated, or steps may be omitted. 
In some embodiments, the process700may be performed by at least one processor (e.g., a CPU) of a computing device or system (e.g., scanning system101inFIG.1) to perform operations or functions described herein, and may be described hereinafter with reference toFIGS.1-2Dby way of example. In some embodiments, some aspects of the process700may be implemented as software (e.g., program codes or instructions) that are stored in a memory or a non-transitory computer-readable medium. In some embodiments, some aspects of the process700may be implemented as hardware (e.g., a specific-purpose circuit). In some embodiments, the process700may be implemented as a combination of software and hardware. FIG.7includes process blocks702-710. At block702, at least one processor may utilize a cloud provider API to access a block storage volume of a workload maintained on a target account in a target system of a cloud storage environment. At block704, the at least one processor may utilize a scanner at a location of the block storage volume and on a secondary system other than the target system. In some embodiments, the secondary system may include at least one of a virtual machine, a container, or a serverless function. In some embodiments, the secondary system may have an operating system different from an operating system of the target account. In some embodiments, to utilize the scanner, the at least one processor may suspend an operation of the scanner after the scan of the block storage volume. In some embodiments, to utilize the scanner, the at least one processor may modify a pre-utilized scanner at the location of the block storage volume based on information related to the target account to obtain a modified scanner, and then utilize the modified scanner. At block706, the at least one processor may scan the block storage volume for malicious code, using the secondary system. In some embodiments, to scan the block storage volume, the at least one processor may scan disk-backed memory. For example, the disk-backed memory may include at least one of a page file or a cache file. In some embodiments, the malicious code includes a rootkit. At block708, the at least one processor may identify malicious code based on the scan. At block710, the at least one processor may output, from the secondary system, a notification of a presence of malicious code in the target system. Consistent with disclosed embodiments, besides blocks702-710, the at least one processor may further determine the location of the block storage volume based on at least one of the target account, a secondary system account, a cloud provider account, or a third party account. Consistent with disclosed embodiments, besides blocks702-710, the at least one processor may further identify a source of the malicious code. The notification outputted in block710may include information related to the identified source of the malicious code. Forward and Rearward Facing Attack Vector Visualization Techniques FIG.8is a schematic block diagram illustrating an exemplary embodiment of a system for performing visualization of forward and backward facing threats. In one of many embodiments, the system may include a graphical user interface system for providing forward and backward facing attack vector visualizations. A graphical user interface system may include systems similar in appearance and functionality to Microsoft Windows, Mac OS, Ubuntu Unity, Gnome Shell, Android, Apple iOS, Blackberry OS, Windows 10 Mobile, PalmOS-Web OS, or Firefox OS. 
Forward facing attack vectors may be those external entities that may access an analyzed asset. Backward facing attack vectors may include the possible impacts to an analyzed asset and those impacts' possible effect. Visualizations provided by the above system may include graphical, sequential, or multi-dimensional displays of attack vectors. In an embodiment, the system may comprise at least one processor configured to identify assets801in a cloud environment. The at least one processor may include a processor such as user device102ofFIG.1. The at least one processor may include a personal computer, tablet, smartphone, or virtual machine for processing threats and vulnerabilities to a cloud-based storage volume. For example, cloud infrastructure106may be the cloud environment consisting of virtual machines107A-107D, databases109A-109D, storage111A-111D, keystores113A-113D, and load balancer115, and the cloud-based storage volume may be contained in storage111A-111D. The cloud environment may include service provided by a third party such as Amazon Web Services, Microsoft Azure, Google Cloud, Alibaba Cloud, IBM Cloud, Oracle, Salesforce, SAP, or similar service. Another embodiment contemplated may consist of at least one processor configured to identify risks803associated with each of the identified assets. The identified risks803associated with each of the identified assets may be listed sequentially based on probability of risk, severity of risk, or similar method of ordering the risks. The identified risks803associated with an identified asset may be scored based on the level of risk to the asset and may be categorized as a high risk, medium risk, or low risk. Once the level of risk can be determined by scanning system101, the level of risk can be conveyed and displayed to a user device102with a representative graphical display. An exemplar embodiment may additionally comprise at least one processor configured to identify relationships805between at least some of the identified assets, the relationships including at least one of a trust807A, a network connectivity807B, or a mechanism of network proxying807C. A trust807A may establish a relationship and may involve cryptography, digital signatures, electronic certificates, or similar method of establishing a trusting relationship. A network connectivity807B may include wireless or wired connections and may be established once a relationship of trust807A is established. A mechanism of network proxying807C can be accommodated once trust807A and network connectivity807B are established through any number of connected servers to accomplish proxying. A further embodiment may also comprise a processor configured to receive an identification809of a specific asset under investigation. The specific asset under investigation may be a physical or virtual machine that has software operating a specific version that is vulnerable to any number of associated risks. The identification809of a specific asset under investigation may involve an IP address, universally unique identifier, or similar system to uniquely or semi-uniquely identify the specific asset under investigation. A system among several embodiments may include a processor configured to perform a forward analysis811A of the specific asset under investigation to identify at least one possible attack vector reaching the specific asset via a network outside the cloud environment. A forward analysis811A of the specific asset may involve those external entities that may access an analyzed asset. 
The analysis performed by scanning system101may determine the origin of the forward threat, the nature of the threat to the analyzed system, and the seriousness of the threat originating from the Internet. For example, scanning system101may identify at least one possible attack vector to the workload currently being analyzed. The network outside of the cloud environment may identify new or existing threats and attack vectors currently threatening the associated system and workload. An exemplary embodiment may further comprise a processor configured to perform a backward analysis811B of the specific asset to identify at least one exposure risk to one or more assets that is in a downstream of the specific asset, wherein the at least one exposure risk includes an identification of an exposed asset, an entry point to the exposed asset, and a lateral movement risk associated with the exposed asset. A backward analysis811B of the associated machine may look downstream to related workloads and systems connected to the analyzed systems. A scan downstream may involve, among several embodiments, the downstream exposure risk including an identification of an exposed asset, an entry point to the exposed asset, and lateral movement risks associated with the exposed asset. The at least one exposure risk may include one or more entry points to access the exposed asset and any related machines potentially vulnerable to attack from the analyzed and exposed asset. Another embodiment may comprise a processor configured to output a signal813to cause a display to present a presentation of forward and backward paths associated with the specific asset, thereby enabling visualization of a plurality of entry points and lateral movement risks associated with the plurality of entry points. The display of a forward or backward path to a specific asset may be one of several visual displays such as linear, graphical, or through computer-generated images. The visualization of a plurality of entry points and lateral movement risks associated with the plurality of entry points may include a visualization of the plurality of paths, and may sequence the paths based on likelihood of access via that path or the severity of potential threat in a path of access to an asset. An example embodiment may comprise a system wherein the network outside the cloud environment includes the Internet. The network outside the cloud environment may include access to the Internet in order for the system to access a plurality of analyzed and potentially vulnerable assets. Another embodiment may comprise a system wherein the assets in the cloud environment include at least one of a virtual machine, a network appliance, a storage appliance, a compute instance, or an engine instance. A virtual machine may include an emulated version of a computer—including an operating system, memory, storage, graphics processing—such that it can be indistinguishable from a standard (non-virtual) machine to a running program. A network appliance may include a plurality of servers connected through the Internet in order to establish a system and network of stored assets and workloads. A storage appliance may comprise a server designed to store workloads and assets to be analyzed. A compute instance may involve input and processing of data entries that cause the analyzed asset to be accessed and run. 
An engine instance may include a search algorithm run to analyze and search the analyzed workloads or virtual machines to determine the software and data stored on the said virtual machine or stored instance. One among many embodiments may comprise a system wherein identifying the assets in a cloud environment includes identifying the assets based on at least one of an identity and access management policy, an organization policy, or an access policy. An identity and access management policy may comprise a software tool or similar method of establishing identity of users attempting to access a system to verify identities of users and control access to an associated workload or system functionality. An organization policy may include a policy applied to all users by the controlling organization of a workload. An access policy may determine the authorized users of an associated system. One possible embodiment may comprise a system wherein the presentation of the forward and backward paths indicates alternative paths connecting between the specific asset and an upstream asset or a downstream asset. The indication will be via a visualization showing specific alternative paths connecting specific assets or it may visualize a plurality of alternative paths connecting specific assets. Alternative paths connecting between the specific asset and an upstream asset or a downstream asset may involve a direct connection, such as via the Internet, or an indirect connection via one or more series of servers acting as proxy servers between the analyzed asset and the upstream or downstream asset. An exemplary embodiment may further comprise a system wherein the visualization includes a presentation of the alternative paths. The alternative paths may be presented visually through a graphical representation, sequential list of alternative paths, or a computer-generated image that represents the plurality of alternative paths. Another embodiment may comprise a system wherein the presentation of the forward and backward paths indicates port numbers for each pathway. The port numbers may be indicated by a series of alphanumeric characters (e.g., “HTTP” or “80”) or other identifier. One among several exemplary embodiments may further comprise a system wherein the visualization of the entry points indicates at least one entry point of risk. The visualization may indicate one or more points of entry to a given analyzed system. An entry point of risk may involve a port of access to the workload, and may involve a one or more entry points visualized for use by user device102. Another exemplary embodiment may comprise a system wherein the at least one processor is further configured to monitor network activities of the assets in a cloud environment. The processor may monitor continuously, intermittently, or as directed by an authorized user to direct monitoring of the analyzed network activities of the assets. The cloud environment may be, among several embodiments, the cloud infrastructure106. An embodiment contemplated may comprise a system further configured to detect a potential risk associated with the specific asset based on a network activity of the specific asset. The potential risk may be associated to a specific asset based on historical threats to the specific asset or a list of possible threats to the specific asset based on its current network activities. 
Another embodiment may comprise a system further configured to detect a potential risk associated with the specific asset based on a network activity of an upstream asset of the specific asset. The potential risk associated with the specific asset may be identified by scanning system101and determine whether a potential risk can be classified as high risk, medium risk, or low risk. The potential risk may be identified based on the specific asset, the network activity performed by the specific asset, or the specific asset type or related software of the upstream asset. One of many embodiments may comprise a system further configured to detect a potential risk associated with the specific asset based on a network activity of a downstream asset of the specific asset. The potential risk may be identified based on the specific asset, the network activity performed by the specific asset, or the specific asset type or related software of the upstream asset. For example, scanning system101may perform a scan on an asset being analyzed, and based on the one or more entry points of risk, may determine if the downstream asset of the specific asset poses a potential risk to the analyzed system. Another exemplary embodiment may comprise a method for generating a graphical user interface for providing forward and backward facing attack vector visualizations. The method may include a method of generating a graphical user interface similar in appearance and functionality to Microsoft Windows, Mac OS, Ubuntu Unity, Gnome Shell, Android, Apple iOS, Blackberry OS, Windows 10 Mobile, PalmOS-Web OS, or Firefox OS. A forward analysis811A of the specific asset may involve those external entities that may access an analyzed asset. Backward analysis811B may include the possible impacts to an analyzed asset and those impacts' possible effect. Visualizations provided by the above system may include graphical, sequential, or multi-dimensional displays of attack vectors. One exemplar embodiment may further comprise a method of identifying assets801in a cloud environment. A method may include at least one processor that may include a personal computer, tablet, smartphone, or virtual machine for processing threats and vulnerabilities to a cloud-based storage volume. For example, cloud infrastructure106may be the cloud environment consisting of virtual machines107A-107D, databases109A-109D, storage111A-111D, keystores113A-113D, and load balancer115, and the cloud-based storage volume may be contained in storage111A-111D. The cloud environment may include service provided by a third party such as Amazon Web Services, Microsoft Azure, Google Cloud, Alibaba Cloud, IBM Cloud, Oracle, Salesforce, SAP, or similar service. One of several embodiments may comprise a method of identifying risks803associated with each of the identified assets801. The method of identifying risks803associated with each of the identified assets801may be listed sequentially based on probability of risk, severity of risk, or similar method of ordering the risks. The identified risks803associated with an identified asset may be scored based on the level of risk to the asset and may be categorized as a high risk, medium risk, or low risk. Once the level of risk can be determined by scanning system101, the level of risk can be conveyed and displayed to a user device102with a representative graphical display. 
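By way of illustration only, mapping a numeric risk score to the high, medium, and low categories mentioned above might be as simple as the sketch below; the thresholds are arbitrary values chosen for the example.

```python
# Minimal sketch: converting a 0-10 risk score into a display category.
def categorize_risk(score):
    if score >= 7.0:
        return "high risk"
    if score >= 4.0:
        return "medium risk"
    return "low risk"

print(categorize_risk(8.2))  # high risk
```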
Another embodiment contemplated may comprise a method for identifying relationships805between at least some of the identified assets801, the relationships including at least one of a trust807A, a network connectivity807B, or a mechanism of network proxying807C. A trust807A may establish a relationship and may involve cryptography, digital signatures, electronic certificates, or similar method of establishing a trusting relationship. A network connectivity807B may include wireless or wired connections and may be established once a relationship of trust807A is established. A mechanism of network proxying807C can be accommodated once trust807A and network connectivity807B are established through any number of connected servers to accomplish proxying. An embodiment may further comprise a method of receiving an identification of a specific asset809under investigation. The specific asset under investigation may be a physical or virtual machine that has software operating a specific version that is vulnerable to any number of associated risks803. The identification809of a specific asset under investigation may involve an IP address, universally unique identifier, or similar system to uniquely or semi-uniquely identify the specific asset under investigation. An embodiment may also comprise a method of performing a forward analysis811A of the specific asset under investigation to identify at least one possible attack vector reaching the specific asset via a network outside the cloud environment. A forward analysis811A of the specific asset may involve those external entities that may access an analyzed asset. A forward analysis may involve analysis of the specific asset under investigation to identify at least one possible Internet-originating attack vector to the asset. The analysis performed by scanning system101may determine the origin of the forward threat, the nature of the threat to the analyzed system, and the seriousness of the threat originating from the Internet. For example, scanning system101may identify at least one possible attack vector to the workload currently being analyzed. The network outside of the cloud environment may identify new or existing threats and attack vectors currently threatening the associated system and workload. One embodiment may comprise a method of performing a backward analysis811B of the specific asset to identify at least one exposure risk to one or more assets that is in a downstream of the specific asset, wherein the at least one exposure risk includes an identification of an exposed asset, an entry point to the exposed asset, and a lateral movement risk associated with the exposed asset. Backward analysis811B may include the possible impacts to an analyzed asset and those impacts' possible effect. A scan downstream may involve, among several embodiments, the downstream exposure risk including an identification of an exposed asset, an entry point to the exposed asset, and lateral movement risks associated with the exposed asset. The at least one exposure risk may include one or more entry points to access the exposed asset and any related machines potentially vulnerable to attack from the analyzed and exposed asset. An embodiment may comprise a method of outputting a signal813to cause a display to present a presentation of forward and backward paths associated with the specific asset, thereby enabling visualization of a plurality of entry points and lateral movement risks associated with the plurality of entry points. 
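By way of illustration only, the forward analysis811A and backward analysis811B described above can be pictured as two traversals of a directed "can reach" graph: walking the reversed edges finds the entry points that can reach the asset under investigation, while walking the forward edges finds the downstream assets exposed to lateral movement. The asset names and edges below are invented for the example.

```python
# Minimal sketch: forward and backward analysis as two reachability traversals
# over an asset relationship graph. Edges mean "source can reach destination".
def reachable(graph, start):
    """Return every node reachable from start by following directed edges."""
    seen, stack = set(), [start]
    while stack:
        node = stack.pop()
        for nxt in graph.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                stack.append(nxt)
    return seen

assets = {                       # invented example topology
    "internet": ["web-vm"],
    "web-vm": ["app-vm"],
    "app-vm": ["db-vm"],
}
reversed_assets = {}
for src, dsts in assets.items():
    for dst in dsts:
        reversed_assets.setdefault(dst, []).append(src)

asset_under_investigation = "app-vm"
forward_exposure = reachable(reversed_assets, asset_under_investigation)   # who can reach it
backward_exposure = reachable(assets, asset_under_investigation)           # what it can reach
print("entry points:", forward_exposure)         # web-vm and internet
print("lateral movement:", backward_exposure)    # db-vm
```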
The display of a forward or backward path to a specific asset may be one of several visual displays such as linear, graphical, or through computer-generated images. The visualization of a plurality of entry points and lateral movement risks with the plurality of entry points may include visualization of the plurality of paths may sequence the paths based on likelihood of access via that path or the severity of potential threat in a path of access to an asset. Another embodiment may comprise a method wherein the network outside the cloud environment includes the Internet. The network outside the cloud environment may include access to the Internet in order for the system to access a plurality of analyzed and potentially vulnerable assets. One of several embodiments may comprise a method wherein the assets in the cloud environment include at least one of a virtual machine, a network appliance, a storage appliance, a compute instance, or an engine instance. A virtual machine may include an emulated version of a computer—including an operating system, memory, storage, graphics processing—such that it can be indistinguishable from a standard (non-virtual) machine to a running program. A network appliance may include a plurality of servers connected through the Internet in order to establish a system and network of stored assets and workloads. A storage appliance may be comprised by a server designed to store workloads and assets to be analyzed. A compute instance may involve input and processing of data entries that cause the analyzed asset to be accessed and run. An engine instance may include a search algorithm run to analyze and search the analyzed workloads or virtual machines to determine the software and data stored on the said virtual machine or stored instance. An embodiment may further comprise a method wherein identifying assets in a cloud environment includes identifying the assets based on at least one of an identity and access management policy, an organization policy, or an access policy. An identity and access management policy may comprise a software tool to verify identities of users and control access to an associated workload or system functionality. An organization policy may include a policy applied to all users by the controlling organization of a workload. An access policy may determine the authorized users of an associated system. An embodiment may also comprise a method wherein the presentation of the forward and backward paths indicates alternative paths connecting between the specific asset and an upstream or a downstream asset. The indication may be via a visualization showing specific alternative paths connecting specific assets or it may visualize a plurality of alternative paths connecting specific assets. Another exemplar embodiment may comprise a method wherein the visualization includes a presentation of the alternative paths. The alternative paths may be presented visually through a graphical representation, sequential list of alternative paths, or a computer-generated image that represents the plurality of alternative paths. Another embodiment may comprise a non-transitory computer-readable medium storing instructions that, when executed by at least one processor, are configured to cause the at least one processor to perform operations. One embodiment may comprise a non-transitory computer-readable medium storing instructions that, when executed by at least one processor800, are configured to perform operations identifying assets801in a cloud environment. 
The instructions stored on a non-transitory computer-readable medium may include at least one processor800that may include a personal computer, tablet, smartphone, or virtual machine for processing threats and vulnerabilities to a cloud-based storage volume. The cloud environment may include service provided by a third party such as Amazon Web Services, Microsoft Azure, Google Cloud, Alibaba Cloud, IBM Cloud, Oracle, Salesforce, SAP, or similar service. An embodiment may further comprise a non-transitory computer-readable medium storing instructions that, when executed by at least one processor, are configured to cause the at least one processor800to perform operations comprising identifying risks803associated with each of the identified assets. The identified risks803associated with each of the identified assets801may be listed sequentially based on probability of risk, severity of risk, or similar method of ordering the risks. Another embodiment may comprise a non-transitory computer-readable medium storing instructions that, when executed by at least one processor, are configured to cause the at least one processor to perform operations comprising identifying relationships805between at least some of the identified assets801, the relationships including at least one of a trust807A, network connectivity807B, or a mechanism of network proxying807C. An exemplar embodiment may comprise a non-transitory computer-readable medium storing instructions that, when executed by at least one processor, are configured to cause the at least one processor to perform operations comprising receiving an identification of a specific asset809under investigation. The instructions may include receiving information identifying the specific asset809. The specific asset809under investigation may be a physical or virtual machine that has software operating a specific version that is vulnerable to any number of associated risks803. One among many embodiments may comprise a non-transitory computer-readable medium storing instructions that, when executed by at least one processor, are configured to cause the at least one processor to perform operations comprising performing a forward analysis811A of the specific asset under investigation to identify at least one possible attack vector reaching the specific asset via a network outside the cloud environment. A forward analysis811A of the specific asset may involve those external entities that may access an analyzed asset. The scanning system101may identify at least one possible attack vector to the workload currently being analyzed. The network outside of the cloud environment may identify new or existing threats and attack vectors currently threatening the associated system and workload. An embodiment may further comprise a non-transitory computer-readable medium storing instructions that, when executed by at least one processor, are configured to cause the at least one processor to perform operations comprising performing a backward analysis811B of the specific asset to identify at least one exposure risk to one or more assets that is in a downstream of the specific asset, wherein the at least one exposure risk includes an identification of an exposed asset, an entry point to the exposed asset, and a lateral movement risk associated with the exposed asset. Backward analysis811B may include the possible impacts to an analyzed asset and those impacts' possible effect. 
The at least one exposure risk may include one or more entry points to access the exposed asset and any related machines potentially vulnerable to attack from the analyzed and exposed asset. An embodiment may comprise a non-transitory computer-readable medium storing instructions that, when executed by at least one processor, are configured to cause the at least one processor to perform operations comprising outputting a signal813to cause a display to present forward and backward paths associated with the specific asset, thereby enabling visualization of a plurality of entry points and lateral movement risks associated with the plurality of entry points. The display of a forward or backward path to a specific asset may be one of several visual displays such as linear displays, graphical displays, or computer-generated images. The visualization of a plurality of entry points and lateral movement risks associated with the plurality of entry points may include a visualization of the plurality of paths that sequences the paths based on the likelihood of access via each path or the severity of the potential threat in a path of access to an asset. Passive Key Identification Techniques Aspects of this disclosure may provide a technical solution to the challenging technical problem of identifying access keys (e.g., passwords, Secure Shell (SSH) keys, or cloud keys) to compute resources (e.g., machines, containers, storages, or any hardware or software component) in a cloud environment without using the access keys to access the compute resources. In existing technologies, scanning systems may identify access keys to compute resources and may need to verify the identified keys by actually accessing the compute resources using the access keys. In such cases, the actual accesses may generate logs or records (e.g., for successful or failed accesses) that can constitute activity patterns similar to those of an attacker on the system. Such activity patterns may lead a system administrator (e.g., an individual or a computer program) to misidentify the scanning system as an attacker. Further, actual accesses may create loads on both the scanning system and the compute resources and may reduce the computing resources and power available for other tasks. To resolve such a technical problem, a cryptographic analysis may be performed to identify a first set of fingerprints of the identified keys, and trust configurations of the compute resources may be analyzed to identify a second set of fingerprints of the compute resources. The first and second set of fingerprints may be compared to match keys with the compute resources without using the keys to access the compute resources. By doing so, the scanning system may perform its function without raising false alarms to the system administrator, while reducing computing costs to the cloud environment. A cyber security system, as used herein, may refer to a system including any combination of software and hardware for enhancing security of a device, a platform, or another system in a network environment. By way of example, the cyber security system may be implemented as a system including scanning system101inFIG.1. As another example, scanning system101may include the disclosed cyber security system as a subset. A compute resource, as used herein, may refer to a database, a virtual machine, a storage, a keystore, a scanning system, a load balancer, a server, a computer, a container, or any physical or virtual component integrated in or communicatively connected to a cloud infrastructure.
By way of example, the compute resource may be scanning system101, any of databases103A-103D, any of virtual machines107A-107D, any of databases109A-109D, any of storage111A-111D, any of keystores113A-113D, or load balancer115in cloud infrastructure106inFIG.1. A key to access a compute resource, as used herein, may refer to computerized data that includes a credential for granting permission to an accessor to access the compute resource, or a service or function provided by the compute resource. By way of example, the key may include at least one of a password, a remote access key, a script containing a password, a cloud key (e.g., a credential file for accessing a cloud service, such as an AWS credential file), a Secure Shell (SSH) key, an AWS key, a private component of a private-public key pair, or any type of credential that provides checked or unchecked access to the compute resource. Matching a key to a compute resource, as used herein, may refer to confirming, verifying, validating, or any operation or process of determining that the key is valid for an accessor (e.g., another compute resource) to access the compute resource. For example, to match a key to a compute resource, an accessor may test by using the key to access the compute resource and determine that the key is matched to the compute resource if such access is successful. As another example, to match a key to a compute resource, an accessor may perform methods or algorithms to verify that the key is matched to the compute resource without using the key to access the compute resource. Consistent with disclosed embodiments, at least one processor may be configured to analyze a cloud environment to identify a plurality of keys to the compute resources in the cloud environment. In some embodiments, the at least one processor may analyze the cloud environment to identify the plurality of keys to the compute resources in the cloud environment in any of steps231,237, or239inFIG.2D. In some embodiments, the at least one processor may analyze the cloud environment to identify the plurality of keys to the compute resources in the cloud environment in any step described in association withFIGS.2A-2D. In some embodiments, at least one of the plurality of keys may include at least one of a password (e.g., a combination of alphanumeric characters and symbols), a script (e.g., a PYTHON® script or a shell script) containing a password, a cloud key, or a Secure Shell (SSH) key. For example, a cloud key may be a key for accessing a cloud resource. A cloud environment, as used herein, may refer to a computing environment running on a cloud. By way of example, the cloud environment may include or be part of cloud infrastructure106inFIG.1. Analyzing a cloud environment, as used herein, may refer to an operation or a process of separating or dividing the cloud environment into parts or components (e.g., physical components or logic components), then determining nature and relationship of the parts or components based on data associated with the parts or components.
By way of example, if the cloud environment is cloud infrastructure106inFIG.1, to analyze cloud infrastructure106, the at least one processor may access cloud infrastructure106(e.g., via network105or via internal connections in cloud infrastructure106) and obtain a list of compute resources in cloud infrastructure106(e.g., scanning system101, any of databases103A-103D, any of virtual machines107A-107D, any of databases109A-109D, any of storage111A-111D, any of keystores113A-113D, or load balancer115). For example, the at least one processor may obtain the list of compute resources by reading a configuration file from a database (e.g., one of databases109A-109D), or by visiting available compute resources one after another through communicative connections between the compute resources to generate the list of compute resources. After obtaining the list of compute resources in cloud infrastructure106, the at least one processor may determine the nature of the compute resources, such as their types, numbers, functions, or any other characteristics or features. The at least one processor may further determine the relationship between the compute resources, such as communicative connections, access privileges, data input/output directions, or controls between the compute resources. Identifying a key to a compute resource, as used herein, may refer to an operation or a process of locating, recognizing, or any operation or process of analyzing computerized data or information to determine that the computerized data or information is a key to the compute resources. By way of example, the at least one processor may analyze cloud infrastructure106inFIG.1to recognize a list of compute resources in cloud infrastructure106(e.g., scanning system101, any of databases103A-103D, any of virtual machines107A-107D, any of databases109A-109D, any of storage111A-111D, any of keystores113A-113D, or load balancer115). Then, the at least one processor may read computerized data stored in a compute resource and identify that some of the computerized data are one or more keys to other compute resources in cloud infrastructure106. For example, to recognize the keys, the at least one processor may read and compare the computerized data with records stored in keystores113A-113D, and determine that the computerized data is a key if it matches a record in any of keystores113A-113D. As another example, the at least one processor may read the computerized data and check its syntaxes, text string patterns, file formats, file properties, encryption schemes, library versions, software versions, or any other characteristics or features of the computerized data, and determine that the computerized data is a key if its checked characteristics or features fit a predetermined pattern of a key or fit an entry in a dictionary of keys. For example, the dictionary of keys may be stored in a keystore (e.g., any of keystores113A-113D). In some embodiments, the plurality of keys may be stored in at least one workload. A workload, for example, may refer to a specific application, service, capability, or a specific amount of work that can be run on a cloud resource, system, or infrastructure. By way of example, a workload may be a virtual machine, a database server, a container, a Hadoop node, an application, a storage server, a load balancer, or an IAM (Identity and Access Management) configuration.
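One way to recognize candidate keys from syntax and text-string patterns, as described above, is to match scanned data against a small dictionary of expressions. The sketch below is a simplified assumption of how such pattern matching might look; the pattern names, regular expressions, and sample text are hypothetical and far narrower than a production dictionary of keys.

```python
import re

# Hypothetical patterns for recognizing candidate keys in scanned data.
KEY_PATTERNS = {
    "ssh_private_key": re.compile(r"-----BEGIN (?:RSA |OPENSSH )?PRIVATE KEY-----"),
    "aws_access_key_id": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "password_assignment": re.compile(r"(?i)\bpassword\s*[=:]\s*\S+"),
}

def identify_keys(blob: str):
    """Return the names of the patterns that match the scanned text."""
    return [name for name, pattern in KEY_PATTERNS.items() if pattern.search(blob)]

# Illustrative scanned content; both values are made up for the example.
sample = "export AWS_ACCESS_KEY_ID=AKIAABCDEFGHIJKLMNOP\npassword = hunter2\n"
print(identify_keys(sample))  # ['aws_access_key_id', 'password_assignment']
```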
By way of example, with reference toFIG.1, cloud infrastructure106may include workloads such as scanning system101, databases103A-103D, virtual machines107A-107D, databases109A-109D, storage111A-111D, keystores113A-113D, and load balancer115. Consistent with disclosed embodiments, at least one processor may be configured to perform a cryptographic analysis on the plurality of keys to identify a first set of fingerprints that uniquely identify each of the plurality of keys. The first set of fingerprints may be non-functional. In some embodiments, the at least one processor may perform the cryptographic analysis in step237ofFIG.2D. In some embodiments, the at least one processor may perform the cryptographic analysis in any step described in association withFIGS.2A-2D. A cryptographic analysis on data, as used herein, may refer to an operation or process of identifying ciphertext, ciphers and cryptosystems contained in the data to discover hidden aspects of the data that may improve or weaken security of the data or compute resources associated with the data. In some embodiments, to perform the cryptographic analysis on the plurality of keys, the at least one processor may perform an algorithm on the plurality of keys to convert the plurality of keys into another form of data. For example, the at least one processor may perform a hashing process on the plurality of keys (e.g., by inputting values of the keys to a hash function) to obtain corresponding hash values of respective keys. A fingerprint of a key, as used herein, may refer to non-functional data (e.g., a numeric value, an alphanumeric string, or any combination of letters, numbers, or symbols) generated based on the key and may uniquely identify the key. By way of example, a fingerprint may be generated by performing a hashing process on a key or a part of the key. The hashing process may include, for example, an MD5 algorithm, an SHA-1 algorithm, an SHA-2 algorithm, an SHA-3 algorithm, a RIPEMD-160 algorithm, a BLAKE2 algorithm, a BLAKE3 algorithm, or any type of cryptographic hash algorithms. In such cases, the fingerprint may be a hash value of the key, a part (e.g., a truncated part) of the hash value of the key, a hash value of a part of the key, or a part (e.g., a truncated part) of the hash value of the part of the key. The hash value may be unique (e.g., non-overlap with any hash value of any other key) and non-functional (e.g., unable to be used as a key to access a compute resource). To identify the first set of fingerprints, the at least one processor may perform the hashing process on the plurality of keys to obtain a plurality of hash values, and search the cloud environment to check whether there exists any compute resource that stores the plurality of hash values. In some embodiments, at least one of the first set of fingerprints may be non-identical to any key of the plurality of keys. For example, if the first set of fingerprints are hash values, they may be non-identical to the plurality of keys. Consistent with disclosed embodiments, at least one processor may be configured to analyze trust configurations of the compute resources to identify a second set of fingerprints for each of the compute resources. In some embodiments, the at least one processor may analyze the trust configurations in step233ofFIG.2D. In some embodiments, the at least one processor may analyze the trust configurations in any step described in association withFIGS.2A-2D. 
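A minimal sketch of the hashing-based fingerprint generation and comparison described in this section appears below. It assumes a simplified scheme in which the fingerprint is the SHA-256 digest of the raw key bytes; real key formats (for example, SSH keys) define their own fingerprint encodings, and the resource names and key material here are hypothetical.

```python
import hashlib

def fingerprint(key_material: bytes) -> str:
    """A simplified fingerprint: the hex SHA-256 digest of the key bytes.
    The digest is non-functional (it cannot itself be used to access any
    compute resource) yet, in practice, uniquely identifies the key."""
    return hashlib.sha256(key_material).hexdigest()

# First set: fingerprints of keys discovered while analyzing the cloud
# environment, keyed by the compute resource on which each key was found.
discovered = {
    "web_vm": fingerprint(b"hypothetical key material found on web_vm"),
}

# Second set: fingerprints recorded in the trust configurations of other
# compute resources (for example, entries in an authorized-keys style file).
trusted = {
    "database": {fingerprint(b"hypothetical key material found on web_vm")},
}

# Comparing the two sets matches keys with compute resources without ever
# using a key to log in: a match implies the holder can access the target.
for holder, fp in discovered.items():
    for target, fps in trusted.items():
        if fp in fps:
            print(holder, "holds a key matched to", target)
```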
A trust configuration, as used herein, may refer to data (e.g., a stored file, a database entry, a value stored in a data structure, or any computerized information) that stores a pre-established trust relationship between at least two compute resources. For example, a trust configuration may be a file storing a trust policy (e.g., a public key infrastructure (PKI), a digital signature system, or an AWS trust policy). Analyzing a trust configuration, as used herein, may refer to an operation or a process of separating or dividing compute resources associated with the trust configuration into units or groups, then determining nature and relationship of the units or groups based on relationship data associated with the units or groups. For example, the relationship data may include flags, indicators, or any type of value that indicates a first compute resource has an access privilege to a second compute resource. A fingerprint for a compute resource, as used herein, may refer to non-functional data (e.g., a numeric value, an alphanumeric string, or any combination of letters, numbers, or symbols) generated based on a key for accessing the compute resource and may uniquely identify the key. By way of example, a fingerprint for a compute resource may be generated by applying a cryptographic hash function (e.g., an SHA-1 or SHA-2 function) to a key for accessing the compute resource to obtain a sequence of bytes (e.g., a hash value or a modified hash value), and the sequence of bytes may be the fingerprint for the compute resource. The key may be uniquely associated with the compute resource. For example, to identify the second set of fingerprints, the at least one processor may analyze a trust configuration file (e.g., a file storing fingerprints and their associated data) by reading the byte stream of the trust configuration file to determine whether any sequence of bytes fits a predetermined pattern (e.g., an alphanumeric string pattern). The sequences of bytes that fit the predetermined pattern may be determined as the second set of fingerprints. Consistent with disclosed embodiments, at least one processor may be configured to compare the first set of fingerprints with the second set of fingerprints to match keys with the compute resources without using the keys to access the compute resources. In some embodiments, the at least one processor may compare the first set of fingerprints with the second set of fingerprints in any step described in association withFIGS.2A-2D. Comparing a first fingerprint and a second fingerprint, as used herein, may refer to performing a textual, numerical, or semantic comparison of the first fingerprint and the second fingerprint, or performing an algorithmic conversion of at least one of the first fingerprint or the second fingerprint to further perform a textual, numerical, or semantic comparison of the conversion results. By way of example, a first compute resource may store a key (e.g., a public key) for accessing a second compute resource. The at least one processor may analyze the cloud environment to identify the key from the first compute resource, then may perform the cryptographic analysis described herein to identify a first fingerprint of the key. A third compute resource (which may be the same as or different from the second compute resource) may store in its trust configurations a second fingerprint of the key. The second fingerprint may be generated by a different processor at a different time.
The at least one processor may then analyze the trust configurations of the third compute resource to identify the second fingerprint. The at least one processor may then compare the first fingerprint and the second fingerprint to determine whether they match each other. If the first fingerprint matches with the second fingerprint, the at least one processor may determine that the key identified from the first compute resource is a key for accessing the second compute resource. As can be seen, after comparison, if the first set of fingerprints are matched with the second set of fingerprints, keys may be matched with the compute resources, and the at least one processor need not use the keys to access the compute resources to confirm that the keys are matched to the compute resources. Consistent with disclosed embodiments, the at least one processor may further analyze a multi-machine interaction in the cloud environment using the first set of fingerprints. A multi-machine interaction, as used herein, may refer to data input/output, operation control, status polling, scanning, or any type of interaction between a plurality of machines in a cloud environment. Analyzing a multi-machine interaction, as used herein, may refer to an operation or a process of separating or dividing machines in the cloud environment into units or groups, then determining nature and relationship of the units or groups based on relationship data associated with the units or groups. For example, the relationship data may be the first set of fingerprints. By way of example, to analyze the multi-machine interaction, the at least one processor may compare the first set of fingerprints with the second set of fingerprints. If a first fingerprint of the first set of fingerprints matches with a second fingerprint of the second set of fingerprints, the at least one processor may determine that a first compute resource associated with the first fingerprint may have an inter-machine interaction (e.g., having an ability to access the other) with a second compute resource associated with the second fingerprint. By way of example, a first compute resource may store a key (e.g., a private key) for accessing a second compute resource. The at least one processor may analyze the cloud environment to identify the key from the first compute resource, then may perform the cryptographic analysis described herein to identify a first fingerprint of the key. A third compute resource (which may be the same as or different from the second compute resource) may store in its trust configurations a second fingerprint of the key. The at least one processor may then analyze the trust configurations of the third compute resource to identify the second fingerprint. The at least one processor may then compare the first fingerprint and the second fingerprint to determine whether they match each other (e.g., being the same). If the first fingerprint matches with the second fingerprint, the at least one processor may determine that the key identified from the first compute resource is a key for accessing the second compute resource, and may further determine an inter-machine interaction in which the first compute resource has the ability to access the second compute resource. Consistent with disclosed embodiments, the at least one processor may further analyze a multi-machine interaction in the cloud environment using the plurality of keys.
For example, the at least one processor may use the plurality of keys as the relationship data for determining nature and relationship of compute resources in the cloud environment. By way of example, to analyze the multi-machine interaction, the at least one processor may compare a first key for accessing a first one of the compute resources and a second key for accessing a second one of the compute resources. If the first key matches with (e.g., is identical to) the second key, the at least one processor may determine that the first one of the compute resources may have an interaction (e.g., having an ability to access the other) with the second one of the compute resources. By way of example,FIG.9is a block diagram illustrating an exemplary process900of matching keys with compute resources in a cloud environment, consistent with the disclosed embodiments. While the block diagram may be described below in connection with certain implementation embodiments presented in other figures, those implementations are provided for illustrative purposes only, and are not intended to serve as a limitation on the block diagram. For example, the steps inFIG.9may be executed in any order, steps may be duplicated, or steps may be omitted. In some embodiments, the process900may be performed by at least one processor (e.g., a CPU) of a computing device or system (e.g., scanning system101inFIG.1) to perform operations or functions described herein, and may be described hereinafter with reference toFIGS.1-2Dby way of example. In some embodiments, some aspects of the process900may be implemented as software (e.g., program codes or instructions) that are stored in a memory or a non-transitory computer-readable medium. In some embodiments, some aspects of the process900may be implemented as hardware (e.g., a specific-purpose circuit). In some embodiments, the process900may be implemented as a combination of software and hardware. FIG.9includes process blocks902-908. At block902, at least one processor may analyze a cloud environment to identify a plurality of keys to the compute resources in the cloud environment. In some embodiments, the plurality of keys may be stored in at least one workload. In some embodiments, at least one of the plurality of keys may include at least one of a password (e.g., a combination of alphanumeric characters and symbols), a script (e.g., a PYTHON® script or a shell script) containing a password, a cloud key, or a Secure Shell (SSH) key. At block904, the at least one processor may perform a cryptographic analysis on the plurality of keys to identify a first set of fingerprints that uniquely identify each of the plurality of keys. The first set of fingerprints may, in some embodiments, be non-functional. In some embodiments, at least one of the first set of fingerprints may be non-identical to the corresponding key. At block906, the at least one processor may analyze trust configurations of the compute resources to identify a second set of fingerprints for each of the compute resources. As discussed above, to identify the second set of fingerprints, in some embodiments, the at least one processor may analyze a trust configuration file (e.g., a file storing fingerprints and their associated compute resources) by reading the byte stream of the trust configuration file to determine whether any sequence of bytes fits a predetermined pattern (e.g., an alphanumeric string pattern).
The sequences of bytes that fit the predetermined pattern may be determined as the second set of fingerprints. At block908, the at least one processor may compare the first set of fingerprints with the second set of fingerprints to match keys with the compute resources without using the keys to access the compute resources. As discussed above, in some embodiments, a first compute resource may store a key (e.g., a private component of a private-public key pair) to a second compute resource, and the second compute resource may store in its trust configurations a second fingerprint of the key. The at least one processor may analyze the cloud environment to identify the key from the first compute resource, then may perform the cryptographic analysis described herein to identify a first fingerprint of the key. The at least one processor may then analyze the trust configurations of the second compute resource to identify the second fingerprint. The at least one processor may then compare the first fingerprint and the second fingerprint to determine whether they match each other (e.g., being identical). If the first fingerprint matches with the second fingerprint, the at least one processor may determine that the key identified from the first compute resource is a key for accessing the second compute resource. Consistent with disclosed embodiments, besides blocks902-908, the at least one processor may further analyze a multi-machine interaction in the cloud environment using the first set of fingerprints. By way of example, to analyze the multi-machine interaction, the at least one processor may compare the first set of fingerprints with the second set of fingerprints. As discussed above, in some embodiments, a first compute resource may store a key (e.g., a private key) for accessing a second compute resource. The at least one processor may analyze the cloud environment to identify the key from the first compute resource, then may perform the cryptographic analysis described herein to identify a first fingerprint of the key. A third compute resource (which may be the same as or different from the second compute resource) may store in its trust configurations a second fingerprint of the key. The at least one processor may then analyze the trust configurations of the third compute resource to identify the second fingerprint. The at least one processor may then compare the first fingerprint and the second fingerprint to determine whether they match each other (e.g., being the same). If the first fingerprint matches with the second fingerprint, the at least one processor may determine that the key identified from the first compute resource is a key for accessing the second compute resource, and may further determine an inter-machine interaction in which the first compute resource has the ability to access the second compute resource. Consistent with disclosed embodiments, besides blocks902-908, the at least one processor may further analyze a multi-machine interaction in the cloud environment using the plurality of keys. By way of example, to analyze the multi-machine interaction, the at least one processor may compare a first key for accessing a first one of the compute resources and a second key for accessing a second one of the compute resources. Hybrid Ephemeral Scanner Techniques Aspects of this disclosure may include accessing a primary account maintained in a cloud environment.
A primary account as used herein, may refer to a principal or main identity created for a person in a computer or computing system. A primary account may also be created for machine entities, such as service accounts for running programs, system accounts for storing system files and processes, and root and administrator accounts for system administration. A cloud environment, as used herein, may refer to a platform implemented on, hosted on, and/or accessing servers that are accessed over the Internet. An example of a cloud environment is cloud infrastructure106inFIG.1. In some embodiments, scanning system101ofFIG.1may gain access to an account within cloud infrastructure106. Aspects of this disclosure may include receiving information defining a structure of the primary account, wherein the structure includes a plurality of assets. Information as used herein, may refer to data received. Structure of the primary account as used herein, may refer to a setup or arrangement of the primary account. A plurality of assets as used herein, may refer to one or more data items, devices, or components within an organization's system. For example, the plurality of assets may include any of virtual machines107A-107D, databases109A-109D, storage111A-111D, keystores113A-113D, load balancer115, log files or databases, API gateway resources, API gateway REST APIs, Autoscaling groups, CloudTrail logs, CloudFront services, volumes, snapshots, VPCs, subnets, route tables, network ACLs, VPC endpoints, NAT gateways, ELB and ALB, ECR repositories, ECS clusters, services, and tasks, EKS, S3 bucket and Glacier storage, SNS topics, IAM roles, policies, groups, users, KMS keys, and Lambda functions. In some embodiments, scanning system101ofFIG.1may receive information relating to the structure of an account. In such an embodiment, the structure of the account may include virtual machines107A-D and databases109A-109D, for example. In some embodiments, the information may be acquired from another device, received via a one way communication, received in response to a request, retrieved from a storage device, or generated. In some embodiments, the information may exclude raw data of the primary account. Raw data as used herein, may refer to a collection of data as gathered before it has been processed, cleaned, or analyzed. For example, raw data may include usage data, passwords, and cache. Aspects of this disclosure may include deploying, inside the primary account, at least one ephemeral scanner configured to scan at least one block storage volume and to output metadata defining the at least one block storage volume. In some embodiments, the output may exclude raw data of the primary account. In some embodiments, the ephemeral scanner may be deployed inside a secondary account for which trust has been established with a primary account. For example, a secondary account on another system may have an existing trust relationship (such as a cloud trust policy) with a first account. The ephemeral scanner may be operated inside of that secondary account to scan block storage associated with the first account. Each account may be hosted on the same system or on different systems. Further, in some embodiments, the at least one ephemeral scanner may be configured to periodically scan at least one block storage volume. For example, the scanner may be configured to scan at least one block storage volume once per hour, once per day, once per week, or more or less often.
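As one possible deployment flow under the trust relationship described above, a scanner operated in a secondary account might assume a role in the account that owns the block storage and take a point-in-time snapshot to scan. The sketch assumes AWS and the boto3 library; the role ARN, session name, and volume ID are placeholders, and error handling and the actual scanning steps are omitted.

```python
import boto3

# Hypothetical identifiers; in a real deployment these would come from the
# established trust relationship between the scanning account and the
# account that owns the block storage volume.
ROLE_ARN = "arn:aws:iam::111111111111:role/ephemeral-scanner"
TARGET_VOLUME_ID = "vol-0123456789abcdef0"

# Assume the cross-account role granted by the trust policy.
sts = boto3.client("sts")
creds = sts.assume_role(RoleArn=ROLE_ARN, RoleSessionName="ephemeral-scan")["Credentials"]

# Use the temporary credentials to operate in the other account.
ec2 = boto3.client(
    "ec2",
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)

# Snapshot the block storage volume so the short-lived scanner can inspect a
# point-in-time copy instead of the live volume.
snapshot = ec2.create_snapshot(VolumeId=TARGET_VOLUME_ID, Description="ephemeral scan")
print("snapshot created:", snapshot["SnapshotId"])
```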
Further, in some embodiments, the at least one ephemeral scanner may be periodically deployed to scan at least one block storage volume. For example, the scanner may be deployed for scanning at least one block storage volume once per hour, once per day, once per week, or more or less often. Outputting metadata as used herein, may refer to producing a set of data that describes and gives information about other data. In some embodiments, scanning system101ofFIG.1may scan a block storage volume and output data related to it (e.g., vulnerability information, configuration information, malware information, risk analysis information, and sensitive information). Periodically scanning as used herein, may refer to conducting a scan occurring or recurring at regular intervals. In some embodiments, the ephemeral scanner is configured to perform vulnerability scanning of the at least one block storage volume. In such an embodiment the ephemeral scanner may scan the block storage volume to check for security vulnerabilities on the device. In some embodiments, the ephemeral scanner is configured to perform configuration scanning of the at least one block storage volume. In such an embodiment the ephemeral scanner may scan the block storage volume to check for configuration information related to the device. In some embodiments, the ephemeral scanner may scan the block storage volume in order to look for security issues. In some embodiments, the ephemeral scanner is configured to perform malware scanning of the at least one block storage volume. In such an embodiment the ephemeral scanner may scan the block storage volume to check for malware information related to the device. As discussed above with respect toFIG.2D, in step235, scanning system101may perform a step of malware scanning. In some embodiments, scanning system101may perform malware scanning across all filesystems in the snapshot (e.g., gathered from virtual machines107A-107D or storage111A-111D). Scanning system101may use multiple malware scanning software solutions to perform a malware scan against the filesystems, including one sourced from another vendor, such as bucketAV, Trend Micro Cloud One, Sophos Cloud Optix, Crowdstrike Falcon CWP, or others. In some embodiments, malware scanning in step235comprises utilizing signatures, heuristics, or sandboxing capabilities to deduce whether there is an infection on the machine. In some embodiments, the ephemeral scanner is configured to perform lateral-movement risk analysis of the at least one block storage volume. In such an embodiment the ephemeral scanner may scan the block storage volume to check for lateral-movement risk information related to the device. For example, in some embodiments, scanning system101may perform a “backward” analysis of the specific asset to identify exposure risk to assets downstream of the specific asset, wherein the downstream exposure risk includes an identification of an exposed asset, an entry point to the exposed asset, and lateral movement risks associated with the exposed asset. Further, as discussed above with respect toFIG.2D, in step237, scanning system101may perform a step of lateral movement scanning. An attacker who establishes a network foothold usually attempts to move laterally from one resource to another in search of rich targets such as valuable data. Stolen passwords and keys unlock access to servers, files, and privileged accounts. 
In some embodiments, scanning system101may gather keys from each scanned system or device (e.g., virtual machines107A-107D or storage111A-111D). In some embodiments, scanning system101searches for passwords, scripts, shell history, repositories, or other data that may contain passwords, cloud access keys, SSH keys, or other key/password/access information that provides unchecked access to important resources. In some embodiments, scanning system101searches for such keys/passwords/access information and calculates a "hash" (a mathematical fingerprint) of each string. Scanning system101then attempts to match the hashed strings to hashes of strings that are stored on different systems or devices. This may be used to detect potential lateral movement between assets. In some embodiments, the ephemeral scanner is configured to perform sensitive information scanning of the at least one block storage volume. In such an embodiment the ephemeral scanner may scan the block storage volume to check for sensitive information related to the device. As discussed above with respect toFIG.2D, in step241, scanning system101may perform a step of sensitive information scanning. In some embodiments, scanning system101may search the snapshot for sensitive information, such as personally identifiable information (PII), Social Security numbers, healthcare information, or credit card numbers. In some embodiments, scanning system101may search data repository history as well. This is because it is not uncommon for an entire production environment repository to be cloned, with no one remembering the copy contains sensitive information. In some situations, detecting unsecured sensitive data is critical for adherence to data privacy regulations. In some embodiments, the ephemeral scanner is configured to perform container scanning of the at least one block storage volume. In such an embodiment the ephemeral scanner may scan the block storage volume to check for container information related to the device. As discussed with respect toFIG.2D, in step243, scanning system101may perform a step of container scanning. In some embodiments, scanning system101may apply one or more of the preceding steps ofFIG.2Dagainst containerized environments. In some embodiments, in order to do so, scanning system101reconstructs a container runtime layered file system (LFS) before recursively running one or more of steps231-241on the reconstructed file system. In some embodiments, the ephemeral scanner is configured to perform keys and password scanning of the at least one block storage volume. In such an embodiment the ephemeral scanner may scan the block storage volume to check for keys and password information related to the device. As discussed above with respect toFIG.2D, in step239, scanning system101may perform a step of key/password scanning. As one example situation, suppose there is a weak or unprotected password stored (in plain text) in storage111A. For example, if a personal email account has been compromised, the passwords may already be known. Scanning system101may search the snapshot for similar usernames or login names, and, either using known dictionaries or the account owner's previously leaked passwords (stored in, e.g., database103A), may attempt to log in to one or more systems or devices in cloud infrastructure106, and may record the result thereof.
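The sensitive information scanning of step241could, for example, walk the files of a mounted snapshot and count matches against a few illustrative patterns. This is a hypothetical sketch: the patterns are simplistic (real detectors would use checksums, context, and dictionaries), and the mount path is a placeholder.

```python
import os
import re

# Illustrative patterns only; production detectors are considerably stricter.
SENSITIVE_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){15,16}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def scan_file(path):
    """Return {pattern_name: match_count} for a single file."""
    try:
        with open(path, "r", errors="ignore") as handle:
            text = handle.read()
    except OSError:
        return {}
    return {name: len(pattern.findall(text))
            for name, pattern in SENSITIVE_PATTERNS.items() if pattern.search(text)}

# Walk a mounted snapshot (placeholder path) and report files that appear to
# contain sensitive information.
for root, _dirs, files in os.walk("/mnt/snapshot"):
    for name in files:
        findings = scan_file(os.path.join(root, name))
        if findings:
            print(os.path.join(root, name), findings)
```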
In some embodiments, defining the at least one block storage volume includes presenting risk data without sharing consumer data or data that was used to identify the risk data. Risk data as used herein, may refer to information relating to potential harm. Consumer data as used herein, may refer to a user's personal information or any information trail users leave behind as a result of their computer or Internet use. Data used to identify the risk as used herein, may refer to information for determining possible issues. In some embodiments, scanning system101ofFIG.1may present information without sharing a user's personal information or any information trail users leave behind as a result of their computer or Internet use. In some embodiments, the metadata defining the at least one block storage volume includes at least one of: an indication of an installed application, a version of an installed application, an operating system configuration, an application configuration, or a profile configuration. An indication of an installed application as used herein, may refer to a sign or signal of a stored program. A version of an installed application as used herein, may refer to an older or newer form of a stored program. An operating system configuration as used herein, may refer to one or more computer system settings that have been set by default automatically or manually by a given program or the user. An application configuration as used herein, may refer to one or more computer program settings that have been set by default automatically or manually by a given program or the user. A profile configuration as used herein, may refer to one or more computer program file settings that have been set by default automatically or manually by a given program or the user. Aspects of this disclosure may include receiving a transmission of the metadata from the at least one ephemeral scanner. In some embodiments, the transmission may exclude raw data of the primary account. A transmission of metadata as used herein, may refer to movement of a set of data that describes and gives information about other data. In some embodiments, scanning system101ofFIG.1may receive data excluding any raw data (e.g., data as gathered before it has been processed, cleaned or analyzed). Aspects of this disclosure may include analyzing the received metadata to identify a plurality of cybersecurity vulnerabilities. A plurality of cybersecurity vulnerabilities as used herein, may refer to one or more weaknesses in a system. For example, a cybersecurity vulnerability may be exploited by cybercriminals to gain unauthorized access to a computer system. In some embodiments, scanning system101ofFIG.1may analyze the received data in order to identify any weakness in a system (e.g., to avoid cybercriminals gaining unauthorized access to a computer system). In some embodiments, vulnerability information may include information stored in a system. Aspects of this disclosure may include correlating each of the identified plurality of cybersecurity vulnerabilities with one of the plurality of assets. In some embodiments, scanning system101ofFIG.1may connect a security risk with an asset in the cloud environment. In such embodiments, the connection or correlation of vulnerability to asset may allow the system to address the vulnerability. Aspects of this disclosure may include generating a report correlating the plurality of cybersecurity vulnerabilities with the plurality of assets.
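A hypothetical illustration of the metadata a scanner might emit, and of correlating it against a vulnerability feed and back to an asset, is sketched below; the asset identifier, application versions, and vulnerability entry are invented for the example and carry no factual claim about real software.

```python
# A hypothetical metadata record emitted by an ephemeral scanner. It describes
# the block storage volume without including any raw data from the account.
metadata = {
    "asset_id": "vol-0123456789abcdef0",
    "operating_system": {"name": "example-linux", "version": "20.04"},
    "installed_applications": [
        {"name": "example-web-server", "version": "1.18.0"},
        {"name": "example-tls-library", "version": "1.1.1"},
    ],
}

# A hypothetical vulnerability feed keyed by application name, listing the
# versions known to be affected and the associated advisory identifiers.
vulnerability_feed = {
    "example-tls-library": {"1.1.1": ["CVE-EXAMPLE-0001"]},
}

def correlate(metadata, feed):
    """Match installed application versions against the feed and tie each
    finding back to the asset the metadata describes."""
    findings = []
    for app in metadata["installed_applications"]:
        for advisory in feed.get(app["name"], {}).get(app["version"], []):
            findings.append({"asset_id": metadata["asset_id"],
                             "application": app["name"],
                             "version": app["version"],
                             "vulnerability": advisory})
    return findings

# A report could then be generated from the correlated findings.
print(correlate(metadata, vulnerability_feed))
```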
A report as used herein, may refer to a document containing information. For example, a report may notify the administrator of a website or application about a problem such as a security issue or vulnerability in the system that should be addressed. In some embodiments, scanning system101ofFIG.1may create a report providing information related to the vulnerability and asset. In such an embodiment, the report may allow for the vulnerability to be addressed and resolved. In such embodiments, the system may allow scanners (e.g., ephemeral scanners, scanning system101) to run inside an account, including an Amazon Web Services, Azure, or GCP account. In this mode, a cloud cybersecurity service may generate ephemeral scanners inside an account that perform the same actions as a SAAS node but are logically hosted inside the account. In some embodiments, the at least one processor is further configured to receive a transmission of updated metadata defining the at least one block storage volume in response to at least one change to the at least one block storage volume. Updated metadata as used herein, may refer to a renewed set of data that describes and gives information about other data. FIG.10is a block diagram of a method1000for deployment of ephemeral scanners, consistent with disclosed embodiments. In some embodiments, the method may include seven (or more or fewer) steps: Block1002: Access a primary account maintained in a cloud environment. In some embodiments, scanning system101ofFIG.1may gain access to an account within cloud infrastructure106using a password stored in a database, for example. Block1004: Receiving information defining a structure of the primary account, wherein the structure includes a plurality of assets. In some embodiments, the information may exclude raw data of the primary account. In some embodiments, scanning system101ofFIG.1may receive data relating to the structure of an account. In such an embodiment, the structure of the account may include virtual machines107A-D and databases109A-109D, for example. Block1006: Deploying, inside the primary account, at least one ephemeral scanner configured to periodically scan at least one block storage volume and to output metadata defining the at least one block storage volume. In some embodiments, the output may exclude raw data of the primary account. In some embodiments, scanning system101ofFIG.1may scan a block storage volume and output data related to it (e.g., vulnerability information, configuration information, malware information, risk analysis information, and sensitive information). Block1008: Receiving a transmission of the metadata from the at least one ephemeral scanner. In some embodiments, the transmission may exclude raw data of the primary account. In some embodiments, scanning system101ofFIG.1may receive data excluding any raw data (e.g., data as gathered before it has been processed, cleaned or analyzed). Block1010: Analyzing the received metadata to identify a plurality of cybersecurity vulnerabilities. For example, a cybersecurity vulnerability may be exploited by cybercriminals to gain unauthorized access to a computer system. In some embodiments, scanning system101ofFIG.1may analyze the received data in order to identify any weakness in a system (e.g., to avoid cybercriminals gaining unauthorized access to a computer system). Block1012: Correlating each of the identified plurality of cybersecurity vulnerabilities with one of the plurality of assets.
In some embodiments, scanning system101ofFIG.1may connect a security risk with an asset in the cloud environment. In such embodiments, the connection or correlation of vulnerability to asset may allow the system to address the vulnerability. Block1014: Generating a report correlating the plurality of cybersecurity vulnerabilities with the plurality of assets. For example, a report may notify the administrator of a website or application about a problem such as a security issue or vulnerability in the system that should be addressed. In some embodiments, scanning system101ofFIG.1may create a report providing information related to the vulnerability and asset. In such an embodiment, the report may allow for the vulnerability to be addressed and resolved. Risk Information Aggregation Techniques FIG.11represents a schematic block diagram1100illustrating an exemplary embodiment of a method for providing a dashboard aggregating risk information. Assets operating in a cloud environment face myriad cybersecurity risks that vary in type and nature of threat with increasing regularity. In order to combine the varied and multiple cybersecurity risks an asset may encounter in a cloud environment into a single view for an administrator, and to allow for faster and more flexible processing, prioritization, and mitigation, a system and method of aggregating cybersecurity risks into a single dashboard is needed. In one of many embodiments, an exemplary embodiment may include a graphical user interface system for providing comprehensive cloud environment risk inventory visualization. A graphical user interface system may include systems similar in appearance and functionality to Microsoft Windows, Mac OS, Ubuntu Unity, Gnome Shell, Android, Apple iOS, Blackberry OS, Windows 10 Mobile, PalmOS-Web OS, or Firefox OS. For example, cloud infrastructure106may be the cloud environment consisting of virtual machines107A-107D, databases109A-109D, storage111A-111D, keystores113A-113D, and load balancer115, and the cloud-based storage volume may be contained in storage111A-111D. The cloud environment may include a service provided by a third party such as Amazon Web Services, Microsoft Azure, Google Cloud, Alibaba Cloud, IBM Cloud, Oracle, Salesforce, SAP, or a similar service. Visualizations provided by the above system may include graphical, sequential, or multi-dimensional displays of risk inventories. An exemplary embodiment may also include at least one processor. The at least one processor may be part of a personal computer, tablet, smartphone, or virtual machine for processing threats and vulnerabilities to a cloud-based storage volume. A processor may include one or more integrated circuits (IC), including application-specific integrated circuit (ASIC), microchips, microcontrollers, microprocessors, all or part of a central processing unit (CPU), graphics processing unit (GPU), digital signal processor (DSP), field-programmable gate array (FPGA), server, virtual server, or other circuits suitable for executing instructions or performing logic operations. An embodiment may include a processor configured to cause a display to present a plurality of asset categories (step1101).
A display may be graphical, sequential, or multi-dimensional. The plurality may include at least one asset category to be displayed. Asset categories may include at least one of an account category, a container category, a database category, an image category, a managed service category, a messaging service category, a monitoring category, a network category, a storage category, a user category, an access category, a virtual machine category, or a serverless category. An account category may further include a listing of assets based on the type of account associated with a user. A type of account may include an active user account, a passive user account, an administrator account, a maintainer account, a system account, a superuser account, or a guest user account. A container category may include one or more containers currently listed in the system. A container may involve specific versions of programming language runtimes, libraries required to run software, or another method of packaging applications abstracted from the cloud environment. A database category may include one or more databases managed and maintained in relation to the system. Databases may include a NoSQL database, a relational database, a cloud database, a columnar database, a wide column database, a key-value database, an object-oriented database, a hierarchical database, or any other kind of database. Databases may be implemented using ElasticCache, ElasticSearch, DocumentDb, DynamoDB, Neptune, RDS, Aurora, Redshift clusters, Kafka clusters, or EC2 instances. An image category may contain one or more images based on image type, size, and content. Images may include images of one or more virtual machines. A managed service category may include one or more managed services listed in the above system, including at least client- or customer-owned systems being managed by a third-party entity. A messaging service category may include systems/programs communicating with other systems/programs using services such as Google Cloud Pub/Sub communications, AWS SQS, or similar queues found in Information Systems. A monitoring category may include centralized or decentralized methods of security monitoring systems. A network category may include one or more systems of establishing a form of communicative connectivity between systems, e.g., TCP/IP services. A storage category may list one or more methods of data storage, to include Direct Attached Storage, Network Attached Storage, SSD Flash Drive Arrays, Hybrid Flash Arrays, Hybrid Cloud Storage, Backup Software, Backup Appliances, Cloud Storage, or similar. Storage may include data structures, instructions, or any other data to be contained in a storage medium. A user category may include one or more users of a given system along with identifying information relating to each user's unique or semi-unique identifier, level of authorized access and permissions, and the storage volumes accessible by said user. An access category may include one or more levels of access permission and a listing of users or devices granted access at a given level of access permission to a system. A virtual machine category may list one or more virtual machines accessible to a system and the levels and types of access granted to a given virtual machine.
A serverless category may list one or more storage volumes stored on a serverless computing system such as Kubeless, Fission, Google App Engine, AWS Lambda, Google Cloud Functions, IBM Cloud Function, IBM Cloud Code Engine, Microsoft Azure Functions, Cloudflare Workers, and Compute@Edge. In an embodiment, an exemplary system may include a processor configured to receive, via an input device, a selection of a particular asset category (step1103). An input device may include a personal computer, tablet, smartphone, or virtual machine. A processor may include one or more integrated circuits (IC), including application-specific integrated circuit (ASIC), microchips, microcontrollers, microprocessors, all or part of a central processing unit (CPU), graphics processing unit (GPU), digital signal processor (DSP), field-programmable gate array (FPGA), server, virtual server, or other circuits suitable for executing instructions or performing logic operations. A selection of one or more asset categories may be received and updated as new visualizations are developed based on the current threats to an asset in that category. Asset categories may be selected or deselected by an end user. Once a particular asset category is selected, it may be added to a list of selected asset categories, and a corresponding list may be maintained for deselected categories as well. The list of selected asset categories may be organized per category or may be compiled to include all current threats in a comprehensive selected asset threat list. One exemplary embodiment may further include a processor configured to cause the display to present a list of assets in the selected category that have cyber security risks (step1105). When an asset in a selected category has a cyber security risk associated with it, the asset may appear in a list of all listed categories having a cyber security risk associated with it, and may be organized by individual category, type of cyber security risk, or some combination of these or other sequential ordering methods based on the information available to the processor and its end user. In one of many exemplary embodiments, a processor may be configured to retrieve workload component cybersecurity risk information (step1107A) for each listed asset. When a category of assets is selected, a processor may be configured to order those assets according to the cybersecurity risk information available to said processor, and may organize by type of risk, type of asset, or a similar type of sequencing. Cybersecurity risk information may be determined from historical threats to the workload component, the current active cybersecurity threats to workload components of certain types, or some combination of the same. In an embodiment, a processor may be configured to retrieve cloud component cybersecurity risk information (step1107B) for each listed asset. Cloud component cybersecurity risk information may include identification of the risk based on a semi-unique identifier, the nature of the risk, the likelihood of the risk, or the severity of the risk to each listed asset. Cybersecurity risk information for a cloud component may be determined from historical threats within the cloud environment106, current active cybersecurity threats to similar workload components in the cloud environment106, or the level of risk to assets within the cloud environment106, as outlined below.
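One way to picture the aggregation performed in step1107A and step1107B, before presentation in the common interface of step1109, is a simple per-asset grouping of the two kinds of risk information. The records, category names, and severities below are hypothetical placeholders rather than data from any real environment.

```python
from collections import defaultdict

# Hypothetical risk records: one list for workload component risks and one
# for cloud component risks, each tagged with the asset and its category.
workload_risks = [
    {"asset": "web_vm", "category": "virtual machine",
     "risk": "outdated kernel", "severity": "high"},
]
cloud_risks = [
    {"asset": "web_vm", "category": "virtual machine",
     "risk": "publicly reachable", "severity": "medium"},
]

def build_dashboard(selected_category):
    """Aggregate workload and cloud risk information for one selected asset
    category into a single structure suitable for a common interface."""
    dashboard = defaultdict(lambda: {"workload": [], "cloud": []})
    for record in workload_risks:
        if record["category"] == selected_category:
            dashboard[record["asset"]]["workload"].append(record)
    for record in cloud_risks:
        if record["category"] == selected_category:
            dashboard[record["asset"]]["cloud"].append(record)
    return dict(dashboard)

print(build_dashboard("virtual machine"))
```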
An exemplary embodiment may include a system configured to cause the display to present a common interface (step1109) providing access to the workload component cybersecurity risk information and cloud component cybersecurity risk information. The common interface may be configured to access component cybersecurity risk information based on the nature of the risk, the likelihood of the risk, or the severity of the risk to the component, including recursive risk—risk that arises when an attacker gains access to a system and uses it as a means to reach other systems. On the same interface, a different interface, or a similar and related interface, a display may present cloud component cybersecurity risk information based on a semi-unique identifier, the nature of the risk, the likelihood of the risk, or the severity of the risk to each listed asset. As an exemplary embodiment, the system may be configured to cause the display to present, in the common interface, an interconnection between the workload component cybersecurity risk information and the cloud component cybersecurity risk information (step1111). With this display presented, an exemplary embodiment may allow a user to prioritize data from additional sources. The interconnection between the workload component cybersecurity risk information and the cloud component cybersecurity risk information may be generated within the common interface and presented as a connection between the two sets of information. The common interface presented may include direct interconnections between the component cybersecurity risk information and the cloud component cybersecurity risk information or links from one set of cybersecurity risk information to the other. For instance, the component cybersecurity risk information may include a link to access the cloud component cybersecurity risk information. Conversely, the cloud component cybersecurity risk information may include a link to access the component cybersecurity risk information. In one embodiment, a system may be configured such that the plurality of asset categories includes at least one of an account category, a container category, a database category, an image category, a managed service category, a messaging service category, a monitoring category, a network category, a storage category, a user category, an access category, a virtual machine category, or a serverless category. A category may list the currently identified entities of that category type as described below. An account category may include one or more listed accounts of users of the system. An account category may further include a listing of assets based on the type of account associated with a user. A type of account may include an active user account, a passive user account, an administrator account, a maintainer account, a system account, a superuser account, or a guest user account. A container category may include one or more containers currently listed in the system. A container may involve specific versions of programming language runtimes, libraries required to run software, or another method of packaging applications abstracted from the cloud environment. A database category may include one or more databases managed and maintained in relation to the system. Databases may include a NoSQL database, a relational database, a cloud database, a columnar database, a wide column database, a key-value database, an object-oriented database, a hierarchical database, or any other kind of database.
Databases may be implemented using ElasticCache, ElasticSearch, DocumentDb, DynamoDB, Neptune, RDS, Aurora, Redshift clusters, Kafka clusters, or EC2 instances. An image category may contain one or more images based on image type, size, and content. Images may include images of one or more virtual machines displayed. A managed service category may include one or more managed services listed in the above system, to include at least client or customer-owned systems being managed by a third party entity. A messaging service category may include services of users communicating with at least one other user such as Google Cloud Pub and Sub communications, AWS SQS, or similar queues found in Information Systems. A monitoring category may include centralized or decentralized methods of security monitoring systems. A network category may include one or more systems of establishing a form of communicative connectivity between systems, e.g., TCP-IP services. A storage category may list one or more methods of data storage, to include Direct Attached Storage, Network Attached Storage, SSD Flash Drive Arrays, Hybrid Flash Arrays, Hybrid Cloud Storage, Backup Software, Backup Appliances, Cloud Storage, or similar. Storage may include data structures, instructions, or any other data to be contained in a storage medium. A user category may include one or more users of a given system along with identifying information relating to its unique or semi-unique identifier, level of authorized access and permissions, and storage volumes accessible by said user. An access category may include one or more levels of access permission and a listing of users or devices granted access at a given level of access permission to a system. A virtual machine category may list one or more virtual machines accessible to a system and the levels and types of access granted to a given virtual machine. A serverless category may list one or more storage volumes stored on a serverless computing system such as Kubeless, Fission, Google App Engine, AWS Lambda, Google Cloud Functions, IBM Cloud Function, IBM Cloud Code Engine, Microsoft Azure Functions, Cloudflare Workers, and Compute@Edge. In one of many exemplary embodiments, a system may further include the common interface configured to display information relating to at least one of an asset type, a risk, a region, or an account. A common interface may include the ability to configure and reconfigure the asset type, risk, region, or account to appear in any order and at multiple formats for displaying the information relating to those categories. In another embodiment, a system may provide the common interface configured to display description for each listed asset. The display of a description for each listed asset may be listed vertically or horizontally. The display may be displayed as written text, representative graphical figures, or similar method of displaying the aforementioned types of information for an asset. In another embodiment, a system may further provide the common interface is configured to display at least one of a vulnerability, an insecure configuration, an indication of a presence of malware, a neglected asset, a data at risk, a lateral movement, or an authentication. 
A vulnerability may include vulnerability data from a vulnerability database such as NVD, WPVulnDB, US-CERT, Node.js Security Working Group, OVAL—Red Hat, Oracle Linux, Debian, Ubuntu, SUSE, Ruby Advisory Database, JVN, Safety DB(Python), Alpine secdb, PHP Security Advisories Database, Amazon ALAS, RustSec Advisory Database, Red Hat Security Advisories, Microsoft MSRC, KB, Debian Security Bug Tracker, Kubernetes security announcements, Exploit Database, Drupal security advisories, JPCERT. An insecure configuration may include software flaws or misconfigurations, non-encrypted files, improper file or directory permissions, unpatched security flaws in server software, enabled or accessible administrative and debugging functions, administrative account vulnerabilities, SSL certificates and encryption settings not properly configured, or a similar misconfiguration. These misconfigurations may be discovered by performing a scan, e.g., may query devices and systems capable of routing traffic (e.g., load balancer115, routers, switches, firewalls, and proxies) using an API provided through a cloud service provider's system to determine network configurations, and may evaluate them against known problematic configurations or other configurations. Malware indicated may include Adware, Botnets, Cryptojacking, Malvertising, Ransomware, Remote Administration Tools (RATs), Rootkits, Spyware, Trojans, Virus Malware, Worm Malware, or similar attack vehicles. A neglected asset may include an asset that has been improperly maintained, patched, or similar cybersecurity security measure. A data at risk may include any packet of data that may be exposed to a cybersecurity threat. A lateral movement may include any pathway between assets where a cybersecurity risk can travel from one affected asset to another affected asset. An authentication may include a username, password, one-time password, two-factor authentication, or any other authentication mechanism to gain access to a cloud service provider's system. A common interface may display more than one of a vulnerability, an insecure configuration, an indication of a presence of malware, a neglected asset, a data at risk, a lateral movement, and an authentication separately or concurrently for an asset. A display of said information may be ordered and listed in any order depending on user preference and input. An exemplary embodiment may further include a system wherein the common interface is configured to display one or more possible attack vectors reaching the each listed asset. Generating this common interface display may include generating a map of an asset to include recording information such as a region identifier, site identifier, datacenter identifier, physical address, network address, workload name, or any other identifier which may be acquired via an API provided through a cloud service provider's system and demonstrating via a display the myriad attack vectors currently available to an analyzed asset. Display of the one or more possible attack vectors reaching the each listed asset may be displayed as written text, representative graphical figures, or similar method of displaying the one or more possible attack vectors. Another exemplary embodiment may further include a system wherein the common interface is configured to display a recommended mitigation tactic for the each listed asset. 
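As one possible, purely illustrative reading of the configuration-scanning passage above, the sketch below evaluates retrieved firewall-style rules against a small table of known problematic patterns (administrative or database ports open to any source). The rule format, port table, and sample rules are assumptions rather than any cloud provider's actual API output.

```python
# Hedged sketch: flag overly permissive rules among network configurations
# gathered from routing devices. The rule schema below is an assumption.
from typing import Dict, List

RISKY_PORTS = {22: "SSH", 3389: "RDP", 5432: "PostgreSQL"}  # illustrative table

def find_insecure_rules(firewall_rules: List[Dict]) -> List[str]:
    """Flag rules that expose administrative or database ports to any source."""
    findings = []
    for rule in firewall_rules:
        open_to_world = rule.get("source") == "0.0.0.0/0"
        port = rule.get("port")
        if open_to_world and port in RISKY_PORTS:
            findings.append(
                f"{rule.get('id', '?')}: {RISKY_PORTS[port]} (port {port}) open to the Internet")
    return findings

rules = [
    {"id": "fw-1", "port": 22, "source": "0.0.0.0/0"},
    {"id": "fw-2", "port": 443, "source": "0.0.0.0/0"},
]
print(find_insecure_rules(rules))  # ['fw-1: SSH (port 22) open to the Internet']
```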
A recommended mitigation tactic may be generated based on historical data of the cybersecurity risk, the vulnerability attributed to the listed asset, or the data contained on the each listed asset. Mitigation tactics may be selected based on user input or may be implemented automatically based on the mitigation tactics likelihood of a successful mitigation of the cybersecurity risk. A mitigation tactic may include increased frequency of scanning, heightened access control measures, firewalls and antivirus software patches, increased patch management scheduling, continuous workload monitoring, or similar tactics. In another exemplary embodiment is a system wherein the common interface is configured to display one or more workload metrics associated with the each listed asset. Workload metrics may be a statistical representation of the current performance of the workload, the capacity for additional processing of the workload, and the historical performance of the workload based on statistical trend. Another exemplary embodiment may include a system wherein the at least one processor is further configured to provide a cybersecurity report for the each listed asset. A cybersecurity report may be generated by scanning system101following a complete scan of an analyzed asset, and may include a snapshot of one or more of the number of total cybersecurity threats to a listed asset, number of cybersecurity threats mitigated by recent performance, methods of cybersecurity risk mitigation in the listed asset, rate of cybersecurity threats detected in the listed asset, and possibility of failed cybersecurity mitigation tactics based on the frequency and nature of the historical cybersecurity threats. Data collected to generate a cybersecurity report may include, among many things, operating system packages, installed software applications, libraries, and program language libraries such as Java archives, Python packages, Go modules, Ruby gems, PHP packages, and Node.js modules, other software applications, library versions, software versions, and other identifying characteristics of software and operating systems, lists of users of each system or device (e.g., virtual machines107A-D), each system's or device's services, password hashes, and application-specific configurations for software/services such as Apache, Nginx, SSH, and other services, bugs or other configuration risks, malware scan results, passwords, scripts, shell history, repositories, or other data that may contain passwords, cloud access keys, SSH keys, or other key/password/access information that provide unchecked access to important resources. In another embodiment, a system may include the at least one processor is configured to identify a risk level distribution among the listed assets. The at least one processor may be configured to list risk level distribution by highest level of risk, lowest level of risk, or largest number of cybersecurity risks attributed among the listed assets. A risk level may be high if the web service is connected to the Internet (e.g., has at least one public port forwarded through a firewall to the web service, is able to be accessed through a load balancer, is able to be accessed via reverse proxy). A risk level may be medium risk if the web service is only accessible internally (e.g., because of firewall configuration). A risk level may be low risk if access to the web server is blocked by a configuration of cloud infrastructure106. 
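The high, medium, and low risk levels described above lend themselves to a simple classification, sketched below under the assumption that exposure attributes (public port forwarding, load balancer or reverse proxy reachability, internal reachability) have already been collected; the Exposure fields and sample assets are illustrative only.

```python
# Illustrative sketch of the risk-level assignment described above and of the
# resulting risk level distribution among listed assets.
from dataclasses import dataclass
from collections import Counter
from typing import List

@dataclass
class Exposure:
    asset_id: str
    public_port_forwarded: bool = False
    behind_load_balancer: bool = False
    behind_reverse_proxy: bool = False
    internally_reachable: bool = False

def risk_level(e: Exposure) -> str:
    # High when reachable from the Internet, medium when only internal, low otherwise.
    if e.public_port_forwarded or e.behind_load_balancer or e.behind_reverse_proxy:
        return "high"
    if e.internally_reachable:
        return "medium"
    return "low"

def risk_level_distribution(exposures: List[Exposure]) -> Counter:
    """Aggregate per-asset levels into a distribution for display."""
    return Counter(risk_level(e) for e in exposures)

exposures = [
    Exposure("web-1", behind_load_balancer=True),
    Exposure("db-1", internally_reachable=True),
    Exposure("batch-1"),
]
print(risk_level_distribution(exposures))  # Counter({'high': 1, 'medium': 1, 'low': 1})
```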
Another exemplary embodiment may include a system wherein the at least one processor is configured to receive a search query for a specific risk. A search query may be input by type of risk, name of a risk, or another semi-unique manner of identifying a specific cybersecurity risk. A search may be initiated by an end user operating user device102, by a maintainer or administrator of an analyzed asset, or by a virtual machine associated with operating scanning system101. A search may be initiated by entering a number of inputs, including keywords, phrases, file type, or similar identifiers unique or semi-unique to a given cybersecurity risk. Upon initiating a search, the results generated by the search may be stored in the analyzed asset, transmitted to the operator of a processor running side scanning system101, or generated for review by an end user operating user device102. The search and display of a specific risk may be based on historical trends of cybersecurity risks in the listed asset, severity of cybersecurity risk to a listed asset, or a historical search for past cybersecurity threats to a listed asset. Another embodiment may include a system wherein the at least one processor is configured to identify one or more assets vulnerable to the specific risk. For instance, a processor operating scanning system101may be configured to perform a scan of a set of one or more assets with potential exposure to a given risk, and upon identifying vulnerable pathways for an attack vector to reach a set of identified assets, may generate a list of assets that are potentially or currently exposed to one or more specific risks. Assets vulnerable to a specific risk may be determined to be vulnerable based on the type of the specific risk, the existence of specific risks in adjacent or similar workloads, and related risks that may also cause an asset to be vulnerable based on the specific risk determined. Another embodiment is a method for providing comprehensive cloud environment risk inventory visualization in a graphical user interface comprising causing a display to present a plurality of asset categories (step1101). The method may include the use of a graphical user interface system and may include systems similar in appearance and functionality to Microsoft Windows, Mac OS, Ubuntu Unity, Gnome Shell, Android, Apple iOS, Blackberry OS, Windows 10 Mobile, PalmOS-Web OS, or Firefox OS. For example, cloud infrastructure106may be the cloud environment consisting of virtual machines107A-107D, databases109A-109D, storage111A-111D, keystores113A-113D, and load balancer115, and the cloud-based storage volume may be contained in storage111A-111D. The cloud environment may include services provided by a third party such as Amazon Web Services, Microsoft Azure, Google Cloud, Alibaba Cloud, IBM Cloud, Oracle, Salesforce, SAP, or a similar service. Visualizations provided by the above system may include graphical, sequential, or multi-dimensional displays of risk inventories. One of several embodiments may include a method for providing comprehensive cloud environment risk inventory visualization in a graphical user interface comprising receiving, via an input device, a selection of a particular asset category.
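A minimal sketch of the search behavior described above follows: a query string is matched against risk identifiers, names, and types, and the assets attributed to the matching risks are collected. The record fields and sample risks are assumptions made for the example.

```python
# Illustrative sketch: search risk records and list the assets vulnerable to
# the matching risks. Record layout and sample data are assumptions.
from typing import Dict, List

def search_risks(risks: List[Dict], query: str) -> List[Dict]:
    """Return risk records whose id, name, or type contains the query string."""
    q = query.lower()
    return [r for r in risks
            if q in r["risk_id"].lower() or q in r["name"].lower() or q in r["type"].lower()]

def vulnerable_assets(risks: List[Dict], query: str) -> List[str]:
    """Collect the assets attributed to every risk matching the query."""
    assets: List[str] = []
    for r in search_risks(risks, query):
        assets.extend(a for a in r.get("affected_assets", []) if a not in assets)
    return assets

risks = [
    {"risk_id": "R-101", "name": "Outdated OpenSSL", "type": "vulnerability",
     "affected_assets": ["vm-1", "vm-3"]},
    {"risk_id": "R-102", "name": "Public S3 bucket", "type": "insecure configuration",
     "affected_assets": ["bucket-7"]},
]
print(vulnerable_assets(risks, "openssl"))  # ['vm-1', 'vm-3']
```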
The input device may include a personal computer, tablet, smartphone, or virtual machine. A processor may include one or more integrated circuits (IC), including application-specific integrated circuit (ASIC), microchips, microcontrollers, microprocessors, all or part of a central processing unit (CPU), graphics processing unit (GPU), digital signal processor (DSP), field-programmable gate array (FPGA), server, virtual server, or other circuits suitable for executing instructions or performing logic operations. A selection of one or more asset categories may be received and updated as new visualizations are developed based on the current threats to an asset in that category. Asset categories may be selected or deselected by an end user. Once a particular asset category is selected, it may be added to a list of selected asset categories, with a list for deselected categories maintained as well. The list of selected asset categories may be organized per category or may be compiled to include all current threats in a comprehensive selected asset threat list. Another embodiment may include a method for providing comprehensive cloud environment risk inventory visualization in a graphical user interface comprising causing the display to present a list of assets in the selected category that have cybersecurity risks (step1105). A display may be graphical, sequential, or multi-dimensional. A list of selected categories may include at least one of an account category, a container category, a database category, an image category, a container category, a managed service category, a messaging service category, a monitoring category, a network category, a storage category, a user category, an access category, a virtual machine category, or a serverless category. When an asset in a selected category has a cyber security risk associated with it, the asset may appear in a list of all listed categories having a cyber security risk associated with it, and may be organized by individual category, type of cyber security risk, or some combination of these or other sequential ordering methods based on the information available to the processor and its end user. Another embodiment may include a method for providing comprehensive cloud environment risk inventory visualization in a graphical user interface comprising retrieving workload component cybersecurity risk information307A for each listed asset. When a category of assets is selected, a processor may be configured to order those assets according to the cybersecurity risk information available to said processor, and may organize by type of risk, type of asset, or a similar type of sequentialization. Cybersecurity risk information may be determined from historical threats to the workload component, the current active cybersecurity threats to workload components of certain types, or some combination of the same. Another embodiment may include a method for providing comprehensive cloud environment risk inventory visualization in a graphical user interface comprising retrieving cloud component cybersecurity risk information307B. Cloud component cybersecurity risk information may include identification of the risk based on a semi-unique identifier, the nature of the risk, the likelihood of the risk, or the severity of the risk to each listed asset.
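The selection bookkeeping described above (selected and deselected category lists, with selected categories compiled into a comprehensive threat list) could be kept as in the following sketch; the class name, data shapes, and sample threats are assumptions for illustration.

```python
# Illustrative sketch of category selection/deselection and compilation of a
# comprehensive threat list across the selected categories.
from typing import Dict, List, Set

class CategorySelection:
    def __init__(self, all_categories: List[str]):
        self.selected: Set[str] = set()
        self.deselected: Set[str] = set(all_categories)

    def select(self, category: str) -> None:
        self.deselected.discard(category)
        self.selected.add(category)

    def deselect(self, category: str) -> None:
        self.selected.discard(category)
        self.deselected.add(category)

    def compiled_threats(self, threats_by_category: Dict[str, List[str]]) -> List[str]:
        """Flatten current threats for every selected category into one list."""
        compiled: List[str] = []
        for category in sorted(self.selected):
            compiled.extend(threats_by_category.get(category, []))
        return compiled

selection = CategorySelection(["storage", "virtual machine", "serverless"])
selection.select("storage")
selection.select("serverless")
threats = {"storage": ["public bucket"], "serverless": ["over-privileged function role"]}
print(selection.compiled_threats(threats))
```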
Cybersecurity risk information to a cloud component may be determined from historical threats within the cloud environment106, current active cybersecurity threats to similar workload components in the cloud environment106or the level of risk to assets within the cloud environment106, as outlined above. Another embodiment may include a method for providing comprehensive cloud environment risk inventory visualization in a graphical user interface comprising causing the display to present a common interface309providing access to the workload component cybersecurity risk information and cloud component cybersecurity risk information. The common interface may be configured to access component cybersecurity risk information based on the nature of the risk, the likelihood of the risk, or the severity of the risk to the component. On the same or a similar and related interface, a display may present cloud component cybersecurity risk information based on a semi-unique identifier, the nature of the risk, the likelihood of the risk, or the severity of the risk to each listed asset. Another embodiment may include a method for providing comprehensive cloud environment risk inventory visualization in a graphical user interface comprising causing the display to present, in the common interface, an interconnection between the workload component cybersecurity risk information and the cloud component cybersecurity risk information. The interconnection between workload component cybersecurity risk information307A and cloud component cybersecurity risk information307B may generate an interface between them309and may be presented as an interconnection between them311. The common interface presented may include direct interconnections between the component cybersecurity risk information and the cloud component cybersecurity risk information or links from one set of cybersecurity risk information. For instance, the component cybersecurity risk information may include a link to access the cloud component cybersecurity risk information. Conversely, the cloud component cybersecurity risk information may include a link to access the component cybersecurity risk information. In another embodiment, a method may include wherein the plurality of asset categories include at least one of an account category, an authentication category, a container category, a database category, an image category, a container category, a managed service category, a messaging service category, a monitoring category, a network category, a storage category, a user category, a access category, a virtual machine category, or a serverless category. A category may list the currently identified entities of that category type as described below. An account category may include one or more listed accounts of users of the system. An account category may further include listing of assets based on type of account associated with that user. A type of account may include active user account, passive user account, administrator account, maintainer account, a system account, superuser account, or a guest user account. A container category may include one or more containers currently listed in the system. A container may involve specific versions of programming language runtimes, libraries required to run software, or another method of packaging applications abstracted from the cloud environment. A database category may include one or more databases managed and maintained in relation to the system. 
Databases may include a NoSQL database, a relational database, a cloud database, a columnar database, a wide column database, a key-value database, an object-oriented database, a hierarchical database, or any other kind of database. Databases may be implemented using ElasticCache, ElasticSearch, DocumentDb, DynamoDB, Neptune, RDS, Aurora, Redshift clusters, Kafka clusters, or EC2 instances. An image category may contain one or more images based on image type, size, and content. Images may include images of one or more virtual machines displayed. A managed service category may include one or more managed services listed in the above system, to include at least client or customer-owned systems being managed by a third party entity. A messaging service category may include services of users communicating with at least one other user such as Google Cloud Pub and Sub communications, AWS SQS, or similar queues found in Information Systems. A monitoring category may include centralized or decentralized methods of security monitoring systems. A network category may include one or more systems of establishing a form of communicative connectivity between systems, e.g., TCP-IP services. A storage category may list one or more methods of data storage, to include Direct Attached Storage, Network Attached Storage, SSD Flash Drive Arrays, Hybrid Flash Arrays, Hybrid Cloud Storage, Backup Software, Backup Appliances, Cloud Storage, or similar. Storage may include data structures, instructions, or any other data to be contained in a storage medium. A user category may include one or more users of a given system along with identifying information relating to its unique or semi-unique identifier, level of authorized access and permissions, and storage volumes accessible by said user. An access category may include one or more levels of access permission and a listing of users or devices granted access at a given level of access permission to a system. A virtual machine category may list one or more virtual machines accessible to a system and the levels and types of access granted to a given virtual machine. Another embodiment may include a method wherein the common interface is configured to display information relating to at least one of an asset type, a risk, a region, or an account. A common interface may include the ability to configure and reconfigure the asset type, risk, region, or account to appear in any order and at multiple formats for displaying the information relating to those categories. Another embodiment may include a method wherein the common interface is configured to display description for each listed asset. The display of a description for each listed asset may be listed vertically or horizontally. The display may be displayed as written text, representative graphical figures, or similar method of displaying the aforementioned types of information for an asset. Another embodiment may include a method wherein the common interface is configured to display at least one of a vulnerability, an insecure configuration, an indication of a presence of malware, a neglected asset, a data at risk, a lateral movement, or an authentication. 
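One possible, non-limiting way to let the common interface display asset type, risk, region, account, and description information in any order, as described above, is sketched below as a plain-text listing whose column order is supplied by the user; the column names and sample rows are illustrative assumptions.

```python
# Illustrative sketch: render asset rows with a user-chosen column order.
from typing import Dict, List

def render_listing(rows: List[Dict[str, str]], column_order: List[str]) -> str:
    """Render asset rows using the supplied column order."""
    header = " | ".join(column_order)
    lines = [header, "-" * len(header)]
    for row in rows:
        lines.append(" | ".join(str(row.get(col, "")) for col in column_order))
    return "\n".join(lines)

rows = [
    {"asset type": "virtual machine", "risk": "high", "region": "us-east-1",
     "account": "prod", "description": "Internet-facing web server"},
    {"asset type": "storage", "risk": "medium", "region": "eu-west-1",
     "account": "dev", "description": "Unencrypted volume"},
]
# The same rows can be re-rendered with a different ordering on user input.
print(render_listing(rows, ["risk", "asset type", "account", "region", "description"]))
```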
A vulnerability may include vulnerability data from a vulnerability database such as NVD, WPVulnDB, US-CERT, Node.js Security Working Group, OVAL-Red Hat, Oracle Linux, Debian, Ubuntu, SUSE, Ruby Advisory Database, JVN, Safety DB(Python), Alpine secdb, PHP Security Advisories Database, Amazon ALAS, RustSec Advisory Database, Red Hat Security Advisories, Microsoft MSRC, KB, Debian Security Bug Tracker, Kubernetes security announcements, Exploit Database, Drupal security advisories, JPCERT. An insecure configuration may include software flaws or misconfigurations, non-encrypted files, improper file or directory permissions, unpatched security flaws in server software, enabled or accessible administrative and debugging functions, administrative account vulnerabilities, SSL certificates and encryption settings not properly configured, or a similar misconfiguration. These misconfigurations may be discovered by performing a scan, e.g., may query devices and systems capable of routing traffic (e.g., load balancer115, routers, switches, firewalls, and proxies) using an API provided through a cloud service provider's system to determine network configurations, and may evaluate them against known problematic configurations or other configurations. Malware indicated may include Adware, Botnets, Cryptojacking, Malvertising, Ransomware, Remote Administration Tools (RATs), Rootkits, Spyware, Trojans, Virus Malware, Worm Malware, or similar attack vehicles. A neglected asset may include an asset that has been improperly maintained, patched, or similar cybersecurity security measure. A data at risk may include any packet of data that may be exposed to a cybersecurity threat. A lateral movement may include any pathway between assets where a cybersecurity risk can travel from one affected asset to another affected asset. An authentication may include a username, password, one-time password, two-factor authentication, or any other authentication mechanism to gain access to a cloud service provider's system. A common interface may display more than one of a vulnerability, an insecure configuration, an indication of a presence of malware, a neglected asset, a data at risk, a lateral movement, and an authentication separately or concurrently for an asset. A display of said information may be ordered and listed in any order depending on user preference and input. Another embodiment may include a method wherein the common interface is configured to display one or more possible attack vectors reaching the each listed asset. Generating this common interface display may include generating a map of an asset to include recording information such as a region identifier, site identifier, datacenter identifier, physical address, network address, workload name, or any other identifier which may be acquired via an API provided through a cloud service provider's system and demonstrating via a display the myriad attack vectors currently available to an analyzed asset. Display of the one or more possible attack vectors reaching the each listed asset may be displayed as written text, representative graphical figures, or similar method of displaying the one or more possible attack vectors. Another embodiment may include a method wherein the common interface is configured to display a recommended mitigation tactic for the each listed asset. 
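To make the vulnerability lookup concrete, the following hedged sketch matches installed software and versions against a hard-coded stand-in for aggregated advisory data; the advisory table and identifiers are placeholders and do not represent real feed content or any vulnerability database's API.

```python
# Illustrative sketch: look up installed package/version pairs in a stand-in
# advisory table aggregated (hypothetically) from sources like those above.
from typing import Dict, List, Tuple

# (package, vulnerable_version) -> advisory identifiers (placeholder values)
ADVISORIES: Dict[Tuple[str, str], List[str]] = {
    ("openssl", "1.0.2"): ["CVE-EXAMPLE-0001"],
    ("log4j", "2.14.1"): ["CVE-EXAMPLE-0002"],
}

def lookup_vulnerabilities(installed: List[Tuple[str, str]]) -> Dict[str, List[str]]:
    """Return advisories keyed by 'package version' for every vulnerable install."""
    findings: Dict[str, List[str]] = {}
    for package, version in installed:
        hits = ADVISORIES.get((package.lower(), version))
        if hits:
            findings[f"{package} {version}"] = hits
    return findings

installed = [("OpenSSL", "1.0.2"), ("nginx", "1.25.3")]
print(lookup_vulnerabilities(installed))  # {'OpenSSL 1.0.2': ['CVE-EXAMPLE-0001']}
```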
A recommended mitigation tactic may be generated based on historical data of the cybersecurity risk, the vulnerability attributed to the listed asset, or the data contained on the each listed asset. Mitigation tactics may be selected based on user input or may be implemented automatically based on the mitigation tactic's likelihood of successfully mitigating the cybersecurity risk. A mitigation tactic may include increased frequency of scanning, heightened access control measures, firewalls and antivirus software patches, increased patch management scheduling, continuous workload monitoring, or similar tactics. Another embodiment may include a method wherein the common interface is configured to display one or more workload metrics associated with the each listed asset. Workload metrics may be a statistical representation of the current performance of the workload, the capacity for additional processing of the workload, and the historical performance of the workload based on statistical trends. Another embodiment may include a non-transitory computer readable medium storing instructions that, when executed by at least one processor, are configured to cause the at least one processor to perform operations for matching keys with compute resources, the operations comprising causing a display to present a plurality of asset categories301. A plurality may include at least one asset category to be displayed. Asset categories may include at least one of an account category, a container category, a database category, an image category, a container category, a managed service category, a messaging service category, a monitoring category, a network category, a storage category, a user category, an access category, a virtual machine category, or a serverless category. Another embodiment may include a non-transitory computer readable medium storing instructions that, when executed by at least one processor, are configured to cause the at least one processor to perform operations for matching keys with compute resources, the operations comprising receiving, via an input device, a selection of a particular asset category (step1103). An input device may include a personal computer, tablet, smartphone, or virtual machine. A processor may include one or more integrated circuits (IC), including application-specific integrated circuit (ASIC), microchips, microcontrollers, microprocessors, all or part of a central processing unit (CPU), graphics processing unit (GPU), digital signal processor (DSP), field-programmable gate array (FPGA), server, virtual server, or other circuits suitable for executing instructions or performing logic operations. A selection of one or more asset categories may be received and updated as new visualizations are developed based on the current threats to an asset in that category. Asset categories may be selected or deselected by an end user. Once a particular asset category is selected, it may be added to a list of selected asset categories, with a list for deselected categories maintained as well. The list of selected asset categories may be organized per category or may be compiled to include all current threats in a comprehensive selected asset threat list.
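The automatic selection of a mitigation tactic based on its likelihood of success, described above, might look like the following sketch, in which the highest-scoring tactic is recommended and flagged for automatic application when its score clears a threshold; the tactic names, scores, and threshold are illustrative assumptions.

```python
# Illustrative sketch: recommend the mitigation tactic with the highest
# estimated likelihood of success; auto-apply only above a threshold.
from typing import Dict, Optional, Tuple

TACTIC_SCORES: Dict[str, float] = {          # assumed likelihood-of-success scores
    "increase scan frequency": 0.55,
    "tighten access controls": 0.70,
    "apply firewall/antivirus patches": 0.85,
    "continuous workload monitoring": 0.60,
}

def recommend_tactic(scores: Dict[str, float],
                     auto_apply_threshold: float = 0.8) -> Tuple[Optional[str], bool]:
    """Pick the highest-scoring tactic; flag it for automatic application when
    its score clears the threshold, otherwise leave it for user confirmation."""
    if not scores:
        return None, False
    tactic = max(scores, key=scores.get)
    return tactic, scores[tactic] >= auto_apply_threshold

tactic, auto = recommend_tactic(TACTIC_SCORES)
print(tactic, "auto-apply" if auto else "awaiting user input")
```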
Another embodiment may include a non-transitory computer readable medium storing instructions that, when executed by at least one processor, are configured to cause the at least one processor to perform operations for matching keys with compute resources, the operations comprising causing the display to present a list of assets in the selected category that have cybersecurity risks (step1105). A display may be graphical, sequential, or multi-dimensional. A list of selected categories may include at least one of an account category, a container category, a database category, an image category, a container category, a managed service category, a messaging service category, a monitoring category, a network category, a storage category, a user category, a access category, a virtual machine category, or a serverless category. When an asset in a selected category has a cyber security risk associated with it, the asset may appear in a list of all listed categories having a cyber security risk associated with it, and may be organized by individual category, type of cyber security risk, or some combination or sequential ordering methods based on the information available to the processor and its end user. Another embodiment may include a non-transitory computer readable medium storing instructions that, when executed by at least one processor, are configured to cause the at least one processor to perform operations for matching keys with compute resources, the operations comprising retrieving workload component cybersecurity risk information for each listed asset (step1107A). When a category of assets is selected, a processor may be configured to order those assets according to available cybersecurity risk information available to said processor, and may organize by type of risk, type of asset, or similar type of sequentialization. Cybersecurity risk information may be determined from historical threats to the workload component, the current active cybersecurity threats to workload components of certain types, or some combination of the same. Another embodiment may include a non-transitory computer readable medium storing instructions that, when executed by at least one processor, are configured to cause the at least one processor to perform operations for matching keys with compute resources, the operations comprising retrieving cloud component cybersecurity risk information for each listed asset (step1107B). Cloud component cybersecurity risk information may include identification of the risk based on a semi-unique identifier, the nature of the risk, the likelihood of the risk, or the severity of the risk to each listed asset. Cybersecurity risk information to a cloud component may be determined from historical threats within the cloud environment106, current active cybersecurity threats to similar workload components in the cloud environment106or the level of risk to assets within the cloud environment106, as outlined above. Another embodiment may include a non-transitory computer readable medium storing instructions that, when executed by at least one processor, are configured to cause the at least one processor to perform operations for matching keys with compute resources, the operations comprising causing the display to present a common interface providing access to the workload component cybersecurity risk information and cloud component cybersecurity risk information for each listed asset (step1109). 
The common interface may be configured to access component cybersecurity risk information based on the nature of the risk, the likelihood of the risk, or the severity of the risk to the component. On the same or a similar and related interface, a display may present cloud component cybersecurity risk information based on a semi-unique identifier, the nature of the risk, the likelihood of the risk, or the severity of the risk to each listed asset. Another embodiment may include a non-transitory computer readable medium storing instructions that, when executed by at least one processor, are configured to cause the at least one processor to perform operations for matching keys with compute resources, the operations comprising causing the display to present, in the common interface, an interconnection between the workload component cybersecurity risk information and the cloud component cybersecurity risk information (step1111). The interconnection between workload component cybersecurity risk information (step1107A) and cloud component cybersecurity risk information (step1107B) may generate an interface between them (step1109) and may be presented as an interconnection between them (step1111). The common interface presented may include direct interconnections between the component cybersecurity risk information and the cloud component cybersecurity risk information or links from one set of cybersecurity risk information. For instance, the component cybersecurity risk information may include a link to access the cloud component cybersecurity risk information. Conversely, the cloud component cybersecurity risk information may include a link to access the component cybersecurity risk information. In some embodiments, a processor may cause a display of one or more risks aggregated in the same visualization.FIG.12is a schematic block diagram illustrating an exemplary embodiment of a visual representation of displaying aggregated cybersecurity risk information, consistent with disclosed embodiments. A visualization may be generated from one or more sources based on scanning system101. For example, a visual display may include a cloud provider visualization1201, displaying where the multiple sources of data may be derived from. A cloud service provider may include services such as Cloudflare, Amazon Web Services, Google Cloud, IBM Cloud, Oracle Cloud, Microsoft Azure, or similar. This is further displayed via a communicative connection through an access port1203to a data server1205. Data server1205may include servers such as109A-D. Data server1205may then display risk information that may be communicatively passed through web server1207, demonstrating one or more possible sources of the one or more cybersecurity risk information. Data server1205may be communicatively connected to a web server1207(through access port1203, which may be the same or a different port from port1203between cloud service provider1201and data server1205), which can then be communicatively connected to one or more users via user access points1209. This may include, among other things, user access that may be verified or validated via a credential1211. Each of these connections may demonstrate one or more risk paths that may be exploited in a single view to allow for prioritization of threats and analysis of possible users who may exploit those risks and vulnerabilities, as discussed above. 
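As a rough illustration of how the risk paths suggested by the topology of FIG.12 (cloud provider, access port, data server, web server, user access, credential) could be aggregated into a single view, the sketch below enumerates every path through an assumed adjacency list; the node names reuse the reference numerals only for readability and the structure itself is hypothetical.

```python
# Illustrative sketch: enumerate risk paths through an assumed topology so
# they can be shown together in one view, as described for FIG. 12.
from typing import Dict, List

TOPOLOGY: Dict[str, List[str]] = {
    "cloud provider 1201": ["access port 1203"],
    "access port 1203": ["data server 1205"],
    "data server 1205": ["web server 1207"],
    "web server 1207": ["user access 1209"],
    "user access 1209": ["credential 1211"],
    "credential 1211": [],
}

def risk_paths(topology: Dict[str, List[str]], start: str) -> List[List[str]]:
    """Depth-first enumeration of every path starting at the given node."""
    paths: List[List[str]] = []
    def walk(node: str, path: List[str]) -> None:
        path = path + [node]
        children = topology.get(node, [])
        if not children:
            paths.append(path)
        for child in children:
            walk(child, path)
    walk(start, [])
    return paths

for path in risk_paths(TOPOLOGY, "cloud provider 1201"):
    print(" -> ".join(path))
```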
This may allow an administrator to prioritize risks to a given storage volume as well as assess lateral storage volumes that may be similarly risked based on the assessed threats in a single image. FIG.13is a schematic block diagram illustrating an exemplary embodiment of a visual representation of a flow path for aggregated risk information, consistent with disclosed embodiments. In another embodiment, a processor may be configured to display a flow path of aggregated risk information. In one view, a displayed visualization may include user access1301, which displays risk information such as service vulnerability1303and an insecure private key1303. These vulnerabilities or risks may be displayed to an administrator via developer server1307to an administrator through administrator access1309. As previously discussed, this will allow an administrator to visualize myriad risks to a storage volume in a single view that will allow an administrator to prioritize risks and analyze possible exploiters of this risk information on lateral systems and storage volumes. In another embodiment, risk information may be shown in one or more views. This may involve a risk path similar to the exemplary embodiment ofFIG.13. In an exemplary embodiment, in a single visualization data from multiple sources may be displayed. This may involve identifying risk information such as the origin of the risk and potential exploiters of a risk. In some embodiments, this visualization may be done for a specific risk. In another embodiment, this visualization may be done for a combination of risks. Disclosed embodiments may include any one of the following bullet-pointed features alone or in combination with one or more other bullet-pointed features, whether implemented as a method, by at least one processor, and/or stored as executable instructions on non transitory computer readable media:establishing a trusted relationship between a source account in a cloud environment and a scanner account;using the established trust relationship, utilize at least one cloud provider API to identify workloads in the source account;using the at least one cloud provider API to query a geographical location of at least one of the identified workloads;receiving an identification of the geographic location;using the cloud provider APIs to access block storage volumes of the at least one workload;determining a file-system of the at least one workload;mounting the block storage volumes on a scanner based on the determined file-system;activating a scanner at the geographic location;reconstructing from the block storage volumes a state of the workload; andassessing the reconstructed state of the workload to extract insights.wherein the geographic location includes an identifier of a physical site.wherein mounting includes selecting a driver corresponding to the determined file system.wherein the at least one processor is further configured to deploy a scanner at the geographical location.wherein the identification of the geographic location comprises an identification of a data center, at least one of a data center name, Internet Protocol (IP) address, name of the cloud provider, or a unique identity.wherein the reconstructed state of the workload includes at least two of an indication of an installed application, a version of an installed application, an operating system configuration, an application configuration, a profile configuration, a log, or a database content;wherein the at least one processor is further configured to update the 
reconstructed state of the workload based on at least one change to the block storage volumes.wherein the insights comprise at least one of a vulnerability associated with the workload or a composition of installed applications associated with the workload.wherein to mount the block storage volumes on the scanner, the at least one processor is configured to create a snapshot of the block storage volumes;wherein to mount the block storage volumes on the scanner, the at least one processor is configured to mount the snapshot of the block storage volumes on the scannerwherein the at least one processor is further configured to encrypt the snapshot of the block storage volumes;wherein the at least one processor is further configured to mount the encrypted snapshot of the block storage volumes on the scanner;using a cloud provider API, access a block storage volume of a workload maintained in a cloud storage environment;identifying an installed software application in the accessed block storage volume;analyzing the identified installed software application to determine an associated software version;accessing a data structure of known software vulnerabilities for a plurality of versions of software applications;performing a lookup of the identified installed software version in the data structure to identify known vulnerabilities; andperforming at least one of query the cloud provider API to determine network accessibility information related to the workload, identify at least one port on which the vulnerable application is accessible, use network accessibility information and at least one port to identify one or more vulnerabilities susceptible to attack from outside the workload.implementing a remedial action in response to the identified one or more vulnerabilities.wherein the remedial measure includes transmitting an alert to a device associated with an administrator.wherein querying the cloud provider API to determine network accessibility information related to the workload further comprises examining data sources associated with the workloadwherein querying the cloud provider API to determine network accessibility information related to the workload further comprises determining the network accessibility information based on the examined data sources.wherein querying the cloud provider API to determine network accessibility information related to the workload further comprises: wherein the network accessibility information includes at least one of: data from an external data source, cloud provider information, or at least one network capture log.identifying the installed software application comprises extracting data from at least one of operating system packages, libraries, or program language libraries;identifying the installed software application comprises identifying the installed software application based on the extracted data;identifying a version of the installed software application;wherein the identified installed software application includes one or more scripts;wherein the data structure includes aggregated vulnerability data;wherein the aggregated vulnerability data includes data from one or more third-party vendors;wherein the aggregated vulnerability data includes data collected by a scanner;wherein the aggregated vulnerability data includes at least one of an advisory, an exploit, a security announcement, or a known bug.querying the cloud provider API to determine network accessibility information related to the workload further comprises: wherein the network 
accessibility information includes at least one of: data from an external data source, cloud provider information, or at least one network capture log.accessing at least one cloud provider API to determine a plurality of entities capable of routing traffic in a virtual cloud environment associated with a target account containing the workload;querying the at least one cloud provider API to determine at least one networking configuration of the entities;building a graph connecting the plurality of entities based on the networking configuration;accessing a data structure identifying services publicly accessible via the internet and capable of serving as an internet proxy;integrating the identified services into the graph;traversing the graph to identify at least one source originating via the internet and reaching the workload; andoutputting a risk notification associated with the workload.wherein the plurality of entities includes a virtual network appliance.wherein the virtual network appliance is at least one of a load balancer, a firewall, a proxy, or a router.wherein the networking configuration is at least one of a routing configuration, a proxy configuration, a load balancing configuration, a firewall configuration, or a VPN configuration.wherein the graph includes a data structure sequentially connecting entities.wherein the graph includes directional vectors indicating directions of dataflow.wherein building the graph comprises identifying individual entities as nodeswherein building the graph comprises connecting the nodes.wherein the graph includes port numbers associated with the workload.wherein the graph includes a path from the at least one source to the workload.utilizing a cloud provider API to access a block storage volume of a workload maintained on a target account in a target system of a cloud storage environment;utilizing a scanner at a location of the block storage volume and on a secondary system other than the target system;scanning the block storage volume for malicious code, using the secondary system; identifying malicious code based on the scan; andoutputting from the secondary system, a notification of a presence of malicious code in the target system.wherein the location of the block storage volume includes at least one of: the target account, a secondary system account, a cloud provider account, or a third party account.wherein scanning the block storage volume includes scanning disk-backed memory.wherein the disk-backed memory includes at least one of a page file or a cache file.wherein the secondary system includes at least one of a virtual machine, a container, or a serverless function.wherein the secondary system has an operating system different from an operating system of the target account.wherein the malicious code includes a rootkit.wherein utilizing a scanner includes suspending an operation of the scanner after the scan of the block storage volume.wherein utilizing a scanner includes modifying a pre-utilized scanner at the location of the block storage volume based on information related to the target account to obtain a modified scanner;wherein utilizing a scanner includes utilizing the modified scanner.identifying assets in a cloud environment;identifying risks associated with each of the identified asset;identifying relationships between at least some of the identified assets, the relationships including at least one of a trust, a network connectivity, or a mechanism of network proxying;receiving an identification of a specific asset under 
investigation;performing a forward analysis of the specific asset under investigation to identify at least one possible attack vector reaching the specific asset via a network outside the cloud environment;performing a backward analysis of the specific asset to identify at least one exposure risk to one or more assets that is in a downstream of the specific asset, wherein the at least one exposure risk includes an identification of an exposed asset, an entry point to the exposed asset, and a lateral movement risk associated with the exposed asset; andoutputting a signal to cause on a display to present a presentation of forward and backward paths associated with the specific asset, thereby enabling visualization of a plurality of entry points and lateral movement risks associated with the plurality of entry points.wherein the network outside the cloud environment includes the Internet.wherein the assets in the cloud environment include at least one of: a virtual machine, a network appliance, a storage appliance, a compute instances, or an engine instance.wherein identifying the assets in a cloud environment includes identifying the assets based on at least one of: an identity and access management policy, an organization policy, or an access policy.wherein the presentation of the forward and backward paths indicates alternative paths connecting between the specific asset and an upstream asset or a downstream asset.wherein the visualization includes a presentation of the alternative paths.wherein the presentation of the forward and backward paths indicates port numbers for each pathway.wherein the visualization of the entry points indicates at least one entry point at risk. monitoring network activities of the assets in a cloud environment.detecting detect a potential risk associated with the specific asset based on the monitored network activities.detecting a potential risk associated with the specific asset based on a network activity of the specific asset.detecting a potential risk associated with the specific asset based on a network activity of an upstream asset of the specific asset.detecting a potential risk associated with the specific asset based on a network activity of a downstream asset of the specific asset.analyzing a cloud environment to identify a plurality of keys to the compute resources in the cloud environment;performing a cryptographic analysis on the plurality of keys to identify a first set of fingerprints that uniquely identify each of the plurality of keys, the first set of fingerprints being non-functional;analyzing trust configurations of the compute resources to identify a second set of fingerprints for each of the compute resources; andcomparing the first set of fingerprints with the second set of fingerprints to match keys with the compute resources without using the keys to access the compute resources.wherein the plurality of keys are stored in at least one workload.wherein at least one of the first set of fingerprints is not identical to any key of the plurality of keys.wherein at least one of the plurality of keys includes at least one of a password, a script containing a password, a private component of a private-public key pair, a cloud key, or an Secure Shell (SSH) key.testing validity of at least one of the plurality of keys.analyzing a multi-machine interaction in the cloud environment using the first set of fingerprints.analyzing the multi-machine interaction includes comparing the first set of fingerprints with the second set of 
fingerprints.analyzing a multi-machine interaction in the cloud environment using the plurality of keys.accessing a primary account maintained in a cloud environment;receiving information defining a structure of the primary account, wherein the structure includes a plurality of assets, and wherein the information excludes raw data of the primary account;deploying, inside the primary account or inside a secondary account for which trust has been established with the primary account, at least one ephemeral scanner configured to scan at least one block storage volume and to output metadata defining the at least one block storage volume, the output excluding raw data of the primary account;receiving a transmission of the metadata from the at least one ephemeral scanner, wherein the transmission excludes raw data of the primary account;analyzing the received metadata to identify a plurality of cybersecurity vulnerabilities;correlating each of the identified plurality of cybersecurity vulnerabilities with one of the plurality of assets; andgenerating a report correlating the plurality of cybersecurity vulnerabilities with the plurality of assets.wherein defining the at least one block storage volume includes presenting risk data without sharing consumer data or data that was used to identify the risk data.wherein the metadata defining the at least one block storage volume includes at least one of: an indication of an installed application, a version of an installed application, an operating system configuration, an application configuration, or a profile configuration.receiving a transmission of updated metadata defining the at least one block storage volume in response to at least one change to the at least one block storage volumes.wherein the ephemeral scanner is configured to perform vulnerability scanning of the at least one block storage volume.wherein the ephemeral scanner is configured to perform configuration scanning of the at least one block storage volume.wherein the ephemeral scanner is configured to perform malware scanning of the at least one block storage volume.wherein the ephemeral scanner is configured to perform lateral-movement risk analysis of the at least one block storage volume.wherein the ephemeral scanner is configured to perform sensitive information scanning of the at least one block storage volume.wherein the ephemeral scanner is configured to perform container scanning of the at least one block storage volume.wherein the ephemeral scanner is configured to perform keys and password scanning of the at least one block storage volume.causing a display to present a plurality of asset categories;receiving, via an input device, a selection of a particular asset category;causing the display to present a list of assets in the selected category that have cyber security risks;for each listed asset, retrieving workload component cybersecurity risk information, and for each listed asset. 
retrieving cloud component cybersecurity risk information;for each listed asset, causing the display to present a common interface providing access to the workload component cybersecurity risk information and cloud component cybersecurity risk information; andfor each listed asset, causing the display to present, in the common interface, an interconnection between the workload component cybersecurity risk information and the cloud component cybersecurity risk information.wherein the plurality of asset categories include at least one of an account category, an authentication category, a container category, a database category, an image category, a container category, a managed service category, a messaging service category, a monitoring category, a network category, a storage category, a user category, a access category, a virtual machine category, or a serverless category.wherein the common interface is configured to display information relating to at least one of an asset type, a risk, a region, or an account.wherein the common interface is configured to display description for each listed asset.wherein the common interface is configured to display at least one of a vulnerability, an insecure configuration, an indication of a presence of malware, a neglected asset, a data at risk, a lateral movement, or an authentication.wherein the common interface is configured to display one or more possible attack vectors reaching the each listed asset.wherein the common interface is configured to display a recommended mitigation tactic for the each listed asset.wherein the common interface is configured to display one or more workload metrics associated with the each listed asset.wherein the at least one processor is further configured to provide a cybersecurity report for the each listed asset.wherein the at least one processor is configured to identify a risk level distribution among the listed assets.receiving a search query for a specific risk; oridentifying one or more assets vulnerable to the specific risk. Other embodiments will be apparent to those skilled in the art from consideration of the specification and practice of the disclosed embodiments disclosed herein. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosed embodiments being indicated by the following claims. Computer programs based on the written description and methods of this specification are within the skill of a software developer. The various programs or program modules can be created using a variety of programming techniques. One or more of such software sections or modules can be integrated into a computer system, non-transitory computer readable media, or existing software. Moreover, while illustrative embodiments have been described herein, the scope includes any and all embodiments having equivalent elements, modifications, omissions, combinations (e.g., of aspects across various embodiments), adaptations or alterations based on the present disclosure. The elements in the claims are to be interpreted broadly based on the language employed in the claims and not limited to examples described in the present specification or during the prosecution of the application. These examples are to be construed as non-exclusive. Further, the steps of the disclosed methods can be modified in any manner, including by reordering steps or inserting or deleting steps. 
It is intended, therefore, that the specification and examples be considered as exemplary only, with a true scope and spirit being indicated by the following claims and their full scope of equivalents. | 292,442 |
11943252 | DETAILED DESCRIPTION For the purposes of promoting an understanding of the principles of the present disclosure, reference will now be made to the aspects illustrated in the drawings, and specific language may be used to describe the same. It will nevertheless be understood that no limitation of the scope of the disclosure is intended. Any alterations and further modifications to the described devices, instruments, methods, and any further application of the principles of the present disclosure are fully contemplated as would normally occur to one skilled in the art to which the disclosure relates. In particular, it is fully contemplated that the features, components, and/or steps described with respect to one aspect may be combined with the features, components, and/or steps described with respect to other aspects of the present disclosure. For the sake of brevity, however, the numerous iterations of these combinations may not be described separately. For simplicity, in some instances the same reference numbers are used throughout the drawings to refer to the same or like parts. FIG.1is an illustration of an example100associated with securing against network vulnerabilities, according to various aspects of the present disclosure. Example100shows an architectural depiction of included components. In some aspects, the components may include one or more user devices102capable of communicating with a cyber security service provider (CSP) control infrastructure104for purposes of obtaining cyber security services. In some aspects, the one or more user devices102may communicate with the CSP control infrastructure104over a network118. The CSP control infrastructure104may be owned and operated by a cyber security service provider and may include an application programming interface (API)106, a user database108, processing unit110, and a security database112. In some aspects, a user device102may include a communication application114and a processing unit116. The communication application may include an application utilized by the user device102to communicate information and/or messages over the network118. The communication application may include third-party applications such as, for example, a web browser application, an email application, a social network application, a messaging application, or the like. The API106may be capable of communicating with the user database108and with the processing unit110. Additionally, the processing unit110may be capable of communicating with the security database112, which may be capable of storing data associated with providing cyber security services. The user device102may be a physical computing device capable of hosting the communication application114and of connecting to the network118. The user device102may be, for example, a laptop, a mobile phone, a tablet computer, a desktop computer, a smart device, a router, or the like. In some aspects, the user device102may include, for example, Internet-of-Things (IoT) devices such as MSP smart home appliances, smart home security systems, autonomous vehicles, smart health monitors, smart factory equipment, wireless inventory trackers, biometric cyber security scanners, or the like. The network118may be any digital telecommunication network that permits several nodes to share and access resources. 
In some aspects, the network118may include one or more of, for example, a local-area network (LAN), a wide-area network (WAN), a campus-area network (CAN), a metropolitan-area network (MAN), a home-area network (HAN), Internet, Intranet, Extranet, and Internetwork. The CSP control infrastructure104may include a combination of hardware and software components that enable provision of cyber security services to the user device102. The CSP control infrastructure104may interface with (the communication application on) the user device102via the API106, which may include one or more endpoints to a defined request-response message system. In some aspects, the API106may be configured to receive, via the network118, a connection request from the user device102to establish a connection with the CSP control infrastructure104for purposes of obtaining the cyber security services. The connection request may include an authentication request to authenticate the user device102. The API106may receive the authentication request and a request for the cyber security services in a single connection request. In some aspects, the API106may receive the authentication request and the request for the cyber security services in separate connection requests. The API106may further be configured to handle the connection request(s) by mediating the authentication request. For instance, the API106may receive from the user device102credentials including, for example, a unique combination of a user ID and password for purposes of authenticating the user device102. In another example, the credentials may include a unique validation code known to an authentic user. The API106may provide the received credentials to the user database108for verification. The user database108may include a structured repository of valid credentials belonging to authentic users. In one example, the structured repository may include one or more tables containing valid unique combinations of user IDs and passwords belonging to authentic users. In another example, the structured repository may include one or more tables containing valid unique validation codes associated with authentic users. The cyber security service provider may add or delete such valid unique combinations of user IDs and passwords from the structured repository at any time. Based at least in part on receiving the credentials from the API106, the user database108and a processor (e.g., the processing unit110or another local or remote processor) may verify the received credentials by matching the received credentials with the valid credentials stored in the structured repository. In some aspects, the user database108and the processor may authenticate the user device102when the received credentials match at least one of the valid credentials. In this case, the cyber security service provider may provide the security services to the user device102. When the received credentials fail to match at least one of the valid credentials, the user database108and the processor may fail to authenticate the user device102. In this case, the cyber security service provider may decline to provide cyber security services to the user device102. When the user device102is authenticated, the user device102may initiate a connection with the CSP control infrastructure104for obtaining the cyber security services. The processing unit110included in the CSP control infrastructure104may be configured to determine the cyber security services to be provided to the user device102. 
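The credential verification flow described above (the API receiving a user ID and password and the user database matching them against stored valid credentials) can be illustrated with a minimal sketch. This is not the patent's implementation; the function names, the in-memory store, and the salted-hash scheme are hypothetical stand-ins for the user database 108 and its verification logic.

```python
import hmac
import hashlib
import secrets

# Hypothetical in-memory stand-in for the user database (user database 108);
# a real deployment would use a persistent, access-controlled store.
_VALID_CREDENTIALS = {}  # maps user_id -> (salt, password_hash)

def register_user(user_id: str, password: str) -> None:
    """Store a salted hash of a valid credential for an authentic user."""
    salt = secrets.token_bytes(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    _VALID_CREDENTIALS[user_id] = (salt, digest)

def authenticate(user_id: str, password: str) -> bool:
    """Return True only if the received credentials match a stored valid credential."""
    entry = _VALID_CREDENTIALS.get(user_id)
    if entry is None:
        return False
    salt, expected = entry
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    # Constant-time comparison avoids leaking match information via timing.
    return hmac.compare_digest(candidate, expected)

if __name__ == "__main__":
    register_user("alice", "correct horse battery staple")
    print(authenticate("alice", "correct horse battery staple"))  # True  -> services provided
    print(authenticate("alice", "wrong password"))                # False -> services declined
```

On a successful match the control infrastructure would proceed to provide the cyber security services; on a failed match it would decline, as described above.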
In some aspects, the processing unit110may be a logical unit including a logical component configured to perform complex operations associated with computing, for example, numerical weights related to various factors associated with providing the cyber security services. The processing unit110may utilize the API106to transmit information associated with the cyber security services to the user device102. One or more components (e.g., API106, user database108, processing unit110, security database112, communication application114, and/or processing unit116) included in the CSP control infrastructure104and/or included in the user device102, as shown inFIG.1, may further be associated with a controller/processor, a memory, a communication interface, or a combination thereof (e.g.,FIG.7). For instance, the one or more components may include or may be included in a controller/processor, a memory, or a combination thereof. In some aspects, the one or more components included in the CSP control infrastructure104may be separate and distinct from each other. Alternatively, in some aspects, one or more of the components included in the CSP control infrastructure104may be combined with one or more of the other components. In some aspects, one or more of the components included in the CSP control infrastructure104and/or the user device102may be local with respect to each other. Alternatively, in some aspects, one or more of the components included in the CSP control infrastructure104and/or the user device102may be located remotely with respect to one or more other components included in the CSP control infrastructure104and/or the user device102. Additionally, or alternatively, one or more of the components included in the CSP control infrastructure104and/or the user device102may be implemented at least in part as software stored in a memory for execution by a processor. For example, a component (or a portion of a component) may be implemented as instructions or code stored in a non-transitory computer-readable medium and executable by a controller or a processor to perform the functions or operations of the component. Additionally, the one or more components may be configured to perform one or more functions described as being performed by another set of components shown inFIG.1. As indicated above,FIG.1is provided as an example. Other examples may differ from what is described with regard toFIG.1. A user device may rely on a network (e.g., Internet) to communicate (e.g., transmit and/or receive) information and/or messages with other devices connected to the network. Such information and/or messages may include private information and/or sensitive data associated with the user device. Such private information and/or sensitive data may include, for example, financial information, medical information, business information, etc. In an example, the user device may communicate financial information by utilizing a web browser to conduct transactions with a device associated with a financial institution (e.g., a bank). In another example, the user device may communicate private information of a personal nature associated with, for example, a user of the user device by utilizing a messaging application and/or a social network application. In yet another example, the user device may communicate confidential information by utilizing an email application to conduct business with a device associated with a client or a business entity.
The communication of the information and/or messages over the network may be susceptible to a cybercrime perpetrated by a malicious party who may attempt to steal, alter, disable, expose, or destroy the information through unauthorized access to the communicating devices. A cybercrime may include, for example, a malware attack, a phishing attack, a ransomware attack, a virus attack, etc. The malware attack may be associated with use of malicious software (e.g., malware) designed to steal information and/or to damage the user device. The phishing attack may be associated with a malicious party communicating fraudulent messages (e.g., email) while masquerading as a reputable entity or an authorized entity, with the intent of stealing private information and/or sensitive data. The ransomware attack may include use of malware designed to render a system and/or information (e.g., a file, a document, etc.) on the user device unusable. The virus attack may include use of harmful software with an objective of affecting the user device (and other associated devices) with the intent of stealing data, interrupting operation, etc. Because of increased prevalence in utilization of the network to communicate information and/or messages, it may be prudent to implement protective measures to secure the communicating devices and/or the communicated information/messages against cybercrimes. In some cases, the protective measures may include manual intervention. For instance, to avoid falling prey to a cybercrime, a user of the user device may manually inspect features associated with a network communication (e.g., a webpage, an email, a message, etc.) to ensure that the network communication is authentic. In an example, the user may manually inspect features such as images, text, layout, content, etc. included in the network communication to determine whether the network communication is associated with an authorized and/or authentic (e.g., non-malicious) entity. Specifically, the user may manually inspect whether an image and/or text displayed in association with the network communication is authentic, whether a layout of the image and/or text displayed in association with the network communication is authentic, whether content displayed in association with the network communication is authentic, etc. Such manual inspection may be unreliable and error-prone. This is because manual inspection may be influenced by many factors such as age, state of mind, physical health, attitude, emotions, propensity for certain common mistakes, errors and cognitive biases, etc. Further, manual inspection may need to be performed at every instance of communication (e.g., for every webpage, every email, every message, etc.) over the network. Due to one or more of these factors, the user may fail to identify that the features of the network communication are not authentic and/or that the network communication is associated with an unauthorized and/or malicious party. As a result, the user may communicate the information and/or messages including the private information and/or sensitive data via the network communication. In this case, the private information and/or sensitive data may become compromised such that the unauthorized and/or malicious party may use the private information and/or sensitive data for nefarious purposes and/or to affect operation of the user device and/or to damage/destroy the user device. 
In an example, the operating system may run slower and/or associated circuitry may emit excessive heat and/or noise, thereby causing damage to the user device. The user device may expend various user device resources (e.g., processing resources, memory resources, power consumption resources, battery life, or the like) in efforts to mitigate effects of the damage. Various aspects of systems and techniques discussed in the present disclosure enable securing against network vulnerabilities. In some aspects, a CSP control infrastructure may provide a user device with an extension application to be associated with a communication application utilized by the user device to communicate information and/or messages over a network. The extension application may communicate with the CSP control infrastructure to receive authentic entity information including authentic feature information associated with authentic features (e.g., graphics, text, layout, content, etc.) included in authentic network communications (e.g., a webpage, an email, a message, etc.) associated with one or more authentic entities. The extension application may also receive updated authentic entity information from the CSP control infrastructure. Further, the extension application may receive the updated authentic entity information periodically. In some aspects, authentic entities may include an entity that is authorized to communicate with the user device and/or with whom the user device intends to communicate over the network. The received authentic entity information may also include a table of the authentic feature information (associated with a given authentic entity) being correlated with authentic network communication information (associated with the given authentic entity) (e.g., uniform resource locator (URL) link, domain information, etc.). When the user device utilizes current network communication information to communicate (e.g., transmit and/or receive) with a given entity over the network, the extension application may, in real time, analyze the authentic entity information and determine a portion of network communication information, from among the authentic network communication information included in the table, the portion of network communication information being associated with the given entity. Further, the extension application may utilize an associated processor (e.g., processor116, processor720) to compare, in real time, the current network communication information with the portion of network communication information. Based at least in part on a result of the comparison, the extension application may determine whether the current network communication information is authentic. In an example, when the current network communication information matches network communication information included in the portion of network communication information, the extension application may determine that the current network communication is authentic. Alternatively, when the current network communication information fails to match network communication information included in the portion of network communication information, the extension application may determine that the current network communication is not authentic (e.g., malicious). In this case, the extension application may cause the user device to issue a visual and/or audible notification indicating that the current network communication is not authentic. 
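The table-driven check just described (look up the authentic network communication information for the given entity and compare it with the current network communication information) might be sketched as follows. The table layout, entity name, and URLs are hypothetical illustrations, not data from the disclosure.

```python
from typing import Iterable

# Hypothetical shape of the authentic entity table received from the
# CSP control infrastructure: entity name -> authentic URLs / domain names.
AUTHENTIC_TABLE = {
    "example-bank": {"https://www.example-bank.com/login", "example-bank.com"},
}

def is_communication_authentic(entity: str, current_info: str,
                               table: dict = AUTHENTIC_TABLE) -> bool:
    """Return True when the current URL/domain matches an entry for the entity."""
    authentic_portion: Iterable[str] = table.get(entity, set())
    return current_info in authentic_portion

def check_and_notify(entity: str, current_info: str) -> None:
    if is_communication_authentic(entity, current_info):
        print(f"{current_info}: authentic; communication may continue.")
    else:
        # In the described flow this result would trigger a visual/audible
        # notification and suspension of the current network communication.
        print(f"WARNING: {current_info} does not match authentic records for {entity}.")

check_and_notify("example-bank", "https://www.example-bank.com/login")
check_and_notify("example-bank", "https://examp1e-bank.com/login")
```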
Based at least in part on the notification, the user device may suspend utilizing the current network communication to communicate with the given entity. In this way, private information and/or sensitive data associated with the user device may be prevented from becoming compromised. Further, the user device may be enabled to expend user device resources (e.g., processing resources, memory resources, power consumption resources, battery life, or the like) for suitable tasks. FIG.2is an illustration of an example flow200associated with securing against network vulnerabilities, according to various aspects of the present disclosure. The example flow200may include a user device in communication with a CSP control infrastructure104. The user device may be similar to a user device102discussed above with respect toFIG.1. In some aspects, the user device may be associated with an account registered with the CSP control infrastructure104. The user device may install an extension application associated with (e.g., provided by) the CSP control infrastructure104. The user device may utilize the extension application to communicate with an application programming interface (API) and/or a processor (e.g., processing unit110, processor620) associated with the CSP control infrastructure104. In some aspects, the user device and the CSP control infrastructure104may communicate with each other over a network (e.g., network118). As discussed elsewhere herein, the CSP control infrastructure104may enable the user device to obtain cyber security services. In some aspects, the user device may install communication applications to be utilized to communicate information and/or messages with other devices over the network. The communication applications may include third party applications such as, for example, email clients (e.g., Outlook application, Gmail application, etc.), web browsers (e.g., Firefox, Chrome, Internet Explorer, etc.), messaging clients (e.g., Slack, Facebook messenger, etc.), social media applications (e.g., Facebook, Instagram, etc.), or the like. In some aspects, the user device may install the extension application, associated with (e.g., provided by) the CSP control infrastructure104, as an extension to a communication application utilized by the user device to communicate information and/or messages with other devices over the network. Based at least in part on the extension application being installed as an extension to the communication application, the extension application may enable the user device to receive information to be processed by the communication application and/or by the CSP control infrastructure104. The extension application may include respective graphical user interfaces to receive the information via local input interfaces (e.g., touch screen, keyboard, mouse, pointer, etc.) associated with the user devices. The information may be received via text input or via a selection from among a plurality of options (e.g., pull down menu, etc.). Further, the extension application may enable transmission of at least a portion of the information to the CSP control infrastructure104. In some aspects, the extension application may activate and/or enable, at appropriate times, the graphical interface for receiving the information. For instance, the extension application may cause a screen (e.g., local screen) associated with the user device to display, for example, a pop-up message to request entry of the information. 
In some cases, as shown by reference numeral230, the extension application may cause the screen associated with the user device to display, for example, a visual pop-up message to issue a notification. Further, the extension application may cause audio devices (e.g., speakers, headphones, etc.) associated with the user device to produce an audible message to issue the notification. In some aspects, the notification may include audible and/or visual messages. In some aspects, the extension application may utilize a processing unit (e.g., processing unit116, processor720) associated with the user device to perform such processes/operations associated with obtaining the cyber security services. Although only one user device is shown inFIG.2, the present disclosure contemplates the system to include any number of user devices that perform the processes discussed herein in a similar and/or analogous manner. In some aspects, the user device may receive cyber security services from the CSP control infrastructure104. When the user device is authenticated, the user device may initiate a connection with the CSP control infrastructure104to obtain the cyber security services. While obtaining the cyber security services, as shown by reference numeral210, the CSP control infrastructure104may transmit, and the user device may receive, authentic entity information. In some aspects, the authentic entity information may include authentic feature information and an authentic entity table. In some aspects, the authentic feature information may indicate a known characteristic of a known authentic feature included in a known authentic network communication associated with a known authentic entity, with which the user device intends to communicate over a network. In some aspects, the authentic feature information may be associated with authentic features such as, for example, graphics (e.g., illustrations, diagrams, photos, logos, etc.), text, layout, content, etc. included in authentic network communications such as, for example, websites, emails, messages, etc. associated with one or more known authentic entities. The CSP control infrastructure104may determine, for each authentic entity, such authentic feature information based at least in part on analyzing the authentic network communications and correlating the authentic features with the authentic network communication information (e.g., URLs, domain names, etc.) associated with the authentic network communications. In an example, the CSP control infrastructure104may systematically analyze one or more authentic websites known to be associated with an authentic entity to determine characteristics regarding the authentic features included in the one or more authentic websites. For instance, the CSP control infrastructure104may determine the characteristics regarding appearance of one or more graphics on an authentic webpage associated with an authentic web site. Such characteristics may include a size associated with the one or more graphics, a color associated with the one or more graphics, a position associated with the one or more graphics, etc., or a combination thereof, on the authentic webpage. Also, the CSP control infrastructure104may determine characteristics regarding text appearing on the authentic webpage. Such characteristics may include a font size associated with the text, a font color associated with the text, a positioning of the text, etc., or a combination thereof, on the authentic webpage. 
Further, the CSP control infrastructure104may determine characteristics regarding a layout of information on the authentic webpage. Such characteristics may include a relationship between a position of the one or more graphics with a position of the text on the authentic webpage. Furthermore, the CSP control infrastructure104may determine characteristics regarding content included in the authentic webpage. Such characteristics may include a set of keywords associated with the content included in the authentic webpage. In some aspects, the authentic websites may be associated with all websites published on the World Wide Web, especially those authentic websites that may be more susceptible to cybercrimes. In an example, the CSP control infrastructure104may specifically analyze authentic websites associated with elevated security concerns such as websites associated with, for example, financial institutions, business entities, medical institutions, educational institutions, etc. In another example, the CSP control infrastructure104may analyze authentic websites that observe a threshold amount of data traffic such as websites associated with, for example, research websites (e.g., Google), social media websites (e.g., Facebook), etc. Similarly, the CSP control infrastructure104may determine characteristics regarding appearance of one or more graphics on one or more authentic emails and/or messages (e.g., email/messages) associated with (e.g., received from) the authentic entity. Such characteristics may include a size associated with the one or more graphics, a color associated with the one or more graphics, a position associated with the one or more graphics, etc., or a combination thereof, on an authentic email/message. Also, the CSP control infrastructure104may determine characteristics regarding text appearing on the authentic email/message. Such characteristics may include a font size associated with the text, a font color associated with the text, a positioning of the text, etc., or a combination thereof, on the authentic email/message. Further, the CSP control infrastructure104may determine characteristics regarding a layout of information on the authentic email/message. Such characteristics may include a relationship between a position of the one or more graphics with a position of the text on the authentic email/message. Furthermore, the CSP control infrastructure104may determine characteristics regarding content included in the authentic email/message. Such characteristics may include a set of keywords associated with, for example, a domain name (e.g., @entity.com, etc.) associated with the authentic entity. In some aspects, the authentic email/messages may be associated with all active domain names, especially those active domain names that may be more susceptible to cybercrimes. The CSP control infrastructure104may specifically analyze authentic email/messages associated with elevated security concerns such as email/messages associated with, for example, financial institutions, business entities, medical institutions, educational institutions, etc. The CSP control infrastructure104may analyze authentic email/messages that observe a threshold amount of data traffic such as email/messages associated with, for example, popular websites (e.g., Gmail, Yahoo, etc.), social media websites, etc. 
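The characteristics enumerated above (graphic size, color, and position; text font size, color, and position; layout relationships; content keywords) suggest a simple record shape for the authentic feature information. The following is only a sketch; the field names and types are assumptions introduced for illustration.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class GraphicCharacteristics:
    # Appearance of a graphic (e.g., a logo) on an authentic webpage or email.
    width_px: int
    height_px: int
    dominant_color: str        # e.g., "#0a66c2"
    position: tuple            # (x, y) offset on the rendered page

@dataclass
class TextCharacteristics:
    font_size_pt: float
    font_color: str
    position: tuple

@dataclass
class AuthenticFeatureInfo:
    """Characteristics of authentic features for one network communication."""
    graphics: list = field(default_factory=list)   # list of GraphicCharacteristics
    text: list = field(default_factory=list)       # list of TextCharacteristics
    # Layout: relationship between graphic and text positions, e.g. vertical gap in px.
    logo_to_heading_gap_px: Optional[int] = None
    # Content: keywords expected in the body or in the sending domain.
    keywords: set = field(default_factory=set)

# Example record for one hypothetical authentic webpage.
info = AuthenticFeatureInfo(
    graphics=[GraphicCharacteristics(200, 60, "#0a66c2", (20, 10))],
    text=[TextCharacteristics(14.0, "#222222", (20, 90))],
    logo_to_heading_gap_px=20,
    keywords={"example bank", "secure login"},
)
print(info)
```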
In some aspects, as shown in example300ofFIG.3, the processing unit110included in the CSP control infrastructure104may include and/or utilize a self-learning machine learning model (ML model)310in connection with analyzing the authentic network communications (e.g., a website, an email, a message, etc.) and correlating the authentic features with the authentic network communication information (e.g., a URL, a domain name, etc.). In some aspects, the ML model310may include a supervised learning model. In some aspects, the ML model310may include an unsupervised learning model. The processing unit110may utilize the ML model310to automatically and with improved accuracy analyze the authentic network communications and correlate the authentic features with the authentic network communication information. As shown by reference numeral320, the ML model310may obtain training data including metadata and/or previous metadata associated with information received during at least one previous instance of analyzing the authentic network communications and correlating the authentic features with the authentic network communication information and/or update data associated with an output provided by the ML model310during at least one previous instance of analyzing the authentic network communications and correlating the authentic features with the authentic network communication information. In some aspects, the processing unit110may store the above training data in, and the ML model310may obtain the above training data from, for example, one or more memories described elsewhere herein (e.g., security database112, memory730). In some aspects, the previous metadata may include historical metadata associated with the at least one previous instance of analyzing the authentic network communications and correlating the authentic features with the authentic network communication information. In some aspects, the update data may include historical output data associated with at least one previous instance of analyzing the authentic network communications and correlating the authentic features with the authentic network communication information. In some aspects, the ML model310may obtain input training data that is input via an interface associated with the CSP control infrastructure104. As shown by reference number330, the ML model310may process the training data using a machine learning algorithm (ML algorithm). In some aspects, the ML model310may utilize the ML algorithm to evaluate the training data to learn trends and patterns associated with analyzing the authentic network communications and correlating the authentic features with the authentic network communication information. In some aspects, the ML algorithm may evaluate and take into account feedback information (e.g., success rate) associated with previously analyzing authentic network communications and correlating authentic features with the authentic network communication information. The ML algorithm may provide output data to the processing unit110based at least in part on the evaluated training data and the learned trends and patterns. 
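As a rough illustration of how such a supervised model might be trained and then queried for "output data", consider the minimal sketch below. It assumes scikit-learn is available; the feature vectors, labels, and threshold-free probability output are hypothetical stand-ins for the training data and output data described above, not the infrastructure's actual model.

```python
# Minimal supervised-learning sketch (assumes scikit-learn and numpy are installed).
# Each feature vector is a hypothetical stand-in for metadata extracted from a
# network communication; the label marks whether a candidate correlation between
# authentic features and communication information was confirmed correct.
from sklearn.linear_model import LogisticRegression
import numpy as np

# Training data: e.g., [logo_similarity, layout_similarity, keyword_overlap]
X_train = np.array([
    [0.98, 0.95, 0.90],   # confirmed correct correlation
    [0.97, 0.92, 0.88],   # confirmed correct correlation
    [0.40, 0.35, 0.10],   # confirmed incorrect correlation
    [0.30, 0.20, 0.05],   # confirmed incorrect correlation
])
y_train = np.array([1, 1, 0, 0])

model = LogisticRegression()
model.fit(X_train, y_train)

# "Output data": likelihood that a new candidate correlation is correct.
candidate = np.array([[0.95, 0.90, 0.85]])
likelihood = model.predict_proba(candidate)[0, 1]
print(f"Estimated likelihood of a correct correlation: {likelihood:.2f}")

# "Update data": the candidate and its eventually confirmed label could be
# appended to the stored training set for the next training iteration.
```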
In some aspects, the output data may indicate a value associated with the likelihood that the authentic network communications were analyzed successfully and/or that the authentic features were successfully correlated with the authentic network communication information, thereby assisting the processing unit110in more accurately automating the analyzing of the authentic network communications and correlating of the authentic features with the authentic network communication information. As shown by reference number340, at the end of an instance of automating the analyzing of the authentic network communications and correlating of the authentic features with the authentic network communication information, the ML model310may receive update data including at least the training data and/or the output data. In some aspects, the update data may be included in the previous metadata stored in the one or more memories (e.g., security database112, memory730) to be used as training data for future iterations of automating the analyzing of the authentic network communications and correlating of the authentic features with the authentic network communication information. In some aspects, the ML model310may evaluate the update data to learn various aspects such as accuracy, consistency, reliability, efficiency, and/or the like of the output data in enabling the processing unit110to more accurately analyze the authentic network communications and/or to correlate the authentic features with the authentic network communication information. In this way, the processing unit110may utilize the ML model310to apply a rigorous and automated process to analyze the authentic network communications and/or to correlate the authentic features with the authentic network communication information. In some aspects, the ML model310may enable the processing unit110to more accurately analyze the authentic network communications and/or to correlate the authentic features with the authentic network communication information. In some aspects, the CSP control infrastructure104may also determine, for one or more authentic entities, an authentic entity table of authentic feature information (associated with a given authentic entity) correlated with authentic network communication information (associated with the given authentic entity). In an example, the CSP control infrastructure104may determine correlations of, for example, authentic graphics, text, layout, content, etc. associated with the given authentic entity with, for example, authentic website URLs, domain names, etc. associated with the given authentic entity. Further, the CSP control infrastructure104may store the correlations in the authentic entity table. As shown by reference numeral220, based at least in part on receiving the authentic entity information, including the authentic feature information and the authentic entity table, the extension application may utilize a local and/or remote processor (e.g., processor116, processor720) associated with the user device to secure the user device (and other associated devices) against network vulnerabilities. In an example, the extension application may utilize the local and/or remote processor to determine whether a given (e.g., currently used) network communication is authentic. In some aspects, the extension application may store the received authentic entity information in a local and/or remote memory (e.g., memory730) associated with the user device.
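On the infrastructure side, the authentic entity table described above correlates each authentic entity's feature information with its authentic network communication information (URLs, domain names) and is then transmitted to the extension application. A minimal sketch of how such a table might be assembled and serialized appears below; the entity, fields, and JSON payload format are assumptions made for illustration.

```python
import json

def build_authentic_entity_table(analyzed_entities: list) -> dict:
    """Correlate each entity's authentic features with its authentic URLs/domains."""
    table = {}
    for entity in analyzed_entities:
        table[entity["name"]] = {
            "feature_info": {
                "logo_color": entity["logo_color"],
                "keywords": sorted(entity["keywords"]),
            },
            "communication_info": {
                "urls": sorted(entity["urls"]),
                "domains": sorted(entity["domains"]),
            },
        }
    return table

# Hypothetical output of the infrastructure's analysis of one authentic entity.
analyzed = [{
    "name": "example-bank",
    "logo_color": "#0a66c2",
    "keywords": {"example bank", "member login"},
    "urls": {"https://www.example-bank.com/login"},
    "domains": {"example-bank.com"},
}]

# Serialize for transmission to the extension application on the user device.
payload = json.dumps(build_authentic_entity_table(analyzed), indent=2)
print(payload)
```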
When the user device utilizes the communication application to communicate with an entity over the network, the user device may utilize current network communication information (e.g., URL link, domain information, etc.) associated with a current network communication (e.g., website, email, message, etc.). The user device may not know whether the entity is the authentic entity, whether the current network communication information is authentic, and/or whether the current network communication is authentic. In this case, the extension application may monitor current features (e.g., graphics, text, layout, content, etc.) included in and/or associated with the current network communication. Further, the extension application may determine, for each current network communication, current feature information based at least in part on analyzing the current features observed with respect to the current network communication. In some aspects, the current feature information may indicate an observed characteristic of a current feature included in the current network communication associated with the current entity with which the user device is communicating or with which the user device is to communicate over the network. In some aspects, a user of the user device may observe such current features associated with the current network communication on a screen (e.g., output component760) associated with the user device. In an example, when the user device utilizes a web browser application to communicate via a current website, the extension application may systematically analyze a current webpage associated with the current website to determine current feature information regarding the current features observed with respect to a current webpage. For instance, the extension application may determine current feature information regarding appearance of one or more graphics on the current webpage. Such information may include a size associated with the one or more graphics, a color associated with the one or more graphics, a position associated with the one or more graphics, etc., or a combination thereof, on the current webpage. Also, the extension application may determine current feature information regarding text appearing on the current webpage. Such information may include a font size associated with the text, a font color associated with the text, a positioning of the text, etc., or a combination thereof, on the current webpage. Further, the extension application may determine current feature information regarding a layout of information on the current webpage. Such information may include a relationship between a position of the one or more graphics with a position of the text on the current webpage. Furthermore, the extension application may determine current feature information regarding content included in the current webpage. Such information may include a set of keywords associated with the content included in the current webpage. Similarly, when the user device utilizes an email application and/or a messaging application, the extension application may determine current feature information regarding appearance of one or more graphics observed with respect to a current email/message. Such information may include a size associated with the one or more graphics, a color associated with the one or more graphics, a position associated with the one or more graphics, etc., or a combination thereof, on the current email/message. 
Also, the extension application may determine current feature information regarding text appearing on the current email/message. Such information may include a font size associated with the text, a font color associated with the text, a positioning of the text, etc., or a combination thereof, on the current email/message. Further, the extension application may determine current feature information regarding a layout of information in the current email/message. Such information may include a relationship between a position of the one or more graphics with a position of the text in the current email. Furthermore, the extension application may determine current feature information regarding content included in the current email/message. Such information may include a set of keywords associated with, for example, a domain name (e.g., @entity.com, etc.). Based at least in part on determining the current feature information, the extension application may compare the determined current feature information with the authentic feature information stored in the memory (e.g., memory730) associated with the user device. In some aspects, the extension application may compare the current feature information with the authentic feature information to determine whether the current network communication (e.g., a website, an email, a message, etc.) is authentic. For instance, when the current feature information fails to match at least a portion of the authentic feature information (e.g., is substantially different with respect to the authentic feature information), the extension application may determine that the current features (e.g., graphics, text, layout, content, etc.) are not authentic. Further, based at least in part on the current feature information failing to match at least a portion of the authentic feature information, the extension application may determine that the current features are not associated with an authentic entity. In this case, the extension application may determine that the current network communication being utilized by the user device to communicate is not authentic. As a result, the extension application may cause the processor associated with the user device to suspend utilization of the current network communication and/or to provide a visual and/or audible notification indicating that the current network communication is not authentic. Such a notification may indicate that the user device is susceptible to a cybercrime based at least in part on utilizing the current network communication. In some aspects, suspending utilization of the current network communication may include suspending communication of information in association with the current network communication. Alternatively, when the current feature information matches (e.g., is substantially similar to) at least a portion of the authentic feature information, the extension application may determine that the current features (e.g., graphics, text, layout, content, etc.) are authentic and associated with the authentic entity with which the portion of the authentic feature information is correlated in the authentic entity table. Also, based at least in part on the current feature information matching at least a portion of the authentic feature information, the extension application may determine that the current feature information is authentic. Further, the extension application may compare the current network communication information (e.g., URL link, domain information, etc.)
with the authentic network communication information associated with the authentic entity included in the authentic entity table. When the current network communication information matches at least a portion of the authentic network communication information, the extension application may determine that the current network communication information is authentic. In some aspects, when the current network communication information entirely matches the authentic network communication information, the extension application may determine that the current network communication information is authentic. Based at least in part on the current network communication information matching at least a portion of the authentic network communication information or based at least in part on the current network communication information entirely matching the authentic network communication information, the extension application may determine that the current network communication information is associated with the authentic entity with which the portion of the authentic feature information is correlated in the authentic entity table. In this case, the extension application may determine that the current network communication being utilized by the user device to communicate is authentic. As a result, the extension application may enable the processor associated with the user device to continue utilization of the current network communication. In this way, private information and/or sensitive data associated with the user device may be prevented from becoming compromised. Further, by conducting the more process-intensive task of training the machine learning model at the CSP control infrastructure to more accurately analyze the authentic network communications and/or to correlate the authentic features with the authentic network communication information and by transmitting the authentic entity information to the user device, the user device may be enabled to conserve user device resources and efficiently utilize such user device resources (e.g., processing resources, memory resources, power consumption resources, battery life, or the like) to perform less process-intensive tasks associated with securing the user device against network vulnerabilities. As indicated above,FIGS.2and3are provided as examples. Other examples may differ from what is described with regard toFIGS.2and3.
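The two-stage check just described (first comparing observed features with authentic feature information, then comparing the current URL/domain with the authentic entity's communication information) might be sketched as follows. The similarity function, threshold, entity record, and domains are hypothetical illustrations, not the disclosed implementation.

```python
# Hypothetical authentic entity record: feature keywords plus authentic domains.
AUTHENTIC_ENTITIES = {
    "example-bank": {
        "keywords": {"example bank", "secure login", "member services"},
        "domains": {"example-bank.com", "www.example-bank.com"},
    },
}

def feature_similarity(current_keywords: set, authentic_keywords: set) -> float:
    """Crude stand-in for comparing observed features with authentic features."""
    if not authentic_keywords:
        return 0.0
    return len(current_keywords & authentic_keywords) / len(authentic_keywords)

def domain_matches(current_domain: str, authentic_domains: set) -> bool:
    return current_domain.lower() in authentic_domains

def classify_communication(current_keywords: set, current_domain: str,
                           threshold: float = 0.6) -> str:
    # Stage 1: do the observed features resemble any known authentic entity?
    for entity, record in AUTHENTIC_ENTITIES.items():
        if feature_similarity(current_keywords, record["keywords"]) >= threshold:
            # Stage 2: the features resemble this entity, so the URL/domain must
            # also match; otherwise the communication is treated as not authentic.
            if domain_matches(current_domain, record["domains"]):
                return f"authentic ({entity})"
            return f"not authentic: resembles {entity} but domain differs"
    return "no authentic-entity match; features not recognized"

print(classify_communication({"example bank", "secure login"}, "www.example-bank.com"))
print(classify_communication({"example bank", "secure login"}, "examp1e-bank.net"))
```

A mismatch at either stage would, per the flow described above, lead to the notification and suspension behavior rather than continued use of the communication.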
For instance, the infrastructure device may utilize an associated communication interface (e.g., communication interface770) along with the associated memory and/or processor to transmit, to a user device, a determined characteristic of an authentic feature included in an authentic network communication associated with an authentic entity, with which the user device intends to communicate over a network, as discussed elsewhere herein. As shown by reference numeral420, process400may include determining, by the user device, an observed characteristic of a current feature included in a current network communication associated with a current entity with which the user device is communicating over the network. For instance, the user device may utilize the associated memory and/or processor to determine an observed characteristic of a current feature included in a current network communication associated with a current entity with which the user device is communicating over the network, as discussed elsewhere herein. As shown by reference numeral430, process400may include comparing, by the user device, the observed characteristic with the determined characteristic. For instance, the user device may utilize the associated memory and/or processor to compare the observed characteristic with the determined characteristic, as discussed elsewhere herein. As shown by reference numeral440, process400may include determining, by the user device, that the current network communication is authentic or that the current network communication is not authentic based at least in part on a result of comparing the observed characteristic with the determined characteristic. For instance, the user device may utilize the associated memory and/or processor to determine that the current network communication is authentic or that the current network communication is not authentic based at least in part on a result of comparing the observed characteristic with the determined characteristic, as discussed elsewhere herein. Process400may include additional aspects, such as any single aspect or any combination of aspects described below and/or in connection with one or more other processes described elsewhere herein. In a first aspect, process400may include determining, by the infrastructure device, the determined characteristic based at least in part on analyzing the authentic network communication, wherein determining the observed characteristic includes determining, by the user device, the observed characteristic based at least in part on analyzing the current network communication. In a second aspect, alone or in combination with the first aspect, process400may include determining, by the infrastructure device, the determined characteristic based at least in part on utilizing a machine learning model. In a third aspect, alone or in combination with the first through second aspects, process400may include providing, by the user device, a visual or audible notification indicating that the current network communication is not authentic based at least in part on the result of the comparing indicating that the observed characteristic fails to match the determined characteristic. In a fourth aspect, alone or in combination with the first through third aspects, process400may include suspending, by the user device, the communicating with the current entity based at least in part on the result of the comparing indicating that the observed characteristic fails to match the determined characteristic. 
In a fifth aspect, alone or in combination with the first through fourth aspects, in process400, determining that the current network communication is authentic includes determining that the current network communication is authentic based at least in part on the result of the comparing indicating that the observed characteristic matches the determined characteristic. In a sixth aspect, alone or in combination with the first through fifth aspects, in process400, determining that the current network communication is not authentic includes determining that the current network communication is not authentic based at least in part on the result of the comparing indicating that the observed characteristic fails to match the determined characteristic. AlthoughFIG.4shows example blocks of the process, in some aspects, the process may include additional blocks, fewer blocks, different blocks, or differently arranged blocks than those depicted inFIG.4. Additionally, or alternatively, two or more of the blocks of the process may be performed in parallel. As indicated above,FIG.4is provided as an example. Other examples may differ from what is described with regard toFIG.4. FIG.5is an illustration of an example process500associated with securing against network vulnerabilities, according to various aspects of the present disclosure. In some aspects, the process500may be performed by a memory and/or a processor/controller (e.g., processing unit110, processor720) associated with an infrastructure device (e.g., CSP control infrastructure104). As shown by reference numeral510, process500may include determining, by an infrastructure device in communication with a user device, authentic feature information that indicates a characteristic associated with an authentic feature included in an authentic communication associated with an authentic entity, with which the user device intends to communicate over a network. For instance, the infrastructure device may utilize the associated memory and/or processor to determine, while in communication with a user device, authentic feature information that indicates a characteristic associated with an authentic feature included in an authentic communication associated with an authentic entity, with which the user device intends to communicate over a network, as discussed elsewhere herein. As shown by reference numeral520, process500may include transmitting, by the infrastructure device to the user device, authentic entity information that includes the authentic feature information and an association between the characteristic associated with the authentic feature and authentic communication information associated with the authentic communication. For instance, the infrastructure device may utilize an associated communication interface (e.g., communication interface770) along with the associated memory and/or processor to transmit, to the user device, authentic entity information that includes the authentic feature information and an association between the characteristic associated with the authentic feature and authentic communication information associated with the authentic communication, as discussed elsewhere herein. Process500may include additional aspects, such as any single aspect or any combination of aspects described below and/or in connection with one or more other processes described elsewhere herein. 
In a first aspect, in process500, the authentic feature includes a graphic or text, the authentic communication includes a webpage or an email associated with the authentic entity, and the authentic communication information includes a uniform resource locator (URL) or a domain name associated with the authentic entity. In a second aspect, alone or in combination with the first aspect, in process500, the characteristic is associated with an appearance of the authentic feature in the authentic communication. In a third aspect, alone or in combination with the first through second aspects, in process500, the characteristic is associated with a relationship between a position of a graphic and a position of text in the authentic communication. In a fourth aspect, alone or in combination with the first through third aspects, in process500, determining the authentic feature information includes determining the authentic feature information periodically; and transmitting the authentic entity information includes transmitting the authentic entity information periodically. In a fifth aspect, alone or in combination with the first through fourth aspects, in process500, determining the authentic feature information includes determining the authentic feature information based at least in part on utilizing a machine learning algorithm. In a sixth aspect, alone or in combination with the first through fifth aspects, process500may include determining, by the infrastructure device, an association between the characteristic associated with the authentic feature and the authentic communication information. AlthoughFIG.5shows example blocks of the process, in some aspects, the process may include additional blocks, fewer blocks, different blocks, or differently arranged blocks than those depicted inFIG.5. Additionally, or alternatively, two or more of the blocks of the process may be performed in parallel. As indicated above,FIG.5is provided as an example. Other examples may differ from what is described with regard toFIG.5. FIG.6is an illustration of an example process600associated with securing against network vulnerabilities, according to various aspects of the present disclosure. In some aspects, the process600may be performed by a memory and/or a processor/controller (e.g., processing unit116, processor720) associated with a user device/endpoint (e.g., user device102) executing an extension application. As shown by reference numeral610, process600may include comparing, by a user device, an observed characteristic with a determined characteristic, the observed characteristic indicating a current feature included in a current communication associated with a current entity with which the user device is communicating and the determined characteristic indicating an authentic feature included in an authentic communication associated with an authentic entity with which the user device intends to communicate. For instance, the user device may utilize the associated memory and/or processor to compare an observed characteristic with a determined characteristic, the observed characteristic indicating a current feature included in a current communication associated with a current entity with which the user device is communicating and the determined characteristic indicating an authentic feature included in an authentic communication associated with an authentic entity with which the user device intends to communicate, as discussed elsewhere herein.
As shown by reference numeral620, process600may include selectively matching, by the user device based at least in part on a result of comparing the observed characteristic with the determined characteristic, current communication information associated with the current communication with authentic communication information associated with the authentic communication. For instance, the user device may utilize the associated memory and/or processor to selectively match, based at least in part on a result of comparing the observed characteristic with the determined characteristic, current communication information associated with the current communication with authentic communication information associated with the authentic communication, as discussed elsewhere herein. As shown by reference numeral630, process600may include determining, by the user device based at least in part on a result of selectively matching the current communication information with the authentic communication information, that the current entity is the authentic entity or that the current entity is not the authentic entity. For instance, the user device may utilize the associated memory and/or processor to determine, based at least in part on a result of selectively matching the current communication information with the authentic communication information, that the current entity is the authentic entity or that the current entity is not the authentic entity, as discussed elsewhere herein. Process600may include additional aspects, such as any single aspect or any combination of aspects described below and/or in connection with one or more other processes described elsewhere herein. In a first aspect, in process600, comparing the observed characteristic with the determined characteristic includes comparing an appearance of the current feature in the current communication with an appearance of the authentic feature in the authentic communication. In a second aspect, alone or in combination with the first aspect, in process600, selectively matching the current communication information with the authentic communication information includes selectively matching the current communication information with the authentic communication information when the result of comparing the observed characteristic with the determined characteristic indicates that the observed characteristic is substantially the same as the determined characteristic. In a third aspect, alone or in combination with the first through second aspects, in process600, determining that the current entity is the authentic entity includes determining that the current entity is the authentic entity when the result of selectively matching the current communication information with the authentic communication information indicates that the current communication information is the same as the authentic communication information. In a fourth aspect, alone or in combination with the first through third aspects, in process600, determining that the current entity is not the authentic entity includes determining that the current entity is not the authentic entity when the result of selectively matching the current communication information with the authentic communication information indicates that the current communication information is not the same as the authentic communication information. 
In a fifth aspect, alone or in combination with the first through fourth aspects, in process600, the current feature includes a graphic or text and the authentic feature includes a graphic or text, the current communication includes a webpage or an email associated with the current entity and the authentic communication includes a webpage or an email associated with the authentic entity, and the current communication information includes a uniform resource locator (URL) or a domain name associated with the current entity and the authentic communication information includes a URL or a domain name associated with the authentic entity. In a sixth aspect, alone or in combination with the first through fifth aspects, process600may include determining, by the user device, current feature information associated with the observed characteristic; and receiving, by the user device, authentic feature information associated with the determined characteristic. AlthoughFIG.6shows example blocks of the process, in some aspects, the process may include additional blocks, fewer blocks, different blocks, or differently arranged blocks than those depicted inFIG.6. Additionally, or alternatively, two or more of the blocks of the process may be performed in parallel. As indicated above,FIG.6is provided as an example. Other examples may differ from what is described with regard toFIG.6. FIG.7is an illustration of example devices700associated with securing against network vulnerabilities, according to various aspects of the present disclosure. In some aspects, the example devices700may form part of or implement the systems, servers, environments, infrastructures, components, devices, or the like described elsewhere herein (e.g., CSP control infrastructure) and may be used to perform example processes described elsewhere herein. The example devices700may include a universal bus710communicatively coupling a processor720, a memory730, a storage component740, an input component750, an output component760, and a communication interface770. Bus710may include a component that permits communication among multiple components of a device700. Processor720may be implemented in hardware, firmware, and/or a combination of hardware and software. Processor720may take the form of a central processing unit (CPU), a graphics processing unit (GPU), an accelerated processing unit (APU), a microprocessor, a microcontroller, a digital signal processor (DSP), a field-programmable gate array (FPGA), an application-specific integrated circuit (ASIC), or another type of processing component. In some aspects, processor720may include one or more processors capable of being programmed to perform a function. Memory730may include a random access memory (RAM), a read only memory (ROM), and/or another type of dynamic or static storage device (e.g., a flash memory, a magnetic memory, and/or an optical memory) that stores information and/or instructions for use by processor720. Storage component740may store information and/or software related to the operation and use of a device700. For example, storage component740may include a hard disk (e.g., a magnetic disk, an optical disk, and/or a magneto-optic disk), a solid state drive (SSD), a compact disc (CD), a digital versatile disc (DVD), a floppy disk, a cartridge, a magnetic tape, and/or another type of non-transitory computer-readable medium, along with a corresponding drive. 
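Before continuing with the remaining components of device700, the compare-then-match flow of process600 (blocks 610 through 630 above) can be summarized in a short illustrative sketch. The sketch below is a non-authoritative Python illustration under stated assumptions: the class names, the graphic-hash field, the position offset, the tolerance value, and the example domains are hypothetical and are not elements of the disclosure. It mirrors only the described logic: compare an observed characteristic with the determined characteristic, selectively match the current URL or domain against the authentic one when the characteristics are substantially the same, and then determine whether the current entity is the authentic entity.

```python
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class FeatureCharacteristic:
    # e.g., a hash of a logo graphic and the graphic's position relative to nearby text
    graphic_hash: str
    graphic_to_text_offset: Tuple[int, int]

@dataclass
class CommunicationInfo:
    # e.g., the URL or domain name associated with the webpage or email
    domain: str

def substantially_same(observed: FeatureCharacteristic,
                       determined: FeatureCharacteristic,
                       tolerance: int = 8) -> bool:
    # Block 610: compare the observed characteristic with the determined characteristic.
    dx = abs(observed.graphic_to_text_offset[0] - determined.graphic_to_text_offset[0])
    dy = abs(observed.graphic_to_text_offset[1] - determined.graphic_to_text_offset[1])
    return observed.graphic_hash == determined.graphic_hash and dx <= tolerance and dy <= tolerance

def is_authentic_entity(observed: FeatureCharacteristic,
                        determined: FeatureCharacteristic,
                        current: CommunicationInfo,
                        authentic: CommunicationInfo) -> Optional[bool]:
    # Blocks 620 and 630: selectively match communication information and decide.
    if not substantially_same(observed, determined):
        # The authentic feature is not being imitated; no matching is performed here.
        return None
    # The communication looks like the authentic one, so verify its source.
    return current.domain == authentic.domain

# Example: an email that imitates the authentic feature but uses a look-alike domain.
observed = FeatureCharacteristic("a1b2c3", (120, 40))
determined = FeatureCharacteristic("a1b2c3", (121, 41))
result = is_authentic_entity(observed, determined,
                             CommunicationInfo("examp1e-bank.test"),
                             CommunicationInfo("example-bank.test"))
print(result)  # False -> the current entity is determined not to be the authentic entity
```

In this sketch, a mismatch in communication information despite substantially identical characteristics corresponds to the case in which the current entity is determined not to be the authentic entity.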
Input component750may include a component that permits a device700to receive information, such as via user input (e.g., a touch screen display, a keyboard, a keypad, a mouse, a button, a switch, and/or a microphone). Additionally, or alternatively, input component750may include a component for determining location (e.g., a global positioning system (GPS) component) and/or a sensor (e.g., an accelerometer, a gyroscope, an actuator, another type of positional or environmental sensor, and/or the like). Output component760may include a component that provides output information from device700(via, for example, a display, a speaker, a haptic feedback component, an audio or visual indicator, and/or the like). Communication interface770may include a transceiver-like component (e.g., a transceiver, a separate receiver, a separate transmitter, and/or the like) that enables a device700to communicate with other devices, such as via a wired connection, a wireless connection, or a combination of wired and wireless connections. Communication interface770may permit device700to receive information from another device and/or provide information to another device. For example, communication interface770may include an Ethernet interface, an optical interface, a coaxial interface, an infrared interface, a radio frequency (RF) interface, a universal serial bus (USB) interface, a Wi-Fi interface, a cellular network interface, and/or the like. A device700may perform one or more processes described elsewhere herein. A device700may perform these processes based on processor720executing software instructions stored by a non-transitory computer-readable medium, such as memory730and/or storage component740. As used herein, the term “computer-readable medium” may refer to a non-transitory memory device. A memory device may include memory space within a single physical storage device or memory space spread across multiple physical storage devices. Software instructions may be read into memory730and/or storage component740from another computer-readable medium or from another device via communication interface770. When executed, software instructions stored in memory730and/or storage component740may cause processor720to perform one or more processes described elsewhere herein. Additionally, or alternatively, hardware circuitry may be used in place of or in combination with software instructions to perform one or more processes described elsewhere herein. Thus, implementations described herein are not limited to any specific combination of hardware circuitry and software. The quantity and arrangement of components shown inFIG.7are provided as an example. In practice, a device700may include additional components, fewer components, different components, or differently arranged components than those shown inFIG.7. Additionally, or alternatively, a set of components (e.g., one or more components) of a device700may perform one or more functions described as being performed by another set of components of a device700. As indicated above,FIG.7is provided as an example. Other examples may differ from what is described with regard toFIG.7. Persons of ordinary skill in the art will appreciate that the aspects encompassed by the present disclosure are not limited to the particular exemplary aspects described herein. In that regard, although illustrative aspects have been shown and described, a wide range of modification, change, and substitution is contemplated in the foregoing disclosure. 
It is understood that such variations may be made to the aspects without departing from the scope of the present disclosure. Accordingly, it is appropriate that the appended claims be construed broadly and in a manner consistent with the present disclosure. The foregoing disclosure provides illustration and description, but is not intended to be exhaustive or to limit the aspects to the precise form disclosed. Modifications and variations may be made in light of the above disclosure or may be acquired from practice of the aspects. As used herein, the term “component” or “device” is intended to be broadly construed as hardware, firmware, or a combination of hardware and software. As used herein, a processor is implemented in hardware, firmware, or a combination of hardware and software. As used herein, satisfying a threshold may, depending on the context, refer to a value being greater than the threshold, greater than or equal to the threshold, less than the threshold, less than or equal to the threshold, equal to the threshold, or not equal to the threshold, among other examples, or combinations thereof. It will be apparent that systems or methods described herein may be implemented in different forms of hardware, firmware, or a combination of hardware and software. The actual specialized control hardware or software code used to implement these systems or methods is not limiting of the aspects. Thus, the operation and behavior of the systems or methods were described herein without reference to specific software code—it being understood that software and hardware can be designed to implement the systems or methods based, at least in part, on the description herein. Even though particular combinations of features are recited in the claims or disclosed in the specification, these combinations are not intended to limit the disclosure of various aspects. In fact, many of these features may be combined in ways not specifically recited in the claims or disclosed in the specification. Although each dependent claim listed below may directly depend on only one claim, the disclosure of various aspects includes each dependent claim in combination with every other claim in the claim set. A phrase referring to “at least one of” a list of items refers to any combination of those items, including single members. As an example, “at least one of: a, b, or c” is intended to cover a, b, c, a-b, a-c, b-c, and a-b-c, as well as any combination with multiples of the same element (for example, a-a, a-a-a, a-a-b, a-a-c, a-b-b, a-c-c, b-b, b-b-b, b-b-c, c-c, and c-c-c or any other ordering of a, b, and c). No element, act, or instruction used herein should be construed as critical or essential unless explicitly described as such. Also, as used herein, the articles “a” and “an” are intended to include one or more items, and may be used interchangeably with “one or more.” Further, as used herein, the article “the” is intended to include one or more items referenced in connection with the article “the” and may be used interchangeably with “the one or more.” Furthermore, as used herein, the term “set” is intended to include one or more items (e.g., related items, unrelated items, a combination of related and unrelated items, etc.), and may be used interchangeably with “one or more.” Where only one item is intended, the phrase “only one” or similar language is used. Also, as used herein, the terms “has,” “have,” “having,” or the like are intended to be open-ended terms. 
Further, the phrase “based on” is intended to mean “based, at least in part, on” unless explicitly stated otherwise. Also, as used herein, the term “or” is intended to be inclusive when used in a series and may be used interchangeably with “and/or,” unless explicitly stated otherwise (e.g., if used in combination with “either” or “only one of”).
11943253 | DETAILED DESCRIPTION For the purposes of reading the description of the various embodiments below, the following descriptions of the sections of the specifications and their respective contents may be helpful: Section A describes a network environment and computing environment which may be useful for practicing embodiments described herein. Section B describes embodiments of systems and methods for determination of a level of security to apply to a group of users before display of user data. A. Computing and Network Environment Prior to discussing specific embodiments of the present solution, it may be helpful to describe aspects of the operating environment as well as associated system components (e.g. hardware elements) in connection with the methods and systems described herein. Referring toFIG.1A, an embodiment of a network environment is depicted. In a brief overview, the network environment includes one or more clients102a-102n(also generally referred to as local machines(s)102, client(s)102, client node(s)102, client machine(s)102, client computer(s)102, client device(s)102, endpoint(s)102, or endpoint node(s)102) in communication with one or more servers106a-106n(also generally referred to as server(s)106, node(s)106, machine(s)106, or remote machine(s)106) via one or more networks104. In some embodiments, client102has the capacity to function as both a client node seeking access to resources provided by a server and as a server providing access to hosted resources for other clients102a-102n. AlthoughFIG.1Ashows a network104between clients102and the servers106, clients102and servers106may be on the same network104. In some embodiments, there are multiple networks104between clients102and servers106. In one of these embodiments, network104′ (not shown) may be a private network and a network104may be a public network. In another of these embodiments, network104may be a private network and a network104′ may be a public network. In still another of these embodiments, networks104and104′ may both be private networks. Network104may be connected via wired or wireless links. Wired links may include Digital Subscriber Line (DSL), coaxial cable lines, or optical fiber lines. Wireless links may include Bluetooth®, Bluetooth Low Energy (BLE), ANT/ANT+, ZigBee, Z-Wave, Thread, Wi-Fi®, Worldwide Interoperability for Microwave Access (WiMAX®), mobile WiMAX®, WiMAX®-Advanced, NFC, SigFox, LoRa, Random Phase Multiple Access (RPMA), Weightless-N/P/W, an infrared channel or a satellite band. The wireless links may also include any cellular network standards to communicate among mobile devices, including standards that qualify as 1G, 2G, 3G, 4G, or 5G. The network standards may qualify as one or more generations of mobile telecommunication standards by fulfilling a specification or standards such as the specifications maintained by the International Telecommunication Union. The 3G standards, for example, may correspond to the International Mobile Telecommunications-2000 (IMT-2000) specification, and the 4G standards may correspond to the International Mobile Telecommunication Advanced (IMT-Advanced) specification. Examples of cellular network standards include AMPS, GSM, GPRS, UMTS, CDMA2000, CDMA-1×RTT, CDMA-EVDO, LTE, LTE-Advanced, LTE-M1, and Narrowband IoT (NB-IoT). Wireless standards may use various channel access methods, e.g. FDMA, TDMA, CDMA, or SDMA. In some embodiments, different types of data may be transmitted via different links and standards. 
In other embodiments, the same types of data may be transmitted via different links and standards. Network104may be any type and/or form of network. The geographical scope of the network may vary widely and network104can be a body area network (BAN), a personal area network (PAN), a local-area network (LAN), e.g. Intranet, a metropolitan area network (MAN), a wide area network (WAN), or the Internet. The topology of network104may be of any form and may include, e.g., any of the following: point-to-point, bus, star, ring, mesh, or tree. Network104may be an overlay network which is virtual and sits on top of one or more layers of other networks104′. Network104may be of any such network topology as known to those ordinarily skilled in the art capable of supporting the operations described herein. Network104may utilize different techniques and layers or stacks of protocols, including, e.g., the Ethernet protocol, the internet protocol suite (TCP/IP), the ATM (Asynchronous Transfer Mode) technique, the SONET (Synchronous Optical Networking) protocol, or the SDH (Synchronous Digital Hierarchy) protocol. The TCP/IP internet protocol suite may include application layer, transport layer, internet layer (including, e.g., IPv4 and IPv6), or the link layer. Network104may be a type of broadcast network, a telecommunications network, a data communication network, or a computer network. In some embodiments, the system may include multiple, logically grouped servers106. In one of these embodiments, the logical group of servers may be referred to as a server farm or a machine farm. In another of these embodiments, servers106may be geographically dispersed. In other embodiments, a machine farm may be administered as a single entity. In still other embodiments, the machine farm includes a plurality of machine farms. Servers106within each machine farm can be heterogeneous—one or more of servers106or machines106can operate according to one type of operating system platform (e.g., Windows, manufactured by Microsoft Corp. of Redmond, Washington), while one or more of the other servers106can operate according to another type of operating system platform (e.g., Unix, Linux, or Mac OSX). In one embodiment, servers106in the machine farm may be stored in high-density rack systems, along with associated storage systems, and located in an enterprise data center. In the embodiment, consolidating servers106in this way may improve system manageability, data security, the physical security of the system, and system performance by locating servers106and high-performance storage systems on localized high-performance networks. Centralizing servers106and storage systems and coupling them with advanced system management tools allows more efficient use of server resources. Servers106of each machine farm do not need to be physically proximate to another server106in the same machine farm. Thus, the group of servers106logically grouped as a machine farm may be interconnected using a wide-area network (WAN) connection or a metropolitan-area network (MAN) connection. For example, a machine farm may include servers106physically located in different continents or different regions of a continent, country, state, city, campus, or room. Data transmission speeds between servers106in the machine farm can be increased if servers106are connected using a local-area network (LAN) connection or some form of direct connection. 
In some embodiments, a heterogeneous machine farm may include one or more servers106operating according to a type of operating system, while one or more other servers execute one or more types of hypervisors rather than operating systems. In these embodiments, hypervisors may be used to emulate virtual hardware, partition physical hardware, virtualize physical hardware, and execute virtual machines that provide access to computing environments, allowing multiple operating systems to run concurrently on a host computer. Native hypervisors may run directly on the host computer. Hypervisors may include VMware ESX/ESXi, manufactured by VMWare, Inc., of Palo Alta, California; the Xen hypervisor, an open source product whose development is overseen by Citrix Systems, Inc. of Fort Lauderdale, Florida; the HYPER-V hypervisors provided by Microsoft, or others. Hosted hypervisors may run within an operating system on a second software level. Examples of hosted hypervisors may include VMWare Workstation and VirtualBox, manufactured by Oracle Corporation of Redwood City, California Additional layers of abstraction may include Container Virtualization and Management infrastructure. Container Virtualization isolates execution of a service to the container while relaying instructions to the machine through one operating system layer per host machine. Container infrastructure may include Docker, an open source product whose development is overseen by Docker, Inc. of San Francisco, California. Management of the machine farm may be de-centralized. For example, one or more servers106may comprise components, subsystems and modules to support one or more management services for the machine farm. In one of these embodiments, one or more servers106provide functionality for management of dynamic data, including techniques for handling failover, data replication, and increasing the robustness of the machine farm. Each server106may communicate with a persistent store and, in some embodiments, with a dynamic store. Server106may be a file server, application server, web server, proxy server, appliance, network appliance, gateway, gateway server, virtualization server, deployment server, SSL VPN server, or firewall. In one embodiment, a plurality of servers106may be in the path between any two communicating servers106. Referring toFIG.1B, a cloud computing environment is depicted. A cloud computing environment may provide client102with one or more resources provided by a network environment. The cloud computing environment may include one or more clients102a-102n, in communication with cloud108over one or more networks104. Clients102may include, e.g., thick clients, thin clients, and zero clients. A thick client may provide at least some functionality even when disconnected from cloud108or servers106. A thin client or zero client may depend on the connection to cloud108or server106to provide functionality. A zero client may depend on cloud108or other networks104or servers106to retrieve operating system data for the client device102. Cloud108may include back end platforms, e.g., servers106, storage, server farms or data centers. Cloud108may be public, private, or hybrid. Public clouds may include public servers106that are maintained by third parties to clients102or the owners of the clients. Servers106may be located off-site in remote geographical locations as disclosed above or otherwise. Public clouds may be connected to servers106over a public network. 
Private clouds may include private servers106that are physically maintained by clients102or owners of clients. Private clouds may be connected to servers106over a private network104. Hybrid clouds may include both the private and public networks104and servers106. Cloud108may also include a cloud-based delivery, e.g. Software as a Service (SaaS)110, Platform as a Service (PaaS)112, and Infrastructure as a Service (IaaS)114. IaaS may refer to a user renting the use of infrastructure resources that are needed during a specified time period. IaaS providers may offer storage, networking, servers or virtualization resources from large pools, allowing the users to quickly scale up by accessing more resources as needed. Examples of IaaS include Amazon Web Services (AWS) provided by Amazon, Inc. of Seattle, Washington, Rackspace Cloud provided by Rackspace Inc. of San Antonio, Texas, Google Compute Engine provided by Google Inc. of Mountain View, California, or RightScale provided by RightScale, Inc. of Santa Barbara, California. PaaS providers may offer functionality provided by IaaS, including, e.g., storage, networking, servers, virtualization or containerization, as well as additional resources, e.g., the operating system, middleware, or runtime resources. Examples of PaaS include Windows Azure provided by Microsoft Corporation of Redmond, Washington, Google App Engine provided by Google Inc., and Heroku provided by Heroku, Inc. of San Francisco, California. SaaS providers may offer the resources that PaaS provides, including storage, networking, servers, virtualization, operating system, middleware, or runtime resources. In some embodiments, SaaS providers may offer additional resources including, e.g., data and application resources. Examples of SaaS include Google Apps provided by Google Inc., Salesforce provided by Salesforce.com Inc. of San Francisco, California, or Office365 provided by Microsoft Corporation. Examples of SaaS may also include storage providers, e.g. Dropbox provided by Dropbox Inc. of San Francisco, California, Microsoft OneDrive provided by Microsoft Corporation, Google Drive provided by Google Inc., or Apple iCloud provided by Apple Inc. of Cupertino, California. Clients102may access IaaS resources with one or more IaaS standards, including, e.g., Amazon Elastic Compute Cloud (EC2), Open Cloud Computing Interface (OCCI), Cloud Infrastructure Management Interface (CIMI), or OpenStack standards. Some IaaS standards may allow clients access to resources over a Hypertext Transfer Protocol (HTTP) and may use Representational State Transfer (REST) protocol or Simple Object Access Protocol (SOAP). Clients102may access PaaS resources with different PaaS interfaces. Some PaaS interfaces use HTTP packages, standard Java APIs, JavaMail API, Java Data Objects (JDO), Java Persistence API (JPA), Python APIs, web integration APIs for different programming languages including, e.g., Rack for Ruby, WSGI for Python, or PSGI for Perl, or other APIs that may be built on REST, HTTP, XML, or other protocols. Clients102may access SaaS resources using web-based user interfaces, provided by a web browser (e.g. Google Chrome, Microsoft Internet Explorer, or Mozilla Firefox provided by Mozilla Foundation of Mountain View, California). Clients102may also access SaaS resources through smartphone or tablet applications, including e.g., Salesforce Sales Cloud, or Google Drive App. Clients102may also access SaaS resources through the client operating system, including e.g.
Windows file system for Dropbox. In some embodiments, access to IaaS, PaaS, or SaaS resources may be authenticated. For example, a server or authentication server may authenticate a user via security certificates, HTTPS, or API keys. API keys may include various encryption standards such as, e.g., Advanced Encryption Standard (AES). Data resources may be sent over Transport Layer Security (TLS) or Secure Sockets Layer (SSL). Client102and server106may be deployed as and/or executed on any type and form of computing device, e.g., a computer, network device or appliance capable of communicating on any type and form of network and performing the operations described herein. FIGS.1C and1Ddepict block diagrams of a computing device100useful for practicing an embodiment of client102or server106. As shown inFIGS.1C and1D, each computing device100includes central processing unit121, and main memory unit122. As shown inFIG.1C, computing device100may include storage device128, installation device116, network interface118, and I/O controller123, display devices124a-124n, keyboard126and pointing device127, e.g., a mouse. Storage device128may include, without limitation, operating system129, software131, and a software of security awareness training system120. As shown inFIG.1D, each computing device100may also include additional optional elements, e.g., a memory port103, bridge170, one or more input/output devices130a-130n(generally referred to using reference numeral130), and cache memory140in communication with central processing unit121. Central processing unit121is any logic circuitry that responds to and processes instructions fetched from main memory unit122. In many embodiments, central processing unit121is provided by a microprocessor unit, e.g.: those manufactured by Intel Corporation of Mountain View, California; those manufactured by Motorola Corporation of Schaumburg, Illinois; the ARM processor and TEGRA system on a chip (SoC) manufactured by Nvidia of Santa Clara, California; the POWER7 processor, those manufactured by International Business Machines of White Plains, New York; or those manufactured by Advanced Micro Devices of Sunnyvale, California Computing device100may be based on any of these processors, or any other processor capable of operating as described herein. Central processing unit121may utilize instruction level parallelism, thread level parallelism, different levels of cache, and multi-core processors. A multi-core processor may include two or more processing units on a single computing component. Examples of multi-core processors include the AMD PHENOM IIX2, INTER CORE i5 and INTEL CORE i7. Main memory unit122may include one or more memory chips capable of storing data and allowing any storage location to be directly accessed by microprocessor121. Main memory unit122may be volatile and faster than storage128memory. Main memory units122may be Dynamic Random-Access Memory (DRAM) or any variants, including static Random-Access Memory (SRAM), Burst SRAM or SynchBurst SRAM (BSRAM), Fast Page Mode DRAM (FPM DRAM), Enhanced DRAM (EDRAM), Extended Data Output RAM (EDO RAM), Extended Data Output DRAM (EDO DRAM), Burst Extended Data Output DRAM (BEDO DRAM), Single Data Rate Synchronous DRAM (SDR SDRAM), Double Data Rate SDRAM (DDR SDRAM), Direct Rambus DRAM (DRDRAM), or Extreme Data Rate DRAM (XDR DRAM). 
In some embodiments, main memory122or storage128may be non-volatile; e.g., non-volatile read access memory (NVRAM), flash memory non-volatile static RAM (nvSRAM), Ferroelectric RAM (FeRAM), Magnetoresistive RAM (MRAM), Phase-change memory (PRAM), conductive-bridging RAM (CBRAM), Silicon-Oxide-Nitride-Oxide-Silicon (SONOS), Resistive RAM (RRAM), Racetrack, Nano-RAM (NRAM), or Millipede memory. Main memory122may be based on any of the above described memory chips, or any other available memory chips capable of operating as described herein. In the embodiment shown inFIG.1C, the processor121communicates with main memory122via system bus150(described in more detail below).FIG.1Ddepicts an embodiment of computing device100in which the processor communicates directly with main memory122via memory port103. For example, inFIG.1Dmain memory122may be DRDRAM. FIG.1Ddepicts an embodiment in which the main processor121communicates directly with cache memory140via a secondary bus, sometimes referred to as a backside bus. In other embodiments, main processor121communicates with cache memory140using system bus150. Cache memory140typically has a faster response time than main memory122and is typically provided by SRAM, BSRAM, or EDRAM. In the embodiment shown inFIG.1D, the processor121communicates with various I/O devices130via local system bus150. Various buses may be used to connect central processing unit121to any of I/O devices130, including a PCI bus, a PCI-X bus, or a PCI-Express bus, or a NuBus. For embodiments in which the I/O device is video display124, the processor121may use an Advanced Graphic Port (AGP) to communicate with display124or the I/O controller123for display124.FIG.1Ddepicts an embodiment of computer100in which main processor121communicates directly with I/O device130bor other processors121′ via HYPERTRANSPORT, RAPIDIO, or INFINIBAND communications technology.FIG.1Dalso depicts an embodiment in which local busses and direct communication are mixed: the processor121communicates with I/O device130ausing a local interconnect bus while communicating with I/O device130bdirectly. A wide variety of I/O devices130a-130nmay be present in computing device100. Input devices may include keyboards, mice, trackpads, trackballs, touchpads, touch mice, multi-touch touchpads and touch mice, microphones, multi-array microphones, drawing tablets, cameras, single-lens reflex cameras (SLR), digital SLR (DSLR), CMOS sensors, accelerometers, infrared optical sensors, pressure sensors, magnetometer sensors, angular rate sensors, depth sensors, proximity sensors, ambient light sensors, gyroscopic sensors, or other sensors. Output devices may include video displays, graphical displays, speakers, headphones, inkjet printers, laser printers, and 3D printers. Devices130a-130nmay include a combination of multiple input or output devices, including, e.g., Microsoft KINECT, Nintendo Wiimote for the WII, Nintendo WII U GAMEPAD, or Apple iPhone. Some devices130a-130nallow gesture recognition inputs through combining some of the inputs and outputs. Some devices130a-130nprovide for facial recognition which may be utilized as an input for different purposes including authentication and other commands. Some devices130a-130nprovide for voice recognition and inputs, including, e.g., Microsoft KINECT, SIRI for iPhone by Apple, Google Now or Google Voice Search, and Alexa by Amazon. 
Devices130a-130nhave both input and output capabilities, including, e.g., haptic feedback devices, touchscreen displays, or multi-touch displays. Touchscreen, multi-touch displays, touchpads, touch mice, or other touch sensing devices may use different technologies to sense touch, including, e.g., capacitive, surface capacitive, projected capacitive touch (PCT), in cell capacitive, resistive, infrared, waveguide, dispersive signal touch (DST), in-cell optical, surface acoustic wave (SAW), bending wave touch (BWT), or force-based sensing technologies. Some multi-touch devices may allow two or more contact points with the surface, allowing advanced functionality including, e.g., pinch, spread, rotate, scroll, or other gestures. Some touchscreen devices, including, e.g., Microsoft PIXELSENSE or Multi-Touch Collaboration Wall, may have larger surfaces, such as on a table-top or on a wall, and may also interact with other electronic devices. Some I/O devices130a-130n, display devices124a-124nor group of devices may be augmented reality devices. The I/O devices may be controlled by I/O controller123as shown inFIG.1C. The I/O controller may control one or more I/O devices, such as, e.g., keyboard126and pointing device127, e.g., a mouse or optical pen. Furthermore, an I/O device may also provide storage and/or installation medium116for computing device100. In still other embodiments, computing device100may provide USB connections (not shown) to receive handheld USB storage devices. In further embodiments, a I/O device130may be a bridge between the system bus150and an external communication bus, e.g. a USB bus, a SCSI bus, a FireWire bus, an Ethernet bus, a Gigabit Ethernet bus, a Fiber Channel bus, or a Thunderbolt bus. In some embodiments, display devices124a-124nmay be connected to I/O controller123. Display devices may include, e.g., liquid crystal displays (LCD), thin film transistor LCD (TFT-LCD), blue phase LCD, electronic papers (e-ink) displays, flexile displays, light emitting diode displays (LED), digital light processing (DLP) displays, liquid crystal on silicon (LCOS) displays, organic light-emitting diode (OLED) displays, active-matrix organic light-emitting diode (AMOLED) displays, liquid crystal laser displays, time-multiplexed optical shutter (TMOS) displays, or 3D displays. Examples of 3D displays may use, e.g. stereoscopy, polarization filters, active shutters, or auto stereoscopy. Display devices124a-124nmay also be a head-mounted display (HMD). In some embodiments, display devices124a-124nor the corresponding I/O controllers123may be controlled through or have hardware support for OPENGL or DIRECTX API or other graphics libraries. In some embodiments, computing device100may include or connect to multiple display devices124a-124n, which each may be of the same or different type and/or form. As such, any of I/O devices130a-130nand/or the I/O controller123may include any type and/or form of suitable hardware, software, or combination of hardware and software to support, enable or provide for the connection and use of multiple display devices124a-124nby computing device100. For example, computing device100may include any type and/or form of video adapter, video card, driver, and/or library to interface, communicate, connect or otherwise use display devices124a-124n. In one embodiment, a video adapter may include multiple connectors to interface to multiple display devices124a-124n. 
In other embodiments, computing device100may include multiple video adapters, with each video adapter connected to one or more of display devices124a-124n. In some embodiments, any portion of the operating system of computing device100may be configured for using multiple displays124a-124n. In other embodiments, one or more of the display devices124a-124nmay be provided by one or more other computing devices100aor100bconnected to computing device100, via network104. In some embodiments, software may be designed and constructed to use another computer's display device as second display device124afor computing device100. For example, in one embodiment, an Apple iPad may connect to computing device100and use the display of the device100as an additional display screen that may be used as an extended desktop. One ordinarily skilled in the art will recognize and appreciate the various ways and embodiments that computing device100may be configured to have multiple display devices124a-124n. Referring again toFIG.1C, computing device100may comprise storage device128(e.g. one or more hard disk drives or redundant arrays of independent disks) for storing an operating system or other related software, and for storing application software programs such as any program related to security awareness training system120. Examples of storage device128include, e.g., hard disk drive (HDD); optical drive including CD drive, DVD drive, or BLU-RAY drive; solid-state drive (SSD); USB flash drive; or any other device suitable for storing data. Some storage devices may include multiple volatile and non-volatile memories, including, e.g., solid state hybrid drives that combine hard disks with solid state cache. Some storage device128may be non-volatile, mutable, or read-only. Some storage device128may be internal and connect to computing device100via bus150. Some storage device128may be external and connect to computing device100via a I/O device130that provides an external bus. Some storage device128may connect to computing device100via network interface118over network104, including, e.g., the Remote Disk for MACBOOK AIR by Apple. Some client devices100may not require a non-volatile storage device128and may be thin clients or zero clients102. Some storage device128may also be used as an installation device116and may be suitable for installing software and programs. In some embodiments, the operating system and the software can be run from a bootable medium, for example, a bootable CD, e.g. KNOPPIX, a bootable CD for GNU/Linux that is available as a GNU/Linux distribution from knoppix.net. Computing device100(e.g., client device102) may also install software or application from an application distribution platform. Examples of application distribution platforms include the App Store for iOS provided by Apple, Inc., the Mac App Store provided by Apple, Inc., GOOGLE PLAY for Android OS provided by Google Inc., Chrome Webstore for CHROME OS provided by Google Inc., and Amazon Appstore for Android OS and KINDLE FIRE provided by Amazon.com, Inc. An application distribution platform may facilitate installation of software on client device102. An application distribution platform may include a repository of applications on server106or cloud108, which clients102a-102nmay access over a network104. An application distribution platform may include application developed and provided by various developers. A user of client device102may select, purchase and/or download an application via the application distribution platform. 
Furthermore, computing device100may include a network interface118to interface to network104through a variety of connections including, but not limited to, standard telephone lines LAN or WAN links (e.g., 802.11, T1, T3, Gigabit Ethernet, InfiniBand), broadband connections (e.g., ISDN, Frame Relay, ATM, Gigabit Ethernet, Ethernet-over-SONET, ADSL, VDSL, BPON, GPON, fiber optical including FiOS), wireless connections, or some combination of any or all of the above. Connections can be established using a variety of communication protocols (e.g., TCP/IP, Ethernet, ARCNET, SONET, SDH, Fiber Distributed Data Interface (FDDI), IEEE 802.11a/b/g/n/ac CDMA, GSM, WiMAX and direct asynchronous connections). In one embodiment, computing device100communicates with other computing devices100′ via any type and/or form of gateway or tunneling protocol e.g. Secure Socket Layer (SSL) or Transport Layer Security (TLS), or the Citrix Gateway Protocol manufactured by Citrix Systems, Inc. Network interface118may comprise a built-in network adapter, network interface card, PCMCIA network card, EXPRESSCARD network card, card bus network adapter, wireless network adapter, USB network adapter, modem or any other device suitable for interfacing computing device100to any type of network capable of communication and performing the operations described herein. Computing device100of the sort depicted inFIGS.1B and1Cmay operate under the control of an operating system, which controls scheduling of tasks and access to system resources. Computing device100can be running any operating system such as any of the versions of the MICROSOFT WINDOWS operating systems, the different releases of the Unix and Linux operating systems, any version of the MAC OS for Macintosh computers, any embedded operating system, any real-time operating system, any open source operating system, any proprietary operating system, any operating systems for mobile computing devices, or any other operating system capable of running on the computing device and performing the operations described herein. Typical operating systems include, but are not limited to: WINDOWS 2000, WINDOWS Server 2012, WINDOWS CE, WINDOWS Phone, WINDOWS XP, WINDOWS VISTA, and WINDOWS 7, WINDOWS RT, WINDOWS 8 and WINDOW 10, all of which are manufactured by Microsoft Corporation of Redmond, Washington; MAC OS and iOS, manufactured by Apple, Inc.; and Linux, a freely-available operating system, e.g. Linux Mint distribution (“distro”) or Ubuntu, distributed by Canonical Ltd. of London, United Kingdom; or Unix or other Unix-like derivative operating systems; and Android, designed by Google Inc., among others. Some operating systems, including, e.g., the CHROME OS by Google Inc., may be used on zero clients or thin clients, including, e.g., CHROMEBOOKS. Computer system100can be any workstation, telephone, desktop computer, laptop or notebook computer, netbook, ULTRABOOK, tablet, server, handheld computer, mobile telephone, smartphone or other portable telecommunications device, media playing device, a gaming system, mobile computing device, or any other type and/or form of computing, telecommunications or media device that is capable of communication. Computer system100has sufficient processor power and memory capacity to perform the operations described herein. In some embodiments, computing device100may have different processors, operating systems, and input devices consistent with the device. 
The Samsung GALAXY smartphones, e.g., operate under the control of Android operating system developed by Google, Inc. GALAXY smartphones receive input via a touch interface. In some embodiments, computing device100is a gaming system. For example, the computer system100may comprise a PLAYSTATION 3, or PERSONAL PLAYSTATION PORTABLE (PSP), PLAYSTATION VITA, PLAYSTATION 4, or a PLAYSTATION 4 PRO device manufactured by the Sony Corporation of Tokyo, Japan, or a NINTENDO DS, NINTENDO 3DS, NINTENDO WII, NINTENDO WII U, or a NINTENDO SWITCH device manufactured by Nintendo Co., Ltd., of Kyoto, Japan, or an XBOX 360 device manufactured by Microsoft Corporation. In some embodiments, computing device100is a digital audio player such as the Apple IPOD, IPOD Touch, and IPOD NANO lines of devices, manufactured by Apple Computer of Cupertino, California Some digital audio players may have other functionality, including, e.g., a gaming system or any functionality made available by an application from a digital application distribution platform. For example, the IPOD Touch may access the Apple App Store. In some embodiments, computing device100is a portable media player or digital audio player supporting file formats including, but not limited to, MP3, WAV, M4A/AAC, WMA Protected AAC, AIFF, Audible audiobook, Apple Lossless audio file formats and .mov, .m4v, and .mp4 MPEG-4 (H.264/MPEG-4 AVC) video file formats. In some embodiments, computing device100is a tablet e.g. the IPAD line of devices by Apple; GALAXY TAB family of devices by Samsung; or KINDLE FIRE, by Amazon.com, Inc. of Seattle, Washington In other embodiments, computing device100is an eBook reader, e.g. the KINDLE family of devices by Amazon.com, or NOOK family of devices by Barnes & Noble, Inc. of New York City, New York. In some embodiments, communications device102includes a combination of devices, e.g. a smartphone combined with a digital audio player or portable media player. For example, one of these embodiments is a smartphone, e.g. the iPhone family of smartphones manufactured by Apple, Inc.; a Samsung GALAXY family of smartphones manufactured by Samsung, Inc; or a Motorola DROID family of smartphones. In yet another embodiment, communications device102is a laptop or desktop computer equipped with a web browser and a microphone and speaker system, e.g. a telephony headset. In these embodiments, communications devices102are web-enabled and can receive and initiate phone calls. In some embodiments, a laptop or desktop computer is also equipped with a webcam or other video capture device that enables video chat and video call. In some embodiments, the status of one or more machines102,106in network104is monitored, generally as part of network management. In one of these embodiments, the status of a machine may include an identification of load information (e.g., the number of processes on the machine, CPU and memory utilization), of port information (e.g., the number of available communication ports and the port addresses), or of session status (e.g., the duration and type of processes, and whether a process is active or idle). In another of these embodiments, the information may be identified by a plurality of metrics, and the plurality of metrics can be applied at least in part towards decisions in load distribution, network traffic management, and network failure recovery as well as any aspects of operations of the present solution described herein. 
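As a rough illustration of the status information and metrics just described, the following Python sketch collects hypothetical load, port, and session metrics for several machines and applies them to a toy load-distribution decision. The field names, the random stand-in values, and the selection rule are assumptions made for illustration and are not part of the present solution.

```python
import random  # stand-in for real telemetry collection

def collect_status(machine_id: str) -> dict:
    # Hypothetical shape of the status information described above.
    return {
        "machine": machine_id,
        "processes": random.randint(50, 400),          # load information
        "cpu_utilization": random.uniform(0.0, 1.0),   # load information
        "memory_utilization": random.uniform(0.0, 1.0),
        "available_ports": random.randint(0, 1024),    # port information
        "active_sessions": random.randint(0, 200),     # session status
    }

def pick_least_loaded(statuses: list) -> str:
    # A toy load-distribution decision: route new work to the machine with the
    # lowest combined CPU and memory utilization.
    best = min(statuses, key=lambda s: s["cpu_utilization"] + s["memory_utilization"])
    return best["machine"]

statuses = [collect_status(m) for m in ("server-106a", "server-106b", "server-106n")]
print(pick_least_loaded(statuses))
```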
Aspects of the operating environments and components described above will become apparent in the context of the systems and methods disclosed herein. B. Systems and Methods for Determination of Level of Security to Apply to Users of a Group Before Display of User Data The following describes systems and methods for determination of a level of security to apply to users of a group before display of user data. An organization may organize simulated phishing campaigns to test and educate employees (also referred to as ‘users’) on phishing threats and ways to deal with phishing attacks. The organization may create one or more groups of users based on one or more criteria and may use these groups when planning simulated phishing campaigns. The criteria for users that are used to create groups may include job role, location, department, training status, phishing failures, assessment scores, and any other attribute associated with the employees. The criteria used to establish the groups may be based on user actions and user characteristics and may be applied to any set or subset of users of the organization. Groups may help the organization deliver simulated phishing campaigns and security awareness training targeted to specific users based on the specific users' information matching the criteria of the groups. In an example, users may be added to a specific group depending on one or more interactions of the users with a simulated phishing communication, such as clicking on a link or opening an attachment included in the simulated phishing communication. In another example, users who have yet to complete a security awareness training may be added to a specific group. Creating simulated phishing campaigns and organizing appropriate security awareness trainings for users who interact with those simulated phishing campaigns may involve a system administrator. The system administrator may be a professional who has responsibility for managing organizational cybersecurity aspects. As a part of his or her job, the system administrator can query membership of any group, for example allowing the system administrator to see the membership of any group and identify the users in the group. However, when the system administrator runs a query, he or she may be presented with personally identifiable information of users that the system administrator may not be allowed to access according to security rules and/or access rules and requirements, such as certain data protection laws of some countries or policies of specific organizations. To preserve anonymity of users in certain situations, many jurisdictions may have data protection laws and workplace regulations in force. These laws and regulations provide a legal framework on how to obtain, use, and store data of the users. For example, General Data Protection Regulation (GDPR) is in force in Europe and General Data Protection Law (Lei Geral de Proteção de Dados (LGPD)) is in force in Brazil. In an example, according to a data protection law, information of a user of an organization who has failed a simulated phishing attack or failed to complete remedial training may have to be obfuscated to ensure that the user is not subjected to unfair treatment by the system administrator and other users of the organization. To comply with data protection laws and workplace regulations, organizations may implement security and access controls to protect privacy of their users that require anonymization of the users and information associated with them.
For example, organizations may need to protect information related to how the users perform during a simulated phishing campaign. However, when creating a query to establish a group, a system administrator may not have awareness as to which users will match the query and become members of the group, and whether certain users becoming members of the group impacts whether some or all users that are members of the group should have their personal information obscured. In some embodiments, completely obscuring group membership or anonymizing the identities of the users resulting from the query in order to comply with data protection laws may make it difficult for the system administrator to subsequently identify, and provide training to, users who need to receive remedial training while still complying with certain data protection laws. Also, users of organizations may be based in different jurisdictions. In such circumstances, the organizations may find organizing simulated phishing campaigns and security awareness trainings for users in compliance with several data protection laws challenging. The systems and methods of the present disclosure leverage a security awareness training system to determine a level of security to apply to one or more groups of users before display of user data. A user may be an individual who may have to be tested and/or trained by the security awareness training system. Further, the user may be an employee of an organization, a member of a group, or any individual that can receive an electronic message, or who may act in any capacity of the security awareness training system. As a part of the security awareness training system, to organize and execute simulated phishing campaigns, remedial training, and other interactions with users, groups are formed. The groups can be based on, or formed from, any set of users, and may be determined based on user actions and characteristics meeting one or more criteria, and may enable the security awareness training system to deliver simulated phishing campaigns and training campaigns targeted to users who meet specified criteria. In an example, these groups may be query-based groups that accurately and automatically build a list of users that meet specified criteria at the moment that the group is executed, for example when the group is created, requested, or used. In an implementation, these groups may be inspected to determine which users met the specified criteria. Users may be dynamically added and removed from the groups based on these criteria when the groups are executed. Some of these criteria may cause personally identifiable information of the users, which is protected under data protection laws and company policy, to be exposed if the user's data is displayed when the user meets the criteria. In an implementation, the security awareness training system may automatically configure, or mark, criteria as protected in accordance with various government regulations (or data protection laws) and/or company policy. Marking a criteria as protected may indicate that the identity of users that fit the criteria cannot be displayed without violating the regulations. In some examples, marking a criteria as protected indicates that some or all information associated with users that fit the criteria cannot be displayed without violating the regulations.
In some implementations, groups (also referred to as smart groups when the groups are created according to membership criteria) with any of protected criteria may automatically be converted into secured groups with user anonymity enabled. In an example, a secured group involves partial or full obfuscation of user data of a smart group, thus allowing the system administrator to create intelligent workflows involving simulated phishing campaigns delivered to users and training the users who fail the phishing campaigns while staying in accordance with anonymity requirements while at the same time, protecting the security of the users personal information. In an implementation, security awareness training system may use artificial intelligence (AI) and/or machine learning (ML) techniques to determine which criteria need to be marked protected and to decide how much user data should be obfuscated while displaying the user data. Thus, the security awareness training system may allow automatic obfuscation of user data in compliance with data protection laws across several jurisdictions and company policies while still enabling the system administrator to create and run simulated phishing campaigns and training campaigns to a set of users that meet relevant criteria. FIG.2depicts an implementation of some of an architecture of an implementation of system200for using secured groups for simulated phishing campaigns to obfuscate user data for levels of security, access and/or privacy based on protected criteria classes, according to some embodiments. Secured groups may be used for the security of data of the user, including personal information of the user. Using secured groups for simulated phishing campaigns to obfuscate user data and controlling access privilege to user data and personal information improves the security of the user's data and personal information. The secured groups may provide different levels of access control and/or security to manage and/or control what portion of data may be accessed, viewed or displayed to whom. System200may include security awareness training system202, user device204, and network206enabling communication between the system components for information exchange. Network206may be an example or instance of network104, details of which are provided with reference toFIG.1Aand its accompanying description. According to some embodiments, security awareness training system202may be implemented in a variety of computing systems, such as a mainframe computer, a server, a network server, a laptop computer, a desktop computer, a notebook, a workstation, and any other computing system. In an implementation, security awareness training system202may be implemented in a server, such as server106shown inFIG.1A. In some implementations, security awareness training system202may be implemented by a device, such as computing device100shown inFIGS.1C and1D. In some embodiments, security awareness training system202may be implemented as a part of a cluster of servers. In some embodiments, security awareness training system202may be implemented across a plurality of servers, thereby, tasks performed by security awareness training system202may be performed by the plurality of servers. These tasks may be allocated among the cluster of servers by an application, a service, a daemon, a routine, or other executable logic for task allocation. 
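The conversion described above, in which a smart group containing any protected criteria is automatically converted into a secured group with user anonymity enabled, can be sketched as follows. This is a hedged, illustrative Python sketch: the Criterion and Group structures, the pseudonym scheme, and the example criterion are assumptions and do not represent the actual implementation of security awareness training system202.

```python
from dataclasses import dataclass
from typing import List
import hashlib

@dataclass
class Criterion:
    field: str
    op: str
    value: object
    protected: bool = False  # marked protected per regulation or company policy

@dataclass
class Group:
    name: str
    criteria: List[Criterion]
    secured: bool = False

def secure_if_protected(group: Group) -> Group:
    # If any criterion is marked protected, the group becomes a secured group
    # with user anonymity enabled (partial or full obfuscation on display).
    group.secured = any(c.protected for c in group.criteria)
    return group

def display_name(user_email: str, group: Group) -> str:
    if not group.secured:
        return user_email
    # Obfuscate: show only a stable pseudonym so workflows (e.g., assigning
    # remedial training) can still operate without exposing identity.
    return "user-" + hashlib.sha256(user_email.encode()).hexdigest()[:8]

g = secure_if_protected(Group("clicked-last-30-days",
                              [Criterion("phish_failures", ">=", 1, protected=True)]))
print(g.secured, display_name("alice@example.com", g))
```

A stable pseudonym, rather than complete removal of the identifier, is one way such a sketch could let campaign and training workflows operate on secured-group members without displaying personally identifiable information.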
In one or more embodiments, security awareness training system202may facilitate cybersecurity awareness training, for example via simulated phishing campaigns and security training campaigns. The simulated phishing campaigns may also be interchangeably referred to as simulated phishing attacks. A simulated phishing campaign is a technique of testing a user to see whether the user is likely to recognize a true malicious phishing attack and act appropriately upon receiving the malicious phishing attack. In some embodiments, the user may be an employee of the organization, a customer, or a vendor, or anyone associated with the organization. In some embodiments, the user may be an end-customer/consumer or a patron using the goods and/or services of the organization. In an implementation, security awareness training system202may execute the simulated phishing campaign by sending out one or more simulated phishing communications periodically or occasionally to the users and observe responses of the users to such simulated phishing communications. A simulated phishing communication may mimic a real phishing message and appear genuine to entice a user to respond/interact with the simulated phishing communication. Further, a simulated phishing communication may include links, attachments, macros, or any other simulated phishing threat that resembles a real phishing threat. In response to a user interaction with the simulated phishing communication, for example if the user clicks on a link (i.e., a simulated phishing link), the user may be provided with security awareness training. In some embodiments, security awareness training system202may provide security awareness training to users through one or more security training campaigns. In some examples, security awareness training system202may teach users various aspects of security awareness through quizzes, tests, training videos, assessments, text and image media, and any other method of training. In an example, security awareness training system202may provide security awareness training to users for advertisement-based threats, email-based threats, newsletter-based threats, and other phishing threats. In some implementations, security awareness training system202may be owned or managed or otherwise associated with an organization or any entity authorized thereof. In an implementation, security awareness training system202may be managed by a system administrator. The system administrator may be a professional (or a team of professionals) managing organizational cybersecurity aspects. The system administrator may oversee and manage security awareness training system202to ensure cybersecurity goals of the organization are met. In an example, the system administrator may manage creation and configuration of simulated phishing campaigns, whitelisting and delivery verification campaigns, cybersecurity training campaigns, and any other element within security awareness training system202. In an embodiment, the system administrator may be assigned administrator login credentials to access security awareness training system202. In an example, security awareness training system202may be a Computer Based Security Awareness Training (CBSAT) system that performs security services such as performing simulated phishing attacks on a user or a set of users of an organization as a part of security awareness training. 
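A minimal sketch of the campaign behavior described above, in which simulated phishing communications are sent to users and users who interact with a simulated phishing link are provided with security awareness training, might look as follows. The class and function names, the template name, and the click-detection placeholder are illustrative assumptions rather than an implementation of security awareness training system202.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class User:
    email: str
    trainings: List[str] = field(default_factory=list)

def send_simulated_phishing_email(user: User, template: str) -> None:
    # Placeholder: a real system would deliver an email containing a simulated phishing link.
    print(f"sending '{template}' to {user.email}")

def user_clicked_link(user: User) -> bool:
    # Placeholder for telemetry reported back when a simulated phishing link is followed.
    return user.email.endswith("@clicked.example")

def run_campaign(users: List[User], template: str = "invoice_overdue") -> List[User]:
    # Send one simulated phishing communication per user and enroll users who
    # interact with it in remedial security awareness training.
    for user in users:
        send_simulated_phishing_email(user, template)
        if user_clicked_link(user):
            user.trainings.append("remedial-security-awareness")
    return users

campaign_users = run_campaign([User("a@clicked.example"), User("b@example.com")])
print([(u.email, u.trainings) for u in campaign_users])
```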
In some implementations, security awareness training system 202 may use artificial intelligence (AI) and/or machine learning (ML) techniques to intuitively analyze security awareness requirements of a user based on risk scores, responses to simulated phishing communications, and tests associated with cybersecurity training. Based on the analysis, security awareness training system 202 may generate a simulated phishing campaign for the user or may provide cybersecurity awareness training to the user. Security awareness training system 202 may reduce work burden and support the system administrator in achieving the security goals of the organization. According to some embodiments, security awareness training system 202 may include processor 208 and memory 210. For example, processor 208 and memory 210 of security awareness training system 202 may be CPU 121 and main memory 122, respectively, as shown in FIGS. 1C and 1D. Further, security awareness training system 202 may include simulated phishing campaign manager 212. Simulated phishing campaign manager 212 may include various functionalities that may be associated with cybersecurity awareness training. In an implementation, simulated phishing campaign manager 212 may be an application or a program that manages various aspects of simulated phishing campaigns (or simulated phishing attacks). In an example, simulated phishing campaign manager 212 may manage tailoring and/or execution of a simulated phishing attack. Simulated phishing campaign manager 212 may also manage remedial trainings and other training campaigns. A simulated phishing attack may test the readiness of a user to handle phishing attacks such that malicious actions are prevented. For instance, simulated phishing campaign manager 212 may monitor and control timing of various aspects of a simulated phishing attack, including processing requests for access to simulated phishing attack results, and performing other tasks related to the management of a simulated phishing attack. Simulated phishing campaigns may include simulated phishing communications that include or mimic tricks that real phishing messages use, to try to teach users to recognize these tricks. The more genuine a simulated phishing communication appears, the more likely a user will respond to it. Further, security awareness training system 202 may include generation module 214, determination module 216, and decision module 218. In an implementation, generation module 214, determination module 216, and decision module 218 may be coupled to processor 208 and memory 210. In some embodiments, generation module 214, determination module 216, and decision module 218, amongst other modules, may include routines, programs, objects, components, data structures, etc., which may perform particular tasks or implement particular abstract data types. Generation module 214, determination module 216, and decision module 218 may also be implemented as signal processor(s), state machine(s), logic circuitries, and/or any other device or component that manipulates signals based on operational instructions. In some embodiments, generation module 214, determination module 216, and decision module 218 may be implemented in hardware, instructions executed by a processing unit, or by a combination thereof. The processing unit may comprise a computer, a processor, a state machine, a logic array or any other suitable devices capable of processing instructions.
The processing unit may be a general-purpose processor which executes instructions to cause the general-purpose processor to perform the required tasks or the processing unit may be dedicated to performing the required functions. In some embodiments, generation module214, determination module216, and decision module218may be machine-readable instructions which, when executed by a processor/processing unit, perform any of desired functionalities. In some embodiments, generation module214may be otherwise known as a group executor or a group generator. The machine-readable instructions may be stored on an electronic memory device, hard disk, optical disk or other machine-readable storage medium or non-transitory medium. In an implementation, the machine-readable instructions may also be downloaded to the storage medium via a network connection. In an example, machine-readable instructions may be stored in memory210. In some embodiments, security awareness training system202may include user data storage220, privacy rules storage222, smart group criteria storage224, and secured group criteria storage226. In an implementation, user data storage220may store metadata relating to users. In some examples, user data storage220may store personal information of the users. In some implementations, user data storage220may also store information associated with actions performed by the users with respect to simulated phishing campaigns, training campaigns, remedial trainings, and other such campaigns and trainings. According to an implementation, privacy rules storage222may store one or more privacy rules regarding how the data or information in user data storage220is to be managed. The privacy rules may be implemented based on various government regulations (or data protection laws) across several jurisdictions, company policies, system administrator preferences, and user preferences that restrict access to user data. In an example, a privacy rule may specify particular information about users to be obfuscated. The privacy rules may be referred to as security rule and/or access control rules as such rules control who has access to what data. In an implementation, smart group criteria storage224may store information related to one or more criteria to be applied to, or associated with, groups. Users may be dynamically added and removed from smart groups based on these criteria when a smart group is executed to identify one or more users matching the one or more criteria of the smart group. In an example, each of the one or more criteria may include a set of criterions. A criteria may include any attribute that could be applicable to a user. In other words, a criteria may be used as a basis for identifying users. Examples of the one or more criteria include, but are not limited to, user field criteria, user date criteria, phish event criteria, and training criteria. In an example, the user field criteria may be used to filter users based on attributes related to the users such as a first name, a last name, a location, a risk score, a phish prone percentage, or any other metadata. A phish prone percentage of a user is a metric scoring a number of simulated phishing attacks the user has failed, in some examples as a percentage of the total number of simulated phishing attacks the user has received, and a risk score of a user is a metric scoring security awareness risk the user poses to the organization. 
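As a minimal illustration of the phish prone percentage metric described above, the following Python sketch computes it as the percentage of simulated phishing attacks failed out of those received. The function name, arguments, and the choice of Python are assumptions made for illustration and do not reflect a specific implementation of security awareness training system 202.

    def phish_prone_percentage(failed_attacks, received_attacks):
        # Percentage of received simulated phishing attacks that the user failed.
        # Returns 0.0 when the user has not yet received any simulated phishing attacks.
        if received_attacks == 0:
            return 0.0
        return 100.0 * failed_attacks / received_attacks

    # Example: a user who failed 3 of 12 simulated phishing attacks has a
    # phish prone percentage of 25.0.
    print(phish_prone_percentage(3, 12))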
In an example, user date criteria may be used to filter users by user-specific dates such as dates when the users were added to security awareness training system 202, dates when the users joined the organization, dates when the users logged in to security awareness training system 202, or any other custom dates regarding or associated with the users. Further, in an example, the phish event criteria may be used to filter users based on their actions related to simulated phishing campaigns or simulated phishing tests. For example, the phish event criteria may be used to filter users who have failed a simulated phishing test in one or more ways, users who have received a simulated phishing test and did not fail, and any other action related to simulated phishing campaigns or simulated phishing tests. In an example, the phish event criteria may be used to filter the users based on whether or not the users performed any type of interaction or a specific type of action with a simulated phishing communication, a severity of the interaction performed by the users, and/or a number of interactions of the users with the simulated phishing communication. In an example, training criteria may be used to filter users based on their involvement in training campaigns. For example, training criteria may include information such as whether users are enrolled in a training, whether the users have started the training, an amount of training the users have completed, whether or not the users have completed the training, whether or not the users interact with a simulated phishing communication when enrolled in a training, after starting the training, or after completing the training, a type of interaction that users have with the simulated phishing communication when enrolled in the training, after starting the training, or after completing the training, whether or not an assessment has been taken by the users, whether or not the users have scored within a certain threshold score on the assessment, whether or not certain assessment topics have been taken, or any other criteria associated with demographics, characteristics or behavior of the users with regard to security awareness. In some implementations, the one or more criteria may be stored in smart group criteria storage 224 and may be set, created, or defined by the system administrator, by cybersecurity experts of the organization, or by personnel contracted by the organization. In an example, the one or more criteria may be learned through results of simulated phishing campaigns and training campaigns. In some examples, the criteria could also be set by Artificial Intelligence (AI) techniques or Machine Learning (ML) techniques. In some embodiments, secured group criteria storage 226 may store information about a plurality of protected criteria classes. In an example, each of the plurality of protected criteria classes may specify if information of users is to be obfuscated for display and how much of the information of the users is to be obfuscated for display. In an example, the plurality of protected criteria classes may include Criteria Class A, Criteria Class B, Criteria Class C, and Criteria Class D. The Criteria Class D may further include Criteria Class D1 and Criteria Class D2. In an example, Criteria Class A may be criteria that do not create a privacy rule violation. For example, users associated with criteria of Criteria Class A can be personally identified and information about the users can be displayed to anyone.
In an example, Criteria Class B may be criteria that always create a privacy rule violation. For example, each user associated with criteria of Criteria Class B must have his or her personal identity protected to be compliant with privacy rules. In other words, information about the users associated with criteria of Criteria Class B must always be obfuscated. In an example, Criteria Class C may include criteria that create a privacy rule violation if combined with other criteria. In an example, information about the users associated with criteria of Criteria Class C must be obfuscated if the users meet one or more additional criteria that create a privacy rule violation if combined. In a further example, Criteria Class D may include a criteria for which an obfuscation decision is dependent on an outcome of a query. In other words, Criteria Class D may include a criteria that examines user data of the users returned from the query to look for compliance with privacy rules. In an example, Criteria Class D1 specifies that only users meeting the criteria of Criteria Class D1 are to have their information obfuscated for display. In an example, if an identifier for a criteria is X, where 1<X<=M and M is a total number of criteria, then if criteria X, when run on the entire user population U, yields a user Y, where 1<Y<=N and N is the total number of users in the organization, then information of user Y must be obfuscated. Further, in an example, Criteria Class D2 specifies that if a specified user meets the criteria of Criteria Class D2, then all of the users meeting the criteria are to have at least a portion of their information obfuscated for display. Accordingly, each of the protected criteria classes provides a different level of privacy to the users. Information related to the users of the organization stored in user data storage 220, information stored in privacy rules storage 222, information related to the one or more criteria to be applied to, or associated with, groups stored in smart group criteria storage 224, and information related to the plurality of protected criteria classes stored in secured group criteria storage 226 may be periodically or dynamically updated as required. Referring again to FIG. 2, in some embodiments, user device 204 may be any device used by a user. The user may be an employee of an organization or any entity. According to some embodiments, user device 204 may include processor 228 and memory 230. In an example, processor 228 and memory 230 of user device 204 may be CPU 121 and main memory 122, respectively, as shown in FIGS. 1C and 1D. User device 204 may also include user interface 232 such as a keyboard, a mouse, a touch screen, a haptic sensor, a voice-based input unit, or any other appropriate user interface. It shall be appreciated that such components of user device 204 may correspond to similar components of computing device 100 in FIGS. 1C and 1D, such as keyboard 126, pointing device 127, I/O devices 130a-n and display devices 124a-n. User device 204 may also include display 234, such as a screen, a monitor connected to the device in any manner, or any other appropriate display. In an implementation, user device 204 may display received content (for example, simulated phishing communications) for the user using display 234 and is able to accept user interaction via user interface 232 responsive to the displayed content. In some embodiments, user device 204 may include email client 236. In one example implementation, email client 236 may be an application installed on user device 204.
In another example implementation, email client 236 may be an application that can be accessed over network 206 through a browser without requiring installation on user device 204. In an implementation, email client 236 may be any application capable of composing, sending, receiving, and reading email messages. For example, email client 236 may be an instance of an application, such as Microsoft Outlook™ application, IBM® Lotus Notes® application, Apple® Mail application, Gmail® application, or any other known or custom email application. In an implementation, email client 236 may be configured to receive simulated phishing communications from security awareness training system 202. In an example, a user of user device 204 may be mandated by the organization to download and install email client 236. In another example, email client 236 may be provided by the organization as default. In some examples, a user of user device 204 may select, purchase and/or download email client 236 through, for example, an application distribution platform. The term "application" as used herein may refer to one or more applications, services, routines, or other executable logic or instructions. In operation, as a part of cybersecurity awareness training, security awareness training system 202 may be configured to establish groups based on one or more criteria that are used to identify users, at a time when the groups are executed. In an example, security awareness training system 202 may establish the groups upon receiving a request from the system administrator. The system administrator may want to create a group at any time, for example before initiating a simulated phishing campaign or when a computer-based security awareness training program is being organized. In such situations, the system administrator may send a request to security awareness training system 202 for creating or establishing the group. According to an implementation, generation module 214 may be configured to receive a request from the system administrator (or any other individual) for establishing a group configured to resolve members of the group based on a plurality of users matching one or more criteria of the group at a time of execution of the group. In response to receiving the request, generation module 214 may prompt the system administrator to provide information related to the one or more criteria based on which the group is to be established. Generation module 214 may then receive an input from the system administrator on the one or more criteria to be used as a basis for identifying the one or more users for membership of the group. Examples of the one or more criteria include, but are not limited to, user field criteria, user date criteria, phish event criteria, and training criteria. As described above, each of the one or more criteria may include a set of criterions. In examples, some of these criteria may lead to exposure of personally identifiable information (also referred to as identifying information) of the users that is protected under various data protection laws across one or more jurisdictions and/or workplace regulations. In some embodiments, upon receiving input from the system administrator on the one or more criteria, generation module 214 may generate a query statement in order to run a query on users to determine the users that meet the one or more criteria, so that those users are added or included in the group, or membership in the group is "resolved" to identify the users of the group at the time or instance of running the query.
Resolving members of the group includes identifying the users who match the criteria of the query at the time the query is executed by the group generator (e.g., generation module 214). In an example, when the query is run, the query statement is an expression of search bounds. The query statement may be generated using criterion options. In an example, criterion options may be combined using Boolean expressions to generate the query statement. Examples of criterion options associated with a user include a first name, a last name, a location, a manager's or subordinate's identity, email aliases, a job title, a division/department, an industry, a phone number, a phish prone percentage, a risk score, a language, a name of the organization, and any other information related to the user or attribute of the user. In an example, a query statement may be generated using condition, comparison, values, count, time frame, and event criterion options, in some examples in combination with user field and/or phish event criterion options. In an example, the criterion options may be classified in one of a plurality of protected criteria classes, and therefore, the criterion options may or may not comply with government regulations and/or company policies. For example, the "location" user field criterion option with a value "Brazil" may be classified in protected criteria class B. In an example, user field criterion options may include any attribute related to a user such as a name, a location, a phish prone percentage, or any other metadata. The condition criterion option may include an inclusive condition or an exclusive condition, such as "must" or "must not". In an example, the comparison criterion option may include the following options: equal, contains, does not contain, starts with, ends with, greater than, or less than. The value criterion option may include a specific value for a criterion. In an example, for criterion "name", the value may be "Charles" and for criterion "location", the value may be "southwest". In another example, for criterion "phish prone percentage", the value may be "20". Further, the time frame criterion option may include a time range, a start time, an end time, an earliest time, or a latest time that is part of the criteria being queried. In an example, an event criterion option may include any events in connection with users associated with simulated phishing campaigns, remedial trainings, and other training campaigns. In an example, an event may be any type of activity, interaction, behavior, notification, alert or communication that may occur by or in association with a user and/or a simulated phishing campaign and/or a training campaign. For example, an event may be a user clicking on an email or opening an attachment, a user receiving or opening an email, user training or assessments, a user passing or failing a simulated phishing attack, or a last date when a user logged in to security awareness training system 202. The count criterion option may include a number of times an event happens or does not happen. According to an example, the query statement generated by generation module 214 may include "The first name must not contain Charles". In another example, the query statement may include "The phish prone percentage must be greater than 50".
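For illustration only, the following Python sketch shows one way criterion options of the kind described above might be assembled into a query statement and combined with a Boolean operator. The data structures, function names, and rendering format are assumptions and are not taken from a specific implementation of generation module 214.

    # A criterion pairs a user field with a condition ("must"/"must not"),
    # a comparison, and a value, mirroring the criterion options described above.
    def render_criterion(field, condition, comparison, value):
        return f"The {field} {condition} {comparison} {value}"

    def build_query_statement(criteria, operator="AND"):
        # Combine individual criterion expressions with a Boolean operator.
        return f" {operator} ".join(render_criterion(*c) for c in criteria)

    statement = build_query_statement([
        ("first name", "must not", "contain", "Charles"),
        ("phish prone percentage", "must be", "greater than", 50),
    ])
    print(statement)
    # The first name must not contain Charles AND The phish prone percentage must be greater than 50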
In yet another example, the query statement may include "User must not have enabled a macro more than 1 time." In an implementation, when more than one criteria are provided by the system administrator, generation module 214 may combine the criteria using logical operators, such as "AND", "OR", and "NOT". In an example, the criteria provided by the system administrator may include "users who have not been trained" AND "users who have been phished in the past X months". Further, the criteria are search bounds that determine the results of the query. As may be understood, query statements are built from one or more criteria and applied to user data. According to an embodiment, generation module 214 may run a query using the query statement for users that meet the one or more criteria of the group provided by the system administrator. In an implementation, generation module 214 may run the query against user data storage 220, which includes attributes and information about the users. In an example, when the query is run, a plurality of users is returned who fit or meet the one or more criteria of the group. Thus, as a result of the query, generation module 214 may identify the plurality of users meeting or matching the one or more criteria of the group. Generation module 214 may then establish the group by adding the plurality of users to the group, or by overwriting (replacing) the group membership with the plurality of users meeting or matching the one or more criteria of the group, or by keeping as users of the group the users that were already members of the group that also meet or match the one or more criteria of the group. In an example, generation module 214 may automatically add the plurality of users to the group upon detecting an event in connection with the plurality of users. In an example, generation module 214 may resolve user A, user B, and user C as members of a group, including them in the group because the users match the one or more criteria of the group. Any members previously matching the criteria but no longer matching the criteria are removed from the group. For example, user B ceases to match the one or more criteria of the group when the group is executed, and at that point, generation module 214 resolves only user A and user C as members of the group. As such, membership in the group is dynamically determined or resolved (e.g., the users of the group are identified) at the time of the query or execution of the group. In an example, once the plurality of users is identified by the query and added to the group, the plurality of users that are in the group may be enrolled in a campaign (for example, a simulated phishing campaign). In an implementation, security awareness training system 202 may carry out the campaign, and when the campaign is completed, the plurality of users may be automatically added to one or more other groups based on their actions. In an example, the query may be run again to identify more users on which to run the campaign. Further, security awareness training system 202 may track remedial training completed by a user, and responsive to the completion of the remedial training, the user may be added to another group and/or the user may be removed from an existing group.
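As a minimal sketch of the dynamic membership resolution described above, the Python example below filters an in-memory collection of users against a set of criteria at execution time; each execution overwrites the previous membership with the current matches. The field names, the in-memory "user data storage", and the predicate representation are assumptions made for illustration.

    # Hypothetical sketch of resolving smart group membership at execution time.
    users = [
        {"name": "Alice", "department": "accounting", "phish_prone_percentage": 60},
        {"name": "Bob", "department": "accounting", "phish_prone_percentage": 20},
        {"name": "Carol", "department": "sales", "phish_prone_percentage": 75},
    ]

    def resolve_group(user_data, criteria):
        # Membership is determined dynamically: only users matching every
        # criterion at the time of execution become members of the group.
        return [u for u in user_data if all(rule(u) for rule in criteria)]

    criteria = [
        lambda u: u["department"] == "accounting",
        lambda u: u["phish_prone_percentage"] > 50,
    ]

    # Each execution of the group replaces the previous membership with the current matches.
    group_members = resolve_group(users, criteria)
    print([u["name"] for u in group_members])   # ['Alice']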
Although it has been described that generation module 214 establishes the group based on receiving the input from the system administrator on the one or more criteria, in some embodiments, generation module 214 may automatically identify the one or more criteria for establishing the group, for example, using AI and/or ML techniques. In an example, a group may be established having as criteria users from an accounting department of the organization, such that security awareness training system 202 may send a simulated phishing campaign (named "First Campaign") to the users in the accounting department. Once the criteria of the group are established and the group has been executed, security awareness training system 202 may send the "First Campaign" to all users in the group. Thereafter, a new group may be established with criteria "users who have failed 'First Campaign'", to be used to provide remedial training to users who failed the "First Campaign". Users who got phished with the "First Campaign" (e.g., users that failed one or more aspects of the simulated phishing campaign) would be added to the new group when the group is executed. Further, the users of the new group may be removed from the new group and added to another group upon completion of remedial training, the next time the group is executed. According to an implementation, after the group is established, generation module 214 may perform a query, which is also referred to as "executing the group". In one example, if the group is going to be used for a simulated phishing campaign, the query is run at the time when security awareness training system 202 needs to perform an action on the users, such as sending the users a simulated phishing communication. In some examples, the query is run at a predetermined time. For example, the query may be run at a fixed point in time relative to the start of a simulated phishing campaign. In some examples, the group may be executed periodically. In some embodiments, security awareness training system 202 may associate users with the group until such time as the query is run again. In some embodiments, security awareness training system 202 may associate users with the group until the group is used for the function which triggered generation module 214 to run the query. For example, if the query was triggered due to a request to run a simulated phishing campaign, the users that are identified as a result of the query are associated as members of the group that will be sent the simulated phishing campaign until after the simulated phishing campaign has been completed. In some embodiments, security awareness training system 202 may associate the users with the group for a period of time. In some embodiments, security awareness training system 202 may associate the users with the group until another query is run by security awareness training system 202, for example a query for which a user meets the criteria of the query. In some embodiments, determination module 216 may determine that at least one criteria of the one or more criteria of the group has been configured as a protected criteria class of a plurality of protected criteria classes. In an example, each of the plurality of protected criteria classes may specify if information of users is to be obfuscated for display and how much of the information of the users is to be obfuscated for display. As described earlier, the plurality of protected criteria classes may include Criteria Class A, Criteria Class B, Criteria Class C, and Criteria Class D.
The Criteria Class D may further include Criteria Class D1 and Criteria Class D2. In an implementation, determination module216may determine whether the at least one criteria of the one or more criteria of the group has been configured as part of a protected criteria class based on the one or more privacy rules stored in privacy rules storage222. In an implementation, determination module216may determine whether the criteria in the query statement, or the user data that resulted from the criteria or the query statement, or any combination of these require that some or all of the results are protected. In an implementation, determination module216may be configured to receive an indication to configure or mark the at least one criteria of the one or more criteria of the group as protected. In an example, determination module216may receive inputs for example, from the system administrator or from individual users, regarding the at least one criteria being configured as a protected criteria class. In some embodiments, determination module216may be configured to prompt the system administrator with a suggestion of the at least one criteria being configured as a protected criteria class. In some implementations, determination module216may use AI and/or ML techniques to determine that the at least one criteria has been configured or should be configured as a protected criteria class. According to an embodiment, responsive to the determination that at least one criteria of the one or more criteria of the group has been configured as a protected criteria class of the plurality of protected criteria classes, determination module216may identify the group as a secured group. In some implementations, determination module216may be configured to receive an indication from the system administrator to make the group a secured group. In an embodiment, decision module218may execute the group to identify one or more users of the plurality of users as members of the group based at least on the one or more users matching the at least one criteria of the secured group at the time of execution of the group. Upon identification of the one or more users, decision module218may obfuscate information of the one or more users resulting from the execution of the secured group for display, in accordance with the one or more protected criteria classes of the at least one criteria of the secured group. In an implementation, decision module218may be configured to obfuscate some or all identifying information of the one or more users of the secured group. In some implementations, decision module218may be configured to obfuscate metadata of the one or more users. In an implementation, decision module218may use AI and/or ML techniques to determine how much information of the one or more users is to be obfuscated. According to an embodiment, decision module218may query and assess user data storage220to determine if information of a user should be obfuscated in case the query statement returns the user. For example, if the user is a Chief Financial Officer (CFO) in the organization, and there is a government regulation that information of a CFO of an organization is to be considered protected, then in some embodiments, a query statement crafted to query users with the title CFO or a query statement that returns the user who is currently the CFO of the organization will display obfuscated data about the user. 
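As a simplified illustration of the determination described above, the following Python sketch checks whether any criteria of a group are configured as a protected criteria class and, if so, treats the group as a secured group. The enum values, the mapping of fields to classes, and the function shape are assumptions for illustration only and do not represent the actual logic of determination module 216.

    from enum import Enum

    # Illustrative protected criteria classes mirroring Classes A-D described above.
    class CriteriaClass(Enum):
        A = "no privacy rule violation"
        B = "always a privacy rule violation"
        C = "violation in combination with other criteria"
        D1 = "obfuscate only users meeting the criteria"
        D2 = "obfuscate all matching users if any user meets the criteria"

    # Assumed configuration: which criteria fields have been marked protected.
    protected_configuration = {
        "location": CriteriaClass.B,
        "first name": CriteriaClass.A,
    }

    def is_secured_group(group_criteria_fields):
        # A group becomes a secured group when at least one of its criteria is
        # configured as a protected criteria class other than Class A.
        return any(
            protected_configuration.get(field, CriteriaClass.A) is not CriteriaClass.A
            for field in group_criteria_fields
        )

    print(is_secured_group(["first name", "location"]))  # True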
Further, for a query statement "The first name must not contain Charles", a group (or a smart group) will display any user who does not have "Charles" in any part of the first name field of his or her metadata, and a secured group will display all users who do not have "Charles" in any part of the first name field of their metadata only if this criterion is determined not to be protected. In an implementation, decision module 218 may determine that a combination of the criteria forming a query statement for the secured group requires that at least a portion of the data from the results of the query be protected or obfuscated for display. In an example, decision module 218 may obfuscate the users and their metadata in line with the query statement. The query statement may consider one or more privacy rules stored in privacy rules storage 222. As described earlier, the one or more privacy rules may be defined based on company policies, government regulations, and/or preferences of the system administrator. As may be understood, the query statement includes the criteria configured as a protected criteria class. In some implementations, decision module 218 may be configured to determine that at least a portion of user data resulting from a query of a secured group is to be protected or obfuscated for display. An example of a generalized Boolean expression that may be used to evaluate a query statement including criteria configured as a protected criteria class of the plurality of protected criteria classes is provided below. The users for which decision module 218 obfuscates some or all of the user data may be determined in some examples as follows.
Let the Yth user in the organization be represented by UY.
Let the total number of users U in the organization be N. Therefore, all the users can be represented by U1 . . . UN.
Let the Xth criterion be represented by CX.
Let the total number of criteria be M. Therefore, all criteria can be represented by C1 . . . CM.
The protected criteria classes are as follows:
Criteria Class A—Criteria that do not create a privacy rule violation. In other words, any users can be personally identified with the criteria and there is no problem with revealing the identity of the user.
Criteria Class B—Criteria that create a privacy rule violation no matter what. In other words, every user that the criteria may potentially identify must have their personal identity protected to be compliant with privacy rules.
Criteria Class C—Criteria that create a privacy rule violation in combination with other criteria, where the knowledge of which users cannot be shown, and which ones can be shown, is known from the outset. The group of users that criterion X creates no issue for (i.e., their information can be shown in the output) is CX_ACK.
Criteria Class D—Criteria for which the obfuscation decision is dependent on the outcome of the query. In other words, criteria that examine the user data from the users returned to look for compliance with privacy rules.
Criteria Class D1—Each user can have one or more criteria for which, if the user meets the criterion, the user's information must be hidden.
If the identifier for the criterion is X, where 1<X<=M, then if criterion X when run on the entire user population U yields User Y (CX(U)=UY), where 1<Y<N, then User Y's data must be obfuscated.
Criteria Class D2—Each user can have one or more criterion for which, if that specific user meets the criterion, the information of all users who meet the criterion must be obfuscated.
Let the group of users displayed (not obfuscated) be represented by V.
Let the group of users that are fully or partially obfuscated for display be represented by O.
Then:
Initialization → V=U (all users are allowed)
Initialization → For i=1 . . . M, ResultsM=0 (for each criterion, initialize "Results" to 0)
For i=1 . . . M, ToggleM=1 (for each criterion, initialize "Toggle" to 1)
Function Definition: CA(UB) means user B is run through criterion A. If user B meets criterion A, then CA(UB)=UB; otherwise, if user B does not meet criterion A, CA(UB)=0.
To determine which users to obfuscate—

    FOR A=1 . . . M                                  ← (For each of the criteria, the users need to pass through this loop)
        IF (CA is a member of Criteria Class B)      ← (If a criterion is a member of Criteria Class B, all user data must be obfuscated)
            V=0                                      ← (users to be displayed is NULL, regardless of any other criterion)
            GO TO END FOR                            ← (exit loop)
        IF (CA is a member of Criteria Class C)      ← (If a criterion is a member of Criteria Class C, some user data may be obfuscated and some not)
            V=V AND CA_ACK                           ← (The group of users displayed is the lesser of the currently known display group and the users for which criterion A allows some or all of their user data to be shown)
        IF (CA is a member of Criteria Class D1), then    ← (If a criterion is a member of Criteria Class D1, only obfuscate user data of the users that meet the criterion)
            FOR B=1 . . . N                          ← (For each of the users)
                ResultsA=ResultsA + CA(UB)           ← (run each user through the current criterion; if the user meets the current criterion, the user's identifier is added to the "Results" for the criterion. These users will not be displayed)
            END FOR
            V=V − (ResultsA)                         ← (The users that will not be displayed are removed from the group of users to be displayed)
        IF (CA is a member of Criteria Class D2), then    ← (If a criterion is a member of Criteria Class D2, then if at least one user meets the criterion, the user data for all users that meet the criterion are not displayed)
            FOR B=1 . . . N                          ← (For each of the users)
                IF CA(UB)=UB                         ← (run each user through the current criterion; if the user meets the criterion, then the flag for that user is set to 0. All flags are initialized to 1)
                    ToggleB=0
                ToggleM=ToggleM AND ToggleB          ← (AND together all of the flags, such that if even one user's flag is zero, the composite flag is zero)
            END FOR
            V=V AND ToggleM                          ← (If the criterion is a member of D2 and at least one user meets the criterion, then the group of users to be displayed is null)
    END FOR
    O = NOT (V)                                      ← (The users to be obfuscated are those that are not displayed)

In an example, decision module 218 may completely obfuscate the user data. In some examples, decision module 218 may partially obfuscate the user data. Further, in some examples, decision module 218 may display only a number of users in the secured group. In some examples, decision module 218 may display an equivalency of the number of users in a secured group (for example, less than 10 users, or more than 50% of all users, or 1 out of 3 users). In some examples, decision module 218 may display a percentage of users in the organization that are in the secured group. In an example, decision module 218 may display only a subset of users.
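The listing above can also be expressed in executable form. The following Python sketch follows the same control flow (Class B empties the display set, Class C intersects with the allowed set, Class D1 removes the matching users, and Class D2 empties the display set when any user matches); the data model, field names, and function names are assumptions made for illustration rather than the actual implementation.

    # Minimal sketch of the obfuscation decision above. Each criterion is a dict
    # with a "cls" (criteria class), a "match" predicate, and, for Class C, the
    # set of users it allows to be shown (the CX_ACK set). All names are assumed.
    def users_to_obfuscate(all_users, criteria):
        display = set(all_users)                      # V = U (all users are allowed)
        for c in criteria:                            # FOR A = 1 ... M
            if c["cls"] == "B":                       # Class B: obfuscate everything
                display = set()
                break
            if c["cls"] == "C":                       # Class C: keep only the allowed users
                display &= c["allowed"]
            if c["cls"] == "D1":                      # Class D1: remove users meeting the criterion
                display -= {u for u in all_users if c["match"](u)}
            if c["cls"] == "D2":                      # Class D2: if anyone matches, display no one
                if any(c["match"](u) for u in all_users):
                    display = set()
        return set(all_users) - display               # O = NOT(V)

    users = {"user_a", "user_b", "user_c"}
    criteria = [
        {"cls": "C", "allowed": {"user_a", "user_b"}},
        {"cls": "D1", "match": lambda u: u == "user_b"},
    ]
    print(sorted(users_to_obfuscate(users, criteria)))   # ['user_b', 'user_c']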
In some examples, decision module218may not display any information about the users or how many users are in the secured group. In an example, decision module218may display only partial information about individual users. Accordingly, different amounts of user data are visible. According to an embodiment, if the protected criteria class is removed from the query, the system administrator may be able to see the users and their associated information again. FIG.3depicts an exemplary graphical user interface300that a system administrator sees after querying users in a secured group, according to some embodiments. As can be seen inFIG.3, the secured group is established based on training criteria “Users must not have completed in Compliance Series: PCI DSS for Merchants” and user field criteria “The group name must be equal to management”. In an example, the criteria of the secured group may be configured as a protected criteria class “Criteria Class A”. The Criteria Class A may be a criteria that do not create a privacy rule violation. Accordingly, users associated with the Criteria Class A can be personally identified and information about the users can be displayed to the system administrator, and therefore to other people. Thus, when the system administrator queries the users in the secured group, none of the user data is obfuscated, and all of the user data is visible, along with the exact number of users in the secured group. As can be seen inFIG.3, when the system administrator queries the users in the secured group, the system administrator can view a total number of users i.e., 541 users in the secured group, represented by “302”. The system administrator can also view user data304. These are the group of users who meet the criteria “Users must not have completed in Compliance Series: PCI DSS for Merchants” and user field criteria “The group name must be equal to management”. FIG.4depicts an exemplary graphical user interface400that the system administrator sees after querying users in a secured group, according to some embodiments. As can be seen inFIG.4, the secured group is established based on user field criteria “The first name must be equal to Chris” and phish event criteria “User must have clicked on a phishing email once in the last 6 months”. In an example, the criteria of the secured group may be configured as a protected criteria class “Criteria Class B”. The Criteria Class B may be a criteria that always create a privacy rule violation. For example, each user associated with the Criteria Class B must have his or her personal identity protected to be compliant with privacy rules. In other words, information about the users associated with the Criteria Class B must always be obfuscated. Thus, when the system administrator queries the users in the secured group, all of the user data is obfuscated, and none of the user data is visible, including a number of users in the secured group. For example, when the system administrator queries about the users in the secured group, “0 users” (represented by “402”) is displayed to the system administrator. As shown inFIG.4, a message “No users matched your search criteria” is shown to the system administrator, represented by “404”. In some embodiments, the system administrator could be informed that there are one or more users that met the criteria of the secured group, however the details of the results of the query are protected and cannot be shown. 
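As a small illustration of the two displays described for FIG. 3 and FIG. 4, the sketch below renders a result summary that either reports the exact user count and user data or suppresses all results. The message strings and function shape are assumptions, not the actual interface code of the system.

    def render_query_summary(matching_users, fully_obfuscated):
        # When the protected criteria class requires full obfuscation (as with
        # Criteria Class B), neither the users nor their count are revealed.
        if fully_obfuscated:
            return "0 users\nNo users matched your search criteria"
        # Otherwise (as with Criteria Class A) the exact count and user data are shown.
        lines = [f"{len(matching_users)} users"]
        lines += [u["email"] for u in matching_users]
        return "\n".join(lines)

    print(render_query_summary([{"email": "user1@example.com"}], fully_obfuscated=False))
    print(render_query_summary([{"email": "user1@example.com"}], fully_obfuscated=True))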
FIG.5depicts a flowchart500for using secured groups for simulated phishing campaigns to obfuscate user data for levels of privacy based on protected criteria classes, according to some embodiments. Step502includes establishing a group configured to resolve members of the group based on a plurality of users matching one or more criteria of the group at a time of execution of the group. Examples of the one or more criteria include, but are not limited to, user field criteria, user date criteria, phish event criteria, and training criteria. In an implementation, generation module214may establish the group configured to resolve members of the group based on the plurality of users matching one or more criteria of the group at the time of execution of the group. Step504includes determining that at least one criteria of the one or more criteria of the group has been configured as a protected criteria class of a plurality of protected criteria classes. In an example, each of the plurality of protected criteria classes may specify if information of users is to be obfuscated for display and how much of the information of the users is to be obfuscated for display. In an example, the plurality of protected criteria classes may include Criteria Class A, Criteria Class B, Criteria Class C, and Criteria Class D. The Criteria Class D may further include Criteria Class D1 and Criteria Class D2. In an example, at least one protected criteria class of the plurality of protected criteria classes is dependent on an outcome of a query of the plurality of users matching the criteria of a secured group. Further, in an example, at least one protected criteria class of the plurality of protected criteria classes specifies that if a specified user meets the criteria then all of the users meeting the criteria are to have at least a portion of their information obfuscated for display. Also, in an example, at least one protected criteria class of the plurality of protected criteria classes specifies that only the users meeting the criteria are to have their information obfuscated for display. In an implementation, determination module216may determine that at least one criteria of the one or more criteria of the group has been configured as the protected criteria class of the plurality of protected criteria classes. According to an implementation, determination module216may receive an indication to mark the at least one criteria as protected. Step506includes identifying, responsive to the determination, the group as a secured group. In an implementation, determination module216may identify the group as a secured group in response to the determination that the at least one criteria of the one or more criteria of the group has been configured as the protected criteria class of the plurality of protected criteria classes. In some implementations, determination module216may determine that a combination of the criteria forming a query statement for the secured group require at least a portion of data from results of the query are to be protected or obfuscated for display. Further, determination module216may determine that at least a portion of user data resulting from a query of the secured group are to be protected or obfuscated for display. Step508includes executing the group to identify one or more users of the plurality of users as members of the group based at least on the one or more users matching the at least one criteria of the secured group at the time of execution of the group. 
In an implementation, decision module 218 may execute the group to identify the one or more users of the plurality of users as members of the group based at least on the one or more users matching the at least one criteria of the secured group at the time of execution of the group. Step 510 includes obfuscating, for display, information of the one or more users resulting from the execution of the secured group in accordance with the protected criteria class. In an example, identifying information of the one or more users meeting the criteria of the protected criteria class of the secured group at the time of execution of the secured group may be obfuscated. In some examples, metadata of the one or more users meeting the criteria of the protected criteria class of the secured group at the time of execution of the secured group may be obfuscated. According to an implementation, decision module 218 may obfuscate, for display, information of the one or more users resulting from the execution of the secured group in accordance with the protected criteria class. While various embodiments of the methods and systems have been described, these embodiments are illustrative and in no way limit the scope of the described methods or systems. Those having skill in the relevant art can effect changes to form and details of the described methods and systems without departing from the broadest scope of the described methods and systems. Thus, the scope of the methods and systems described herein should not be limited by any of the illustrative embodiments and should be defined in accordance with the accompanying claims and their equivalents.
11943254 | It will be recognized that some or all of the figures are schematic representations for purposes of illustration. The figures are provided for the purpose of illustrating one or more embodiments with the explicit understanding that they will not be used to limit the scope or the meaning of the claims. DETAILED DESCRIPTION Referring generally to the FIGURES, systems and methods relate generally to implementing a cybersecurity framework. In some arrangements, the system represents an embodiment of a security architecture that employs modeling to furnish an incident response management platform. Many existing cybersecurity systems and architectures face several challenges that limit their effectiveness in managing and responding to cyber threats. One of these problems is a lack of integrated incident response capabilities. In particular, many existing systems operate in silos, with separate tools for threat detection, response, and recovery. This lack of integration can lead to delays in response times, miscommunication between teams, and a lack of overall visibility into the security posture of an organization. Another problem is the lack of streamlined processes for engaging with third-party vendors for incident response services. Organizations often have to navigate through complex procurement processes during a cyber incident, losing crucial time that could be used to mitigate the incident. Additionally, organizations often struggle to accurately assess their readiness to respond to incidents. They lack clear visibility into their own capabilities and limitations, and often don't have an effective way to communicate this information to potential response providers. Yet another problem with existing cybersecurity systems and architectures is the inability to dynamically adapt to changes in the security landscape. Many existing systems employ static defenses that are unable to adjust to new threats as they arise. This leads to vulnerabilities as attackers continually evolve their strategies and methods. Moreover, static systems also fail to account for changes in the organization's own infrastructure and operations, such as the adoption of new technologies or changes in business processes, which can introduce new potential points of attack. This inability to dynamically adapt hampers the organization's ability to maintain a robust security posture, leaving them exposed to a constantly evolving threat landscape. Accordingly, the ability to prevent cyber threats, such as hacking activities, data breaches, and cyberattacks, provides entities and users (e.g., provider, institution, individual, and company) improved cybersecurity by creating a customized cybersecurity framework tailored to their specific needs. This framework not only helps entities understand their current cybersecurity vulnerabilities but also connects them with appropriate vendors offering targeted protection plans. The customized framework enhances the protection of sensitive data, such as medical records and financial information, proprietary business data, and also helps safeguard the reputation of the entity. In addition to improving protection, the tailored cybersecurity framework also has the potential to reduce financial costs associated with data breaches, such as falling stock prices, costs of forensic investigations, and legal fees. 
The detailed design and execution of cybersecurity models for detecting and addressing vulnerabilities enable dynamic monitoring of various relationships, such as network, hardware, device, and financial relationships, between entities and vendors. The unique approach of providing a customized cybersecurity framework allows for significant improvements in cybersecurity by improving network security, infrastructure security, technology security, and data security. With vendors actively monitoring entities, immediate response to potential threats can be facilitated, thus further enhancing the overall security posture of the entity. This approach not only mitigates existing vulnerabilities but also anticipates potential threats, offering an adaptive and proactive solution to cybersecurity. Furthermore, by utilizing a customized cybersecurity framework for entities and users, it is possible to understand existing vulnerabilities, link them to specific assets, and provide targeted protection strategies, offering the technical benefit of generating personalized remediation recommendations and avoiding and preventing successful hacking activities, cyberattacks, data breaches, and other detrimental cyber-incidents. As described herein, the systems and methods of the present disclosure may facilitate the connection of entities to suitable vendors, offering security plans tailored to their specific vulnerabilities and needs. An additional benefit from the implementation of a customized cybersecurity framework is the ability to streamline the process of identifying and addressing vulnerabilities. This optimization of resources not only enables rapid risk reduction but also allows for the ongoing monitoring of the entity's cybersecurity status by the vendor, ensuring continuous protection and immediate response to potential threats. The implementation of such a framework not only allows entities to understand and address their current vulnerabilities but also empowers them to make informed decisions about their cybersecurity strategy. This includes selecting from a range of vendor plans and services, activating these plans as needed, and having the peace of mind that their cybersecurity is being actively monitored and managed by professionals. Additionally, the present disclosure provides a technical enhancement of dynamic cybersecurity architecture comprehension. For instance, an entity's cybersecurity vulnerabilities can be automatically understood and mapped within the process of implementing a customized cybersecurity framework, eliminating the need for maintaining separate inventories of network weaknesses, infrastructure vulnerabilities, operating systems susceptibilities, etc. In some embodiments, the implementation of this customized cybersecurity framework includes identifying potential security gaps associated with a particular entity or device identifier, such as a domain identifier (e.g., a top-level domain (TLD) identifier, a subdomain identifier, or a URL string pointing to a particular directory), an IP address, a subnet, etc. As a result, rather than separately assessing each subclass of vulnerabilities, a computing system can utilize a unified view into a computing environment of a particular target entity (e.g., via the readiness system of the security architecture) and centrally manage the understanding of different types of vulnerabilities and associated potential security threats. For instance, by initiating a comprehensive vulnerability assessment in a single operation. 
These vulnerability identification operations, described further herein, may comprise computer-executed operations to discern the entity's cybersecurity status and potential threats, determine vulnerabilities based on this status, and subsequently connect the entity to suitable vendors offering appropriate cybersecurity plans. Referring to FIG. 1 generally, system 100 is an implementation of a security architecture utilizing modeling to provide an incident response management platform that includes multiple components, such as client device 110, response system 130, third-party devices 150, and data sources 160. These components can be interconnected through a network 120 that supports secure communication protocols such as TLS, SSL, and HTTPS. In some implementations, the response system 130 can generate and provide an application for incident response readiness that guides users through the steps to prepare for and manage incidents effectively. The application can integrate with various technologies and vendors to purchase services to resolve issues, and provide integration points for incident response workflow management. For example, users can access a marketplace within the application to purchase products, insurance, and services, and can determine their organization's capabilities, limitations, and threat focus. In some implementations, the response system 130 also presents the organization's readiness to incident response providers and automatically routes them to pre-associated panel vendors or organization-selected vendors at the point of need, contracting and activating the incident room immediately. In some implementations, the response system 130 can integrate readiness, including insurer data, into various third-party systems via APIs. In some implementations, the response system 130 can map an incident response (IR) plan from a static document or documents to the task enablers in Responder that bring them to life, showing where the tasks required by partners such as IR firms, insurers, and breach counsel are covered by the IR plan and IR playbook. The response system 130 can decompose the response plan into associated actionable tasks and activities by the organization, incident response providers, and other stakeholders, and provides different users and partners with a unified view of tasks, activities, and progress/status tracking. In some implementations, the response system 130 stores data regarding key milestones in an authoritative data source such as blockchain (e.g., database 140), ensuring that results are traceable and linkable. For example, issues can be identified, tasks can be created, work can be routed to vendors, and proof of resolution can be recorded. In some implementations, the response system 130 can also support real-time status tracking of policy-aligned tasks tied to status updates provided for incident response. In some implementations, instant intake is achieved by a remote embeddable widget on a website, which starts an incident response process that begins with a proposal stage and continues through workflows to achieve response readiness based on pre-defined logic and automation. For example, services can be purchased or extended within the application, and in the event of an inbound incident, the application facilitates routing to a claim manager. In some implementations, the response system 130 can provide an application for incident response readiness that guides users through the steps to ensure they are prepared for any potential incidents.
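To illustrate the kind of API integration described above, the following Python sketch posts organization readiness data, including insurer data, to a third-party system. The endpoint URL, token handling, and payload fields are assumptions made for illustration and are not part of the described system.

    import json
    import urllib.request

    # Hypothetical sketch of pushing readiness data to a third-party system via an API.
    def push_readiness(endpoint, token, readiness):
        request = urllib.request.Request(
            endpoint,
            data=json.dumps(readiness).encode("utf-8"),
            headers={"Content-Type": "application/json",
                     "Authorization": f"Bearer {token}"},
            method="POST",
        )
        # Returns the HTTP status code reported by the third-party system.
        with urllib.request.urlopen(request) as response:
            return response.status

    readiness = {
        "organization": "example-org",
        "readiness_level": "prepared",
        "insurer": {"carrier": "Example Insurance", "policy_active": True},
        "last_assessed": "2024-01-01T00:00:00Z",
    }
    # push_readiness("https://thirdparty.example.com/api/readiness", "TOKEN", readiness)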
The application can be designed to integrate with technology and vendors to purchase services that are required to resolve any issues. For example, the user can access the application through a variety of devices, including client device 110. In particular, the application can offer integration points for incident response workflow management, enabling users to streamline their incident response process. The organization incident readiness feature of the response system 130 offers several features, including the integration of readiness, including insurer data, into various third-party systems, such as via an API. By integrating with third-party systems, the response system 130 can ensure that users have access to the most up-to-date information regarding their organization's readiness for potential incidents. In addition, the response system 130 can offer incident response plan mapping from a static plan document to the task enablers in Responder, which makes the tasks required by partners such as IR firms, insurers, and breach counsel measurable and identifiable. Still referring to FIG. 1 generally, the response system 130 can offer a marketplace for purchasing products, insurance, and services that may be required in the event of an incident. The marketplace includes various vendors that offer different products and services, enabling users to choose the best fit for their organization based on their capabilities, limitations, and threat focus. The application also determines organization readiness levels with proof of date, time stamps, and artifacts (e.g., on the blockchain), which can be used to identify any gaps in the organization's incident response plan. In some implementations, the response system 130 can automate the routing of incidents to pre-associated panel vendors or organization-selected vendors at the point of need and immediately contract and activate the incident room (e.g., when a cyber incident has occurred or potentially occurred). Accordingly, the system 100 can ensure that the organization can respond to an incident as quickly and efficiently as possible. Additionally, the response system 130 can decompose the response plan into associated actionable tasks and activities by the organization, incident response providers, and others. This allows users to better understand their organization's response plan and identify areas for improvement. In general, the application (e.g., graphical user interface provided by content management circuit 135) provides different users/partners with a unified view of tasks, activities, and progress/status tracking. For example, the status tracking can be tied back to incident readiness and managing the incident through resolution. Users can collaborate via the tool instead of via phone calls and emails, which ensures that everyone is working from the same information and avoids any miscommunication. The application can also offer real-time (or near real-time) status tracking of policy-aligned tasks tied to status updates provided for incident response, enabling users to quickly and easily see how their incident response plan is progressing. In some implementations, data regarding key milestones is stored in an authoritative data source such as blockchain (e.g., database 140 (private ledger) or data sources 160 (public ledger)), ensuring that results can be traceable and linkable. Thus, this can enable users to identify areas for improvement in their incident response plan and make changes as necessary.
In some implementations, the response system130offers an instant intake feature that can be integrated into a remote embeddable widget on a website. For example, the widget can start an incident response process that begins with a proposal stage and continues through workflows to achieve response readiness based on pre-defined logic and automation. This ensures that incidents are quickly identified and resolved, and that the organization is prepared for any potential incidents. Still referring toFIG.1Agenerally, the response system130of system100includes a data acquisition engine180and analysis circuit136that democratize posture, threat, incident, and claim data. In particular, all stakeholders in the incident response process can have access to relevant data to make informed decisions. The analysis circuit136can use the democratized data in underwriting, claims, and the resilience process to enhance the overall response to an incident. With the data acquisition engine180, the response system130can collect and process data from various sources, such as third-party devices150and data sources160, to provide a comprehensive view of the organization's security posture. In some implementations, the response system130also implements incident response protocols and features via analysis circuit136that provide a centralized location for managing and configuring incident responses. For example, an application can walk users through the steps of incident response readiness and integrate with technology and vendors to purchase services to resolve issues. The response system130can automate the routing of incident response tasks to pre-associated panel vendors or organization-selected vendors at the point of need and immediately contract and activate the incident room. By decomposing the response plan into associated actionable tasks and activities by the organization, incident response providers, and other stakeholders, the response system130ensures that all parties are working together to manage the incident through resolution. In some implementations, the response system130includes a vendor-provider marketplace that allows organizations to purchase products, insurance, and services that enhance their incident response capabilities. For example, the marketplace can be integrated into the response system130, allowing users to easily access relevant products and services during an incident. Additionally, the response system130can determine the organization's capabilities, limitations, and threat focus to present readiness to incident response providers. In some implementations, the response system130can include collection, recall, and proof of state features that ensure that data regarding key milestones is stored in an authoritative data source such as the blockchain. This includes capabilities pre-incident, what happened after the incident occurred, what the root cause was, and a record of each. For example, results are traceable and linkable, and issues are identified, tasks are created, work is routed to vendors, and proof of resolution is recorded. In some implementations, the response system130can include a drag-and-drop file tokenization feature that allows users to securely tokenize and store sensitive files. In particular, this feature is useful when organizations desire to share sensitive information with third parties or with internal stakeholders. The system ensures that the information is secure and that only authorized parties can access it.
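As a non-limiting illustration of how such a drag-and-drop tokenization flow could operate, the following Python sketch replaces a sensitive file with an opaque token and a hash that could later be anchored to a ledger such as database140; the function names tokenize_file and resolve_token and the in-memory vault are hypothetical and are not part of system100.

import hashlib
import tempfile
import uuid

_vault = {}  # hypothetical secure store standing in for a vault backed by database140

def tokenize_file(path):
    """Replace a sensitive file with an opaque token and record its hash."""
    with open(path, "rb") as f:
        content = f.read()
    token = uuid.uuid4().hex                      # opaque reference that can be shared with third parties
    digest = hashlib.sha256(content).hexdigest()  # proof-of-state hash that could be anchored on a ledger
    _vault[token] = {"sha256": digest, "content": content}
    return token, digest

def resolve_token(token, authorized=False):
    """Return the stored file content only for authorized parties."""
    if not authorized:
        raise PermissionError("caller is not authorized to detokenize this file")
    return _vault[token]["content"]

# Demonstration with a throwaway file standing in for a dragged-and-dropped document.
with tempfile.NamedTemporaryFile(delete=False, suffix=".txt") as tmp:
    tmp.write(b"sensitive incident report")
token, digest = tokenize_file(tmp.name)
print(token, digest)

In such a sketch, only the token and the hash would leave the organization, while the underlying file remains in the controlled store.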
Thus, this feature is designed to streamline the incident response process and enable better collaboration between all stakeholders. Referring now toFIG.1Ain more detail, a block diagram depicts an implementation of a system100for managing and configuring incident responses. System100includes client device110, response system130, third party devices150, and data sources160. In various implementations, components of system100communicate over network120. Network120may include computer networks such as the Internet, local, wide, metro or other area networks, intranets, satellite networks, other computer networks such as voice or data mobile phone communication networks, combinations thereof, or any other type of electronic communications network. Network120may include or constitute a display network. In various implementations, network120facilitates secure communication between components of system100. As a non-limiting example, network120may implement transport layer security (TLS), secure sockets layer (SSL), hypertext transfer protocol secure (HTTPS), and/or any other secure communication protocol. In general, the client device(s)110and third party device(s)150can execute a software application (such as application112or application152, e.g., a web browser, an installed application, or other application) to retrieve content from other computing systems and devices over network120. Such an application may be configured to retrieve interfaces and dashboards from the response system130. In one implementation, the client device110and third party device150may execute a web browser application, which provides the interface (e.g., from content management circuit135) on a viewport of the client device110or third party device150. The web browser application that provides the interface may operate by receiving input of a uniform resource locator (URL), such as a web address, from an input device (such as input/output circuit118or158, e.g., a pointing device, a keyboard, a touch screen, or another form of input device). In response, one or more processors of the client device110or third party device150executing the instructions from the web browser application may request data from another device connected to the network120referred to by the URL address (e.g., the response system130). The other device may then provide webpage data and/or other data to the client device110or third party device150, which causes the interface (or dashboard) to be presented by the viewport of the client device110or third party device150. Accordingly, the browser window presents the interface to facilitate user interaction with the interface. In some embodiments, the interface (or dashboard) can be presented via an application stored on the client device110and third party device150. The network120can enable communication between various nodes, such as the response system130, third party device150, client device110, and data sources160. In some arrangements, data flows through the network120from a source node to a destination node as a flow of data packets, e.g., in the form of data packets in accordance with the Open Systems Interconnection (OSI) layers. A flow of packets may use, for example, an OSI layer-4 transport protocol such as the User Datagram Protocol (UDP), the Transmission Control Protocol (TCP), or the Stream Control Transmission Protocol (SCTP), transmitted via the network120layered over an OSI layer-3 network protocol such as Internet Protocol (IP), e.g., IPv4 or IPv6.
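As a non-limiting example of the request/response exchange described above, the following Python sketch retrieves dashboard data from the response system130over HTTPS (and therefore TLS); the host name, endpoint path, and bearer token shown are hypothetical placeholders rather than actual addresses used by system100.

import json
import urllib.request

# Hypothetical URL for a dashboard hosted by the response system130.
URL = "https://response-system.example.com/api/dashboards/incident-readiness"

request = urllib.request.Request(
    URL,
    headers={"Authorization": "Bearer <hypothetical-token>", "Accept": "application/json"},
)

# urllib negotiates TLS for https:// URLs; the parsed payload would drive rendering in the viewport.
with urllib.request.urlopen(request) as response:
    dashboard = json.loads(response.read().decode("utf-8"))
print(dashboard.get("title", "<no dashboard returned>"))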
The network120is composed of various network devices (nodes) communicatively linked to form one or more data communication paths between participating devices. Each networked device includes at least one network interface for receiving and/or transmitting data, typically as one or more data packets. An illustrative network120is the Internet; however, other networks may be used. The network120may be an autonomous system (AS), i.e., a network that is operated under a consistent unified routing policy (or at least appears to from outside the AS network) and is generally managed by a single administrative entity (e.g., a system operator, administrator, or administrative group). Client device110(sometimes referred to herein as a "mobile device") may be a mobile computing device, smartphone, tablet, smart watch, smart sensor, or any other device configured to facilitate receiving, displaying, and interacting with content (e.g., web pages, mobile applications, etc.). Client device110may include an application112to receive and display content and to receive user interaction with the content. For example, application112may be a web browser. Additionally, or alternatively, application112may be a mobile application. Client device110may also include an input/output circuit118for communicating data over network120(e.g., receive and transmit to response system130). In various implementations, application112interacts with a content publisher to receive online content, network content, and/or application content. For example, application112may receive and present various dashboards and information resources distributed by the content publisher (e.g., content management circuit135). Dashboards and/or information resources may include web-based content such as a web page or other online documents. The dashboards and/or information resources may include instructions (e.g., scripts, executable code, etc.) that when interpreted by application112cause application112to display a graphical user interface such as an interactable web page and/or an interactive mobile application to a user (e.g., dashboards). In various implementations, application112can include one or more application interfaces for presenting an application (e.g., mobile application, web-based application, virtual reality/augmented reality application, smart TV application and so on). Application112is shown to include library114having an interface circuit116. The library114may include a collection of software development tools contained in a package (e.g., software development kit (SDK), application programming interface (API), integrated development environment (IDE), debugger, etc.). For example, library114may include an application programming interface (API). In another example, library114may include a debugger. In yet another example, the library114may be an SDK that includes an API, a debugger, an IDE, and so on. In some implementations, library114includes one or more libraries having functions that interface with a particular system software (e.g., iOS, Android, Linux, etc.). Library114may facilitate embedding functionality in application112. For example, a user may use library114to automatically transmit event logs whenever an event occurs on application112.
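As a non-limiting illustration of how library114could automatically transmit event logs whenever an event occurs on application112, the following Python sketch wraps an application function with a hypothetical helper; the decorator name report_event and the transport stub send_event_log are assumptions for illustration only.

import functools
import time

def send_event_log(log):
    # Stand-in for the transmission performed by input/output circuit118over network120.
    print("transmitting event log:", log)

def report_event(event_type):
    """Hypothetical library114helper: wrap an application function so each call emits an event log."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            result = func(*args, **kwargs)
            send_event_log({"event": event_type, "timestamp": time.time()})
            return result
        return wrapper
    return decorator

@report_event("transaction_confirmed")
def confirm_purchase(plan_id):
    return f"purchased {plan_id}"

confirm_purchase("cybersecurity-plan-42")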
As a further example, library114may include a function configured to collect and report device analytics and a user may insert the function into the instructions of application112to cause the function to be called during specific actions of application112(e.g., during testing as described in detail below). In some implementations, interface circuit116functionalities are provided by library114. In various implementations, interface circuit116of system100can provide one or more interfaces to users, which can be accessed through an application interface presented in the viewport of client device110. These interfaces can take the form of dashboards and other graphical user interfaces, offering a variety of functionality to the user. For example, a user can view incident responses, remediate claims, communicate with team members, purchase or extend products and services, and more. The interfaces provided by interface circuit116can be customizable and dynamic, allowing users to configure and adjust them to suit their specific needs. They can also be designed to present real-time data associated with current incident responses, potential incidents or threats, and other important information, allowing users to make informed decisions and take proactive steps to manage risk. For example, interface circuit116can generate dashboards that provide real-time data and insights. These dashboards can be customized to suit the needs of individual users or groups, providing a comprehensive view of incident responses, potential threats, and the status of remediation efforts. For example, a dashboard might show the status of incident responses across different regions, or highlight areas where additional resources are needed. In another example, the interface circuit116can generate a landscape of all devices currently connected to the entity, such as a company or institution. This can include information on the types of devices, their locations, and other important details that can help inform incident response efforts. With this information, users can better understand the scope of potential threats, identify vulnerable areas, and take steps to improve security and resilience. In another example implementation, the application112executed by the client device110can cause a web browser to display the interfaces (e.g., dashboards) on the client device110. For example, the user may connect (e.g., via the network120) to a website structured to host the interfaces. In various implementations, the interface can include infrastructure such as, but not limited to, host devices (e.g., computing device) and a collection of files defining the interface and stored on the host devices (e.g., in database140). The web browser operates by receiving input of a uniform resource locator (URL) into a field from an input device (e.g., a pointing device, a keyboard, a touchscreen, mobile phone, or another form of input device). In response, the interface circuit116executing the interface in the web browser may request data such as content (e.g., vendor information, settings, current incident response, other dashboards, etc.) from database140. The web browser may include other functionalities, such as navigational controls (e.g., backward and forward buttons, home buttons). In some implementations, the debugging interface can include both a client-side interface and a server-side interface. For example, a client-side interface can be written in one or more general purpose programming languages and can be executed by client device110.
The server-side interface can be written, for example, in one or more general purpose programming languages and can be executed by the response system130. Additional details associated with the interface are described in detail with reference to exampleFIGS.7-21. Interface circuit116may detect events within application112. In various implementations, interface circuit116may be configured to trigger other functionality based on detecting specific events (e.g., transactions, in-app purchases, performing a test of a vendor, scrolling through an incident response plan, sending a contract to a vendor, spending a certain amount of time interacting with an application, etc.). For example, interface circuit116may trigger a pop-up window (overlayed on an interface) upon selecting an actionable object (e.g., button, drop-down, input field, etc.) within a dashboard. In various implementations, library114includes a function that is embedded in application112to trigger interface circuit116. For example, a user may include a function of library114in a transaction confirmation functionality of application112that causes interface circuit116to detect a confirmed transaction (e.g., purchase cybersecurity protection plans, partnering). It should be understood that events may include any action important to a user within an application and are not limited to the examples expressly contemplated herein. In various implementations, interface circuit116is configured to differentiate between different types of events. For example, interface circuit116may trigger a first set of actions based on a first type of detected event (e.g., selecting actionable objects within the static response plan) and may trigger a second set of actions based on a second type of detected event (e.g., running a test). In various implementations, interface circuit116is configured to collect event logs associated with the detected event and/or events and transmit the collected event logs to content management circuit135. In various implementations, the interface circuit116can collect events logs based on a designated session. In one example, the designated session may be active from when application112is opened/selected to when application112is closed/exited. In another example, the designated session may be active based on a user requesting a session to start and a session to end. Each session, the interface circuit116can collect event logs while the session is active. Once completed, the event logs may be provided to any system described herein. During the session, the event logs may trace each event in the session such that the events are organized in ascending and/or descending order. In some implementations, the events may be organized utilizing various other techniques (e.g., by event type, by timestamp, by malfunctions, etc.). In various implementations, the interface circuit116of the client device110(or third party device150) may start collecting event logs when application112is opened (e.g., selected by the user via an input/output device118of the client device110), thus starting a session. In some implementations, once the application is closed by the user the interface circuit116may stop collecting event logs, thus ending the session. In various implementations, the user may force clear event logs or force reset application112such that the current session may reset, thus ending a particular session and starting a new session. 
Additional details regarding the interface circuit116functionalities, and the dashboards and interfaces presented within a viewport of client device110, are described in additional detail with reference toFIGS.7-21. The input/output circuit118is structured to send and receive communications over network120(e.g., with response system130and/or third-party device150). The input/output circuit118is structured to exchange data (e.g., bundled event logs, content event logs, interactions), communications, instructions, etc. with an input/output component of the response system130. In one implementation, the input/output circuit118includes communication circuitry for facilitating the exchange of data, values, messages, and the like between the input/output circuit118and the response system130. In yet another implementation, the input/output circuit118includes machine-readable media for facilitating the exchange of information between the input/output device and the response system130. In yet another embodiment, the input/output circuit118includes any combination of hardware components, communication circuitry, and machine-readable media. In some embodiments, the input/output circuit118includes suitable input/output ports and/or uses an interconnect bus (not shown) for interconnection with a local display (e.g., a touchscreen display) and/or keyboard/mouse devices (when applicable), or the like, serving as a local user interface for programming and/or data entry, retrieval, or other user interaction purposes. As such, the input/output circuit118may provide an interface for the user to interact with various applications (e.g., application112) stored on the client device110. For example, the input/output circuit118includes a keyboard, a keypad, a mouse, a joystick, a touch screen, a microphone, a haptic sensor, a car sensor, an IoT sensor, a biometric sensor, an accelerometer sensor, a virtual reality headset, smart glasses, smart headsets, and the like. As another example, input/output circuit118may include, but is not limited to, a television monitor, a computer monitor, a printer, a facsimile, a speaker, and so on. As used herein, virtual reality, augmented reality, and mixed reality may each be used interchangeably yet refer to any kind of extended reality, including virtual reality, augmented reality, and mixed reality. In some implementations, input/output circuit118of the client device110can receive user input from a user (e.g., via sensors, or any other input/output devices/ports described herein). A user input can be a plurality of inputs, including but not limited to a gesture (e.g., a flick of client device110, a shake of client device110, or a user-defined custom gesture (e.g., utilizing an API)), biological data (e.g., stress level, heart rate, hand geometry, facial geometry, psyche, and so on), and/or behavioral data (e.g., haptic feedback, gesture, speech pattern, movement pattern (e.g., hand, foot, arm, facial, iris, and so on)), or a combination thereof. In some embodiments, one or more user inputs can be utilized to perform various actions on client device110. For example, a user can use a gesture, such as a flick or a shake, to quickly invoke an incident response through the response system130from their client device110. With the use of biological and behavioral data, a user could trigger an incident response, access the vendor marketplace, or recall proof of state using custom-defined gestures via an API with input/output circuit118.
The drag-and-drop file tokenization feature can also be activated by a gesture, allowing a user to seamlessly tokenize files and secure them on the blockchain with a simple motion or touch on their client device110. Input/output circuit118may exchange and transmit data and information, via network120, to all the devices described herein. In various implementations, input/output circuit118transmits data via network120. Input/output circuit118may confirm the transmission of data. For example, input/output circuit118may transmit requests and/or information to response system130based on selecting one or more actionable items within the interfaces and dashboards described herein. In another example, input/output circuit118may transmit requests and/or information to third party devices150operated by one or more vendors. In various implementations, input/output circuit118can transmit data periodically. For example, input/output circuit118may transmit data at a predefined time. As another example, input/output circuit118may transmit data on an interval (e.g., every ten minutes, every ten hours, etc.). The third party device150includes application152, library154, interface circuit156, and input/output circuit158. The application152, library154, interface circuit156, and input/output circuit158may function substantially similarly to, and include the same or similar components as, the components of client device110, such as application112, library114, interface circuit116, and input/output circuit118, described above. As such, it should be understood that the description of the client device110, such as application112, library114, interface circuit116, and input/output circuit118of the client device110provided above, may be similarly applied to the application152, library154, interface circuit156, and input/output circuit158of the third party device150. However, instead of a user of a company or institution operating the third party device150, a vendor or provider (e.g., of goods or services) operates the third party device150. The response system130may include a logic device, which can be a computing device equipped with a processing circuit that runs instructions stored in a memory device to perform various operations. The processing circuit can be made up of various components such as a microprocessor, an ASIC, or an FPGA, and the memory device can be any type of storage or transmission device capable of providing program instructions. The instructions may include code from various programming languages commonly used in the industry, such as high-level programming languages, web development languages, and systems programming languages. The response system130may also include one or more databases for storing data and an interface, such as a content management circuit135, that receives and provides data to other systems and devices on the network120. The response system130can be run or otherwise be executed on one or more processors of a computing device, such as those described below inFIG.2. In broad overview, the response system130can include a processing circuit132, a processor133, memory134, a content management circuit135, an analysis circuit136, a database140, and a front end142. The interfaces and dashboards generated by content management circuit135can be provided to the client devices110and third party devices150. Generally, the interfaces and dashboards can be rendered at the client devices110and/or third party devices150.
The content management circuit135can include a plurality of interfaces and properties, such as those described below inFIGS.7-21. The interfaces and dashboards can execute at the response system130, the client device110, the third party devices150, or a combination of the three to provide the interfaces and dashboards. In some implementations, the interfaces and dashboards generated and formatted by content management circuit135can be provided within a web browser. In another implementation, the content management circuit135executes to provide the interfaces and dashboards at the client devices110and third party devices150without utilizing the web browser. The response system130may be a server, distributed processing cluster, cloud processing system, or any other computing device. Response system130may include or execute at least one computer program or at least one script. In some implementations, response system130includes combinations of software and hardware, such as one or more processors configured to execute one or more scripts. Response system130is shown to include database140and processing circuit132. Database140may store received data. For example, the database140can include data structures for storing information such as, but not limited to, the front end information, interfaces, dashboards, incident information, claim information, user information, vendor information, contract information, invoices, a blockchain ledger, etc. The database140can be part of the response system130, or a separate component that the response system130, the client device110, or the third party device150can access via the network120. The database140can also be distributed throughout system100. For example, the database140can include multiple databases associated with the response system130, the client device110, or the third party device150, or all three. Database140may include one or more storage mediums. The storage mediums may include but are not limited to magnetic storage, optical storage, flash storage, and/or RAM. Response system130may implement or facilitate various APIs to perform database functions (i.e., managing data stored in database140). The APIs can be but are not limited to SQL, ODBC, JDBC, NOSQL and/or any other data storage and manipulation API. Processing circuit132includes processor133and memory134. Memory134may have instructions stored thereon that, when executed by processor133, cause processing circuit132to perform the various operations described herein. The operations described herein may be implemented using software, hardware, or a combination thereof. Processor133may include a microprocessor, ASIC, FPGA, etc., or combinations thereof. In many implementations, processor133may be a multi-core processor or an array of processors. Memory134may include, but is not limited to, electronic, optical, magnetic, or any other storage devices capable of providing processor133with program instructions. Memory134may include a floppy disk, CD-ROM, DVD, magnetic disk, memory chip, ROM, RAM, EEPROM, EPROM, flash memory, optical media, or any other suitable memory from which processor133can read instructions. The instructions may include code from any suitable computer programming language. The data sources160can provide data to the response system130. In some arrangements, the data sources160can be structured to collect data from other devices on network120(e.g., user devices110and/or third-party devices150) and relay the collected data to the response system130. 
In one example, a user and/or entity may have a server and database (e.g., proxy, enterprise resource planning (ERP) system) that stores network information associated with the user and/or entity. In this example, the response system130may request data associated with specific data stored in the data source (e.g., data sources160) of the user or entity. For example, in some arrangements, the data sources160can host or otherwise support a search or discovery engine for Internet-connected devices. The search or discovery engine may provide data, via the data acquisition engine180, to the response system130. In some arrangements, the data sources160can be scanned to provide additional data. The additional data can include newsfeed data (e.g., articles, breaking news, and television content), social media data (e.g., Facebook, Twitter, Snapchat, and TikTok), geolocation data of users on the Internet (e.g., GPS, triangulation, and IP addresses), governmental databases, generative artificial intelligence (GAI) data, and/or any other intelligence data associated with the specific entity of interest. The system100can include a data acquisition engine180. In various arrangements, the response system130can be communicatively and operatively coupled to the data acquisition engine180. The data acquisition engine180can include one or more processing circuits configured to execute various instructions. In various arrangements, the data acquisition engine180can be configured to facilitate communication (e.g., via network120) between the response system130and systems described herein. The facilitation of communication can be implemented as an application programming interface (API) (e.g., REST API, Web API, customized API), batch files, and/or queries. In various arrangements, the data acquisition engine180can also be configured to control access to resources of the response system130and database140. The API can be used by the data acquisition engine180and/or computing systems to exchange data and make function calls in a structured format. The API may be configured to specify an appropriate communication protocol using a suitable electronic data interchange (EDI) standard or technology. The EDI standard (e.g., messaging standard and/or supporting technology) may include any of a SQL data set, a protocol buffer message stream, an instantiated class implemented in a suitable object-oriented programming language (e.g., Java, Ruby, C#), an XML file, a text file, an Excel file, a web service message in a suitable web service message format (e.g., representational state transfer (REST), simple object access protocol (SOAP), web service definition language (WSDL), JavaScript object notation (JSON), XML remote procedure call (XML RPC)). As such, EDI messages may be implemented in any of the above or using another suitable technology. In some arrangements, data is exchanged by components of the data acquisition engine180using web services. Where data is exchanged using an API configured to exchange web service messages, some or all components of the computing environment may include or may be associated with (e.g., as a client computing device) one or more web service node(s). The web service may be identifiable using a unique network address, such as an IP address, and/or a URL. 
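As a non-limiting illustration of a web service message conforming to one of the EDI formats named above, the data acquisition engine180might compose a JSON message for a REST-style exchange as in the following Python sketch; the field names, request identifier, and target domain are hypothetical and are used only to show the structured format.

import json

def build_scan_request(target_domain):
    """Compose a hypothetical JSON EDI message requesting a scan of one target."""
    return {
        "service": "scan",
        "operation": "request",
        "target": {"domain": target_domain},
        "auth": {"request_id": "req-0001", "certificate": "<hypothetical>"},
    }

message = json.dumps(build_scan_request("example.com"), indent=2)
print(message)  # the serialized message would be exchanged over a REST API by the data acquisition engine180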
Some or all components of the computing environment may include circuits structured to access and exchange data using one or more remote procedure call protocols, such as Java remote method invocation (RMI) or the Windows distributed component object model (DCOM). The web service node(s) may include a web service library including callable code functions. The callable code functions may be structured according to a predefined format, which may include a service name (interface name), an operation name (e.g., read, write, initialize a class), operation input parameters and data type, operation return values and data type, service message format, etc. In some arrangements, the callable code functions may include an API structured to access on-demand and/or receive a data feed from a search or discovery engine for Internet-connected devices. Further examples of callable code functions are provided further herein as embodied in various components of the data acquisition engine180. The data sources160can provide data to the response system130based on the data acquisition engine180scanning the Internet (e.g., various data sources and/or data feeds) for data associated with a specific user or entity (e.g., vendor, insurer). That is, the data acquisition engine180can hold (e.g., in non-transitory memory, in cache memory, and/or in database140) the executables for performing the scanning activities on the data sources160. Further, the response system130can initiate the scanning operations. For example, the response system130can initiate the scanning operations by retrieving domain identifiers or other user/entity identifiers from a computer-implemented DBMS or queue. In another example, a user can affirmatively request a particular resource (e.g., domain or another entity identifier) to be scanned, which triggers the operations. In various arrangements, the data sources160can facilitate the communication of data between the client devices110and third party devices150, such that the data sources160receive data (e.g., over network120) from the client devices110and third-party devices150before sending the data to other systems described herein (e.g., response system130). In other arrangements and as described herein, the client devices110and third-party devices150, and the data sources160, can send data directly, over the network120, to any system described herein, and the data sources160may provide information not provided by any of the client devices110and third party devices150. As used herein, the terms "scan" and "scanning" refer to and encompass various data collection operations, which may include directly executing and/or causing to be executed any of the following operations: query(ies), search(es), web crawl(s), interface engine operations structured to enable the data acquisition engine180, via an appropriate system interface, to continuously or periodically receive inbound data, document search(es), dataset search(es), retrieval from internal systems of previously received data, etc. These operations can be executed on-demand and/or on a scheduled basis. In some embodiments, these operations include receiving data (e.g., device connectivity data, IP traffic data) in response to requesting the data (e.g., data "pull" operations). In some embodiments, these operations include receiving data without previously requesting the data (e.g., data "push" operations). In some embodiments, the data "push" operations are supported by the interface engine operations.
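As a non-limiting illustration of the predefined format described above (service name, operation name, input parameters and data types, return values and data types, and message format), a callable code function in a web service library might be described as in the following Python sketch; the descriptor class and the example entry are hypothetical.

from dataclasses import dataclass

@dataclass
class CallableCodeFunction:
    """Hypothetical descriptor following the predefined format described above."""
    service_name: str        # interface name
    operation_name: str      # e.g., read, write, initialize a class
    input_parameters: dict   # parameter name -> data type
    return_type: str
    message_format: str      # e.g., JSON, XML

device_feed = CallableCodeFunction(
    service_name="DeviceDiscoveryService",
    operation_name="read",
    input_parameters={"domain": "str", "ports": "list[int]"},
    return_type="list[dict]",
    message_format="JSON",
)
print(device_feed)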
One of skill will appreciate that data received as a result of performing or causing scanning operations to be performed may include data that has various properties indicative of device properties, hardware, firmware, software, configuration information, and/or IP traffic data. For example, in an arrangement, a device connectivity data set can be received. In some embodiments, device connectivity data can include data obtained from a search or discovery engine for Internet-connected devices which can include a third-party product (e.g., Shodan), a proprietary product, or a combination thereof. Device connectivity data can include structured or unstructured data. Various properties (sometimes referred to as “attributes”) (e.g., records, delimited values, values that follow particular pre-determined character-based labels) can be parsed from the device connectivity data. The properties can include device-related data and/or IP traffic data. Device-related data can encompass data related to software, firmware, and/or hardware technology deployed to, included in, or coupled to a particular device. Device-related data can include IP address(es), software information, operating system information, component designation (e.g., router, web server), version information, port number(s), timestamp data, host name, etc. IP traffic data can include items included in packets, as described elsewhere herein. Further, IP traffic data included in the device connectivity data can include various supplemental information (e.g., in some arrangements, metadata associated with packets), such as host name, organization, Internet Service Provider information, country, city, communication protocol information, and Autonomous System Number (ASN) or similar identifier for a group of devices using a particular defined external routing policy. In some embodiments, device connectivity data can be determined at least in part based on banner data exposed by the respective source vendor or insurer. For example, device connectivity data can include metadata about software running on a particular device of a source entity. In various arrangements, vendors and users can utilize Internet-wide scanning tools (e.g., port scanning, network scanning, vulnerability scanning, Internet Control Message Protocol (ICMP) scanning, TCP scanning, UDP scanning, semi-structured and unstructured parsing of publicly available data sources) for collecting data (e.g., states and performance of companies, corporations, users). Further, in addition to this data, other data collected and fused with the data obtained via scanning may be newsfeed data (e.g., articles, breaking news, television), social media data (e.g., Facebook, Twitter, Snapchat, TikTok), geolocation data of users on the Internet (e.g., GPS, triangulation, IP addresses), governmental databases, and any other data associated with the specific user or entity (e.g., vendor or insurer), their capabilities, configurations, cyber insurance policy, coverage, attestations, questionnaires and overall state of aforementioned attributes. In some arrangements, scanning occurs in real-time such that the data acquisition engine180continuously scans the data sources160for data associated with a specific vendor or user (e.g., real-time states of specific vendors or users, real-time threats, real-time performance). 
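As a non-limiting example of parsing properties from device connectivity data, the following Python sketch reduces a single hypothetical structured record, loosely modeled on output from a discovery engine, to the device-related attributes and supplemental IP traffic attributes named above; the record contents are illustrative only.

record = {
    "ip_str": "203.0.113.10",
    "port": 443,
    "product": "nginx",
    "version": "1.18.0",
    "os": "Linux",
    "hostnames": ["web.example.com"],
    "org": "Example Corp",
    "isp": "Example ISP",
    "asn": "AS64500",
    "timestamp": "2023-01-01T00:00:00",
}

def parse_properties(rec):
    """Split one record into device-related data and supplemental IP traffic data."""
    device_related = {k: rec.get(k) for k in ("ip_str", "product", "version", "os", "port", "timestamp")}
    traffic_supplemental = {k: rec.get(k) for k in ("hostnames", "org", "isp", "asn")}
    return device_related, traffic_supplemental

device, traffic = parse_properties(record)
print(device)
print(traffic)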
In various arrangements, scanning may occur in periodic increments such that the data acquisition engine180can scan the Internet for data associated with the specific vendor or user periodically (e.g., every minute, every hour, every day, every week, or any other increment of time). In some embodiments, data acquisition engine180may receive feeds from various data aggregating systems that collect data associated with specific vendors or users. For example, the response system130can receive specific vendor or user data from the data sources160, via the network120and data acquisition engine180. The information collected by the data acquisition engine180may be stored in database140. In some arrangements, an entity (e.g., company, vendor, insurer, any service or goods provider, etc.) may submit data to response system130and provide information about their products or services, pricing, capabilities, statuses, etc., which may be stored in database140. Memory134may include analysis circuit136. The analysis circuit136can be configured to perform data fusion operations, including operations to generate and/or aggregate various data structures stored in database140, which may have been acquired as a result of scanning operations or via another EDI process. For example, the analysis circuit136can be configured to aggregate entity data stored in the database140. The entity data may be a data structure associated with a specific entity and include various data from a plurality of data channels. In some embodiments, the analysis circuit136can be configured to aggregate line-of-business data stored in the database140. The line-of-business data may be a data structure associated with a plurality of lines of business of an entity and indicate various data from a plurality of data channels based on line of business (e.g., information technology (IT), legal, marketing and sales, operations, finance and accounting). The analysis circuit136can also be configured to receive a plurality of user and entity data. In some arrangements, the analysis circuit136can be configured to receive data regarding the network120as a whole (e.g., stored in database140) instead of data specific to particular users or entities. The data that the analysis circuit136receives can be data that response system130aggregates and/or data that the response system130receives from the data sources160and/or any other system described herein. As previously described, the response system130can be configured to receive information regarding various entities and users on the network120(e.g., via device connectivity data). Further, the response system130can be configured to receive and/or collect information regarding interactions that a particular user or entity has on the network120(e.g., via IP traffic data). Further, the response system130can be configured to receive and/or collect additional information. Accordingly, the received or collected information may be stored as data in database140. In various arrangements, the database140can include user and entity profiles. The response system130can be configured to electronically transmit information and/or notifications relating to various metrics, dashboards (e.g., graphical user interfaces) and/or models it determines, analyzes, fuses, generates, or fits to user data, entity data, and/or other data. This may allow a user of a particular one of the client devices110and third party devices150to review the various metrics, dashboards, or models which the response system130determines.
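As a non-limiting sketch of the data fusion operations described above, entity data arriving over several data channels might be aggregated by line of business as in the following Python example; the channel records and line-of-business labels shown are hypothetical.

from collections import defaultdict

# Hypothetical records received over a plurality of data channels.
channel_records = [
    {"entity": "Acme Co", "line_of_business": "IT", "metric": "open_ports", "value": 14},
    {"entity": "Acme Co", "line_of_business": "IT", "metric": "expired_certs", "value": 2},
    {"entity": "Acme Co", "line_of_business": "legal", "metric": "open_contracts", "value": 5},
]

def aggregate_by_line_of_business(records):
    """Fuse per-channel records into one data structure keyed by line of business."""
    fused = defaultdict(dict)
    for rec in records:
        fused[rec["line_of_business"]][rec["metric"]] = rec["value"]
    return dict(fused)

print(aggregate_by_line_of_business(channel_records))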
Further, the response system130can use the various metrics to identify remediation actions for users and entities. The analysis circuit136implements data fusion operations of the response system130. In various arrangements, the analysis circuit136can be configured to receive a plurality of data (e.g., user and entity data) from a plurality of data sources (e.g., database140, client devices140, third party devices150, data sources160) via one or more data channels (e.g., over network120). Each data channel may include a network connection (e.g., wired, wireless, cloud) between the data sources and the response system130. In some arrangements, the analysis circuit136can also be configured to collect a plurality of data from a particular data source or from a plurality of data sources based on electronically transmitting requests to the data sources via the plurality of data channels, managed and routed to a particular data channel by the data acquisition engine180. A request submitted via the data acquisition engine180may include a request for scanning publicly available information exposed by a user or entity. In some embodiments, the request submitted via the data acquisition engine180may include information regarding access-controlled data being requested from the user or entity. In such cases, the request can include trust verification information sufficient to be authenticated by the target entity (e.g., multi-factor authentication (MFA) information, account login information, request identification number, a pin, certificate information, a private key of a public/private key pair). This information should be sufficient to allow the target entity to verify that a request is valid. In various arrangements, the analysis circuit136can be configured to initiate a scan, via the data acquisition engine180, for a plurality of data from a plurality of data sources based on analyzing device connectivity data, vendor information, scheduling information (e.g., team members), network properties (e.g., status, nodes, element-level (sub-document level), group-level, network-level, size, density, connectedness, clustering, attributes) and/or network information (e.g., IP traffic, domain traffic, sub-domain traffic, connected devices, software, infrastructure, bandwidth) of a target computer network environment and/or environments of the entity or associated with the entity. The operations to fuse various properties of data returned via the scan can include a number of different actions, which can parse device connectivity data, packet segmentation, predictive analytics, cross-referencing to data regarding known vulnerabilities, and/or searching data regarding application security history. These operations can be performed to identify costs of vendors, services offered, hosts, ports, and services in a target computer network environment. The target computer network environment can be identified by a unique identifier, such as a domain identifier (e.g., a top-level domain (TLD) identifier, a subdomain identifier, a URL string pointing to a particular directory), an IP address, a subnet, etc. Further, the target computer network environment can be defined with more granularity to encompass a particular component (e.g., an entity identified by an IP address, software/applications/operating systems/exposed API functions associated with a particular port number, IP address, subnet, domain identifier). 
In some arrangements, one or more particular target computer network environments can be linked to an entity profile (e.g., in the database140). In one example, scanning can include parsing out packet and/or device connectivity data properties that may indicate available UDP and TCP network services running on the target computer network environment. In another example, scanning can include parsing out packet and/or device connectivity data that indicates the operating systems (OS) in use on the target computer network environment. In various arrangements, vendor information can be determined based on accessing a vendor device (e.g.,150) or website of the vendor to collect vendor information (e.g., via an API call). In various arrangements, vulnerabilities and incidents can be determined based on any software feature, hardware feature, network feature, or combination of these, which could make an entity vulnerable to cyber threats and incidents, such as hacking activities, data breaches, and cyberattacks. In turn, cyber-threats increase the probability of cyber-incidents (sometimes referred to herein as "incidents"). Accordingly, a vulnerability or incident can be a weakness that could be exploited to gain unauthorized access to or perform unauthorized actions in a computer network environment (e.g., system100). For example, obsolete computing devices and/or obsolete software may present vulnerabilities and/or threats in a computer network environment. In another example, certain network frameworks may present vulnerabilities and/or threats in a computer network environment. In yet another example, business practices of an entity may present vulnerabilities and/or threats in a computer network environment. In yet another example, published content on the Internet may present vulnerabilities in a computer network environment. In yet another example, third-party computing devices and/or software may present vulnerabilities and/or threats in a computer network environment. Accordingly, as shown, all devices (e.g., servers, computers, any infrastructure), all data (e.g., network information, vendor data, network traffic, user data, certificate data, public and/or private content), all practices (e.g., business practices, security protocols), all software (e.g., frameworks, protocols), and any relationship an entity has with another entity can present vulnerabilities and/or threats in a computer network environment that could lead to one or more cyber-incidents. In broad view, the analysis circuit136can also be configured to receive company and vendor information regarding the company/vendor. In some implementations, the analysis circuit136can receive a registration request and register user accounts (e.g., accounts). For example, a user of library114may register their user account with a client device such that the client device110can execute the library114and perform various actions. Registering a client device110or user (or vendor) can include, but is not limited to, providing various identifying information (e.g., device name, geolocation, identifier, etc.), platform designations (e.g., iOS, Android, WebOS, BlackBerry OS, etc.), user actions (e.g., activation gesture, haptic, biometric, etc.), and authentication information (e.g., username, password, two-step criteria, security questions, address information, etc.). Once the analysis circuit136approves a registration request, the information associated with the request may be stored in database140.
Additionally, a notification may be transmitted to the client device110indicating the user, vendor, or client device110(or third party device150) is registered and can utilize the dashboards to perform actions associated with one or more applications. In various implementations, analysis circuit136performs statistical operations on received data to produce statistical measurements describing the received data. For example, analysis circuit136may determine capabilities of individuals, objectives, cost estimates, etc. In various implementations, the statistical operations can be calculated based on performing various statistical operations and analysis. In some implementations, received data and previously collected data stored in database140can be used to train a machine-learning model. That is, predictions regarding vulnerabilities and incidents could be based on artificial intelligence or a machine-learning model. For example, a first machine-learning model may be trained to identify particular incidents and output a prediction. In this example, a second machine-learning model may be trained to identify remediation actions based on incident. In various implementations, machine learning algorithms can include, but are not limited to, a neural network, convolutional neural network, recurrent neural network, linear regression model, and sparse vector machine). The various computing systems/devices described herein can input various data (e.g., event logs, debugging information and so on) into the machine learning model, and receive an output from the model indicating a particular action to perform. In some implementations, analysis circuit136can be configured to perform source testing on one or more networks. Source testing on one or more networks can include performing various test plans. During the source testing, various malfunctions and exceptions can be identified. Additionally, the network can be identified such that the testing occurs on a designated network (e.g., or multiple designated content networks). Memory134also includes content management circuit135. The content management circuit135may be configured to generate content for displaying to users and vendors. The content can be selected from among various resources (e.g., webpages, applications). The content management circuit135is also structured to provide content (e.g., via a graphical user interface (GUI)) to the user devices140and/or third party devices150), over the network120, for display within the resources. For example, in various arrangements, a claim dashboard or incident response dashboard may be integrated in a mobile application or computing application or provided via an Internet browser. The content from which the content management circuit135selects may be provided by the response system130via the network120to one or more user devices110and/or third party devices150. In such implementations, the content management circuit135may determine content to be generated and published in one or more content interfaces of resources (e.g., webpages, applications). The content management circuit135can be configured to interact with a database management system or data storage vault, where clients can obtain or store information. Clients can use queries in a formal query language, inter-process communication architecture, natural language or semantic queries to obtain data from the DBMS. 
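As a non-limiting illustration of the two-model arrangement described above, the following Python sketch chains a first model that predicts whether scan-derived features indicate an incident with a second model that maps a predicted incident to a remediation action. The threshold-based functions and feature names are hypothetical stand-ins for trained machine-learning models and do not represent the actual models used by the analysis circuit136.

# Hypothetical stand-ins for trained models; a real deployment might use a neural network or similar model.
def incident_model(features):
    """First model: predict an incident type from scan-derived features."""
    score = 0.7 * features["unpatched_services"] + 0.3 * features["expired_certificates"]
    return "ransomware_exposure" if score > 1.0 else None

def remediation_model(incident):
    """Second model: map a predicted incident to a remediation action."""
    playbook = {"ransomware_exposure": "isolate hosts and route to panel IR vendor"}
    return playbook.get(incident, "no action required")

features = {"unpatched_services": 2, "expired_certificates": 1}
incident = incident_model(features)
print(incident, "->", remediation_model(incident))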
In some implementations, one or more clients obtain data from the DBMS using queries in a custom query language such as a Visualization API Query Language. In some implementations, the content management circuit135can be configured to provide one or more customized dashboards (e.g., stored in database140) to one or more computing devices (e.g., user devices140, third party devices150) for presentation. That is, the provided customized dashboards (also referred to herein as “customized interface”) can execute and/or be displayed at the computing devices described herein. In some arrangements, the customized dashboards can be provided within a web browser or installed application. In some arrangements, the customized dashboards can include PDF files. In some arrangements, the customized dashboards can be provided via email. According to various arrangements, the customized dashboards can be provided on-demand or as part of push notifications. In various arrangements, the content management circuit135executes operations to provide the customized dashboards to the user devices140and third party devices150, without utilizing the web browser. In various arrangements, the customized dashboards can be provided within an application (e.g., mobile application, desktop application). The dashboard from which the content management circuit135generates may be provided to one or more users or entities, via the network120. In some arrangements, the content management circuit135may select dashboards and/or interfaces associated with the user or entity to be displayed on the user devices140or third party devices150. Additional details regarding the dashboards and the content presented are described in detail with reference toFIGS.7-21. In an example arrangement, an application executed by the user devices140and/or third party devices150can cause the web browser to display on a monitor or screen of the computing devices. For example, the user may connect (e.g., via the network120) to a website structured to host the customized dashboards. In various arrangements, hosting the customized dashboard can include infrastructure such as host devices (e.g., computing device) and a collection of files defining the customized dashboard and stored on the host devices (e.g., in a database). The web browser operates by receiving input of a uniform resource locator (URL) into a field from an input device (e.g., a pointing device, a keyboard, a touchscreen, mobile phone, or another form of input device). In response, the content management circuit135executing the web browser may request data such as from the database140. The web browser may include other functionalities, such as navigational controls (e.g., backward and forward buttons, home buttons, other navigational buttons or items). The content management circuit135may execute operations of the database140(or provide data from the database140to the user devices140, and/or third-party devices150for execution) to provide the customized dashboards at the user devices140and/or third-party devices150. In some arrangements, the content management circuit135can include both a client-side application and a server-side application. For example, a content management circuit135can be written in one or more general purpose programming languages and can be executed by user devices140and/or third-party devices150. 
The server-side content management circuit135can be written, for example, in one or more general purpose programming languages or a concurrent programming language, and can be executed by the response system130. The content management circuit135can be configured to generate a plurality of customized dashboards and their properties, such as those described in detail below relative to exampleFIGS.7-21. The content management circuit135can generate customized user-interactive dashboards for one or more users and entities, such as the client device110and third party devices150, based on data received, collected, and/or aggregated from the analysis circuit136, any other computing device described herein, and/or any database described herein (e.g.,140). The generated dashboards can include various data (e.g., data stored in database140and/or data sources160) associated with one or more entities including scheduling information, profile information, cybersecurity risks and/or vulnerabilities (e.g., malware, unpatched security vulnerabilities, expired certificates, hidden backdoor programs, super-user and/or admin account privileges, remote access policies, other policies and procedures, type and/or lack of encryption, type and/or lack of network segmentation, common injection and parameter manipulation, automated running of scripts, unknown security bugs in software or programming interfaces, social engineering, and IoT devices), insurer and vendor information (e.g., policies, contracts, products, services, underwriting, limitations), incident information, cyberattack information (e.g., phishing attacks, malware attacks, web attacks, and artificial intelligence (AI)-powered attacks), remediation items, remediation actions/executables, security reports, data analytics, graphs, charts, historical data, historical trends, vulnerabilities, summaries, help information, domain information, and/or subdomain information. As used herein, a "cyber-incident" may be any incident where a party (e.g., user, individual, institution, company) gains unauthorized access to perform unauthorized actions in a computer network environment. The database140can also include data structures for storing information such as system definitions for customized dashboards generated by content management circuit135, animated or other content items, actionable objects, graphical user interface data, and/or additional information. The analysis circuit136can be configured to determine organization incident readiness. Readiness is the process an organization follows to prepare for a cyber incident before it happens. This includes entering information that may be needed at the initiation of an incident by incident response teams and breach counsel. Readiness levels are calculated by binary completion of the n tasks that are included in that organization's readiness activities. An organization with 10 readiness steps and 5 completed shows as 50%. In some implementations, determining organization incident readiness can include integrating readiness (e.g., insurer data and other vendor data) into third party devices150. For example, the insurer data of a company's insurer can be recorded and stored at a third party device150. In various implementations, determining organization incident readiness can include the analysis circuit136determining organization capabilities, limitations, cyber threats, and specific focus associated with cyber threats.
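As a non-limiting worked example of the binary readiness calculation described above, the following Python sketch reports the share of completed readiness tasks; an organization with 10 readiness steps and 5 completed yields 50%. The task names are hypothetical.

def readiness_level(tasks):
    """Readiness as the percentage of readiness tasks marked complete."""
    if not tasks:
        return 0.0
    completed = sum(1 for done in tasks.values() if done)
    return 100.0 * completed / len(tasks)

# Ten hypothetical readiness steps, five of them complete.
tasks = {f"step_{i}": (i <= 5) for i in range(1, 11)}
print(readiness_level(tasks))  # 50.0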
Additionally, organization incident readiness can be provided to incident response providers (e.g., security providers, firmware providers, software providers, infrastructure providers). The analysis circuit136can also be configured to automatically route incidents and claims to vendors associated with a company or user (e.g., client device110) and, in turn, to contract and activate an incident response. In some implementations, a response plan can be submitted by a company and the analysis circuit136can decompose and analyze the response plan to determine actionable tasks and activities to complete (e.g., by the company or after contracting with a vendor). In various implementations, the determined organization incident readiness can be stored (e.g., by the analysis circuit136) as a block in a blockchain (or on a ledger) that can include metadata identifying the readiness including, but not limited to, a time stamp, proof of date, and artifacts. In various implementations, the data regarding key milestones (e.g., capabilities pre-incident, what happened after the incident occurred, root cause, recording) can be stored on a blockchain (e.g., such that it is immutable). In particular, key milestones can be traceable and linkable within a blockchain (or ledger) such that issues can be identified, actionable tasks can be tracked, work is routed to vendors (e.g.,150), and proof of resolution is recorded. In some implementations, database140can include a plurality of ledgers or blockchains and the database140can be a node of a plurality of nodes on a ledger or blockchain. It should be understood that the various data and information described herein can be implemented on a blockchain. For example, the blockchain can be used to provide for irrefutable proof in a data set of the data, locations, capabilities, and configurations that were in place prior to an incident. In another example, the block can be used to link the incident occurrence with what worked (e.g., effective in preventing an incident) and what did not work (e.g., vulnerability that led to the incident). For example, the irrefutable permanent ledgers (or blockchain) may be used by users at points in the process where they wish to record proofs on chain. This may include configurations, capabilities, assets, policies, threats, actors, claims, incident reports, cyber threat intelligence artifacts, and any other state-based attribute that needs to be recorded and may be shared with others to irrefutably prove that the state of that attribute was "x" at time "t". Combinations of attributes for different data, assets, configurations, and capabilities are collected and rolled up, through the use of Merkle trees, to show if any elements have changed; checking the top-most hash of the combination of downstream values provides a single checkpoint for determining whether the underlying elements, configurations, and combinations of parameters are the same or have changed (a brief sketch of this roll-up is provided below). In various implementations, the analysis circuit136can intake potential or current incidents based on an embedded widget on remote web sites or within remote web applications. This allows an incident response provider or vendor (sometimes referred to herein as "IR providers" or "IR vendors") the ability to seamlessly intake incident response requests for assistance from their web site or one of their sales channel partner sites and have it load directly into the incident intake process within responder.
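By way of illustration and not limitation, the following is a minimal sketch of the Merkle-tree roll-up described above, assuming SHA-256 hashing over simple byte-encoded attribute values; leaf ordering, encoding, and storage choices would be implementation-specific.

# Sketch of a Merkle-tree roll-up over capability/configuration values.
# Comparing only the top-most (root) hash reveals whether any downstream
# value has changed since the last recorded checkpoint.
import hashlib

def _h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves: list[bytes]) -> bytes:
    """Compute the root hash of a list of leaf values."""
    level = [_h(leaf) for leaf in leaves] or [_h(b"")]
    while len(level) > 1:
        if len(level) % 2:  # duplicate the last node on odd-sized levels
            level.append(level[-1])
        level = [_h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

# Hypothetical attribute states recorded at two points in time
attributes_t1 = [b"edr:enabled", b"mfa:enforced", b"backup:daily"]
attributes_t2 = [b"edr:enabled", b"mfa:disabled", b"backup:daily"]

changed = merkle_root(attributes_t1) != merkle_root(attributes_t2)
print("configuration changed:", changed)  # True: one leaf value differs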
In turn, an embedded widget could be communicably coupled to the analysis circuit136(e.g., via network120) to allow the analysis circuit to start an incident response process (e.g., at proposal stage) and continue through a workflow to achieve response readiness based on pre-defined logic or rules. This rule mechanism can allow the user to specify specific attributes, collections of attributes, order, and routing method for connecting inbound requests to those who are best-fit to execute on the requests. For example, an inbound instance of an incident response can be routed to a claim manager based on pre-defined logic or rules, such as routing inbound cases to the IR provider that is currently active, routing to the provider who specializes in ransomware extortion cases where the ransom exceeds 10 million, or distributing inbound cases round-robin among a set of panel IR providers, etc. In some implementations, the analysis circuit136can facilitate invoice processing within an incident response process across different insurers. Furthermore, throughout an incident response, conditions can be modified, added, or removed to route tasks (or work) to different vendors or partners (e.g.,150). In some implementations, the analysis circuit136can also be configured to collect incident submission data, normalize the data (e.g., based on historical data or trends), and automatically submit insurance claims based on the normalized data. Moreover, the analysis circuit136can connect the underlying root cause to the capability failure or procedural issue and have that data submitted with the insurance claim. For example, the analysis circuit136can connect the underlying root cause back to the insurer's underwriting questions. In various implementations, the analysis circuit136can integrate organization incident readiness into all related parties to a company. As such, the analysis circuit136can integrate incident response activation and collaboration across businesses, teams, insurers, etc. Further, the analysis circuit136can be configured to link the root cause of an incident to the capability failure or procedural issue and then link back to the insurer's underwriting questions. The content management circuit135can also be configured to enable a user (e.g., of a company) to purchase and extend services via the generated dashboards. In some implementations, the content management circuit135allows the user (e.g., via a step-through process) to integrate into technology and vendors to resolve issues (e.g., incidents) and/or prevent incidents in the future. For example, the dashboards can provide users integration points for incident response workflow management. As such, the content management circuit135can generate dashboards (and/or interfaces) on an application (e.g.,112or152) for purchasing products, insurances, and services. In particular, the generated dashboards can provide users of the application with a unified (or universal) view of tasks, activities, and progress/status tracking of incidents, claims, etc. The dashboards can also tie back to incident readiness and managing the incidents through resolution. The content management circuit135can also generate the dashboards to include collaboration tools (e.g., video calls, calendar, chats), and the dashboards can include real-time status tracking of policies, incidents, claims, and insurers such that policy-aligned tasks and status updates can be provided for incident responses and claims.
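By way of illustration and not limitation, the pre-defined routing logic described above (a ransomware specialist above a ransom threshold, the currently active provider, and a round-robin fallback) could be expressed as ordered rules; the provider names and case fields below are hypothetical and not part of the disclosed system.

# Sketch of rule-based routing of inbound incident response requests.
# Rules are evaluated in order; the first matching rule wins, with
# round-robin across a hypothetical panel as the fallback.
from itertools import cycle

panel = cycle(["ProviderA", "ProviderB", "ProviderC"])  # hypothetical IR panel

def route(case: dict) -> str:
    # Rule 1: ransomware extortion cases above $10M go to a specialist provider.
    if case.get("type") == "ransomware" and case.get("ransom_usd", 0) > 10_000_000:
        return "RansomwareSpecialistsLLC"
    # Rule 2: otherwise prefer the IR provider that is currently active on retainer.
    if case.get("active_provider"):
        return case["active_provider"]
    # Rule 3: fall back to round-robin across the panel.
    return next(panel)

print(route({"type": "ransomware", "ransom_usd": 12_000_000}))
print(route({"type": "bec", "active_provider": "ProviderB"}))
print(route({"type": "phishing"}))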
Referring now toFIG.1B, a block diagram depicting a more detailed architecture of certain systems or devices of system100is shown. System100includes the data acquisition engine180and response system130described in detail with reference toFIG.1A. However, it should be understood that the response system130also encompasses the capability to generate content and dashboards tailored for each aspect of the response process, including the response, adapter, and designer components. Such content and dashboards are generated by the content management circuit135and can be seen in various figures ranging fromFIGS.7-21. To illustrate further, the response system130enables the presentation of diverse information related to an organization's security and threats through the adapter dashboard and architecture. This facilitates a comprehensive understanding of the security landscape and helps inform decision-making processes. Additionally, the dashboard functionality can be customized by the vendor and/or organization using the designer dashboard and architecture. This empowers them to tailor the visual representation of data, making it more intuitive and aligned with their specific requirements. Furthermore, the responder dashboard and architecture provided by the response system130enable the vendor and/or organization to effectively prepare for, track, and update incidents and readiness. This comprehensive dashboard encompasses the entire incident response lifecycle, from the initial incident detection and response through to the final incident closure and claim submission. By leveraging the responder dashboard and architecture, the vendor and/or organization can ensure smooth incident management, streamline processes, and facilitate efficient collaboration among stakeholders. In the depicted architecture, both organizations and vendors operating the third party devices150or client devices110have the ability to store states162and indexes163within the library154(or library114). In some implementations, these states162and indexes163can be determined based on data derived from various datasets, including the organization dataset164, performance dataset165, and vendor dataset166. In some implementations, the organization dataset164encompasses a wide range of information such as firmographics, data related to locations, assets, and capabilities of the third-party or client organization. This dataset provides a comprehensive understanding of the organization's profile and resources. In some implementations, the performance dataset165includes diverse sets of data, including threat data, actor data, vector data, incident data, claim data, capability data, vendor data, organization data, and team member data. These performance-related datasets capture information for assessing the organization's security posture, incident history, and overall operational performance. They enable effective monitoring, analysis, and decision-making in incident response activities. In some implementations, the vendor dataset166contains information related to offerings (cybersecurity protection plans), terms, team member data, configuration data, configuration state data, pricing details, detection data, alert data, incident data, and intelligence data. This dataset enables organizations to gain insights into the capabilities and services provided by vendors, facilitating informed decision-making when selecting and collaborating with specific vendors.
In general, the states162and indexes163, derived from the datasets, are utilized as input by the data acquisition engine180(or analysis circuit136) to output a security posture. In some implementations, the data acquisition engine180is configured to scan and perform data collection based on accessing vendor embedded applications175, via ecosystem partner APIs174. This enables seamless integration with vendor systems, allowing for efficient retrieval and synchronization of relevant data. In the depicted architecture, the states162and indexes163improve the efficiency of the operations of the response system130. These states162and indexes163can be stored within the library154(or library114) and are determined based on data from various datasets, including the organization dataset164, performance dataset165, and vendor dataset166. In some implementations, the states162represent the current condition or status of the organization or vendor operating the third-party devices150or client devices110. They encapsulate information such as system configurations, security policies, incident response readiness, and other relevant parameters. By maintaining these states, the response system130can quickly access and reference the most up-to-date information about the organization's or vendor's environment. Additionally, in some implementations, the indexes163serve as pointers or references to specific data or resources within the library154(or library114). They streamline the retrieval and access of critical information, ensuring efficient data processing and analysis. These indexes are designed to optimize search operations and enable rapid access to relevant datasets, contributing to the overall responsiveness and effectiveness of the response system130. Accordingly, to ensure the accuracy and currency of the states162and indexes163, the data acquisition engine180can be configured to scan and collect data by interacting with the vendor embedded applications175. The communication can occur through ecosystem partner APIs174, establishing a connection between the response system130and the embedded applications175used by vendors. Through this communication, the data acquisition engine180can retrieve real-time (or near real-time) information from the vendor's systems, including offerings, configurations, alerts, incidents, and other relevant data. In some implementations, the engine180can utilize the retrieved data to update and synchronize the states162and indexes163, ensuring that the response system130has the latest and most accurate information to support incident response activities. Expanding further on the states162and indexes163, the data acquisition engine180can maintain the security posture of the organization. That is, by actively checking a vendor's API for any changes in the configuration "State," the data acquisition engine180ensures that the security posture remains up to date and aligned with the evolving environment. By recording these configuration updates to the corresponding index, the data acquisition engine180and response system130establish a view of the organization's security landscape. This approach goes beyond static assessments and provides a dynamic and real-time perspective on the organization's security posture. By linking the configuration data with real incident data and other relevant metadata, the response system130enhances the accuracy and actionability of the match, enabling quick and effective response to potential threats.
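By way of illustration and not limitation, the following sketch shows one way the state/index synchronization described above could be performed, assuming a vendor exposes a configuration-state endpoint; the endpoint path, authentication scheme, field names, and response schema are hypothetical.

# Sketch of state/index synchronization: a vendor API is polled for configuration
# "State" changes, and each detected change is recorded against the corresponding index.
import time
import requests

def sync_vendor_state(api_base: str, token: str, states: dict, indexes: dict) -> None:
    resp = requests.get(
        f"{api_base}/v1/configuration/state",          # hypothetical endpoint
        headers={"Authorization": f"Bearer {token}"},
        timeout=10,
    )
    resp.raise_for_status()
    current = resp.json()  # e.g., {"edr": {"enabled": True}, "mfa": {"enforced": False}}

    for capability, config in current.items():
        if states.get(capability) != config:
            states[capability] = config                  # update the state (162)
            indexes.setdefault(capability, []).append({  # append to the index (163)
                "recorded_at": time.time(),
                "config": config,
            })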
In various arrangements, this continuous monitoring and adaptation of the security posture over time is provided and/or presented in a posture stream (as shown with reference toFIG.18A), which captures and analyzes the evolving information. As new data points are gathered and recorded in the posture stream, the response system130can execute proactive incident response activities. As used herein, a "security posture" refers to the current state and overall cybersecurity risk profile of an organization or vendor. It is determined based on various factors and information collected from entity data, including system configurations, security policies, incident response readiness, and other relevant parameters. In some arrangements, the data acquisition engine180(or analysis circuit136) scans and collects data from vendor embedded applications through ecosystem partner APIs, ensuring the accuracy and currency of the states and indexes used to represent the security posture. In various arrangements, the analysis circuit136utilizes a distributed ledger to tokenize and broadcast the security posture, ensuring transparency and immutability. The analysis circuit136can also be configured to model the security posture and multiple security objectives to generate a set of cybersecurity attributes specific to the entity. Furthermore, the data acquisition engine180is shown to gather data from blockchain170(e.g., ledgers storing various immutable information about entities, vendors, and corporations) via code168and smart contracts169that are executed by logic handling167(e.g., of the data acquisition engine180). In some implementations, data acquisition engine180can communicate with response system130directly (e.g., via a wired or hard-wired connection) or via APIs171. To enable user access and interaction with the dashboards and content generated by the response system130, user access172is provided. Users, including organizations, vendors, and entities, can access the dashboards and content through dedicated applications such as application112or application152. These applications can be accessed through user devices, such as client device110, or through third-party devices150. That is, user access172to the dashboards and content can be provided to users (e.g., organizations, vendors, entities) via an application (e.g.,112or152) on a user device (e.g.,110) and/or third party device150. It is important to note that the depicted systems and devices are not exhaustive, and additional, fewer, or different systems and devices may be employed depending on specific implementation requirements. The architecture can be tailored to suit the unique needs of organizations, vendors, and entities, allowing for flexibility and customization in the deployment of the response system130. In addition to gathering data from the blockchain170, the response system130can establish a communication channel with the blockchain170. This communication enables the response system130to interact with the blockchain170in a secure and decentralized manner. By directly accessing the blockchain170, the response system130can leverage its inherent properties of immutability, transparency, and distributed consensus to enhance the integrity and reliability of incident-related data and information.
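By way of illustration and not limitation, the following sketch shows how a posture snapshot could be reduced to a proof suitable for recording on a ledger, assuming deterministic serialization and SHA-256 hashing; the snapshot fields are hypothetical, and a local list stands in for the distributed ledger or blockchain170.

# Sketch of recording a posture snapshot as an on-chain proof: the snapshot is
# serialized deterministically, hashed, and the hash plus a timestamp is appended
# to an append-only ledger (a local list used here as a stand-in).
import hashlib, json, time

ledger = []  # stand-in for a distributed ledger / blockchain 170

def record_posture_proof(snapshot: dict) -> str:
    payload = json.dumps(snapshot, sort_keys=True).encode()
    digest = hashlib.sha256(payload).hexdigest()
    ledger.append({"hash": digest, "timestamp": time.time()})
    return digest

proof = record_posture_proof({
    "entity": "ExampleCo",        # hypothetical entity name
    "mfa_enforced": True,
    "edr_deployed": True,
    "open_readiness_gaps": 2,
})
print("proof recorded:", proof)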
Accordingly, the response system130can use blockchain170to record and verify incident details, maintain an auditable trail of actions and transactions, and ensure the integrity of information throughout the incident response process. It will be recognized that some or all of the figures are schematic representations for purposes of illustration. The figures are provided for the purpose of illustrating one or more embodiments with the explicit understanding that they will not be used to limit the scope or the meaning of the claims. Referring now toFIG.2, a depiction of a computer system200is shown. The computer system200can be used, for example, to implement the system100, incident response system130, client devices110, third party devices150, data sources160, and/or various other example systems described in the present disclosure. The computing system200includes a bus205or other communication component for communicating information and a processor210coupled to the bus205for processing information. The computing system200also includes main memory215, such as a random-access memory (RAM) or other dynamic storage device, coupled to the bus205for storing information, and instructions to be executed by the processor210. Main memory215can also be used for storing position information, temporary variables, or other intermediate information during execution of instructions by the processor210. The computing system200may further include a read only memory (ROM)220or other static storage device coupled to the bus205for storing static information and instructions for the processor210. A storage device225, such as a solid-state device, magnetic disk or optical disk, is coupled to the bus205for persistently storing information and instructions. The computing system200may be coupled via the bus205to a display235, such as a liquid crystal display, or active matrix display, for displaying information to a user. An input device230, such as a keyboard including alphanumeric and other keys, may be coupled to the bus205for communicating information, and command selections to the processor210. In another arrangement, the input device230has a touch screen display235. The input device230can include any type of biometric sensor, a cursor control, such as a mouse, a trackball, or cursor direction keys, for communicating direction information and command selections to the processor210and for controlling cursor movement on the display235. In some arrangements, the computing system200may include a communications adapter240, such as a networking adapter. Communications adapter240may be coupled to bus205and may be configured to enable communications with a computing or communications network120and/or other computing systems. In various illustrative arrangements, any type of networking configuration may be achieved using communications adapter240, such as wired (e.g., via Ethernet), wireless (e.g., via Wi-Fi, Bluetooth), satellite (e.g., via GPS), pre-configured, ad-hoc, LAN, or WAN. According to various arrangements, the processes that effectuate illustrative arrangements that are described herein can be achieved by the computing system200in response to the processor210executing an arrangement of instructions contained in main memory215. Such instructions can be read into main memory215from another computer-readable medium, such as the storage device225. Execution of the arrangement of instructions contained in main memory215causes the computing system200to perform the illustrative processes described herein.
One or more processors in a multi-processing arrangement may also be employed to execute the instructions contained in main memory215. In alternative arrangements, hard-wired circuitry may be used in place of or in combination with software instructions to implement illustrative arrangements. Thus, arrangements are not limited to any specific combination of hardware circuitry and software. Referring now toFIG.3, the data acquisition engine180and analysis circuit136of the response system130, as depicted inFIG.1, are shown in an architecture that facilitates efficient data acquisition and analysis. In some implementations, a user dataset142, containing diverse data associated with different entities and users, can be securely stored in the database140. The systems and devices illustrated inFIG.3communicate and exchange information over the network120, which enables seamless integration and collaboration among the components. The data acquisition engine180encompasses various components designed to support the execution of applications112and152. These components include, but are not limited to, the platform application infrastructure302, platform application code304, platform application APIs306, and platform application datasets and indexes308. Together, these elements form the foundation of the data acquisition engine180, providing the structures and resources to ensure the efficient functioning of the applications. Additionally, integration APIs310and blockchain APIs312are integrated into the data acquisition engine180, enabling seamless execution of API requests, data retrieval from blockchains, access to data sources160, and integration with various vendors and third parties for streamlined data exchange. These integration APIs310facilitate the secure and reliable flow of information, ensuring the responsiveness and effectiveness of the data acquisition process. The analysis circuit136is shown to include, but is not limited to, a security stack designer and composition (SSDC) system137, an incident response collaboration (IRC) system138, and a security program orchestration (SPO) system139. For example, the SSDC system137walks users through identifying what data and computations are most important, where the data resides, and what vendor product, service, and procedural capabilities are in place to prevent/detect/respond to cyber-attacks, and, based on these visualized gaps, determining what to prioritize. The analysis circuit136includes several components that improve the capabilities of the response system130. One of these components is the security stack designer and composition (SSDC) system137, which is configured to guide users through the process of identifying and addressing potential vulnerabilities and gaps in their security infrastructure. In some implementations, the SSDC system137provides users with a systematic approach to evaluate the significance of their data and computational processes, determining their criticality in the context of cybersecurity. By utilizing the SSDC system137, users can gain insights into the specific locations where their data is stored and processed, allowing for a comprehensive understanding of potential security risks. In general, the SSDC system137employs various techniques to identify specific locations where data is stored and processed within an organization's infrastructure.
In particular, the SSDC system137leverages data mapping and inventory techniques to identify data repositories, databases, file systems, and other storage systems where data is stored. For example, the SSDC system137can analyze network traffic and data flows within an organization's network to identify sources and destinations of data. By monitoring network communication and analyzing data packets, the SSDC system137can trace the path of data transmission and determine the endpoints where data is stored or processed. Additionally, the SSDC system137can utilize data discovery and scanning mechanisms (e.g., using data acquisition engine180) to identify data repositories within an organization's infrastructure. This may involve scanning file systems, databases, cloud storage, and other data repositories to identify the locations where sensitive or critical data resides. In some implementations, the SSDC system137can integrate with data classification tools or metadata repositories (e.g., data sources160) to gather information about the nature and sensitivity of the data. By understanding the characteristics and classification of data, the SSDC system137can identify the specific locations where sensitive data is stored or processed. By combining these techniques, the SSDC system137can provide organizations with a comprehensive view of the locations where data is stored and processed. It enables organizations to understand the data flow across their infrastructure and gain insights into the potential security risks associated with specific data storage and processing environments. For example, consider an organization that utilizes both on-premises servers and cloud storage for data storage. The SSDC system137can perform an analysis of the organization's network and infrastructure, monitoring data flows between different systems. It would identify the on-premises servers, databases, and file systems where certain data is stored. Additionally, it would detect the cloud storage providers and specific cloud repositories where data is stored. By mapping out these locations, the SSDC system137provides the organization with a clear understanding of the data storage landscape and enables them to apply appropriate security measures to protect the data in each location. In some implementations, the SSDC system137facilitates an assessment of the existing vendor products, services, and procedural capabilities that are currently in place to prevent, detect, and respond to cyber-attacks. This evaluation enables users to identify any gaps or areas of improvement in their security stack. Through visualizations and analysis, the SSDC system137helps users prioritize their security measures based on identified gaps and vulnerabilities. By highlighting areas that require attention, the SSDC system137empowers organizations to allocate their resources effectively and take proactive steps to enhance their overall security posture. Moreover, the SSDC system137is designed to be dynamic and adaptable, accommodating the ever-evolving threat landscape and the changing needs of organizations. It provides a user-friendly interface that simplifies the complex task of security stack design and composition, making it accessible to users with varying levels of technical expertise.
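By way of illustration and not limitation, a simple file-system discovery pass of the kind described above could be sketched as follows; the storage roots and filename patterns are hypothetical, and a full implementation would also cover databases, cloud repositories, and network data flows.

# Sketch of a simple data-discovery pass: walk configured storage roots and map
# where files that look sensitive reside, producing a per-root inventory.
import re
from pathlib import Path

SENSITIVE_PATTERNS = [re.compile(p) for p in (r"ssn", r"invoice", r"payroll", r"patient")]

def map_data_locations(roots: list[str]) -> dict[str, list[str]]:
    locations: dict[str, list[str]] = {}
    for root in roots:
        base = Path(root)
        if not base.is_dir():          # skip roots that are not mounted/available
            continue
        for path in base.rglob("*"):
            if path.is_file() and any(p.search(path.name.lower()) for p in SENSITIVE_PATTERNS):
                locations.setdefault(root, []).append(str(path))
    return locations

# Example: inventory two hypothetical storage roots
inventory = map_data_locations(["/srv/fileshare", "/mnt/backups"])
for root, files in inventory.items():
    print(root, "->", len(files), "potentially sensitive files")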
In some implementations, the IRC system138can be configured to collect, aggregate, and generate data and data structures that can be presented via applications112and152, and can be configured to determine levels of importance related to matters pre-incident and to pre-associate internal incident team members, cyber insurers, breach counsel, incident response firms, and security vendors to reduce the time it takes to activate and triage live incidents in the future. By leveraging the capabilities of the IRC system138, organizations can efficiently manage incidents, reduce response times, and ensure collaboration among various stakeholders. In some implementations, the IRC system138can collect and aggregate relevant data. This can include gathering information from various sources such as incident reports, security logs, system alerts, and user-generated data. The IRC system138employs data collection mechanisms to capture and centralize this information, ensuring that incident responders have a comprehensive and consolidated view of the incident landscape. The term "incident landscape" refers to the overall environment and context in which incidents occur within an organization's systems and networks. It encompasses the various factors, elements, and conditions that shape the occurrence and impact of security incidents. The incident landscape includes aspects such as the organization's infrastructure, network architecture, data assets, applications, user activities, potential vulnerabilities, and threat landscape. Understanding the incident landscape is important for incident responders as it allows them to gain insights into the organization's unique security challenges, identify potential attack vectors, assess risks, and develop effective incident response strategies. By comprehensively mapping and analyzing the incident landscape using the IRC system138, organizations can proactively strengthen their defenses, detect and respond to incidents promptly, and minimize the impact of security breaches. In some implementations, the IRC system138can generate data structures that facilitate the organization and presentation of incident-related information. These data structures enable the categorization, classification, and correlation of incident data, making it easier for incident responders to analyze and make informed decisions. The IRC system138can employ various techniques such as data modeling, schema design, and indexing to create efficient and structured data representations. By leveraging the data and data structures generated by the IRC system138, organizations can determine the level of importance related to pre-incident matters. This involves assessing the potential impact and severity of different incident scenarios, identifying critical assets and systems, and evaluating the potential risks and vulnerabilities. This information helps organizations prioritize their incident response efforts, allocating appropriate resources and attention to high-priority incidents. In some implementations, the IRC system138also enables pre-association of internal incident team members, cyber insurers, breach counsel, incident response firms, and security vendors. By establishing these pre-associations, organizations can expedite the activation and triaging of live incidents in the future. The IRC system138can maintain a database of trusted contacts and partners, allowing incident responders to quickly engage the necessary expertise and support when responding to incidents.
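By way of illustration and not limitation, the following sketch shows one possible structured incident record and a simple correlation step of the kind described above, grouping alerts by affected asset so responders see a single consolidated view; the alert sources, asset names, and severity scale are hypothetical.

# Sketch of a structured incident record with simple correlation: alerts from
# different sources are normalized and grouped by affected asset, then ordered
# by severity for triage.
from dataclasses import dataclass, field

@dataclass
class Alert:
    source: str        # e.g., "edr", "siem", "user_report" (hypothetical sources)
    asset: str
    vector: str
    severity: int      # 1 (low) .. 5 (critical)

@dataclass
class Incident:
    asset: str
    alerts: list = field(default_factory=list)

    @property
    def severity(self) -> int:
        return max(a.severity for a in self.alerts)

def correlate(alerts: list[Alert]) -> list[Incident]:
    by_asset: dict[str, Incident] = {}
    for a in alerts:
        by_asset.setdefault(a.asset, Incident(asset=a.asset)).alerts.append(a)
    # Highest-severity incidents first, for triage ordering.
    return sorted(by_asset.values(), key=lambda i: i.severity, reverse=True)

incidents = correlate([
    Alert("edr", "mail-server-01", "malware", 4),
    Alert("siem", "mail-server-01", "lateral-movement", 5),
    Alert("user_report", "laptop-117", "phishing", 2),
])
print([(i.asset, i.severity, len(i.alerts)) for i in incidents])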
Such pre-association reduces response times and enhances the overall efficiency of incident handling. Moreover, the IRC system138facilitates seamless collaboration among various stakeholders involved in incident response. It provides a unified platform where team members can share information, communicate, and coordinate their efforts. The IRC system138may include features such as real-time messaging, task assignment, document sharing, and incident status tracking, enabling effective collaboration and ensuring that all stakeholders are aligned and working towards a common goal. The security program orchestration (SPO) system139can be configured to manage and adapt an organization's security program to address changes in the security posture and cyber threats. In some implementations, it operates by receiving inputs that indicate the changing state of the security posture, which can come from various sources such as technical indicators or human-assisted inputs through APIs or social media sharing. These inputs provide valuable information about emerging threats, vulnerabilities, or changes in the organization's security landscape. Once the SPO system139receives these inputs, it analyzes and evaluates the information to determine the necessary adjustments and changes required in the security program. This involves identifying specific areas or aspects of the security program that need to be modified, such as updating security policies, configurations, access controls, or implementing additional security measures. The orchestration aspect of the SPO system139coordinates and manages the implementation of these changes across the organization's various vendor tools and configurations. It ensures that the necessary modifications are applied consistently and effectively across different security systems and technologies, minimizing any potential gaps or inconsistencies that could compromise the organization's overall security posture. Furthermore, the SPO system139can be configured to automate and streamline the process of implementing security program changes, reducing the manual effort and potential errors associated with manual intervention. It can leverage automation capabilities to efficiently propagate the required changes to the appropriate security tools, configurations, and policies, ensuring that the organization's security program remains up-to-date and aligned with the evolving threat landscape. Referring to the interplay of the analysis circuit136generally, the SSDC system137designs and composes the security stack. It guides users through the process of identifying critical data, determining its storage locations, and understanding the necessary vendor products, services, and procedural capabilities to protect against, detect, and respond to cyber-attacks. By visualizing the existing gaps and vulnerabilities, the SSDC system137helps organizations prioritize their security efforts and make informed decisions to strengthen their security posture. The IRC system138focuses on collaboration and information sharing during incident response. It collects, aggregates, and generates data to be presented via applications112and152. This system facilitates efficient and effective communication among internal incident team members, cyber insurers, breach counsel, incident response firms, and security vendors.
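By way of illustration and not limitation, the change-propagation behavior of the SPO system139described above could be sketched as a mapping from posture-change inputs to configuration updates pushed to each affected vendor tool; the event names, tool names, settings, and mappings below are hypothetical.

# Sketch of security program orchestration: a posture-change input is mapped to
# concrete configuration updates and pushed consistently to each registered tool.

CHANGE_MAP = {
    "new_phishing_campaign": [
        {"tool": "email_gateway", "setting": "attachment_sandboxing", "value": "strict"},
        {"tool": "idp", "setting": "mfa_challenge_frequency", "value": "per_session"},
    ],
    "credential_leak_detected": [
        {"tool": "idp", "setting": "force_password_reset", "value": True},
    ],
}

def push_update(tool: str, setting: str, value) -> None:
    # Placeholder for a vendor API call (e.g., via ecosystem partner APIs 174).
    print(f"[{tool}] set {setting} = {value}")

def orchestrate(posture_event: str) -> None:
    for change in CHANGE_MAP.get(posture_event, []):
        push_update(change["tool"], change["setting"], change["value"])

orchestrate("new_phishing_campaign")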
By pre-associating relevant parties and establishing clear lines of communication, the IRC system138reduces the time it takes to activate and triage live incidents in the future, leading to improved incident response capabilities. The SPO system139, on the other hand, plays a crucial role in managing the organization's security program. It receives inputs indicating changes in the security posture or emerging cyber threats, whether through technical indicators or human-assisted inputs. Leveraging these inputs, the SPO system139determines the adjustments required in the security program and orchestrates the implementation of those changes across the organization's various vendor tools and configurations. This ensures that the security program remains up-to-date and aligned with the evolving threat landscape, enhancing the organization's overall security resilience. Accordingly, together, these three systems create a powerful synergy within the organization's security ecosystem. The SSDC system137helps design a robust security infrastructure, the IRC system138enables efficient collaboration and information sharing during incident response, and the SPO system139ensures the agility and adaptability of the organization's security program. By working in tandem, these systems contribute to a proactive and comprehensive approach to security, empowering organizations to mitigate risks, respond effectively to incidents, and continuously improve their security posture in a rapidly evolving threat landscape. Referring now toFIGS.4A-4B, a method400for incident response preparedness and readiness through the final incident closure and claim submission is shown. Response system130(e.g., in particular analysis circuit136) or third party device150can be configured to perform method400. Further, any computing device or system described herein can be configured to perform method400. Additionally, all the functions and features described in method400can be performed on an application (described in greater detail with reference toFIGS.7-21). The data acquisition engine180can communicate using APIs with the response system130. In broad overview of the incident response process (i.e., method400), the analysis circuit136can implement method400. The analysis circuit can include various computing systems, such as readiness system402, incident system404, cybersecurity connection system406, claim handling system408, and remediation system410, each of which can be configured to implement steps within an incident response process. In particular,FIG.4Bshows exemplary activities or tasks performed in each of the steps shown inFIG.4A. Throughout the steps and activities, data and data structures can be utilized (e.g., aggregated, collected, or generated), including data of business users412, vendors414, and insurers416. APIs171and API requests and returns can be sent and received by the one or more processing circuits to perform method400. Additional, fewer, or different operations may be performed depending on the particular arrangement. In some arrangements, some or all operations of method400may be performed by one or more processors executing on one or more computing devices, systems, or servers. In various arrangements, each operation may be re-ordered, added, removed, or repeated. Referring to method400in more detail, the analysis circuit136can execute a readiness step by readiness system402, where a readiness analysis is executed.
In some implementations, during the readiness step by readiness system402, the analysis circuit136can perform response readiness418and readiness review420. During the response readiness418, the analysis circuit136evaluates the organization's level of preparedness to effectively respond to incidents. It assesses various factors such as the availability of incident response teams, the adequacy of incident response plans and procedures, the integration of incident response tools and technologies, and the establishment of communication channels and protocols. This evaluation helps identify any gaps or deficiencies in the organization's response capabilities, enabling appropriate measures to be taken to address them. Simultaneously (or in a logical order), the readiness review420conducted by the analysis circuit136involves a thorough examination of the organization's overall readiness for incident response. It encompasses a comprehensive review of the organization's incident response framework, including its policies, procedures, documentation, and training programs. The analysis circuit136examines whether the organization's incident response framework aligns with industry best practices, regulatory requirements, and internal objectives. It also assesses the organization's ability to effectively coordinate and collaborate with external stakeholders, such as incident response providers, cyber insurers, breach counsel, and other relevant parties. In some arrangements, the readiness system402is configured to access the entity data of an organization and utilize this information to determine the organization's security posture. The readiness system402can take into account various parameters such as the entity's cybersecurity policies, system configurations, incident response readiness, and others. It can then model the security posture along with a plurality of security objectives of the organization to generate a set of cybersecurity attributes. The analysis circuit136can also execute an incident step by incident system404, where an incident analysis is executed. In some implementations, during the incident step by incident system404, the analysis circuit136can perform incident response activation422and incident response management424. During the incident response activation422, the analysis circuit136triggers the actions to initiate the incident response process. It activates the predefined incident response plans, procedures, and resources to ensure a swift and coordinated response. This includes notifying the incident response team, engaging relevant stakeholders and vendors, and initiating communication channels to exchange critical information. In some arrangements, the incident system is configured to maintain the relationship between the entity and third-party cybersecurity providers. That is, it is configured to model a plurality of cybersecurity protection plans between the entity and a third-party. In particular, it provides a framework for integrating third-party cybersecurity solutions into the entity's systems, ensuring that these solutions align with the entity's security objectives and can effectively address its cybersecurity needs. Simultaneously (or in a logical order), the analysis circuit136executes incident response management424, which involves the ongoing coordination, monitoring, and control of the incident response activities. 
For example, it ensures that the incident response team follows the established procedures, communicates effectively, and collaborates seamlessly to address the incident. The analysis circuit136provides real-time insights and updates on the incident's status, facilitates information sharing between team members, and tracks the progress of incident containment, eradication, and recovery efforts. By effectively managing the incident response, the analysis circuit136helps minimize the impact of the incident and accelerates the return to normal operations. By performing the incident response activation422, it initiates a rapid and coordinated response, while the incident response management424ensures effective coordination and control throughout the incident response process. This incident analysis and response approach facilitated by the analysis circuit136allows organizations to mitigate the impact of incidents, minimize downtime, and protect their critical assets and operations. The analysis circuit136can also execute a proposal/quote/contract step by cybersecurity connection system406, where a proposal/quote/contract generation is executed. In some implementations, during the proposal/quote/contract step by cybersecurity connection system406, the analysis circuit136can perform invoice management426. During the proposal/quote/contract step by cybersecurity connection system406, the analysis circuit136leverages its capabilities to generate comprehensive and accurate proposals, quotes, and contracts. It takes into account the specific requirements, parameters, and preferences of the involved parties, ensuring that the proposed terms align with their respective needs. The analysis circuit136utilizes relevant data, such as pricing information, service level agreements (SLAs), and contractual obligations, to generate customized proposals and quotes. In some implementations, within the proposal/quote/contract step by cybersecurity connection system406, the analysis circuit136incorporates invoice management426functionality. This feature enables the efficient handling and tracking of invoices related to the proposed services or products. The analysis circuit136ensures that accurate and timely invoices are generated, shared, and managed throughout the invoicing process. It may include features such as invoice creation, validation, tracking, and payment processing, streamlining the financial aspect of the proposal/quote/contract lifecycle. In some arrangements, the cybersecurity connection system406can be configured to determine and provide (i.e., connect) a cybersecurity protection plan, utilizing one or more protection parameters. The plans can correspond to a new cybersecurity attribute that has been identified as necessary to protect the organization. The cybersecurity connection system406makes this protection plan available to the entity, which can then choose to activate it based on its specific needs and acceptance of the plan's terms. The analysis circuit136can also execute a claims step by claim handling system408, where claims are generated and tracked. In some implementations, during the claims step by claim handling system408, the analysis circuit136can perform proof of readiness429, provide an application430(e.g., application112and152), generate and provide questionnaires428, and perform claim management427.
In some implementations, proof of readiness429involves gathering and presenting evidence to substantiate the readiness of the organization in handling incidents and responding effectively. The analysis circuit136collects relevant data, such as incident response plans, documentation, training records, and compliance certifications, to demonstrate the organization's preparedness. Additionally, the analysis circuit136provides an application430, such as application112and152, to facilitate the claims process. This application serves as a centralized platform where users can access and submit their claims. It streamlines the entire claims workflow, enabling efficient communication, documentation, and tracking of the claims from initiation to resolution. In some arrangements, the claim handling system408is configured to monitor the environmental data of the entity while modeling at least one of the plurality of cybersecurity protection plans. That is, the claim handling system408monitors for any anomalies or signs of potential cybersecurity incidents in the entity's environment. When it detects a new cybersecurity incident associated with the entity from the environmental data, it generates a report, enabling the entity or vendor to promptly respond to the incident and prevent further damage. In some implementations, as part of the claims step by claim handling system408, the analysis circuit136also generates and provides questionnaires428. These questionnaires are designed to gather specific information related to the incident or the claim being submitted. They serve as a structured means to collect relevant details and documentation that are necessary for claim evaluation and processing. Moreover, the analysis circuit136encompasses claim management427functionalities during the claims step by claim handling system408. This includes activities such as claim validation, documentation management, claim status tracking, and communication with involved parties. The analysis circuit136ensures that claims are effectively managed, providing transparency and visibility into the progress and status of each claim. The analysis circuit136can also execute a remediation step by remediation system410, where remediations are executed. In some implementations, during the remediation step by remediation system410, the analysis circuit136can perform remediation tasks432, open readiness issues and gaps434, and execute underwriting436(e.g., of organizations to determine what type of vendor plans, products, or services they may qualify for). The execution of remediation tasks432includes implementing specific actions or measures to mitigate vulnerabilities, resolve security gaps, and address any identified weaknesses in the organization's security infrastructure. The analysis circuit136can provide guidance and instructions to stakeholders, outlining the necessary steps to remediate the identified issues effectively. In some arrangements, the remediation system410is configured to execute one or more remediation actions to mitigate a vulnerability or security gap. It bases its actions on the security posture of the entity. If a vulnerability is detected or a security gap is identified, the remediation system410executes to address the issue, employing a range of remediation actions such as patching software, modifying system configurations, or enhancing security policies. Additionally, the analysis circuit136facilitates the process of opening readiness issues and gaps434. 
It identifies areas where the organization may have shortcomings or deficiencies in its preparedness for potential incidents or security threats. By highlighting these gaps, the analysis circuit136helps organizations prioritize and allocate resources to address the identified issues and enhance their overall readiness posture. Moreover, the analysis circuit136can execute underwriting436, which involves evaluating organizations to determine the type of vendor plans, products, or services they may qualify for. Through a comprehensive assessment, the analysis circuit136analyzes various factors, such as the organization's security measures, incident response capabilities, risk management practices, and compliance with industry standards. Based on the evaluation, the analysis circuit136provides insights and recommendations on suitable vendor offerings that align with the organization's specific requirements and level of readiness. In some arrangements, the readiness system402is configured to continuously update the security posture of the entity. It does this by monitoring dynamic changes in the entity data, which can involve alterations in system configurations, updates to security policies, new cyber threats, and shifts in the cyber risk landscape. This continuous updating of the security posture ensures that the organization's security status always reflects the most current conditions. It enables the analysis circuit136to react to emerging threats or vulnerabilities, providing real-time protection for the entity's data and systems. In some arrangements, the readiness system402can also be configured to tokenize and broadcast the security posture to a distributed ledger. This process involves converting the security posture into a format suitable for recording on a blockchain (e.g., a type of distributed ledger). It then broadcasts this tokenized data across the network of computers that maintain the ledger. Additionally, the readiness system402provides a public address of the tokenized updated security posture on the distributed ledger. This public address can be accessed by a plurality of third-parties for verification. This transparent and immutable record-keeping enhances trust among stakeholders and provides a verifiable proof of the entity's security posture. In some arrangements, the readiness system402is further configured to generate a security roadmap. This roadmap includes a plurality of phases associated with the modeling of the set of cybersecurity attributes. Each cybersecurity attribute of the set is assigned a phase associated with the security roadmap of the entity. For example, the roadmap serves as a strategic plan that outlines the steps the entity needs to (or should) take to enhance its security posture. It provides a clear pathway to achieving the entity's security objectives, ensuring that efforts are well-coordinated and resources are optimally utilized. By assigning each cybersecurity attribute to a phase of the roadmap, the readiness system402ensures that each aspect of the entity's security is appropriately addressed. In some arrangements, the cybersecurity connection system406can create and set in motion a cybersecurity protection obligation, in provide plans425, between the entity and the third-party upon receiving an activation of the cybersecurity protection plan. The cybersecurity protection obligation can be a binding agreement or contract that outlines the responsibilities and roles of the entity and the third-party in securing the entity's systems and data. 
This protection obligation is characterized by several protection attributes, which may involve various elements such as the scope of protection, the duration of the contract, the specific cybersecurity services to be provided, the response time in the event of a security incident, and the terms of service termination or renewal. Moreover, the cybersecurity connection system406can identify multiple cybersecurity protection plans (e.g., at provide plans425) associated with various third-parties. These could include a wide array of cybersecurity service providers, each offering distinct protection plans. For instance, a first cybersecurity protection plan could be offered by a first third-party, while a second cybersecurity protection plan could be offered by a different third-party. Each of these protection plans can be associated with the new cybersecurity attribute identified during the modeling process, indicating that they are specifically designed to address this aspect of the entity's cybersecurity needs. In some arrangements, each cybersecurity protection plan, in turn, is associated with one of several availability states. These states provide an immediate understanding of the plan's status regarding its accessibility for the entity. The “available now” state means that the plan is currently accessible for implementation. The “available pending” state signifies that the plan will become accessible in the future, perhaps subject to certain conditions or the passing of a certain period. Conversely, an “unavailable” state denotes that the plan is not currently accessible, possibly due to it being phased out, fully subscribed, or not being offered in the entity's region. Additional or fewer states can be added. This system of availability states allows the entity to quickly determine which plans are viable options for enhancing their cybersecurity posture. In some arrangements, the incident system404can establish a data (e.g., continuous, in real-time, periodically) monitoring channel between the entity and the third-party. This communication stream allows for real-time (or near real-time) detection and response to any potential cybersecurity incidents. To achieve this, a first communication connection is established using a first application programming interface (API) between the entity's computing system (e.g.,110) or one or more entity assets (e.g.,110) and the incident system404. This connection allows the incident system404to continuously monitor the entity's systems and data for any signs of a cybersecurity incident. Simultaneously, a second communication connection is established using a second API between a third-party computing system (e.g.,150) and the incident system404. This connection enables the third-party, often a cybersecurity service provider, to also monitor the entity's systems and data, providing an additional layer of protection and vigilance. Moreover, the claim handling system408can be configured to quickly respond to any detected cybersecurity incidents. Upon detection of a new cybersecurity incident, the claim handling system408generates alerts and provides a real-time dashboard for the entity and vendor. This dashboard provides an overview of the entity's cybersecurity posture, details of the detected incident, recommended response actions, and updates on the response process. This real-time information allows the entity to rapidly understand and react to the cybersecurity incident, minimizing potential damage and downtime. 
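By way of illustration and not limitation, the availability states described above ("available now," "available pending," and "unavailable") and their association with protection plans could be represented as follows; the provider names and plan attributes are hypothetical.

# Sketch of the availability-state model for cybersecurity protection plans.
from dataclasses import dataclass
from enum import Enum

class Availability(Enum):
    AVAILABLE_NOW = "available now"
    AVAILABLE_PENDING = "available pending"
    UNAVAILABLE = "unavailable"

@dataclass
class ProtectionPlan:
    provider: str
    attribute: str          # the cybersecurity attribute the plan addresses
    availability: Availability

plans = [
    ProtectionPlan("VendorA", "managed_detection_response", Availability.AVAILABLE_NOW),
    ProtectionPlan("VendorB", "managed_detection_response", Availability.AVAILABLE_PENDING),
    ProtectionPlan("VendorC", "managed_detection_response", Availability.UNAVAILABLE),
]

# Plans the entity can activate immediately for the identified attribute.
viable = [p for p in plans if p.availability is Availability.AVAILABLE_NOW]
print([p.provider for p in viable])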
In some arrangements, the remediation system410can use predictive analytics to identify potential security gaps before they can be exploited. It analyzes patterns in the entity's data and behaviors, as well as trends and threats in the broader cybersecurity landscape, to predict where vulnerabilities might arise. Upon identifying a potential security gap, the remediation system410proactively executes one or more remediation actions. These actions could involve updating security policies, patching software vulnerabilities, reconfiguring system settings, providing cybersecurity training to employees, or implementing additional cybersecurity measures. Referring now toFIG.5, a vendor-provider marketplace500is shown, according to some arrangements. The vendor-provider marketplace500depicts generally the interactions between vendors510and users530(e.g., directly or through partners520) as well as between vendors510and partners520. For example, each vendor510can include, but is not limited to, offerings, terms, APIs, and data that can be provided to and/or exchanged with the response system130via the data acquisition engine180and with other vendors, incident response firms, and breach counsel (e.g., a law firm) (collectively referred to as "partners520"). In some implementations, those partners520can communicate with the data acquisition engine180ofFIG.1Ato generate dashboards of an application (e.g.,112,152) and store data in database140for future use. Expanding on the vendor-provider marketplace500depicted inFIG.5, this marketplace serves as a central hub for interactions between vendors510, users530, and partners520, facilitating the exchange of offerings, terms, APIs, and data. Each vendor510within the marketplace encompasses a range of products, services, and resources that can be made available to users530directly or through the engagement of partners520. These partners520, which include incident response firms, breach counsel (such as law firms), and other relevant entities, play a crucial role in providing expertise and additional support to enhance incident response capabilities (i.e., a type of cybersecurity attribute). Through seamless communication with the data acquisition engine180, the partners520can actively engage in generating comprehensive dashboards within the application interfaces (e.g.,112,152). These dashboards offer real-time insights and analytics, enabling users530to visualize and assess their incident response readiness, track ongoing incidents, and access relevant data stored in the database140for future reference. The data acquisition engine180serves as a communication bridge, allowing partners520to contribute information and leverage the functionalities of the response system130. It should be understood that the roles of the vendors510, partners520, and users530depicted in the vendor-provider marketplace500can all be executed by computer systems, exemplified by the computing system200shown inFIG.2. These computer systems enable seamless collaboration, data exchange, and transactional activities within the marketplace, ensuring a dynamic and efficient ecosystem for incident response management. In certain implementations, the vendor-user interaction within the marketplace535extends beyond mere browsing and exploration. Vendors510and users530have the capability to place orders directly through the marketplace535, initiating a streamlined process facilitated by the data acquisition engine.
This integration of ordering functionality enhances the efficiency and convenience of the marketplace, enabling seamless transactions between vendors and users. Notably, the marketplace535serves as a platform for programmatic connectivity, enabling new partners to establish collaborative relationships efficiently. The marketplace incorporates contracting workflows and partnering processes, which are seamlessly facilitated through the application interface. Once a partnership is ratified, the partners can immediately engage in business activities within the platform, leveraging the full range of services and offerings available. This includes the ability to submit proposals, engage in reselling, establish technical connectivity for provisioning and licensing, establish API connections for data sharing, utilization, and presentation on the platform, and leverage pre-defined programmable logic for user, vendor, and partner interactions. In some implementations, the marketplace535introduces dynamic and automated workflows that enable efficient routing of inbound orders to the appropriate partner based on predefined criteria. This programmable logic ensures that orders are seamlessly directed to the designated partner for processing and fulfillment. Furthermore, programmatic activation of contracts and seamless order fulfillment processes are executed, ensuring a smooth and rapid delivery of the purchased offering, whether it is a product or service. The marketplace ecosystem facilitates the seamless integration of vendors, partners, and users, streamlining the entire order management process and enabling timely and efficient delivery of products and services. Distinguishing itself from other vendor marketplaces, this embedded marketplace535is seamlessly integrated within the applications and APIs (171) spanning the entire system architecture. This unique integration enables vendor offerings to be presented to users precisely when they need them, seamlessly integrating within the user flow during various stages of cybersecurity incident response planning, testing, and execution. Moreover, the marketplace becomes an integral part of the design and composition processes for constructing a robust cybersecurity stack, as well as during security program orchestration and adaptation to ensure the ongoing effectiveness of the cybersecurity program. By embedding the marketplace within the applications and APIs, users530have immediate access to a comprehensive array of vendor offerings precisely at the point of need. Whether users are developing their incident response plans, conducting tests, executing response strategies, or adapting their security programs, the marketplace seamlessly integrates within their workflow, providing timely and relevant vendor options to enhance their cybersecurity capabilities (i.e., cybersecurity attributes). This unique approach eliminates the need for users to navigate separate platforms or search for vendors independently, streamlining the entire process and promoting efficiency in decision-making and procurement. Additionally, this embedded marketplace fosters a holistic approach to cybersecurity management, facilitating collaboration between users530and vendors510throughout the entire ecosystem. By offering vendor options during incident response planning, testing, and execution, users can make informed decisions and select the most suitable solutions to mitigate risks effectively. 
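Returning to the automated order-routing workflow described above, the following sketch illustrates, under simplifying assumptions, how predefined criteria might direct an inbound order to a designated partner. The rule fields (offering type, region) and partner identifiers are hypothetical placeholders rather than elements of the described marketplace.

```python
# Hypothetical routing rules: each rule maps order attributes to a designated partner.
ROUTING_RULES = [
    {"offering_type": "incident_response", "region": "US", "partner": "partner-ir-us"},
    {"offering_type": "incident_response", "region": "EU", "partner": "partner-ir-eu"},
    {"offering_type": "breach_counsel", "region": None, "partner": "partner-law-global"},
]


def route_order(order):
    """Return the first partner whose rule matches the inbound order, or None if no rule applies."""
    for rule in ROUTING_RULES:
        type_match = rule["offering_type"] == order["offering_type"]
        region_match = rule["region"] is None or rule["region"] == order["region"]
        if type_match and region_match:
            return rule["partner"]
    return None


if __name__ == "__main__":
    inbound = {"order_id": "ord-42", "offering_type": "incident_response", "region": "EU"}
    print(route_order(inbound))  # expected to print: partner-ir-eu
```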
Similarly, during the design and composition of their cybersecurity stack, users530can access a diverse range of vendor offerings directly within the application interface, enabling them to build a comprehensive and tailored security infrastructure. Additionally, during security program orchestration and adaptation, the marketplace535provides users with valuable insights and options to enhance the effectiveness and resilience of their security programs, ensuring continuous protection against evolving threats. It should be understood that the embedded marketplace's architecture allows for flexibility and scalability, accommodating additional systems, devices, data structures, and data sources as required. The marketplace can adapt to the evolving needs of users and vendors, expanding its offerings and functionalities to meet the dynamic nature of the cybersecurity landscape. This adaptability ensures that the marketplace remains a valuable resource for users, providing access to the latest innovations and vendor solutions while facilitating seamless collaboration and partnership within the cybersecurity ecosystem. Referring now toFIG.6, a method600is shown for capturing the state of capabilities (sometimes referred to herein as “cybersecurity attributes”) (e.g., vendor technologies or configurations) in place and in use by users, retrieving state, and sharing state at points in time as well as over a time period. Response system130(e.g., in particular analysis circuit136or data acquisition engine180) or third party device150can be configured to perform method600. Further, any computing device or system described herein can be configured to perform method600. Additionally, all the functions and features described in method600can be performed on an application (described in greater detail with reference toFIGS.7-21). In broad overview of method600, capabilities610and620associated with a corporation or business can be received (e.g., capability A with configuration A1 and configuration A2, and capability bundle B with capability B1 and B2, where capability B1 has a configuration B1A and capability B2 has a configuration B2A). The capabilities610and620can be checked (e.g., check state) and the capabilities can be written to a ledger (e.g., database and/or blockchain170) in steps630. Once the capabilities610and620are received, a thorough check is conducted to verify their state and ensure their accuracy and validity. This check entails examining the current status and parameters of each capability, evaluating factors such as readiness, compatibility, and compliance with established standards or requirements. By performing this comprehensive assessment, any discrepancies or issues pertaining to the capabilities can be identified and addressed. In some implementations, following the verification process, the next step involves recording the capabilities into a ledger. This ledger serves as a secure and reliable storage medium, which can take the form of a database or a blockchain170. The capabilities, along with their associated configurations, are meticulously documented and stored within the ledger, ensuring the integrity and traceability of the information. This enables easy access to the capabilities' details, their respective configurations, and any historical changes or updates that may occur over time. By writing the capabilities to the ledger, organizations gain a centralized and auditable repository that securely maintains a record of their capabilities.
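As a minimal, non-limiting sketch of the check-and-record flow described above (and of the rollback behavior discussed in the following paragraph), the example below writes capability configurations to a hash-chained ledger and flags drift between the expected and observed state. The field names, hash scheme, and rollback callback are illustrative assumptions, not the specific ledger or vendor API of the described system.

```python
import hashlib
import json
import time


def write_to_ledger(ledger, capability_id, configuration):
    """Append a capability record; each entry hashes the previous entry (a simple ledger/blockchain stand-in)."""
    prev_hash = ledger[-1]["hash"] if ledger else "0" * 64
    entry = {
        "capability_id": capability_id,
        "configuration": configuration,
        "timestamp": time.time(),
        "prev_hash": prev_hash,
    }
    entry["hash"] = hashlib.sha256(json.dumps(entry, sort_keys=True).encode()).hexdigest()
    ledger.append(entry)
    return entry


def check_state(expected, observed, rollback):
    """Compare the observed configuration against the expected (ledgered) state; trigger rollback on drift."""
    for key, expected_value in expected.items():
        if observed.get(key) != expected_value:
            rollback(key, expected_value)  # e.g., an API call to the vendor tool managing this setting


if __name__ == "__main__":
    ledger = []
    expected = {"mfa_enforced": True, "patch_window_days": 14}
    write_to_ledger(ledger, "capability-A", expected)
    observed = {"mfa_enforced": False, "patch_window_days": 14}
    check_state(expected, observed, lambda key, value: print(f"roll back {key} -> {value}"))
```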
Additionally, the ledger ensures transparency and accountability by providing an immutable and tamper-proof audit trail of the capabilities and their configurations. In turn, in steps630, a process can occur if a state has changed, including, but not limited to, checking rules, executing rules, notifying, changing state, and/or doing nothing. If a change occurred (i.e., trigger condition, e.g., capability A changed, the data acquisition engine180may determine to change it back (or roll it back), capability B changed and in turn the vendor technology is configured to change Y associated with the business or corporation), then the one or more processing circuits can programmatically connect to vendor technology to change a configuration (e.g., utilizing API calls). In some implementations, at steps640, the data acquisition engine180can communicate with vendor tools to change particular configurations. Upon detecting a change in state, the data acquisition engine180evaluates the trigger condition, such as the alteration of capability A or capability B. Based on this evaluation, a decision is made regarding the appropriate course of action. For example, if capability A experiences a change, the data acquisition engine180may determine that a rollback is necessary to revert the capability back to its previous state. Similarly, if capability B undergoes a change, the vendor technology associated with the business or corporation can be configured to adjust Y accordingly, aligning it with the modified capability. To effectuate these changes, the processing circuits within the data acquisition engine180(or within response system130) establish programmatic connections with the vendor technology responsible for managing the configurations (e.g., at step640). Moving forward to step640, the data acquisition engine180actively engages in communication with the vendor tools to implement the desired changes. Through this interaction, the data acquisition engine180can efficiently orchestrate the configuration changes required to align the capabilities with the desired state. With reference toFIG.6, the one or more processing circuits can utilize the various data structures (e.g., assets, locations, capabilities, threats) to collect, attribute, and adapt to determine if a trigger condition occurred (e.g., historical data of the corporation or business can be used to determine if a trigger condition occurred). In turn, the one or more processing circuits can execute one or more functions such as making an API request (e.g., to a vendor, insurer, or business), storing information in a database, and/or updating a blockchain ledger (e.g., at step630). Additional, fewer, or different operations may be performed depending on the particular arrangement. In some arrangements, some or all operations of method600may be performed by one or more processors executing on one or more computing devices, systems, or servers. In some implementations, in order to identify the occurrence of a trigger condition, historical data of the corporation or business is often utilized. This historical data provides valuable insights into the past behavior and patterns of the organization, allowing the processing circuits to make informed decisions regarding trigger condition identification. Upon determining that a trigger condition has indeed occurred, the one or more processing circuits initiate a series of functions and operations to address the situation.
These functions may include making API requests to relevant entities such as vendors, insurers, or other businesses. Through these API requests, the processing circuits can retrieve crucial information or initiate specific actions necessary to respond to the trigger condition effectively. Additionally, in certain implementations, the processing circuits can update a blockchain ledger, providing a secure and immutable record of the trigger condition and any associated changes made as a result. In various arrangements, each operation may be re-ordered, added, removed, or repeated. This system can be used to deliver the state of a business's security configuration to enable insurance underwriting, whereby the facts of the state of the business user's computing environment are known and provable. This provides the underwriting insurer the ability to collect irrefutable proof-of-state of the business environment as part of their pre-underwriting and risk selection process and can then be used to enable programmatic binding as part of their application process. The system can be further utilized to enable programmatically and dynamically adaptable insurance products that change the coverage level based on the factual changes in state of the computing environment at the insured through the policy period. This allows the insurer to ensure that the insured has followed the underwriting criteria throughout the term of the policy. Cyber insurance renewals can be programmatically generated and automatically bound based on the binary data provided by the system during renewal, as the insurer knows what the compliance history has been for the insured as well as the facts of the current state of the vendor capabilities and configurations in the insured's computing environment. In various arrangements, the underwriting process begins by collecting data from the insured's security tools and configurations. This data can then be analyzed and matched against the specific underwriting requirements defined by the insurer. The collected data acts as irrefutable proof of the insured's security posture, providing the insurer with a holistic understanding of the risk associated with the insured's business environment. In some arrangements, once the data has been collected and matched to underwriting requirements, the processing circuits can wrap this information with the necessary context and metadata through a broker. The broker acts as an intermediary, consolidating and structuring the data in a standardized format that can be seamlessly transmitted to the insurer's quoting API. This integration improves the underwriting application process, enabling the insurer to access the factual data of the insured's security configuration and computing environment. With this data in hand, the insurer can programmatically assess the risk and make informed decisions regarding coverage and policy terms. This automated and dynamic approach empowers insurers to offer adaptable insurance products that can be adjusted based on the factual changes in the insured's computing environment throughout the policy period, ensuring ongoing compliance with underwriting criteria and tailored coverage for the insured. Additionally, the system facilitates the automatic generation and binding of cyber insurance renewals based on the binary data provided during the renewal process.
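The following sketch illustrates, under stated assumptions, how collected security-state data might be matched against insurer-defined requirements and wrapped with context before being submitted to a quoting or renewal endpoint. The control names, broker identifier, and payload shape are hypothetical; an actual quoting API would define its own schema.

```python
def match_underwriting_requirements(collected_state, requirements):
    """Compare collected security-state data against insurer-defined underwriting requirements."""
    return {control: collected_state.get(control) == required
            for control, required in requirements.items()}


def wrap_for_quoting_api(entity_id, match_results, broker="hypothetical-broker"):
    """Wrap matched results with context/metadata in the shape a quoting or renewal API might expect."""
    return {
        "entity_id": entity_id,
        "broker": broker,
        "compliant_controls": [c for c, ok in match_results.items() if ok],
        "gaps": [c for c, ok in match_results.items() if not ok],
        "fully_compliant": all(match_results.values()),
    }


if __name__ == "__main__":
    state = {"mfa_enforced": True, "edr_deployed": True, "backups_tested": False}
    requirements = {"mfa_enforced": True, "edr_deployed": True, "backups_tested": True}
    payload = wrap_for_quoting_api("entity-001", match_underwriting_requirements(state, requirements))
    print(payload)
```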
By utilizing the compliance history and the up-to-date facts of the insured's computing environment, the insurer can renew the policy while maintaining a comprehensive understanding of the insured's risk profile and ensuring continuous coverage. Generally,FIGS.7A-7O,8A-8E,9A-9H,12,14A-14B,17,18A-18C,19, and20A-20Bdepict the organization or entity dashboards and the interactive items and actionable items the organization can interact with (e.g., using client device110). Referring now toFIGS.7A-7O, these figures generally relate to an onboarding process for a response system130(referred to herein as “Responder”) that facilitates incident response. Specifically,FIGS.7A-7Oprovide a quick start guide for organizations to onboard into Responder. As shown inFIGS.7A-7O, the quick start guide includes a series of steps that guide organizations through the process of getting ready to use Responder for incident response. Upon interaction with a client device110(e.g., of a customer), Responder's response system130initiates an onboarding process. The onboarding process begins with a guided wizard that assists the organization in setting up key components of the system, such as inviting team members and identifying critical assets. The guide also provides general guidance on the different features of Responder. As the organization progresses through the steps in the quick start guide, Responder provides helpful tutorial videos to guide the user through each of the different features. Additionally, as the organization fills out the initial steps in the guide, Responder becomes more tailored to the organization's specific needs. Specifically, Responder will provide details on how the organization's specific vendors will help during an incident, rather than just general information on incident response vendors. The first step in the onboarding process is inviting the organization's internal team and copying and sending them an invite link. From there, the guide assists the organization in identifying key assets and understanding how Responder and its vendors will assist during an incident. By providing a comprehensive onboarding process, Responder streamlines incident response and facilitates effective communication and collaboration among all stakeholders. Referring now toFIG.7A, the interface700presents (e.g., on client device110) a step in the onboarding process for Responder, which can include inviting the organization's internal team and its members to join the platform. The response system130streamlines this process by allowing the user to copy and send an invite link to team members via email. Alternatively, users can type in the emails manually and send the invitations that way. In some implementations, once the invitations are sent, the user can proceed to the next step of the onboarding process. This step involves identifying critical assets, such as systems and applications, that should be protected and monitored during any incident (e.g., cyber incident). Responder's onboarding guide provides detailed guidance on how to identify these assets and incorporate them into the platform provided by response system130for effective incident response. In addition to facilitating the invitation and asset identification process, Responder's onboarding guide can provide an overview of the platform's various features and functionalities.
This includes tutorials on how to use Responder's response system130, how to communicate with internal and external stakeholders, and how to utilize Responder's vendor network for additional support during incidents. Referring now toFIGS.7B,7C, and7D, Responder's onboarding process continues with the identification of critical assets that should be protected and monitored during an incident. Interface700presents an interactive view that allows the user to drag and drop critical assets into actionable items702and704. These actionable items allow the user to upload critical assets from any file type or connect to a cloud provider such as Amazon AWS, Google Cloud, or Azure to pull in assets that have already been managed elsewhere. This intuitive interface700improves asset identification for users, ensuring that Responder's response system130can effectively monitor and protect them during an incident. Thus, Responder's onboarding guide provides detailed guidance on how to identify and manage critical assets, ensuring that users have a clear understanding of the platform's capabilities and functionalities. In addition to uploading critical assets, Responder's onboarding guide provides guidance on how to configure alerts and notifications for these assets. This ensures that the appropriate stakeholders are notified by the response system130in real-time (or near real-time) if an incident occurs, allowing for rapid response and effective incident management. Some arrangements relate to modeling data, where the Responder can be configured to accept data from a user device via an application programming interface (API), tokenize and extract content of the data into a plurality of tokens, generate a unique identifier for each of the plurality of tokens, store a mapping between the unique identifier and each of the plurality of tokens, populate, from each of the plurality of tokens, a plurality of fields of a data object based on the extracted content of the data stored in each of the plurality of tokens, and verify accuracy of the populated plurality of fields. In general, the response system130can be configured to automate the process of filling out questionnaires and applications by allowing users to upload files containing relevant information. In some arrangements, the following method can be implemented to tokenize and index data using the response system130. The method can include (1) accepting files, where the response system130can allow a user to drag and drop files (e.g., docx, csv, xls, ppt, pdf, etc.) into the application or upload files via an API. The method can further include (2) indexing all the content and chunking the data, (3) creating a unique index, and (4) comparing the indexed data against a model. Additionally, the method can further include, from the files, (5) allowing the user to answer the questions for underwriting purposes. The method can further include (e.g., within the questionnaire) (6) determining or receiving data locations, (7) identifying whether the evidence is mapped, (8) determining if the location of the storage matters, and (9) identifying readiness on the application side. In some arrangements, indexing and chunking data can include indexing all content and breaking it down into smaller, more manageable chunks. This process allows the response system130to efficiently parse and analyze the data, ultimately extracting the relevant information necessary for completing the questionnaire or application.
In some arrangements, a unique index can be created for each chunk of data, which serves as a reference point for the system. In some arrangements, the chunked and indexed data can be compared against a pre-existing model, which could be a knowledge base or a machine learning model. Expanding on tokenizing and indexing, tokenizing the data can provide several improvements, such as facilitating easier data analysis: breaking data into smaller units simplifies the process of searching, sorting, and comparing information within the uploaded files. Additionally, tokenizing can enhance data processing efficiency because using smaller data units allows the system to handle and process data faster and more efficiently. Furthermore, tokenizing can improve data extraction accuracy by identifying and extracting relevant information to recognize patterns, relationships, or similarities among the tokens. With regards to indexing, the process can include (1) assigning a unique identifier to each token (e.g., where each token is assigned a unique identifier, enabling the system to differentiate between tokens and ensuring accurate data retrieval), (2) creating an index data structure (e.g., the system constructs an index data structure that stores the mapping between the unique identifiers and their respective token positions within the files), and (3) updating the index as necessary (e.g., as new data is added, modified, or removed from the uploaded files, the index is updated accordingly to maintain its accuracy and usefulness). For example, the automation can be performed on a questionnaire that allows the user to answer questions for underwriting purposes. Some questions can include: Does the Applicant conduct security vulnerability assessments to identify and remediate critical security vulnerabilities on the internal network and Applicant's public website(s) on the Internet; Does the Applicant install and update an anti-malware (also known as endpoint protection, EDR) solution on all systems commonly affected by malicious software? If yes, explain how you know. If no, explain why not; Does the Applicant use any software or hardware that has been officially retired (i.e., considered “end-of-life”) by the manufacturer (e.g., Windows XP); Does the Applicant update (e.g., patch, upgrade) commercial software for known security vulnerabilities per the manufacturer's advice; Does the Applicant update open source software (e.g., Java, Linux, PHP, Python, OpenSSL) that is not commercially supported for known security vulnerabilities; Does the Applicant have processes established that ensure the proper addition, deletion, and modification of user accounts and associated access rights? In some arrangements, the response system130can process the uploaded files and tokenize the text and data contained within, converting them into identifiable units of information. In some arrangements, the response system130can index the tokenized information, enabling efficient searching, retrieval, and extraction of relevant data. In some arrangements, the response system130can map the indexed information to corresponding fields in the questionnaires or applications, based on pre-defined rules and logic. In some arrangements, the response system130can automatically prefill the questionnaires or applications with the extracted and mapped information, thereby reducing the user's time and effort.
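As a minimal sketch of the tokenization, indexing, and prefill steps described above, the example below breaks text into tokens, assigns each token a unique identifier mapped to its position, and uses simple keyword rules to pre-fill questionnaire fields. The tokenizer, the field names, and the keyword rules are illustrative assumptions; a production system could instead use a knowledge base or machine learning model, as noted above.

```python
import re
import uuid

# Hypothetical keyword rules mapping questionnaire fields to evidence terms.
FIELD_RULES = {
    "conducts_vulnerability_assessments": ["vulnerability", "assessments"],
    "uses_anti_malware": ["anti-malware", "endpoint"],
    "patches_commercial_software": ["patch", "upgrade"],
}


def tokenize(text):
    """Break document content into smaller word-level units (a simple stand-in tokenizer)."""
    return re.findall(r"[\w-]+", text.lower())


def build_index(tokens):
    """Assign a unique identifier to each token and map it to the token and its position in the file."""
    return {uuid.uuid4().hex: {"token": token, "position": pos} for pos, token in enumerate(tokens)}


def prefill_fields(index, rules=FIELD_RULES):
    """Pre-fill each questionnaire field with True when any of its evidence terms appears in the index."""
    token_set = {record["token"] for record in index.values()}
    return {field: any(term in token_set for term in terms) for field, terms in rules.items()}


if __name__ == "__main__":
    content = "The Applicant conducts quarterly vulnerability assessments and applies each vendor patch."
    index = build_index(tokenize(content))
    print(prefill_fields(index))
```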
In some arrangements, the user can review the prefilled information and make any necessary adjustments before submitting the completed questionnaire or application. The response system130can include a user interface, a file processing module (or circuit or system), a tokenization and indexing engine (or circuit or system), a mapping module or system, and a form prefilling module or system. The user interface can allow users to drag and drop or upload files, while the response system130can process the uploaded files and send them to the tokenization and indexing engine. The response system130can tokenize and index the information, which can then be mapped to the appropriate fields in the questionnaires or applications by the response system130. In some arrangements, the response system130can fill out the forms with the extracted information, allowing users to review and submit the completed forms. Advantages of the present disclosure include time and effort savings for users, a reduction in errors due to manual data entry, and an increased likelihood of consistent and accurate information across multiple forms. While the disclosure has been described in some respects with respect to a limited number of embodiments, it will be appreciated that many variations, modifications, and other applications of the disclosure may be made within the scope of the appended claims. Accordingly, the response system130offers a user-friendly and efficient feature that allows users to drag and drop files, such as docx, csv, xls, ppt, pdf, and other formats, into the application or upload them via an API. This feature improves the process of providing information for insurance questionnaires or applications, and cybersecurity questionnaires. The system automates the extraction and utilization of the relevant information within the files, reducing manual effort, time consumption, and the potential for errors. In some arrangements, the response system130incorporates a seamless and intuitive user interface that allows users to drag and drop files of various formats into the system. This interface also provides users with the option to upload files using an API, making it easy to integrate the system into their existing workflow. The system is designed to accommodate a wide range of file formats, ensuring that users can provide information from multiple sources and formats without the need for time-consuming conversions. In some arrangements, once the files are uploaded or provided, the response system130processes them by tokenizing and indexing their content. The tokenization process breaks down the content into smaller units, making it easier to analyze and extract relevant information. The indexing process assigns unique identifiers to each token and maps them to their positions within the files, allowing for efficient data retrieval during the subsequent steps. After tokenization and indexing, the system compares the processed data against a pre-existing model, such as a knowledge base or a machine learning model, to identify pertinent information. This comparison enables the system to extract valuable insights and verify the accuracy of the information. With the relevant data extracted, the system proceeds to pre-fill the fields in the insurance questionnaires or applications, and cybersecurity questionnaires. Referring now toFIGS.7E,7F, and7G, Responder's onboarding process continues with the configuration of insurers and panel providers.
Interface700presents interactive items and content706for configuring cyber insurance, interactive items and content708for configuring breach coaches, and interactive items and content710for configuring incident response providers. In some implementations, the panel provider configuration process takes the user through steps to help them identify their current cyber insurer, breach coach, and incident response provider. For example, in the first step, the user is asked if they have an existing cyber insurer. If the answer is yes, the user can provide details such as the policy number, broker, email, and existing policy documents. Once completed, the user can proceed to the next step. In the second step, the user is asked if they have an existing breach coach. If the answer is yes, the user can select from a list of offerings that the breach coach has already uploaded into the system. The user can also upload partnership documents and verify the partnership with the breach coach. Once completed, the user can proceed to the final step. In some implementations, in the final step, the user is asked if they have an existing incident response provider. If the answer is yes, the user can either choose from a vendor who is already in the system or create a new vendor on the fly. If the vendor is off panel, the user can provide more information about the off panel agreement, including pricing, payment terms, and a brief description of the offering. The user can also upload partnership documents and request a code from the vendor to verify the partnership. Once the code is received and verified, the user can confirm the information and sign up the vendor to join the incident. Accordingly, this streamlined configuration process ensures that insurers and panel providers are properly integrated into Responder's response system130, allowing for effective collaboration and communication during an incident. Responder's onboarding guide provides detailed guidance on how to configure insurers and panel providers, ensuring that users have a clear understanding of the platform's capabilities and functionalities. Referring nowFIGS.7H,7I, and7J, Responder's onboarding process continues with the response planning phase. Interface700presents a response plan selection page where the organization is asked if they already have a response plan in place. If the answer is yes, the user can upload it directly to Responder. If the answer is no, the user can choose to purchase a response plan that is best suited to their organization based on their DNA. Responder's onboarding guide provides users with a range of response plans to choose from, including response plan712, response plan714, and response plan716. In some implementations, if the user wants more information about a particular plan, they can click on “view details” to see additional files and assets related to that plan. The response plans are designed to provide organizations with a comprehensive framework for responding to incidents effectively. In particular, these plans can be tailored to the specific needs of the organization, based on their size, industry, and other factors. This ensures that the organization is well-prepared to handle incidents when they occur, reducing the risk of costly downtime, reputational damage, and potential delayed incident responses causing increased exposure. 
In addition to providing response plans, Responder's onboarding guide also provides guidance on how to develop a customized response plan based on the organization's unique needs. This includes guidance on identifying key stakeholders, defining roles and responsibilities, and establishing communication protocols. Referring now toFIGS.7K and7L, Responder's onboarding process continues with the resilience testing phase. Once the user has added their panel providers and selected a response plan, they can move on to verify that the plan is in place and conduct resilience testing using response system130. In some implementations, running tests to evaluate the efficacy of an incident response plan is a component of the onboarding process for an incident response platform. Interface700presents interactive items718and720that allow the user to select from a range of potential tests and run a test of their incident response plan. In some implementations, this feature is designed to streamline the testing process and ensure that the incident response plan is effective and resilient in the face of real-world incidents. For example, to further enhance the testing process, Responder offers the option to link with third-party tools such as COMPANY X. This integration enables users to conduct more comprehensive and accurate tests of their incident response plan. Third-party tools provide an added layer of expertise and testing capabilities that can help organizations identify gaps and weaknesses in their incident response plan, which can then be addressed promptly. When running tests through the Responder platform, the analysis circuit136of response system130can execute the tests and provide detailed analysis of the results. The analysis circuit136can identify areas of strength and weakness within the incident response plan, enabling users to fine-tune and optimize their plan accordingly. This level of analysis provides improvement to security architecture by ensuring that the incident response plan is effective, reducing the risk of costly downtime and reputational damage in the event of an incident. Once the test is completed, the user can store the results and review the findings using interactive items722and724. Any issues that are identified can be remediated promptly to ensure that the incident response plan is as effective as possible. In some implementations, Responder offers a set of recommendations for addressing identified weaknesses in the incident response plan, providing the user with clear guidance on how to optimize their plan for maximum effectiveness. For example, the analysis circuit136can simulate a phishing email and send it to employees within the organization. The analysis circuit136can track how many employees click on the link in the email, and provide detailed analysis of which employees are most susceptible to phishing attacks. This test can be used to identify areas where additional employee training is needed to improve the organization's overall security posture. In another example, the analysis circuit136can simulate a malware attack and track how quickly the organization and its providers are able to detect and respond to the threat. The analysis circuit136can also identify which systems were affected by the malware and provide recommendations for how to remediate the issue. This test can help the organization to ensure that its malware detection and response processes are working effectively. 
In yet another example, the analysis circuit136can perform a network segmentation test to ensure that critical systems are properly isolated from less critical systems. The analysis circuit136can identify any areas where the network may be vulnerable to attack, and provide recommendations for how to improve network segmentation to better protect critical assets. In yet another example, the analysis circuit136can simulate an incident response scenario and track how well the organization and its providers are able to respond to the threat. The analysis circuit136can identify areas where the incident response plan may need to be updated or improved, and provide recommendations for how to optimize the plan for maximum effectiveness. Referring now toFIGS.7M,7N, and7O, Responder's onboarding process continues with the “what to expect during an incident” updates. Once the user has identified their panel providers and incident response team, Responder provides detailed information on how each vendor will help during an incident. Interface700presents interactive items702, which allow the user to see how each vendor will assist during an incident. In some implementations, this information is provided by the vendor when they sign up and create different offerings. For example, if a vendor created an offering for managed incident response, they would have provided information on how they will assist during an incident. This information is displayed in Responder's interface700, along with product documentation that the user can access online. In general, Responder is an incident response platform that provides organizations with a set of tools and resources to help them prepare for and respond to security incidents. The platform consists of both a backend and frontend component that work together to provide a seamless user experience. The backend component of Responder processes and stores data, executes instructions, and manages the various components of a computing system. It can be executed either by the response system130or by a user device such as a client device110or a third-party device150. The backend is designed to be scalable and flexible, allowing it to adapt to the needs of organizations of all sizes. The frontend component of Responder presents data and content to the user in a clear and intuitive manner. It includes graphical user interface (GUI) code that can be displayed or presented on a user device. The GUI is interactive and customizable, allowing users to easily navigate and interact with the various features of the Responder platform. The frontend includes actionable and interactive content from interface700, as well as additional content that is relevant to the user's specific incident response needs. As shown, the frontend and backend components of Responder work together to provide users with an intuitive incident response platform. The backend processes and stores data, executes instructions, and manages the various components of the system, while the frontend presents this data to the user in an actionable format. Additional details regarding the backend and frontend features of Responder are described herein. Generally,FIGS.8A,8B, and8Cprovide an in-depth look at the home dashboard800, whileFIGS.8D and8Epresent additional interactive items that enhance its functionality.
The home dashboard800serves as a centralized hub where organizations can manage various aspects of their operations and optimize their incident readiness InFIGS.8A,8B, and8C, interactive items801and802empower organizations to specify their current capabilities, assess their existing incident readiness, and establish specific rules of engagement. These features enable organizations to define their preparedness and align their incident response strategies with their unique requirements. The home dashboard800also offers convenient billing and payment management capabilities. As depicted by the payment icon806, organizations can seamlessly connect their billing information and make invoice payments directly from the dashboard. This streamlined process eliminates the need for manual invoicing and enhances the overall financial management experience. In some implementations, the home dashboard800allows organizations to report new incidents or conduct readiness tests with ease. By selecting the designated test item804, organizations can initiate simulations or check the effectiveness of their preparedness measures. This functionality enables proactive testing and validation of incident response protocols, ensuring organizations remain well-prepared in the face of potential threats. FIGS.8D and8Eshowcase additional interactive items within the home dashboard800. These items further enrich the user experience and provide organizations with greater control and visibility over their incident response processes. The specific functionalities depicted in these figures may vary but can include features such as incident tracking, incident analytics, incident categorization, response timeline management, incident reporting, and more. In some implementations, the home dashboard800presents a view of incidents (e.g.,808,810, and812), both active and past, allowing organizations to interact with each incident and gain valuable insights into their status. For instance, active incident808provides detailed information about the attack and the current progress of the attacker809A. This includes insights into various stages of the attack, such as reconnaissance, foothold establishment, lateral movement, staging, exfiltration, and public exposure. The dashboard also highlights the response team's progress809B in addressing the incident, including key stages such as intake, investigation, expulsion, and recovery. In some implementations, the home dashboard800presents various metrics809C related to the incident, providing a holistic view of its timeline and impact. These metrics may include the time it took to engage with the attacker, the start time of the incident, the elapsed time since the incident began, the target completion time to resolve the incident, the current status of the attack, the timing of the first and last actions, and the earliest identified entry point, such as a specific computer or system. To facilitate real-time monitoring and progress tracking, the response team's progress809B is represented using visual indicators within the home dashboard800. Each step of the incident response process can be displayed as a box, and when a step is completed, the box is highlighted in green or filled in. Additionally, for each action box, a percentage or progress bar may be displayed, providing a visual representation of the response team's advancement at a glance. 
This dynamic and intuitive visual display enables organizations to assess the ongoing response efforts and understand the real-time progress made in mitigating the incident. The interactive nature of the home dashboard800allows organizations to actively engage with each incident displayed. By interacting with incidents808,810, and812, organizations can delve deeper into incident details, access relevant documentation, collaborate with the response team, provide updates, track actions taken, and review the overall incident management process. This interactive functionality fosters effective communication and coordination, ensuring that organizations can actively participate in the incident response and stay informed about the progress made at all times. InFIG.8D, upon selecting interactive item801, a pop-up814can be presented that includes information about the organization such as cyber liability insurance, insurer, policy documents, policy number, pre-associated breach or panel coaches, panel incident response providers, etc. In some implementations, this information can be manually entered by a user. In some implementations, this information can be gathered or scanned from various data sources. InFIG.8E, upon selecting interactive item802, a pop-up816can be presented that includes rules for engagement for a collective response to an incident. For example, the rules may include provisions for responding if the adversary becomes active again after being dormant, if the adversary progresses through a specific stage in the attack lifecycle, if certain thresholds are crossed indicating the need for immediate action, or if specific conditions related to ransom payments, disruption or preparation efforts are met. These rules serve as a framework for the organization's response strategy, helping to ensure a coordinated and effective response to security incidents. Referring now toFIGS.9A-9H, an organization dashboard900is shown, including a composer interface and a designer interface that can be switched between using selectable element (or item)901. InFIG.9A, the organization dashboard900features interactive tiles that provide suggested, required, or additional capabilities aligned with the organization's objectives. For instance, the patch management tile902allows organizations to set up and manage this crucial capability with a selected vendor. These tiles serve as actionable prompts, guiding organizations to address specific areas of focus. In some implementations, certain objectives may trigger the organization to enter into a vendor selection process. As depicted inFIG.9A, when it comes to establishing a security stack, the organization is presented with estimated costs per year or month. By selecting the appropriate selectable item, the organization can proceed with the vendor selection process, ensuring alignment between their needs and the chosen vendor's offerings. The organization dashboard900goes beyond providing mere suggestions and capabilities. It encompasses a composer interface and a designer interface, allowing organizations to fine-tune their objectives and tailor their strategies accordingly. The selectable element901facilitates navigation between these interfaces, enabling organizations to switch between the composer and designer perspectives. Within the composer interface, organizations can gain the ability to define, refine, and orchestrate their objectives. They can set specific targets, establish key performance indicators, and align their capabilities accordingly.
This interface provides a view of the organization's goals and progress, allowing them to track their journey towards achieving desired outcomes. On the other hand, the designer interface offers organizations a creative and intuitive space to design and shape their capabilities. They can explore different configurations, experiment with various options, and customize their solutions based on their unique requirements. This interface empowers organizations to craft tailored approaches that align with their specific needs and preferences. InFIG.9B, the organization dashboard900expands to display a comprehensive list of existing capabilities. InFIG.9C, the organization dashboard900introduces additional capabilities that can bolster the organization's protection. For example, the user testing tile904allows organizations to select a vendor specifically for user testing, enhancing their overall security measures. Furthermore, organizations have the flexibility to assign these capabilities to different phases of implementation, such as phase1, phase2, or phase3, depending on their strategic priorities and roadmap. With regards to the designer view of the organization dashboard900inFIG.9D, organizations gain access to valuable information and indicators that help evaluate their security posture. The my data section906presents an indicator, represented by bullets (or other indicators), that denotes the level of exposure or threat resistance. For example, more filled-in bullets indicate a higher level of safety, providing organizations with a visual representation of their data security. Similarly, the my threats section908highlights the specific threats that the organization is currently facing, enabling them to prioritize mitigation efforts accordingly. The my locations section910showcases the various hardware and infrastructure components within the organization's environment, such as desktops, AWS, Google Cloud, G Suite, laptops, and more. Additionally, the capabilities section912outlines the specific security capabilities implemented by the organization, such as security operations (sec ops), web security, application security, access management, and others. Returning to the composer view of the organization dashboard900inFIG.9E, organizations gain insights into different phases of their security implementation. The vendor efficacy metrics924provide a means to evaluate the performance and effectiveness of vendors associated with each capability. The dashboard also offers a range of endpoint protections922, such as antivirus software, firewalls, or intrusion detection systems. These endpoint protection options can be sorted and filtered using the available filters926, ensuring that organizations can identify and select the most suitable solutions for their specific needs. In some implementations, the organization dashboard900introduces the “improvements” section914, which allows organizations to identify areas where enhancements are required. They can order improvements by prioritizing them through the selectable button916. For example, organizations have the option to submit these improvement requirements for bidding by potential vendors using the submit for bids button918or even make an offer directly through the make an offer button920, providing a streamlined process for acquiring necessary security enhancements. 
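As a rough, non-limiting illustration of the vendor efficacy metrics and filters described above, the following sketch shortlists offerings for a single capability and orders them by an efficacy score. The scoring scale, field names, and example vendors are assumptions made for illustration only.

```python
def shortlist_offerings(offerings, capability, min_efficacy):
    """Filter offerings to one capability and sort them by a vendor efficacy metric (highest first)."""
    matching = [o for o in offerings if o["capability"] == capability and o["efficacy"] >= min_efficacy]
    return sorted(matching, key=lambda o: o["efficacy"], reverse=True)


if __name__ == "__main__":
    offerings = [
        {"vendor": "Vendor A", "capability": "endpoint_protection", "efficacy": 0.91, "annual_cost": 12000},
        {"vendor": "Vendor B", "capability": "endpoint_protection", "efficacy": 0.78, "annual_cost": 9000},
        {"vendor": "Vendor C", "capability": "web_security", "efficacy": 0.88, "annual_cost": 7000},
    ]
    for offer in shortlist_offerings(offerings, "endpoint_protection", 0.80):
        print(offer["vendor"], offer["efficacy"], offer["annual_cost"])
```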
InFIG.9F, the organization dashboard900can be configured to present policies930that the organization qualifies for, following the selection of, for example, “continue to vendor selection” button inFIG.9A. These policies represent various security measures and guidelines that organizations can adopt to enhance their overall cybersecurity posture. The organization dashboard900allows users to browse through the available policies, select specific ones926, and view detailed information934about each policy932. By selecting a policy, users can add it to their cart938, indicating their intention to adopt and implement that particular policy. InFIG.9G, the organization dashboard900provides insights into additional policies940that the organization may not qualify for but can still access and review. Although these policies may not currently align with the organization's qualifications, it allows them to stay informed about the evolving landscape of security policies and standards. Users can explore the details of these policies and gain knowledge about emerging security practices, even if they are not immediately applicable to their organization's specific requirements. InFIG.9H, upon selecting the view details option934for a specific policy932, the organization dashboard900can present an overview of the policy. This includes detailed information about the policy itself, such as the extent of coverage, retention period, and associated premium costs. Additionally, the organization dashboard900indicates the specific security needs addressed by the policy, such as endpoint protection, public key infrastructure, asset management, and more. In some implementations, the organization dashboard900provides access to policy documents, ensuring that organizations have a clear understanding of the policy's terms, conditions, and obligations. In some implementations, the organization dashboard900offers the capability to model the selected policy. Upon selecting the policy modeling element944, users can explore different scenarios and configurations to understand the potential impact and implications of adopting the policy. This modeling feature allows organizations to assess how the policy aligns with their existing security infrastructure, operational processes, and overall business objectives. Generally,FIGS.10A-10E,11A-11D,13A-13E,15A-15G,16A-16D, and21A-21Bdepict the vendor dashboards and the interactive items and actionable items the vendor can interact with (e.g., using third-party device150). Referring now toFIGS.10A-10E, the incident room dashboard1000provide a centralized space where the entire team, including the IR vendor, insurer, breach coach, and law firm, can actively monitor the live progress of the active incident (e.g., show with reference toFIG.16B, where the incident room dashboard1000is presented in response to a selection of active incident1604). In some implementations, the incident status1002can offer real-time updates on the current state of the incident. This allows stakeholders to have a clear understanding of the ongoing attack progress through the in-progress element1004, which indicates the stage the attacker is in, as well as the response team's progress. In some implementations, the incident room dashboard1000also provides high-level metrics1006, enabling a quick overview of the organization's environment within the primary workspace. Additionally, the dashboard can highlight any active tasks1008that are currently being worked on by the team. 
By clicking on a specific task, such as task1020inFIG.10C, users can access more detailed information, monitor the task's activity, add comments, upload attachments, and even start or stop task timers1022. In some implementations, the incident room dashboard1000serves as an information hub that provides essential details and functionalities for vendors in managing and responding to incidents. For example, the incident room dashboard1000can provide alerts1012, which are automatically generated by the response system130. These alerts can range from notifications about new meetings initiated on Zoom to changes in phase or submitted change orders related to the incident. In some implementations, the incident room dashboard1000also facilitates team member engagement by providing an activity feed1014that highlights individual contributions and actions. This feed captures activities such as system backup status updates or the upload of new files, enabling vendors to track the progress and involvement of team members. In some implementations, the incident room dashboard1000offers a range of incident-level actions that vendors can undertake. They can participate in video calls, access the intake details1016associated with the incident's initiation, invite new team members, and view pertinent information about the incident itself. This includes information about the duration of the incident, its origin, and any outstanding or existing agreements with the client or end organization. In cases where the allocated hours for the incident have been exhausted, vendors can initiate a new change order by simply clicking on the “new change order” option, assembling the necessary details, and submitting the request. This request would then be sent to the organization for further approval, allowing the incident response to proceed smoothly. Additionally, the dashboard provides insights into the response team, offering visibility into the team's members and their respective roles. In some implementations, the incident room dashboard1000provides vendors with a range of capabilities to efficiently manage and close incidents. InFIG.10B, vendors can select the “close incident” option within the incident status1002, which triggers the generation of a final invoice1018. This ensures that all services provided during the incident are accurately documented and invoiced. Once an incident is closed, vendors can utilize the incident room dashboard to generate a comprehensive post-incident report, as illustrated inFIGS.13A-13E. This report consolidates valuable insights, analyses, and findings from the incident, enabling vendors to document and communicate the incident's details and outcomes effectively. InFIG.10D, the incident room dashboard1000presents additional features to enhance productivity and collaboration within the incident response team. Vendors can access the task management section, where they can view and distribute tasks1024among team members. These tasks can be organized based on phases or displayed in a Kanban-style view, offering flexibility in task management. Furthermore, the incident room dashboard1000can host a dedicated communication channel specific to the incident. This channel allows team members to create groups and engage in real-time communication, facilitating efficient and focused collaboration. In some implementations, all assets related to the incident can be aggregated and stored within the incident room dashboard's file storage. 
This centralized repository ensures that all relevant documents, reports, and data pertaining to the incident are easily accessible, enabling vendors to retrieve and share information efficiently. InFIG.10E, the incident room dashboard1000features an activity trail1026, which serves as a central audit trail for the incident. This trail records all actions and activities performed throughout the incident's lifecycle, providing a detailed chronological account of events. InFIGS.11A-11D, the vendor metric dashboard1110is shown, presenting various market bid and booking metrics. The dashboard includes components such as general market bid and booking metrics1102, displaying key statistics related to market bids and bookings. Additionally, market incoming market bids1104highlights the influx of bids from the market, while recent market bookings1105provides insights into the latest bookings made.FIG.11Aprovides a visual representation of these features. Within the vendor metric dashboard, vendors can also access their specific bids and bookings through the vendor specific bids and bookings1106section, known as “my bids.” This allows vendors to track their own performance and activity. Furthermore, vendors can monitor incident-specific bids through the vendor specific incident bids1108section, enabling them to stay informed about relevant incidents in which they have expressed interest. Recent bookings made by the vendor are displayed in the vendor specific recent bookings1109section, as shown inFIG.11B. Efficacy metrics are also highlighted inFIG.11C. The dashboard presents best performing DNA metrics1110, showcasing the metrics that indicate the vendor's optimal performance. Efficacy metrics1112can be adjusted using the drop-down menu1114, providing a flexible and customizable view of the vendor's effectiveness in different areas. These metrics allow vendors to evaluate and optimize their performance based on specific criteria. InFIG.11D, the efficacy metric1116for the vendor is displayed, offering a clear representation of the vendor's performance. Additionally, the partner efficacy metric1118provides insights into the effectiveness of collaborative partnerships with other entities. Accordingly, the interactive items and actionable items available within these dashboards empower vendors to make informed decisions and take appropriate actions based on the provided data. By leveraging these comprehensive vendor dashboards, vendors can track their market presence, evaluate their efficiency and efficacy metrics, monitor their bids and bookings, and assess their performance in relation to partners. This level of visibility and control enables vendors to make strategic decisions and optimize their operations within the context of the marketplace. Referring now toFIG.12, the service agreement interface1200can be provided to an organization (e.g., in response to an identified incident at an organization). In general, a contract proposal can include a plurality of content and actionable objects (or items) within the service agreement interface1200. As shown, an estimate associated with a service agreement can be presented that included actionable objects (e.g., can be selected) that can decline or accept a proposal (e.g., from a vendor, or from an organization). In some implementations, the service agreement interface1200offers the organization two options: decline or accept the offer for services. 
This interface presents the organization with the relevant details and terms of the service agreement, ensuring transparency and clarity. Through a user-friendly interface, the organization can thoroughly review the proposal and make an informed choice. As shown, the service agreement interface1200allows the organization to decline1202or accept1204the offer for services by one or more vendors. Referring now toFIGS.13A-13E, a post-incident report dashboard1300can be presented as a tool that allows vendors to generate detailed reports summarizing the key aspects of an incident. InFIGS.13A-13E, the post-incident report dashboard1300showcases the ability of vendors to provide an overview of the incident, ensuring that all involved parties gain a comprehensive understanding of the event's occurrence and its management. Additionally, as shown, the vendor can enable or disable various features using the on/off interactable button switches (e.g., incident description, incident details, cost breakdown, etc.). In some implementations, the post-incident report dashboard1300includes various sections that offer insights into different aspects of the incident. For example, vendors can provide IR team performance statistics and metrics, allowing stakeholders to assess the effectiveness and efficiency of the incident response efforts. In some implementations, the report also encompasses details about the incident, such as its origin and contextual information. This information helps provide a clear understanding of the incident's background and circumstances. Furthermore, the post-incident report dashboard1300can present a breakdown of the costs associated with the incident, enabling stakeholders to gain insights into the financial implications of the event. In particular, to provide a comprehensive view of the incident, the post-incident report dashboard1300incorporates environment details that outline the specific systems or networks affected. Another component of the post-incident report dashboard1300can be the task summaries and working time by each team member. This feature allows vendors to outline the specific tasks performed by individual team members during the incident response process. In some implementations, the incident timeline is provided in the post-incident report dashboard1300. It can provide a chronological sequence of events, capturing the key milestones and actions taken throughout the incident's lifecycle. Referring now toFIGS.14A-14B, an invoice1400is shown, a generated and provided invoice that allows an entity, after receiving services from a vendor, to pay the invoice by selecting selectable button1402. The invoice1400details the services rendered by the vendor, including the nature of the cybersecurity incident addressed, the duration of service provision, and the corresponding costs. Upon clicking the selectable button1402, the entity is redirected to a secure payment portal where they can choose from various payment options, facilitating a secure transaction. Referring now toFIGS.15A-15G, the vendor setup dashboard1500is shown, allowing a vendor to customize, review, and update their offerings (sometimes referred to herein as “cybersecurity protection plans”), customers and firmographics, partners, procurement (e.g., purchase options, resellers), automations, roles, and team members. InFIG.15A, interactive items1502,1504, and1506provide vendors with easy access to critical customer-related information. 
By selecting these items, vendors can view existing customers, prospective customers, and pending verifications, respectively. The accompanying list1508offers a clear overview of customers, aiding vendors in maintaining a comprehensive record. Additionally, some features can be considered “Pro” features that are enabled in response to a payment or accumulation of activity (e.g., after continuously using the dashboards for 3 months). In some arrangements, the “Pro” features can be enabled if the vendor enrolls in an advertisement tier such that advertisements may be presented on the dashboards. FIG.15Benables vendors to define their target customer base and estimate potential opportunities. Through interactive sliders, vendors can customize the criteria for evaluating opportunities, such as location, organization size, and type. This flexible feature empowers vendors to narrow or broaden their focus, aligning their offerings with specific market segments. Turning toFIG.15C, vendors can review and manage their offerings. For example, the interface provides a detailed overview of existing offerings, while the interactive item1512allows vendors to easily add new offerings. This streamlined process ensures that vendors can continually update and refine their product or service portfolio. InFIG.15D, vendors have the ability to establish partnerships with other entities. By selecting the interactive item1516, vendors can initiate the partner setup process and review their existing partners in the accompanying interface1518. This facilitates collaboration and expands the reach of vendors' offerings.FIG.15Eintroduces the configuration of procurement options and resellers. For example, vendors can easily set up purchase options and specify resellers, ensuring a smooth and efficient procurement process within their ecosystem. FIG.15Fempowers vendors to configure automations to streamline their operations. By utilizing the interactive button1520, vendors can add various rules that govern different aspects of their business. For example, these rules can include threat response rules, monitoring rules, and incident response rules, among others. This automation capability enhances efficiency and enables vendors to respond promptly to potential incidents.FIG.15Gallows vendors to create new rules1524and assign specific roles1526to individuals within their organization. For example, roles, such as sales engineer, can be defined with granular permissions, allowing individuals to accept new incidents, invite users, generate incident reports, and perform other designated tasks. This fine-grained role assignment ensures proper division of responsibilities and access privileges. Referring now toFIGS.16A-16D, the vendor incident dashboard1600is shown, presenting various incidents, including inbound incidents1602and1606and active incidents1604, shown inFIGS.16A and16B. Inbound incidents can automatically allow the vendor to send a contract using send contract interactive item1608, add a team using team interactive item1610, or allow the vendor to view details of the incident using details interactive item1612, shown inFIG.16C. The contract can be sent to an organization, as depicted inFIG.16D, which allows the organization to accept or decline. In some implementations, inbound incidents are prominently displayed on the vendor incident dashboard, as depicted inFIGS.16A and16B. The dashboard provides an overview of these incidents, enabling vendors to quickly assess their nature and severity.
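As a non-limiting illustration of the role assignment described in connection withFIG.15G, a role-to-permission mapping could be sketched along the following lines; the role names and permission identifiers below are hypothetical placeholders rather than the actual configuration schema of the vendor setup dashboard1500:

    ROLE_PERMISSIONS = {
        # Hypothetical granular permissions configured for each role
        "sales_engineer": {"accept_new_incidents", "invite_users", "generate_incident_reports"},
        "analyst": {"view_incidents", "generate_incident_reports"},
    }

    def is_allowed(role: str, permission: str) -> bool:
        """Return True if the given role grants the requested permission."""
        return permission in ROLE_PERMISSIONS.get(role, set())

    print(is_allowed("sales_engineer", "invite_users"))   # True
    print(is_allowed("analyst", "invite_users"))          # False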
Interactive items are incorporated into the interface to streamline the incident response process. For instance, the send contract interactive item1608allows vendors to promptly send a contract to the organization associated with the incident. By selecting this option, the vendor initiates the contractual process, providing the necessary legal framework for collaboration. Furthermore, the team interactive item1610facilitates seamless team management within the vendor incident dashboard. Vendors can easily add team members to the incident response team by utilizing this interactive item. This feature promotes effective coordination and communication among team members, enhancing the overall incident resolution process. The details interactive item1612, as illustrated inFIG.16C, grants vendors access to comprehensive incident information. By selecting this item, vendors can delve deeper into the specifics of the incident, such as its origin, impact, and relevant contextual details. This detailed overview assists vendors in gaining a thorough understanding of the incident, enabling them to provide more targeted and efficient support. Additionally,FIG.16Ddepicts how the vendor can send a contract to the organization involved in the incident. The contract is transmitted through an interface, allowing the organization to carefully consider the terms and conditions. Within this interface, the organization is presented with the option to accept or decline the contract. This interactive feature streamlines the contracting process, enabling swift decision-making and fostering effective collaboration between the vendor and the organization. Referring now toFIG.17, an incident summary dashboard1700that includes an overview of incidents of the entity. The dashboard1700includes various graphical representations1702, illustrating metrics such as the total number of cases handled, the cumulative cost of remediation, and the percentage of incidents involving ransom payments. These metrics provide a high-level overview of the entity's cybersecurity incident history, enabling a quick evaluation of the overall situation. In some arrangements, the incident summary dashboard1700further enriches this overview with additional metrics1704,1706, and1708. These metrics delve deeper into the nature and handling of the incidents. For instance, they may include the average time taken to remediate an incident, which could highlight the efficiency of the incident response process and identify potential areas for improvement. The root causes of incidents are also displayed, providing insights into the types of threats frequently encountered by the entity. The dashboard1700can also display the highest payouts made by particular incident response teams. Referring now toFIGS.18A-18C, a posture dashboard1800that includes a posture stream1802and real-time (or near real-time) information associated with threats (e.g., threats affecting you1804, threats affecting similar orgs1806, and global threat news1808). The posture dashboard1800can also provide three different lenses through which to view the cybersecurity threat landscape. The “threats affecting you” section1804offers information about threats directly targeting the entity, providing real-time updates and context. For instance, it may show whether peers within the industry are also addressing the same threat, offering comparative insights and potentially guiding response strategies. 
For example, inFIG.18C, the “threats affecting you” section can include real-time information about the particular threat, whether the entity's peers are acting on it, and action buttons allowing the entity to perform various actions. Action buttons within this section allow the entity to quickly respond to threats. These may include options to investigate further, escalate the threat to a response team, or activate specific protection measures. The “threats affecting similar orgs” section1806provides an overview of threats impacting entities with similar profiles. The “global threat news” section1808offers a wider perspective, delivering updates on significant cybersecurity incidents and trends worldwide. Expanding onFIG.18A, the posture dashboard1800provides a view of the cybersecurity threat landscape, allowing the entity to stay informed and proactive in addressing potential risks. The posture stream1802captures and records the entity's posture and coverage levels over time, taking into account various factors such as the locations of data (e.g., EC2 servers in AWS), the implemented safeguards (e.g., EPP, CSPM), and the specific threats that could target the entity's environment based on real incidents. By continuously monitoring and assessing this combination of factors, the system determines the current state of the entity's security posture, representing it as a dynamic and immutable record. This enables the entity to gauge its security readiness and identify any areas that require attention or improvement. By providing visual indicators such as green, yellow, or red, the posture dashboard1800offers an overview of the entity's overall security status at any given point in time. This empowers the entity to make informed decisions, allocate resources effectively, and implement timely measures to mitigate risks and maintain a robust cybersecurity posture. InFIG.18B, the posture dashboard1800also includes recommended capabilities tailored to the entity's specific needs. These capabilities can be easily added by selecting the interactable buttons1808and1810, streamlining the process of enhancing the entity's cybersecurity posture. Further, the entity can configure automated protection measures through “adaptation rules” like assessment management1812. These are automated actions designed to protect the entity's assets and environment, such as initiating vulnerability scans or activating intrusion prevention systems when certain conditions are met. Referring now toFIG.19, the offering dashboard1900serves as a comprehensive interface for entities interested in purchasing a cybersecurity offering. Upon selecting a particular offering, the dashboard provides a detailed breakdown of the product, including essential information about its capabilities and features. The entity can review what specific tasks the offering can perform—for example, the creation and management of incident records. Additionally, the dashboard provides clarity on the financial side, presenting the price and the billing plan associated with the offering. It ensures transparency, allowing the entity to assess the value proposition of the offering fully. To streamline the purchasing process, a selectable button1902is included on the dashboard. Upon clicking this, the entity can seamlessly proceed with the purchase, making the acquisition of the offering a user-friendly and straightforward process.
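As a non-limiting sketch of the adaptation rules described in connection withFIG.18B, an automated protection measure could be expressed as a condition-action pair evaluated against a snapshot of the entity's environment. The rule names, snapshot fields, and action identifiers below are hypothetical and illustrative only:

    from dataclasses import dataclass
    from typing import Callable, Dict, List

    @dataclass
    class AdaptationRule:
        """Hypothetical condition-action pair evaluated against an environment snapshot."""
        name: str
        condition: Callable[[Dict], bool]  # returns True when the rule should fire
        action: str                        # identifier of the automated protection measure

    def evaluate_rules(snapshot: Dict, rules: List[AdaptationRule]) -> List[str]:
        """Return the actions whose conditions are satisfied by the current snapshot."""
        return [rule.action for rule in rules if rule.condition(snapshot)]

    rules = [
        AdaptationRule(
            name="assessment management",
            condition=lambda s: s.get("unscanned_assets", 0) > 0,
            action="initiate_vulnerability_scan",
        ),
        AdaptationRule(
            name="threat response",
            condition=lambda s: s.get("active_threat_severity", "none") == "critical",
            action="activate_intrusion_prevention",
        ),
    ]

    snapshot = {"unscanned_assets": 3, "active_threat_severity": "low"}
    print(evaluate_rules(snapshot, rules))  # ['initiate_vulnerability_scan']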
Referring now toFIGS.20A-20B, the vendor dashboard2000provides an entity with a platform to identify and select vendors they wish to engage for various cybersecurity plans. It facilitates the selection of multiple vendors concurrently, allowing the entity to choose vendors offering distinct cybersecurity protections to create a diversified and robust security portfolio. The dashboard presents information about each vendor, including their qualifications and current availability status. Additionally, the entity can also invite vendors (e.g., by selecting selectable button2010) after reviewing vendor qualifications and information presented in vendor dashboard2000and their current state such as available now state, available in 75 minutes (i.e., available pending state), or unavailable state. Invitations can be sent directly from the dashboard by selecting the appropriate button2010. An example of the dashboard's functionality can be seen with the vendor Sophos MTR, marked as the best fit2006and available now2012. The entity can easily select this vendor by checking the corresponding checkbox2008. In contrast, another vendor might be shown as available in 30 minutes, as indicated by state element2014, but also marked as off-panel, implying they do not currently have an account on response system130. Referring now toFIGS.21A-21B, a mobile application2100can be configured as a pocket companion for Incident Response (IR) vendors, enabling them to engage, monitor, and respond to incidents directly from their mobile devices such as phones, tablets, or VR/AR glasses. With the mobile app, vendors have the convenience and flexibility to handle incidents anytime, even in urgent situations when they are away from their home office. The mobile application2100offers a transition of the features and functionality described in the desktop version (FIGS.10A-10E) to a mobile interface. Vendors can access their incident dashboard, providing an overview of all active incidents. They can efficiently navigate to individual incidents, review detailed information, and track the progress of ongoing tasks. By monitoring activity within the mobile application2100, vendors stay informed about updates and can quickly respond to any developments. In some implementations, the mobile application2100also facilitates team collaboration by displaying the team members involved in each incident. Vendors can identify their colleagues, fostering efficient communication and coordination. Furthermore, the mobile application2100provides visibility into existing agreements, ensuring vendors have immediate access to relevant contractual details. In some implementations, to enhance responsiveness and situational awareness, the mobile application2100includes an alerts feature. This feature aggregates alerts from multiple incidents, allowing vendors to monitor critical notifications and stay informed about significant events or changes. The mobile application2100delivers a user-friendly interface optimized for mobile devices, enabling vendors to engage with incidents, review details, manage tasks, monitor activity, collaborate with team members, and access important agreements—all while on the go. This mobile experience empowers IR vendors to stay connected and effectively respond to incidents, regardless of their physical location, providing agility and convenience to their incident response efforts. Referring now toFIG.22, a flowchart for a method2200to protect data, in accordance with present implementations. 
At least system100can perform method2200according to present implementations. In broad overview of method2200, at block2210, the one or more processing circuits (e.g., response system130ofFIG.1A) can determine a security posture. At block2220, the one or more processing circuits can tokenize and broadcast the security posture. At block2230, the one or more processing circuits can model the security posture and a security objective. At block2240, the one or more processing circuits can determine at least one cybersecurity protection plan. At block2250, the one or more processing circuits can provide the at least one cybersecurity protection plan. Additional, fewer, or different operations may be performed depending on the particular arrangement. In some embodiments, some or all operations of method2200may be performed by one or more processors executing on one or more computing devices, systems, or servers. In various embodiments, each operation may be re-ordered, added, removed, or repeated. At block2210, the one or more processing circuits can determine the security posture based on the entity data. In some arrangements, this can include analyzing the data storage systems of the entity to determine the various types of data being handled. Additionally, the processing circuits can assess the entity data to identify potential cybersecurity threats that may pose a risk to the organization. In some arrangements, the processing circuits can identify entity assets by accessing the data channels that are communicatively linked to these assets. Accordingly, this allows the processing circuits to understand and evaluate the resources, devices, and networks that comprise the entity's infrastructure. In general, the security posture corresponds to an assessment of the entity's overall cybersecurity risk profile. In some arrangements, the security posture encompasses multiple dimensions, including the current entity state and current entity index. For example, the current entity state represents the current cybersecurity conditions of the entity, such as system configurations, security policies, and incident response readiness. In another example, the current entity index can serve as references or pointers to the entity assets, enabling efficient retrieval and access of critical information. Accordingly, the security posture is an aggregate representation of various aspects, such as the entity's firmographics, data types, asset locations, cybersecurity safeguards, coverage, gaps, cyber hygiene practices, third-party attestations, cybersecurity incidents, and cybersecurity claims. By considering these factors, the processing circuits can determine a comprehensive view of the entity's cybersecurity posture, enabling organizations and third-parties to assess their security risks and make informed decisions. Thus, the security posture can refer to the overall cybersecurity stance of an entity, encompassing various factors that contribute to its risk profile and resilience against potential threats. For example, determining the security posture of an eCommerce business with a significant online presence that processes large amounts of consumer data daily, including sensitive information such as credit card details and personal identities, can include analyzing the entity data of the eCommerce business. In some arrangements, the processing circuits can evaluate the entity data, involving an analysis of the types of data stored in the company's databases.
These databases might include customer records, transaction logs, and financial records. Additionally, in this example the processing circuits can identify the assets of the company. This can include accessing various data channels linked to these assets, which could include servers, computers, software applications, and network infrastructure. In the above example, after the entity data is analyzed, the processing circuits can begin to assess the current cybersecurity conditions of the company. This current entity state includes the company's system configurations, security policies, and the readiness of their incident response team. The processing circuits can also identify the current entity index, which provides as references or pointers to the entity's assets. Considering these elements, the processing circuits can now determine the company's security posture. In particular, the security posture provides a holistic assessment that includes the company's firmographics, data types, asset locations, cybersecurity safeguards, coverage, gaps in security, cyber hygiene practices, third-party attestations, past cybersecurity incidents, and cybersecurity claims. By considering all these factors, the processing circuits can provide a comprehensive view of the entity's cybersecurity posture. At block2220, the one or more processing circuits can tokenize and broadcast the security posture to a distributed ledger. In general, tokenization is the process of converting rights to an asset into a digital token on a blockchain. In this case, the asset is the security posture of the company. The processing circuits convert the security posture into a digital token that can be stored, transmitted, and processed. In some arrangements, the digital token is a representation of the security posture that is unique, tamper-resistant, and encrypted. Broadcasting refers to the process of sending this digital token to all nodes in the distributed ledger or blockchain network. The distributed ledger is a decentralized database that is maintained by multiple nodes or participants in the network. Broadcasting the token to the distributed ledger ensures that the token, representing the security posture, is stored in a decentralized, immutable, and transparent manner. In some arrangements, any changes to the security posture will require a new token to be generated and broadcasted, ensuring that there is a historical record of all changes. Over time, the circumstances, assets, and the data that the entity handles can change. For instance, the company may adopt new technologies, handle new types of data, or face new threats. As a result, it can be important to keep the security posture updated. In some arrangements, the processing circuits continuously monitor the entity's data and systems for any changes. When new data is accessed, the processing circuits analyze it to determine how it impacts the current security posture. For example, this can include reassessing the types of data the entity handles, the technologies it uses, its cybersecurity policies, and its overall threat landscape. Accordingly, the updated security posture provides a current and accurate representation of the entity's cybersecurity status, reflecting the most recent changes and developments. In some arrangements, once the updated security posture is determined, the next step can include tokenizing this updated posture. As mentioned earlier, tokenization involves converting the updated security posture into a digital token. 
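A minimal, non-limiting sketch of this tokenization step follows, assuming the posture is serialized to canonical JSON and hashed with SHA-256; the field names and values are hypothetical, and a production implementation could additionally encrypt or sign the token:

    import hashlib
    import json
    from datetime import datetime, timezone

    def tokenize_posture(posture: dict) -> dict:
        """Convert a security posture record into a tamper-evident digital token.

        The posture is serialized to canonical JSON and hashed; a production
        implementation could additionally encrypt or sign the result.
        """
        canonical = json.dumps(posture, sort_keys=True, separators=(",", ":"))
        return {
            "token_id": hashlib.sha256(canonical.encode("utf-8")).hexdigest(),
            "created_at": datetime.now(timezone.utc).isoformat(),
        }

    posture = {
        "firmographics": {"industry": "eCommerce", "employees": 250},
        "safeguards": ["EPP", "CSPM"],
        "open_gaps": ["phishing_training"],
    }
    print(tokenize_posture(posture)["token_id"][:16])  # stable identifier for this revision

Because any change to the posture changes the serialized form, a modified posture yields a new token, which supports the historical record of changes described above.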
After tokenizing the updated security posture, the processing circuits broadcast this new token to the distributed ledger. In some arrangements, the processing circuits provide a public address of the tokenized updated security posture on the distributed ledger. The public address is a unique identifier that allows third parties to locate and access the token on the blockchain. Providing the public address to a plurality of third parties allows these parties to verify the updated security posture. In some arrangements, the public address provided by the processing circuits acts as a unique identifier on the distributed ledger, or blockchain, for the tokenized security posture. By providing this address to third parties, they can locate and access the specific token, which represents the company's current security posture. That is, the ability of third parties to access the tokenized security posture allows them to independently verify its contents. This is because the tokenization process ensures that the data representing the security posture is both tamper-proof and transparent, lending credibility to its contents. Furthermore, the decentralized nature of a distributed ledger ensures that the tokenized data has not been altered without consensus, adding an extra layer of verification. This means that third parties, be they auditors, partners, or cybersecurity firms, can trust the authenticity of the information encapsulated in the token, thus enabling them to accurately evaluate the organization's cybersecurity posture. At block2230, the one or more processing circuits can model the security posture and a plurality of security objectives to generate a set of cybersecurity attributes of the entity. In some arrangements, modeling the security posture includes constructing a representation of the entity's current cybersecurity state. This includes the data collected and analyzed in previous blocks, such as the types of data the entity handles, the assets it possesses, its system configurations, its security policies, its cybersecurity incidents, and other relevant factors. In some arrangements, the processing circuits can also model a plurality of security objectives. As used herein, “security objectives” refer to the goals or targets that the entity aims to achieve in terms of its cybersecurity. For example, the entity might aim to reduce its vulnerability to specific types of cyberattacks, improve its incident response time, or achieve compliance with certain cybersecurity standards or regulations. These objectives provide a framework for evaluating the entity's security posture and identifying areas for improvement. In some arrangements, block2220can be skipped or performed at a later point in time. In some arrangements, based on the modeled security posture and the security objectives, the processing circuits generate a set of cybersecurity attributes of the entity. Each cybersecurity attribute represents a specific aspect of the entity's cybersecurity. For example, one attribute might be the entity's vulnerability to phishing attacks, while another might be its adherence to data encryption standards. Accordingly, the attributes provide a more detailed and granular view of the entity's cybersecurity posture. In some arrangements, each cybersecurity attribute is associated with at least one of a required cybersecurity attribute, an additional cybersecurity attribute, or an existing cybersecurity attribute.
A required attribute can be a cybersecurity attribute that the entity must possess to meet its security objectives. An additional attribute can be a cybersecurity attribute that the entity could benefit from but is not mandatory. An existing attribute is a cybersecurity attribute that the entity already possesses. By categorizing the attributes in this way, the processing circuits can identify the entity's strengths, weaknesses, and areas for improvement in its cybersecurity. In some arrangements, generating the set of cybersecurity attributes also involves creating a security roadmap. This is a strategic plan that outlines how the entity can improve its cybersecurity over time. The roadmap consists of multiple phases, each associated with a subset of the cybersecurity attributes. Each attribute is assigned to a phase based on its importance, urgency, and the entity's ability to implement it. For example, the first phase might involve implementing the required attributes, while later phases might involve adding the additional attributes. In some arrangements, modeling the security posture and security objectives together is a strategic approach that provides a comprehensive understanding of an entity's cybersecurity landscape. This process provides an interplay between the entity's current state of security (security posture) and its desired state of security (security objectives). In this context, the security posture represents the entity's current cybersecurity status. It includes all relevant factors such as the types of data the entity handles, the system configurations, the cybersecurity policies in place, the incident response readiness, and the history of cybersecurity incidents, among others. On the other hand, the security objectives represent the entity's goals or targets in terms of cybersecurity. These might include reducing vulnerability to specific types of cyberattacks, improving incident response time, achieving compliance with certain cybersecurity standards, or enhancing the security of specific assets. In some arrangements, when modeling the security posture and security objectives together, the processing circuits can map out the path from the current state to the desired state. For example, the map can identify the gaps between the security posture and the security objectives, and outline the steps that need to be taken to bridge these gaps. In various arrangements, the modeling process also involves generating a set of cybersecurity attributes of the entity, each reflecting a specific aspect of the entity's cybersecurity. By considering the security posture and the security objectives together, the processing circuits can develop a nuanced understanding of the entity's cybersecurity landscape. They can identify the strengths and weaknesses in the current security posture, align these with the security objectives, and define a clear path towards achieving these objectives. This holistic approach ensures that the entity's cybersecurity strategy is both grounded in its current reality and focused on its future goals. It should be understood that modeling the security posture and security objectives can involve executing computational algorithms and machine learning techniques. The processing circuits would analyze various data points, including system configurations, network structures, user behaviors, security incident history, and more, to create a multi-dimensional model of the entity's current security posture. 
This model could be represented in various forms, such as a statistical model, a graphical model, or a neural network, depending on the complexity of the data and the specific needs of the analysis. Concurrently, the security objectives would be defined and encoded in a format that can be integrated into the model. This could involve setting target values for certain metrics, specifying desired states for different aspects of the entity's cybersecurity, or defining specific conditions that should be met. The processing circuits could then map the security posture onto the security objectives, identifying the gaps and generating the set of cybersecurity attributes that represent specific areas for improvement. This mapping process could involve various computational techniques, such as optimization algorithms, decision tree analysis, or reinforcement learning, depending on the complexity of the security posture and objectives. In some arrangements, the output would be a model that represents the entity's current security posture, its security objectives, and the path to bridge the gap between them. At block2240, the one or more processing circuits can determine, utilizing one or more protection parameters, at least one cybersecurity protection plan corresponding to a new cybersecurity attribute to protect the entity. In some arrangements, the new cybersecurity attribute is an attribute from the generated set of cybersecurity attributes of the entity after modeling the security posture and a plurality of security objectives. Protection parameters refer to specific criteria or guidelines that are used to design the cybersecurity protection plan. For example, these could include, but are not limited to, the entity's resources, the severity of the threats it faces, the criticality of its assets, its regulatory requirements, and its risk tolerance. In particular, the protection parameters provide a framework for tailoring the cybersecurity protection plan to the entity's specific needs and circumstances. In some arrangements, each cybersecurity protection plan corresponds to a new cybersecurity attribute. As discussed herein, the cybersecurity attributes represent specific aspects of the entity's cybersecurity that were identified in the modeling process. A new cybersecurity attribute might represent an area for improvement, a gap in the current security posture, or a step towards achieving a security objective. The process of determining a cybersecurity protection plan can include defining the actions, measures, or strategies that will help the entity develop or strengthen the new cybersecurity attribute. For instance, if the new attribute relates to improving incident response readiness, the protection plan might involve training staff, establishing an incident response team, or implementing an incident management system. In some arrangements, the cybersecurity protection plan is also designed to be adaptable. This means it can be updated or modified based on changes in the entity's security posture, security objectives, or the cybersecurity landscape. This adaptability ensures that the protection plan remains effective and relevant over time. Furthermore, while a cybersecurity protection plan is designed for a specific attribute, it can also have broader effects on the entity's overall cybersecurity.
For instance, a plan designed to improve incident response readiness might also enhance the entity's resilience to cyberattacks, reduce downtime in the event of an incident, and improve its reputation for cybersecurity. In various arrangements, once the processing circuits have determined a cybersecurity protection plan based on the entity's security posture and objectives, the processing circuits can consider the practical implementation of the plan. It's important to note that there could be multiple cybersecurity protection plans that offer the same essential protection but come from different vendors and have different price points, features, support levels, and other variables. Each of these elements can significantly influence the choice of protection plan. For example, suppose the determined cybersecurity protection plan involves the deployment of a specific type of firewall to enhance network security. There could be several vendors in the market that offer firewall solutions. While each solution essentially serves the same purpose—protecting the network from unauthorized access—there could be significant differences in their features, performance, ease of use, compatibility with the existing IT infrastructure, and more. Some firewalls might offer advanced features such as deep packet inspection, intrusion prevention systems, or integrated virtual private network (VPN) support, while others might focus on providing a user-friendly interface or extensive customization options. Furthermore, price can be another factor in choosing a protection plan. Different vendors may offer their solutions at different price points, depending on factors such as the sophistication of the technology, the reputation of the vendor, the level of customer support provided, and the licensing model (for example, one-time purchase versus subscription-based). The entity can be presented with one or more plans corresponding to a new cybersecurity attribute to protect the entity with different price points so that the entity can consider its budget and the potential return on investment of each solution. Additionally, other factors such as the vendor's reputation, the quality of customer support, the vendor's understanding of the entity's industry, and the vendor's commitment to future updates and enhancements can also influence the choice of a cybersecurity protection plan. Therefore, the processing circuits can consider all these factors and potentially integrate additional data (e.g., vendor information, product reviews, and budget constraints) to select or offer the most suitable cybersecurity protection plan for the entity. This ensures that the chosen plan not only meets the entity's cybersecurity needs but also aligns with its financial, operational, and strategic requirements. In general, the processing circuits can connect the organizations with the relevant cybersecurity vendors. They can do this by integrating with a database or network of vendors, or by utilizing a platform that facilitates such connections. By acting as a bridge between the organization and the vendors, the processing circuits can streamline the process of finding and implementing cybersecurity solutions. They can automatically match the organization's needs, as defined by the cybersecurity protection plans, with the offerings of various vendors, taking into account factors such as features, price, vendor reputation, and support levels. 
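By way of a non-limiting illustration of this matching step, a simple scoring routine could rank vendor offerings against the organization's needs; the vendors, weights, and fields below are hypothetical assumptions and are not drawn from any actual vendor data:

    from dataclasses import dataclass
    from typing import List, Set

    @dataclass
    class VendorOffering:
        vendor: str
        features: Set[str]
        monthly_price: float
        reputation: float     # 0-10, hypothetical rating
        support_level: float  # 0-10, hypothetical rating

    def rank_offerings(required: Set[str], budget: float,
                       offerings: List[VendorOffering]) -> List[VendorOffering]:
        """Rank offerings that cover the required features and fit the budget."""
        qualifying = [o for o in offerings
                      if required <= o.features and o.monthly_price <= budget]
        # Illustrative weighting: reputation and support matter most, with a
        # small bonus for features beyond the minimum requirements.
        return sorted(qualifying,
                      key=lambda o: 0.5 * o.reputation + 0.3 * o.support_level
                      + 0.2 * len(o.features - required),
                      reverse=True)

    offerings = [
        VendorOffering("Vendor A", {"firewall", "ips", "vpn"}, 900.0, 8.5, 7.0),
        VendorOffering("Vendor B", {"firewall", "dpi"}, 600.0, 7.0, 9.0),
    ]
    print([o.vendor for o in rank_offerings({"firewall"}, 1000.0, offerings)])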
This not only improves accessibility for both the organization and vendors by improving the selection process, but it also leads to improved technology and security for the organization. The automation and data-driven approach of the processing circuits ensure that the organization is connected with the most suitable vendors, allowing it to benefit from the latest cybersecurity technologies that align with its security posture and objectives. This ultimately contributes to a stronger and more effective cybersecurity infrastructure for the organization. In some arrangements, at block2240, the processing circuits can determine at least one cybersecurity protection plan based on an assortment of qualifying and additional cybersecurity protection plans. These plans may come from a diverse set of third-party vendors and are presented to the entity computing system via a cybersecurity marketplace. For example, a qualifying cybersecurity protection plan refers to a plan that meets the minimum requirements established by the entity's security objectives and the identified cybersecurity attributes. This could include factors such as the type of protection needed, compliance with certain standards, compatibility with the existing IT infrastructure, and others. The qualifying plan provides the basic level of security that the entity needs to address its identified cybersecurity attributes. In another example, an additional cybersecurity protection plan refers to a plan that goes beyond the minimum requirements to provide extra features, higher performance, or other benefits. This could include advanced threat detection capabilities, integrated incident response tools, superior customer support, and more. The additional plan can offer a higher level of protection and can provide more value to the entity, although it might also come at a higher cost. In some arrangements, the security objectives used to guide this determination process can be entity-specific. That is, they can be tailored to the unique needs, risks, and goals of the entity, which ensures that the determined protection plans are highly relevant and targeted. At block2250, the one or more processing circuits can provide the at least one cybersecurity protection plan to an entity computing system of the entity. In some arrangements, the cybersecurity protection plan is provided to the entity's computing system through a cybersecurity marketplace. For example, this can be a digital platform that connects entities with a wide range of third-party cybersecurity vendors. The marketplace enables the entity to easily browse, compare, and select from various cybersecurity protection plans. It also allows vendors to showcase their offerings to potential customers. Within the cybersecurity marketplace, the processing circuits identify the cybersecurity protection plans associated with a plurality of third parties. This includes a first cybersecurity protection plan offered by a first third-party and a second cybersecurity protection plan offered by a second third-party. Each of these plans is associated with the new cybersecurity attribute identified during the modeling process, meaning they are designed to address this specific aspect of the entity's cybersecurity. In some arrangements, each cybersecurity protection plan is associated with one of a plurality of availability states. 
These states indicate whether the plan is currently available for the entity to implement (an “available now” state), whether it will become available in the future (an “available pending” state), or whether it is not available at all (an “unavailable” state). In addition to identifying and providing the cybersecurity protection plans, the processing circuits can also facilitate the implementation of these plans. This could involve, for instance, integrating the chosen protection plan with the entity's existing IT systems, configuring the plan's settings according to the entity's needs and preferences, or monitoring the plan's deployment to ensure its functioning as expected. The processing circuits can also provide ongoing support for the protection plan, such as troubleshooting issues, providing updates, or adapting the plan based on changes in the entity's security posture or the cybersecurity landscape. Moreover, the processing circuits can manage the entity's interactions with third-party vendors. The processing circuits can handle communications between the entity and the vendors, negotiate contracts or service agreements, manage payment transactions, and ensure that the vendors fulfill their obligations. By acting as an intermediary, the processing circuits can help streamline the vendor management process, reduce the entity's administrative burden, and ensure a smooth and successful collaboration. In some arrangements, the processing circuits can also provide valuable analytics and reporting capabilities. For example, the processing circuits can track the performance of the cybersecurity protection plans, measure their impact on the entity's security posture, and generate reports that provide insights into the entity's cybersecurity progress. This can help the entity understand the effectiveness of its cybersecurity efforts, identify areas for improvement, and make informed decisions about its future cybersecurity strategy. These analytics and reporting capabilities can be particularly valuable in demonstrating the entity's compliance with regulatory requirements or industry standards, as well as in building trust with stakeholders such as customers, partners, or investors. In some arrangements, the processing circuits can scan the plurality of data channels to access third-party data from a range of third-parties. For example, this can include data about third-party vendors, partners, customers, or other entities that interact with the organization. Such third-party data can provide insights into the external aspects of the entity's cybersecurity, such as the security practices of its partners or the threats posed by its digital ecosystem. In this context, modeling involves integrating this third-party data into the determination of the set of cybersecurity attributes of the entity. This ensures that the model captures a holistic view of the entity's cybersecurity, encompassing both internal and external factors. In some arrangements, the processing circuits can determine a set of existing security attributes of the entity based on both the entity data and the third-party data. These existing security attributes represent the current state of the entity's cybersecurity, including its existing defenses, vulnerabilities, and threat exposures. By comparing these existing attributes with the desired attributes identified in the modeling process, the processing circuits can pinpoint the gaps that need to be addressed and guide the development of the cybersecurity protection plan. 
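A minimal, non-limiting sketch of this comparison between existing and desired attributes follows; the attribute names are hypothetical placeholders:

    def identify_gaps(existing_attributes: set, desired_attributes: set) -> dict:
        """Compare existing attributes with the desired attributes produced by the
        modeling process and report which ones remain to be addressed."""
        return {
            "satisfied": sorted(existing_attributes & desired_attributes),
            "gaps": sorted(desired_attributes - existing_attributes),
        }

    existing = {"endpoint_protection", "mfa", "backup_policy"}
    desired = {"endpoint_protection", "mfa", "incident_response_plan", "encryption_at_rest"}
    print(identify_gaps(existing, desired))
    # {'satisfied': ['endpoint_protection', 'mfa'],
    #  'gaps': ['encryption_at_rest', 'incident_response_plan']}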
In some arrangements, the processing circuits can determine an incident readiness based on the set of cybersecurity attributes of the entity. In particular, the incident readiness corresponds to a calculated level that indicates how prepared the entity is to respond to a cybersecurity incident. For example, this could involve factors such as the robustness of the entity's incident response plan, the skills and resources of its incident response team, the effectiveness of its communication channels, and its capacity for detecting, analyzing, and containing incidents. Similarly, the processing circuits can determine an insurance readiness based on the set of cybersecurity attributes. The insurance readiness refers to a calculated level that indicates how prepared the entity is to obtain cybersecurity insurance. For example, this could consider factors such as the entity's risk profile, its compliance with insurance requirements, the adequacy of its security controls, and its history of cybersecurity incidents. In some arrangements, the set of cybersecurity attributes of the entity is associated with at least the incident readiness or the insurance readiness. That is, these readiness levels can be parts of the entity's overall cybersecurity profile, reflecting its ability to respond to incidents and its readiness to obtain insurance. By considering these readiness levels in its analysis, the processing circuits can provide a more nuanced and comprehensive assessment of the entity's cybersecurity posture. In various arrangements, the incident readiness and insurance readiness could be calculated through a weighted scoring system that combines various cybersecurity attributes. For example, the incident readiness score might take into account the robustness of the entity's incident response plan (weighted at 30%), the skills and resources of its incident response team (30%), the effectiveness of its communication channels (20%), and its capacity for detecting, analyzing, and containing incidents (20%). Each attribute could be scored on a scale from 1 to 10, with the scores then multiplied by their respective weights and summed to produce the overall incident readiness score. Similarly, the insurance readiness score might consider the entity's risk profile (weighted at 40%), its compliance with insurance requirements (30%), the adequacy of its security controls (20%), and its history of cybersecurity incidents (10%). Again, each attribute could be scored on a scale from 1 to 10, with the scores multiplied by their weights and summed to produce the insurance readiness score. Accordingly, the scores provide a quantitative measure of the entity's readiness levels, allowing for comparison and tracking over time. In some arrangements, the one or more processing circuits can (1) receive a portion of the entity data from a user device via an application programming interface (API), (2) tokenize and extract content of the portion of the entity data into a plurality of tokens, (3) generate a unique identifier for each of the plurality of tokens, (4) store a mapping between the unique identifier and each of the plurality of tokens, (5) populate, from each of the plurality of tokens, a plurality of fields of a data object associated with the security posture based on the extracted content of the portion of entity data stored in each of the plurality of tokens, and (6) verify accuracy of the populated plurality of fields. 
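Returning to the weighted scoring approach described above, a purely numerical illustration follows; the weights mirror the example weightings given above, while the attribute scores themselves are hypothetical:

    def weighted_readiness(scores: dict, weights: dict) -> float:
        """Combine attribute scores (each on a 1-10 scale) into a readiness level."""
        assert abs(sum(weights.values()) - 1.0) < 1e-9
        return sum(scores[name] * weight for name, weight in weights.items())

    incident_weights = {
        "response_plan_robustness": 0.30,
        "team_skills_and_resources": 0.30,
        "communication_effectiveness": 0.20,
        "detect_analyze_contain_capacity": 0.20,
    }
    insurance_weights = {  # the insurance readiness weighting from the example above
        "risk_profile": 0.40,
        "insurance_requirement_compliance": 0.30,
        "security_control_adequacy": 0.20,
        "incident_history": 0.10,
    }
    incident_scores = {
        "response_plan_robustness": 8,
        "team_skills_and_resources": 7,
        "communication_effectiveness": 6,
        "detect_analyze_contain_capacity": 7,
    }
    print(weighted_readiness(incident_scores, incident_weights))  # approximately 7.1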
In general, the processing circuits can enhance the entity's security posture assessment by actively engaging with user devices via an application programming interface (API). Through this interface, they can receive a portion of the entity data, tokenize and extract content, generate unique identifiers for each token, and store a mapping between these identifiers and tokens. This process enables a granular analysis of the entity data, allowing the processing circuits to identify specific security attributes and nuances that may be concealed in the aggregated data. In some arrangements, the processing circuits can populate a data object associated with the security posture using the extracted content from the tokens. Each field of the data object corresponds to a specific aspect of the security posture, such as incident readiness, insurance readiness, risk profile, or compliance status. By populating these fields with precise data extracted from the tokens, the processing circuits can ensure that the data object accurately represents the entity's security posture. Furthermore, the processing circuits can verify the accuracy of the populated fields. For example, this could involve cross-checking the data with other sources, applying data validation rules, or using machine learning algorithms to detect anomalies or inconsistencies. In some arrangements, the processing circuits can receive a request to set up a cybersecurity protection account. This request could come from an entity that wants to enhance its cybersecurity posture or from a third-party such as a cybersecurity vendor or consultant. Setting up a cybersecurity protection account is the first step towards building a robust cybersecurity strategy, as it provides a centralized platform for managing all cybersecurity-related activities. Upon receiving the request, the processing circuits can generate a first graphical user interface (GUI) including interactable elements. The GUI serves as the main interface for users to interact with the cybersecurity protection account. The interactable elements could include menus, buttons, forms, or other components that allow users to input data, navigate the platform, or perform specific actions. When a user interacts with one of these elements, the processing circuits can receive, via the first GUI, a portion of the entity data. This data could correspond to various aspects of the entity's operations, such as team information, asset information, current third-party providers, and current cybersecurity protection plans. Next, the processing circuits can model the current cybersecurity protection plans. This involves analyzing the plans to understand their features, benefits, limitations, and effectiveness. It also includes implementing the plan, which could involve coordinating with the vendor, integrating the plan with the entity's systems, and ensuring its proper operation. By modeling the current plans, the processing circuits can identify potential improvements or gaps that need to be addressed in the new cybersecurity strategy. In some arrangements, the processing circuits can generate a second GUI including additional interactable elements. These elements are associated with the security posture, a plurality of incidents, and the plurality of security objectives. This second GUI provides users with a more detailed view of their cybersecurity situation, including their current posture, past incidents, and future objectives. 
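A non-limiting sketch of the tokenize, map, populate, and verify flow described above follows; the field names and the use of UUIDs as unique identifiers are illustrative assumptions rather than the actual implementation:

    import uuid

    def ingest_entity_data(raw_fields: dict) -> dict:
        """Tokenize each received field, give it a unique identifier, store the
        identifier-to-content mapping, and populate the posture data object."""
        token_map = {}       # unique identifier -> extracted content
        posture_object = {}  # fields of the security posture data object
        for field_name, content in raw_fields.items():
            token_id = str(uuid.uuid4())
            token_map[token_id] = content
            posture_object[field_name] = {"token_id": token_id, "value": content}
        return {"tokens": token_map, "posture": posture_object}

    def verify_fields(posture_object: dict, token_map: dict) -> bool:
        """Verify that every populated field still matches its mapped token."""
        return all(token_map.get(entry["token_id"]) == entry["value"]
                   for entry in posture_object.values())

    result = ingest_entity_data({"incident_readiness": 7.1, "compliance_status": "SOC 2"})
    print(verify_fields(result["posture"], result["tokens"]))  # True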
In some arrangements, the processing circuits can implement, test, and manage (sometimes referred to collectively as “modeling”) the cybersecurity protection plans. After a plan is selected, the processing circuits can facilitate the integration (or modeling) process between the vendor's solution and the entity's systems. For example, this might involve configuring the entity's networks, devices, and applications to work with the vendor's cybersecurity tools, testing the integrated solution to ensure that it functions correctly, and addressing any issues or conflicts that arise during this process. Moreover, the processing circuits can continuously monitor the entity's systems to assess the effectiveness of the protection plan. This can include analyzing system logs, network traffic, user behavior, and other relevant data to detect any signs of cybersecurity incidents. It also includes coordinating with the vendor to receive updates about new threats, patches, or improvements to the protection plan. These updates can then be incorporated into the entity's systems to ensure that the protection plan remains up-to-date and effective against evolving cybersecurity threats. In the event of a potential incident, the processing circuits can alert the entity and the vendor, providing detailed information about the incident's nature, scope, and potential impact. This allows the entity and the vendor to respond quickly and effectively, minimizing the damage and downtime caused by the incident. Furthermore, the processing circuits can analyze the incident to understand its causes, impacts, and lessons, and use this information to further improve the protection plan and the entity's overall cybersecurity posture. In some arrangements, the processing circuits can model the selected cybersecurity protection plan by testing it within the entity's infrastructure. This can include simulating various scenarios to evaluate the plan's effectiveness and resilience against potential threats. Through this testing process, the processing circuits can identify any gaps, vulnerabilities, or implementation issues, ensuring that the plan is not only compatible with the entity's systems but also robust enough to provide the necessary level of protection. In some arrangements, a vendor plan can be tested by enabling stepping through the incident response plan as documented, including taking iterative steps to check if the plan would indeed work for a particular modeled threat scenario. By virtually executing the plan and monitoring its response to simulated threats, the processing circuits can assess its practicality and effectiveness, making any necessary adjustments or improvements to ensure optimal incident response readiness. This testing approach enhances the confidence in the selected cybersecurity protection plan, enabling the entity to deploy a proactive and reliable security strategy. In some arrangements, the one or more processing circuits can generate a security posture stream including a timeline of incidents, changes in the security posture, and corresponding cybersecurity threat levels. This timeline provides a historical record of the entity's security posture over time. By reviewing the posture stream, the entity can gain insights into the effectiveness of their cybersecurity measures, identify recurring vulnerabilities or patterns, and make data-driven decisions for future enhancements. 
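As a non-limiting illustration, the security posture stream could be kept as an append-only timeline of events, each carrying a threat-level indicator; the event descriptions and level labels below are hypothetical:

    from dataclasses import dataclass
    from datetime import datetime, timezone

    @dataclass
    class PostureEvent:
        timestamp: datetime
        description: str   # an incident or a change in the security posture
        threat_level: str  # e.g., "green", "yellow", or "red"

    def append_event(stream: list, description: str, threat_level: str) -> None:
        """Append a new record to the append-only posture stream timeline."""
        stream.append(PostureEvent(datetime.now(timezone.utc), description, threat_level))

    stream: list = []
    append_event(stream, "EPP deployed on all EC2 servers", "green")
    append_event(stream, "Phishing incident detected and contained", "yellow")
    print(stream[-1].threat_level)  # most recent indicator shown on the dashboard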
The processing circuits can also apply advanced analytics and machine learning algorithms to the posture stream, enabling predictive capabilities to anticipate potential threats and proactively strengthen the entity's security posture. In addition to the aforementioned capabilities, some arrangements may leverage generative artificial intelligence (AI) algorithms to enhance the security posture analysis. Generative AI algorithms can analyze large volumes of data from various sources, such as threat intelligence feeds, incident reports, and security best practices, to identify patterns, trends, and potential vulnerabilities that human analysts may not have detected. By utilizing generative AI, the processing circuits can uncover hidden insights, predict emerging threats, and recommend proactive security measures to fortify the entity's defenses. In various arrangements, the use of generative AI further augments the capabilities of the processing circuits, enhancing the accuracy, efficiency, and scalability of the security posture analysis, and ultimately contributing to the overall resilience and robustness of the entity's cybersecurity framework. Referring now toFIG.23, a flowchart for a method2300to protect data, in accordance with present implementations. At least system100can perform method2300according to present implementations. In broad overview of method2300, at block2310, the one or more processing circuits (e.g., response system130ofFIG.1A) can receive a cybersecurity plan offering. At block2320, the one or more processing circuits can implement the cybersecurity plan offering. At block2330, the one or more processing circuits can monitor the environmental data of an entity. At block2340, the one or more processing circuits can generate a new cybersecurity incident. At block2350, the one or more processing circuits can provide the new cybersecurity incident to a dashboard. Additional, fewer, or different operations may be performed depending on the particular arrangement. In some embodiments, some, or all operations of method2300may be performed by one or more processors executing on one or more computing devices, systems, or servers. In various embodiments, each operation may be re-ordered, added, removed, or repeated. At block2310, the processing circuits can receive one or more cybersecurity plan offerings associated with a third-party. These offerings represent a variety of cybersecurity solutions that the third-party has developed to address different types of threats and vulnerabilities. The offerings may include active plans, which are ready to be implemented immediately, as well as plans that are to be offered on the marketplace for entities to activate. In general, the marketplace is a digital platform where entities can be provided, explore, compare, and select the cybersecurity plans that best meet their needs. It provides a wide range of options, catering to entities with different risk profiles, business models, and budget constraints. Upon receiving the cybersecurity plan offerings, the processing circuits then provide these offerings to the marketplace. The plans can be made available for activation by a plurality of entities, broadening the third-party vendor's reach and giving them access to a wider customer base. The processing circuits facilitate this process, ensuring that the offerings are presented accurately and attractively in the marketplace. 
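A minimal, non-limiting sketch of receiving a third-party plan offering and listing it on the marketplace follows; the vendor name and price are illustrative, and the availability labels correspond to the availability states described above:

    from dataclasses import dataclass

    @dataclass
    class PlanOffering:
        vendor: str
        name: str
        availability: str  # "available now", "available pending", or "unavailable"
        monthly_price: float

    marketplace: list = []

    def publish_offering(offering: PlanOffering) -> None:
        """List a third-party cybersecurity plan offering on the marketplace."""
        if offering.availability != "unavailable":
            marketplace.append(offering)

    publish_offering(PlanOffering("Example MDR Vendor", "Managed Detection and Response",
                                  "available now", 1500.0))
    print([(o.vendor, o.availability) for o in marketplace])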
In some arrangements, the processing circuits can receive an activation of a cybersecurity plan offering from an entity's computing system. This signals that the entity has selected a plan from the marketplace and is ready to implement it. The activation triggers a series of processes, including setting up the necessary connections between the entity and the third-party (described in block2320), configuring the plan according to the entity's specific requirements, and monitoring the implementation to ensure that it is successful. In some arrangements, the processing circuits can provide the cybersecurity plan offerings to entities for purchase before the modeling process at block2320takes place. This is based on one or more third-party customer parameters, which could include factors such as the entity's size, industry, risk profile, or specific cybersecurity needs. In some arrangements, the cybersecurity offerings may be tailored and made available only to certain entities based on both or either of the entity and vendor preferences. On one hand, an entity may have specific preferences or needs for cybersecurity protection plans based on their industry, size, geographical location, or regulatory requirements. On the other hand, the vendor may also have preferences for the types of entities they cater to, depending on factors such as the entity's risk profile, the vendor's area of expertise, or strategic business decisions. This customization of offerings ensures that each entity is presented with cybersecurity plans that are most relevant and suitable for their specific needs, while vendors can focus on providing services to entities that align with their capabilities and business strategy. This bespoke approach to cybersecurity planning enhances the efficiency and effectiveness of the cybersecurity marketplace. At block2320, the one or more processing circuits can model the one or more cybersecurity plan offerings, setting the stage for the application of the plans within the entity's infrastructure. For example, this process can begin with the generation and activation of a cybersecurity protection obligation between the entity and the third-party vendor. The protection obligation can include attributes that encapsulate the specifics of the cybersecurity plan, detailing parameters such as the scope of coverage, the service level agreements, the roles and responsibilities of each party, and the cost and payment terms, among others. In some arrangements, the processing circuits can provide the entity's security posture, entity data, and the details of the cybersecurity protection obligation to a third-party computing system of the third-party. This sharing of information can be important to the successful implementation of the cybersecurity plan. The entity's security posture and data allow the third-party to understand the unique cybersecurity landscape of the entity and tailor their offerings accordingly. In some arrangements, the processing circuits can provide a public address to the tokenized security posture of the entity. The security posture can provide insights into the entity's existing security framework, potential vulnerabilities, and overall security objectives, thereby equipping the third-party with the context necessary to deliver effective protection. In some arrangements, the processing circuits, in response to the activation of the cybersecurity protection obligation, model the activated cybersecurity plan offering.
This modeling phase translates the theoretical aspects of the plan into practical measures that are incorporated into the entity's existing infrastructure. It could involve the configuration and deployment of specific cybersecurity tools, the establishment of monitoring protocols, and the setup of incident response mechanisms, among other actions. The completion of this modeling phase signifies the full integration of the cybersecurity plan into the entity's infrastructure, positioning the entity to benefit from enhanced cybersecurity protection. For example, when an inconsistent state is identified, the processing circuits automatically analyze the current configurations of security tools employed by the vendor and the operating systems of the organization. Based on this analysis, appropriate modifications are made to the configurations or the agreement between the vendor and organization, ensuring that the security measures are aligned with the specific needs and risks of the entity. In some arrangements, prior to generating and activating the cybersecurity protection obligations, the one or more processing circuits can underwrite the cybersecurity plan by leveraging the data collected from the insured's security tools and configurations. This data, which provides a detailed and accurate representation of the insured's security posture, is assessed against the underwriting criteria established by the insurer. The processing circuits analyze various factors, including the effectiveness of the security measures implemented, the coverage level provided by the cybersecurity plan, and the compliance history of the insured. For example, consider a Fortune 500 company seeking cybersecurity insurance. The processing circuits can collect data from the company's security tools and configurations, including information about their network infrastructure, access controls, incident response protocols, and data protection measures. By analyzing this data, the processing circuits can assess the company's overall security posture and identify any potential vulnerabilities or gaps in their defenses. The processing circuits can also evaluate the company's compliance history, including past incidents or breaches, and their adherence to industry best practices and regulatory requirements. Based on this analysis, the processing circuits can determine the level of threat associated with insuring the company and provide an accurate underwriting assessment. In another example, consider a small business owner who is applying for cybersecurity insurance. The processing circuits can collect data from the business owner's security tools, such as firewalls, antivirus software, and intrusion detection systems, as well as information about their data encryption practices and employee training programs. The processing circuits can also assess the business owner's compliance with relevant cybersecurity regulations and their incident response capabilities. By analyzing this data, the processing circuits can evaluate the effectiveness of the security measures in place and determine the level of threat associated with insuring the business. The processing circuits can identify any areas where additional safeguards or improvements may be needed and provide recommendations to mitigate potential risks. Based on this underwriting assessment, the processing circuits can generate a tailored cybersecurity plan that aligns with the business owner's specific needs and offers appropriate coverage for their computing environment.
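In one hypothetical realization, the underwriting assessment described in the preceding examples could be reduced to a weighted scoring of the collected factors. The factor names, weights, and thresholds below are assumptions made only for purposes of illustration.

```python
def underwrite(controls_score: float, coverage_level: float,
               past_incidents: int, compliant: bool) -> str:
    """Return an illustrative threat tier derived from collected security-tool data.

    controls_score : 0.0-1.0 effectiveness of implemented safeguards
    coverage_level : 0.0-1.0 portion of assets covered by the plan
    past_incidents : number of prior breaches on record
    compliant      : adherence to applicable regulatory requirements
    """
    risk = (1.0 - controls_score) * 0.5 + (1.0 - coverage_level) * 0.3
    risk += min(past_incidents, 5) * 0.04
    if not compliant:
        risk += 0.15
    if risk < 0.25:
        return "low threat"
    if risk < 0.5:
        return "moderate threat"
    return "high threat"

# A well-defended enterprise versus a small business with gaps in its defenses.
print(underwrite(0.9, 0.95, 1, True))   # "low threat"
print(underwrite(0.5, 0.6, 3, False))   # "high threat"
```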
In some arrangements, the processing circuits take a proactive approach to modeling the cybersecurity plan offerings by engaging in deployment and configuration activities. This involves deploying and configuring third-party tools and various systems within the computing infrastructure of the entity, in accordance with the specific requirements outlined in the cybersecurity plan offerings. Furthermore, in the modeling of the cybersecurity plan offerings, the processing circuits can establish connections and integrate the third-party tools within the existing computing infrastructure of the entity. By establishing these connections and integrating the tools, the processing circuits ensure that the cybersecurity measures are incorporated into the entity's computing environment, creating a holistic and robust defense against potential threats. At block2330, the one or more processing circuits initiate a monitoring process, leveraging the plurality of data channels to keep a watch on the environmental data of the entities that are being modeled using the one or more cybersecurity plan offerings. This monitoring process provides real-time threat detection and response mechanisms. By maintaining consistent surveillance over the environmental data, the processing circuits are able to detect any anomalies or deviations that might signify a potential cybersecurity threat or breach. Environmental data in this context refers to an extensive array of information that encapsulates the operational environment of the entities. This data includes network traffic details, system logs, user activity, application activity, and other relevant metrics. Importantly, environmental data also includes information about the external threat landscape, such as updates about new types of cyber threats, threat intelligence feeds, and other relevant details. By monitoring this data, the processing circuits can maintain an updated understanding of the entity's cybersecurity status. In some arrangements, the monitoring process is carried out using a variety of data channels. These channels could include direct network connections, API feeds, and other communication interfaces that allow the processing circuits to tap into the entity's systems. The choice of data channels can depend on the specific architecture and requirements of the entity's information systems. Once the monitoring process is set in motion, the processing circuits are not just passively observing the data flow. They are actively scanning, analyzing, and interpreting the environmental data to pick up on any signs of cyber threats. For example, algorithms and artificial intelligence mechanisms can be deployed to sift through the vast volumes of data, identifying patterns and correlations that might escape human scrutiny. Any detected anomalies are promptly flagged, triggering appropriate response mechanisms as detailed in the cybersecurity plan offerings. This continuous, vigilant monitoring is instrumental in ensuring the entity's cybersecurity is always one step ahead of potential threats. At block2340, the one or more processing circuits are configured to generate a new cybersecurity incident. This operation is triggered upon detecting an anomaly or potential threat within the environmental data associated with any entity from the plurality of entities. The generation of a new cybersecurity incident is a step in the cybersecurity workflow.
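One simple, non-limiting way to realize the anomaly-triggered incident generation of blocks2330and2340is a statistical threshold test over a monitored metric; the z-score rule, metric, and field names in the following sketch are illustrative assumptions.

```python
from statistics import mean, stdev
from datetime import datetime
from typing import Dict, List

def detect_anomaly(samples: List[float], latest: float, z_threshold: float = 3.0) -> bool:
    """Flag the latest observation if it deviates strongly from the baseline."""
    if len(samples) < 2:
        return False
    baseline, spread = mean(samples), stdev(samples)
    return spread > 0 and abs(latest - baseline) / spread > z_threshold

def generate_incident(entity_id: str, metric: str, value: float) -> Dict:
    """Block 2340: create a new cybersecurity incident record (illustrative fields)."""
    return {
        "entity": entity_id,
        "metric": metric,
        "observed": value,
        "opened_at": datetime.now().isoformat(),
        "status": "inbound",
    }

traffic_mb = [110.0, 95.0, 102.0, 99.0, 104.0]   # baseline network traffic samples
latest = 480.0                                    # sudden spike in traffic
if detect_anomaly(traffic_mb, latest):
    incident = generate_incident("entity-001", "network_traffic_mb", latest)
    print(incident["status"], incident["observed"])
```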
Generating a new incident signifies the identification of a potential threat, vulnerability, or breach within the entity's systems, based on the analysis of the environmental data. It should be understood that many times the detection of a new cybersecurity incident is not a simple binary process; it can include a multi-faceted analysis of the environmental data. For example, machine learning algorithms, statistical models, neural networks, or heuristic rules could be employed to analyze the data for signs of malicious activity. For instance, sudden spikes in network traffic, unusual login attempts, or patterns that match known attack signatures could all trigger the generation of a new cybersecurity incident. This incident is then logged and tracked, with all relevant information captured for further analysis and response. In some arrangements, the processing circuits can identify and engage with one or more partners of the third-party vendor. For example, the partners could be other cybersecurity service providers, third-party software vendors, or even internal teams within the entity's organization. Through job routing for cases and conditions, as shown inFIG.15F, the processing circuits categorize the identified gaps and match them to suitable solutions or vendors capable of remedying those gaps. This matching process can be facilitated through an insurer marketplace portal, leveraging the capabilities provided by response system130. By collaborating with partners, the processing circuits ensure that the entity gains access to the expertise, technologies, and resources necessary to address specific security gaps effectively. In some arrangements, the processing circuits facilitate the linking of preferred products or solutions to pre-existing relationships between vendors, customers, and insurers. By leveraging the data and insights gathered from the ecosystem partner APIs, the processing circuits can identify vendors that have established relationships with the entity's preferred customers or insurers. This linkage enables a streamlined procurement process, where the entity can benefit from pre-negotiated contracts, favorable pricing, or tailored solutions. The processing circuits can evaluate the compatibility of preferred products with the entity's security objectives and seamlessly integrate them into the existing cybersecurity infrastructure. In some arrangements, once partners are identified, the processing circuits can configure one or more routing rules that dictate the flow of information and action items in response to the detected cybersecurity incident. These rules could be based on various factors such as the nature of the incident, the specific systems or data affected, the capabilities of the partner, or even pre-defined response plans. For instance, if a certain type of cybersecurity incident requires the expertise of a specific partner, the routing rules would ensure that all relevant action items are automatically sent to that partner. In particular, the routing rules facilitate improved and efficient response to cybersecurity incidents, ensuring that the right people are alerted at the right time with the right information. This coordinated, automated response mechanism significantly enhances the overall efficacy of the cybersecurity protection plan, reinforcing the entity's defenses against cyber threats.
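Under one set of assumptions, the routing rules mentioned above could be expressed as a small lookup that maps an incident category to the partner responsible for it; the categories and partner names below are hypothetical.

```python
from typing import Dict, List

# Illustrative routing table: incident category -> responsible partner.
ROUTING_RULES: Dict[str, str] = {
    "ransomware": "ForensicsPartner",
    "phishing": "AwarenessVendor",
    "data_exfiltration": "InternalSecurityTeam",
}

def route_incident(category: str, action_items: List[str]) -> Dict[str, List[str]]:
    """Send all action items for the detected incident to the matched partner."""
    partner = ROUTING_RULES.get(category, "DefaultResponder")
    return {partner: action_items}

print(route_incident("ransomware", ["isolate host", "collect disk image"]))
print(route_incident("ddos", ["enable rate limiting"]))  # falls back to the default responder
```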
At block2350, the one or more processing circuits can be configured to deliver the newly identified cybersecurity incident to a dashboard managed by the one or more processing circuits. In some arrangements, the dashboard includes a set of categories under which incidents are organized. These categories include inbound incidents, active incidents, and past incidents, each of which provides a different perspective on the entity's cybersecurity status. Inbound incidents refer to newly detected threats or vulnerabilities that have not yet been addressed. They include the security posture information associated with the entity, which gives context about the entity's overall cybersecurity health and potentially vulnerable areas. The information might encompass details about the entity's network architecture, the nature of its data, its existing cybersecurity measures, and its previous history of incidents. In some arrangements, active incidents pertain to ongoing issues that are currently being handled. These incidents come with real-time status updates and states, providing the third-party with a dynamic view of the incident's progression. The real-time statuses can include information on the current stage of incident response, such as investigation, containment, eradication, or recovery. The states may describe the condition of the incident, like open, pending, escalated, or closed, which helps in understanding the immediate attention that an incident requires. In some arrangements, past incidents consist of resolved threats or breaches and serve as a historical record of the entity's cybersecurity events. Moreover, the dashboard can include an Incident Room for each of the active incidents. An Incident Room can serve as a dedicated space for collaborative incident response, where all relevant parties can communicate, share updates, and coordinate their actions. It consolidates all information related to a particular incident, such as logs, alerts, action plans, timelines, and other relevant data, thereby facilitating a streamlined and efficient response process. In some arrangements, the Incident Room also enables the tracking of response efforts, ensuring accountability and promoting continuous improvement in the entity's cybersecurity practices. In some arrangements, the one or more processing circuits can automatically renew at least one of the one or more cybersecurity plan offerings with at least one of the plurality of entities. The automation process is designed to ensure continuity of protection by eliminating the risk of lapses due to manual renewal processes. This can be achieved by tracking the expiry dates of the cybersecurity plans and triggering the renewal process in advance. The renewal terms could be based on the existing contract between the entity and the third-party, or they could be subject to negotiation. The process also includes updating the entity's profile and security posture, and recalibrating the cybersecurity plan's specifications to align with any changes that may have occurred in the entity's environment or needs. Notifications about the renewal process, including any changes in terms or pricing, can be sent to the entity and vendor. In some arrangements, the automatic renewal process for cybersecurity plan offerings is built on procedures to ensure a seamless and efficient experience for both the entities and vendors.
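By way of a non-limiting illustration, the advance-renewal check may be sketched as follows; the look-ahead window and record fields are assumptions made only for purposes of example.

```python
from datetime import date, timedelta
from typing import Dict, List

def plans_due_for_renewal(plans: List[Dict], today: date,
                          look_ahead_days: int = 30) -> List[Dict]:
    """Return plans whose expiry falls within the look-ahead window."""
    horizon = today + timedelta(days=look_ahead_days)
    return [p for p in plans if today <= p["expires"] <= horizon]

plans = [
    {"entity": "entity-001", "plan": "Managed EDR", "expires": date(2024, 7, 10)},
    {"entity": "entity-002", "plan": "SOC Monitoring", "expires": date(2024, 12, 1)},
]
for plan in plans_due_for_renewal(plans, today=date(2024, 6, 20)):
    # Trigger the renewal workflow and notify the entity and the vendor.
    print("renewal started for", plan["entity"], plan["plan"])
```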
The processing circuits keep track of the expiration dates of the cybersecurity plans and initiate the renewal process in advance, eliminating the need for manual intervention and mitigating the risk of coverage lapses. The renewal terms and conditions can be based on the existing contract between the entity and the third-party vendor, ensuring consistency and alignment with the agreed-upon terms. In addition to the contractual aspects, the processing circuits can also take into account any changes in the entity's profile, security posture, or specific needs, allowing for the recalibration of the cybersecurity plan's specifications to provide tailored protection. Throughout the renewal process, notifications are sent to the entity and the vendor, providing updates on any changes in terms, pricing, or other relevant information, facilitating transparency and effective communication between all parties involved. In some arrangements, in response to receiving an indication of the completion of the new cybersecurity incident, the processing circuits can automatically generate and provide an invoice for the new cybersecurity incident to the entity. The invoice could include details such as the type of incident, the duration of the response, resources utilized, and the cost associated with each line item. The processing circuits could also include detailed explanations of each charge, enabling the entity to understand the cost drivers. Furthermore, upon completion of the new cybersecurity incident, the processing circuits can generate an incident summary. The summary can include a report that provides an overview of the incident from origination to resolution. It includes performance metrics such as the time to detect the incident, time to respond, time to contain, and time to recover. These metrics can provide insights into the effectiveness and efficiency of the entity's incident response process. Origination details can provide information about the source of the incident, its nature, and how it infiltrated the entity's defenses, which can be crucial for future prevention strategies. The incident timeline can be a chronological representation of the incident's progression and the response activities, providing a clear picture of the incident's lifecycle. The incident summary can be provided to the entity and relevant stakeholders, serving as a valuable resource for post-incident reviews, improvement of security strategies, and compliance reporting. In some arrangements, the processing circuits can collect cybersecurity data from the third-party tool interface and analyze and identify the data that aligns with the underwriting requirements. This analysis involves matching the collected cybersecurity data with the specific underwriting criteria, ensuring that the plan meets the necessary standards and guidelines. Once the data has been identified and categorized, the processing circuits package the information and seamlessly provide it to an application programming interface (API). This API serves as a conduit for transmitting the wrapped cybersecurity data, along with the underwriting requirements, to the underwriting system. Referring now toFIG.24, a flowchart for a method2400to protect data is shown, in accordance with present implementations. At least system100can perform method2400according to present implementations. In broad overview of method2400, at block2410, the one or more processing circuits (e.g., response system130ofFIG.1A) can identify a protection plan.
At block2420, the one or more processing circuits can receive activation. At block2430, the one or more processing circuits can generate and activate protection obligation. At block2440, the one or more processing circuits can model the protection plan. At block2450, the one or more processing circuits can establish data monitoring. Additional, fewer, or different operations may be performed depending on the particular arrangement. In some embodiments, some, or all operations of method2400may be performed by one or more processors executing on one or more computing devices, systems, or servers. In various embodiments, each operation may be re-ordered, added, removed, or repeated. At block2410, the processing circuits identify at least one cybersecurity protection plan associated with a plurality of third-parties. This identification process is guided by the previously modelled cybersecurity attributes, ensuring that the identified protection plan is relevant to the entity's cybersecurity needs. For example, the protection plan may be offered by a first third-party and a second third-party. Each of these plans is associated with the new cybersecurity attribute, demonstrating their capacity to address the specific cybersecurity needs identified during the modeling process. To provide more choice and flexibility for the entity, each cybersecurity protection plan is associated with one of several availability states. At block2420, the processing circuits are configured to receive an activation request from the entity's computing system for a selected cybersecurity protection plan. This activation signifies the entity's commitment to implementing the chosen protection plan. It might be, for example, that the entity has decided to proceed with the cybersecurity protection plan associated with the first third-party. The activation request signals the entity's decision to the processing circuits and triggers the next step in the process. Still at block2420, the processing circuits generate and activate a cybersecurity protection obligation between the entity and the first third-party. This protection obligation represents a formal agreement between the entity and the third-party provider, stipulating the provision of cybersecurity services as per the selected protection plan. In some arrangements, the protection obligation includes a plurality of protection attributes, which could include the specific services to be provided, the duration of the agreement, the obligations of each party, and the terms for monitoring, reporting, and responding to cybersecurity incidents. The activation of this obligation effectively sets the selected cybersecurity protection plan into motion, transitioning the entity into a phase of enhanced cybersecurity protection. The process of generating and activating a cybersecurity protection obligation involves several steps, for example, the creation of a formal contractual agreement between the entity and the third-party vendor. This contract outlines the scope and specifics of the cybersecurity services to be provided, in line with the selected protection plan. The document can detail the responsibilities and obligations of both parties, including the specific cybersecurity tasks to be undertaken by the vendor, and the cooperation and access required from the entity. 
The contract can be reviewed by both parties, and sometimes the processing circuits can automatically begin executing to fulfil contract terms based on previous relationship or authorizations by the vendor and/or entity. In some arrangements, the processing circuits may generate an invoice for the entity, reflecting the cost of the cybersecurity services as per the agreed-upon protection plan. This invoice might include details such as the price of individual services, any discounts or package deals, taxes, and payment terms. Payment processing can also be facilitated through the processing circuits, providing a seamless and convenient transaction experience for the entity. At block2430, the processing circuits provide the security posture, the entity data, and the cybersecurity protection obligation to the third-party computing system of the chosen vendor. This information transfer enables the vendor to understand the current cybersecurity state of the entity, their specific needs, and the obligations outlined in the protection plan. In particular, following financial settlement or prior to financial settlement based on the agreement, the processing circuits can provide the vendor the necessary access to the entity's infrastructure. In some arrangements, this can be achieved through a secure Application Programming Interface (API), which allows the vendor's systems to interact directly with the entity's systems. The API may provide the vendor with access to various aspects of the entity's infrastructure, depending on the services outlined in the protection plan. For instance, it could allow the vendor to monitor network traffic, manage security protocols, or deploy software patches. In some arrangements, there can be two separate APIs where the entity communicates with the processing circuits via a first API and the vendor communicates with the processing circuits via a second API. Thus, the activation of the cybersecurity protection obligation signifies the commencement of the cybersecurity services. It represents the implementation phase of the protection plan, where the vendor starts executing the agreed-upon services, guided by the contract terms and enabled by the access provided through the API. This activation indicates a transition from the planning stage to the action stage, setting the entity on a path towards improved cybersecurity. At block2440, the processing circuits model the cybersecurity protection plan. This involves configuring the vendor's tools and systems to work within the entity's infrastructure, based on the agreed-upon rules of engagement. This configuration process can be automated, with the processing circuits sending specific instructions to the vendor's systems via an API. These instructions could include access permissions, monitoring parameters, alert settings, and various other operational details that will guide the execution of the protection plan. The successful modeling of the protection plan at this stage provides that the vendor's systems are well-integrated into the entity's infrastructure and are ready to provide the required cybersecurity services. In order to implement (or deploy/configure) (i.e., model) the protection plan and integrate the vendor's tools into the entity's infrastructure, several steps can be taken. In some arrangements, the organization can establish the necessary credentials and permissions for the vendor to access the relevant systems or platforms. 
For example, if the entity utilizes AWS for its cloud infrastructure, the organization can provide the vendor with the required AWS credentials to facilitate the deployment of their tools on the entity's EC2 instances. In various arrangements, the organization can leverage automation capabilities to streamline the deployment process. This automation can be set up to automatically deploy the vendor's tools to the appropriate systems within the entity's infrastructure. By defining clear rules and configurations, the automation system can ensure that the deployment is consistent, efficient, and aligned with the organization's security requirements. During the deployment process, the processing circuits can monitor the progress and provide real-time feedback on the integration of the vendor's tools. They can validate that the tools are properly installed, configured, and connected to the relevant components within the entity's infrastructure. For example, suppose an organization operates a cloud-based infrastructure using platforms like Amazon Web Services (AWS). To integrate the vendor's tools into this environment, the organization can leverage automation tools. They can create infrastructure-as-code templates that define the desired state of the infrastructure and include the necessary configurations for deploying the vendor's tools. Using these templates, the organization can automatically provision the required infrastructure components, such as EC2 instances, security groups, and networking resources. The templates can be configured to install and configure the vendor's tools on the provisioned instances, ensuring that they are integrated into the organization's cloud environment. In another example, in the case of endpoint security solutions, the organization may have a diverse range of devices and operating systems across its network. To integrate the vendor's endpoint security tools into these devices, the organization can utilize a unified endpoint management (UEM) platform (e.g., executed and deployed by the response system130and stored in database140). The UEM platform can provide a centralized management console and agent-based deployment capabilities. The organization can configure the UEM platform to push the vendor's endpoint security agent to all managed devices within the network. The agent can be configured to communicate with the vendor's cloud-based security platform or an on-premises management server. Through the UEM platform, the organization can enforce security policies, monitor endpoint activities, and receive alerts and notifications from the vendor's tools. In some arrangements, the configuration of vendor tools is carried out by customizing the settings and parameters to align with the organization's specific security requirements. This includes defining rules, policies, and thresholds within the tools to effectively monitor, detect, and respond to security incidents. For instance, configuring firewalls to enforce access control policies, fine-tuning intrusion detection systems to detect specific attack patterns, or setting up encryption protocols for secure data transmission. In some arrangements, establishing connections between the vendor's tools and the organization's infrastructure allows for data flow and security monitoring. This involves integrating the tools with existing systems, such as log management platforms, identity and access management solutions, or security information and event management (SIEM) systems. 
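As a purely illustrative sketch of establishing such connections, the following example forwards events emitted by a deployed vendor tool into an existing log-management or SIEM integration point; the event shape, tagging, and class names are assumptions and do not represent any particular vendor's interface.

```python
import json
from typing import Dict, List

class SiemConnector:
    """Stand-in for an existing SIEM or log-management integration (illustrative)."""
    def __init__(self) -> None:
        self.correlated: List[Dict] = []

    def ingest(self, event: Dict) -> None:
        # In a real deployment this would post to the SIEM's collector endpoint.
        self.correlated.append(event)

def forward_vendor_event(connector: SiemConnector, raw_event: str) -> None:
    """Normalize a vendor-tool event and hand it to the organization's SIEM."""
    event = json.loads(raw_event)
    event["source"] = "vendor_endpoint_agent"   # tag the originating tool
    connector.ingest(event)

siem = SiemConnector()
forward_vendor_event(siem, '{"host": "laptop-17", "alert": "blocked malware"}')
print(len(siem.correlated), siem.correlated[0]["source"])
```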
Through these integrations, the organization can consolidate and correlate security events, streamline incident response workflows, and gain a comprehensive view of the overall security posture. In some arrangements, the deployment and implementation of vendor tools encompass the installation, activation, and configuration of the tools within the organization's environment. This can involve deploying software agents on endpoints, installing network appliances or sensors, or provisioning virtual instances in cloud environments. The deployment process ensures that the vendor tools are properly installed, connected, and ready to perform their intended functions. In some arrangements, testing and validation procedures are conducted after the deployment phase to ensure the effectiveness and reliability of the vendor tools and connections. This includes testing the tools' functionality, performance, and interoperability with other systems. Security assessments, vulnerability scans, and penetration testing may also be conducted to verify the tools' capability to detect and respond to various threats and attacks. In some arrangements, modeling, as part of the implementation process, refers to the systematic and strategic approach of configuring, integrating, and deploying vendor tools and connections within an organization's infrastructure. This involves a series of steps to ensure that the tools are appropriately tailored to meet the organization's specific security needs and seamlessly integrated with existing systems and processes. During the modeling phase, organizations collaborate closely with the vendor to define and customize the configuration settings of the tools. This includes determining the appropriate thresholds, policies, and rules that align with the organization's security objectives. For example, the modeling process may involve fine-tuning intrusion detection systems to detect specific attack patterns or configuring security information and event management (SIEM) systems to correlate and analyze security events effectively. Once the configuration settings are defined, the modeling process moves to the deployment stage. This can include the installation, activation, and integration of the vendor tools within the organization's infrastructure. The tools are deployed across various components, such as endpoints, network devices, servers, and cloud environments, to provide comprehensive security coverage. To ensure the successful integration and functionality of the vendor tools, thorough testing and validation are conducted during the modeling phase. Accordingly, the modeling process encompasses the implementation and deployment of vendor tools and connections. It involves configuring the tools to match the organization's security requirements, integrating them within the existing infrastructure, and conducting thorough testing to ensure their effectiveness. At block2450, the processing circuits establish a continuous data monitoring channel between the entity and the vendor. This involves the creation of two secure communication connections using APIs. The first connection is established between the entity's computing system or assets and the processing circuits, allowing the circuits to monitor the entity's systems in real-time. The second connection is established between the vendor's computing system and the processing circuits, enabling the vendor to receive real-time updates and alerts about the entity's security status. 
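Under illustrative assumptions, the two connections can be pictured as two independent channels managed by the processing circuits: one that pulls telemetry from the entity and one that pushes status updates to the vendor. The class and method names below are hypothetical.

```python
from typing import Callable, Dict, List

class MonitoringChannel:
    """Continuous data monitoring channel between the entity and the vendor."""
    def __init__(self, pull_entity_data: Callable[[], Dict],
                 push_to_vendor: Callable[[Dict], None]) -> None:
        self.pull_entity_data = pull_entity_data   # first connection (entity -> circuits)
        self.push_to_vendor = push_to_vendor       # second connection (circuits -> vendor)

    def poll_once(self) -> None:
        snapshot = self.pull_entity_data()
        if snapshot.get("alerts"):
            self.push_to_vendor(snapshot)          # real-time update for the vendor

vendor_inbox: List[Dict] = []
channel = MonitoringChannel(
    pull_entity_data=lambda: {"entity": "entity-001", "alerts": ["failed logins spike"]},
    push_to_vendor=vendor_inbox.append,
)
channel.poll_once()
print(vendor_inbox)
```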
This continuous data monitoring channel can be a component of the protection plan, as it allows for immediate (or periodic) detection and response to any cybersecurity incidents. It ensures that the vendor is up-to-date with the entity's security status and can provide the necessary support promptly and efficiently. In some arrangements, the processing circuits can respond to changes in the security objectives or the security posture of the entity. When the processing circuits receive an updated security objective from the plurality of security objectives or detect a new security objective, or when they detect a change in the security posture, the processing circuits can determine an updated cybersecurity attribute of the set of cybersecurity attributes of the entity. This can be a dynamic process, reflecting the fact that cybersecurity is not a static data structure. As threats evolve and the entity's business environment changes, its security objectives and posture may need to be adjusted. The processing circuits are designed to handle such changes, updating the entity's cybersecurity attributes as needed to ensure that the protection plan remains effective. Once the updated cybersecurity attribute has been determined, the processing circuits then reconfigure the security objective via the second API. This reconfiguration could involve adjusting the parameters of the security objective, changing its priorities, or even replacing it entirely with a new objective. In some arrangements, this process can be done in consultation with the vendor and the entity, ensuring that any changes to the security objective align with the entity's current needs and risk tolerance. The reconfiguration via the second API allows these changes to be implemented promptly and seamlessly, minimizing any potential disruption to the entity's operations. In some arrangements, when an objective of the entity is updated, the processing circuits analyze the corresponding state data, which includes information about the entity's safeguards, coverage, threats, insurance, and other relevant factors. If the analysis reveals an imbalance or a gap in the combination of these factors, the processing circuits notify the entity (e.g., through a gap manager). For example, this notification prompts the entity to take automated actions to address the gap, such as modifying insurance policies, adjusting technology configurations, or implementing additional security measures. For example, assume a Fortune 500 company has experienced a significant increase in targeted cyber threats aimed at their customer data. Through the analysis of the state data, the processing circuits identify a gap in the entity's existing security objective related to data protection. The gap manager alerts the entity about this imbalance and triggers an automated response. The processing circuits, in consultation with the entity and the vendor, reconfigure the security objective to prioritize enhanced data encryption, real-time monitoring, and incident response measures. In the above example, the second API could be utilized to promptly implement these changes across the organization's infrastructure, ensuring that the security objective is aligned with the heightened threat landscape. 
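A hypothetical gap-manager check consistent with the above example is sketched below; the objective names, state-data fields, and notification mechanism are assumptions for illustration.

```python
from typing import Dict, List

def find_gaps(objectives: Dict[str, str], state_data: Dict[str, str]) -> List[str]:
    """Return objectives whose required safeguard is not reflected in the state data."""
    return [name for name, required in objectives.items()
            if state_data.get(name) != required]

objectives = {"data_protection": "encryption_at_rest", "monitoring": "real_time"}
state_data = {"data_protection": "none", "monitoring": "real_time"}

for gap in find_gaps(objectives, state_data):
    # Notify the entity (e.g., through a gap manager) and reconfigure via the second API.
    print("gap detected for objective:", gap)
```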
While this specification contains many specific implementation details and/or arrangement details, these should not be construed as limitations on the scope of any inventions or of what may be claimed, but rather as descriptions of features specific to particular implementations and/or arrangements of the systems and methods described herein. Certain features that are described in this specification in the context of separate implementations and/or arrangements can also be implemented and/or arranged in combination in a single implementation and/or arrangement. Conversely, various features that are described in the context of a single implementation and/or arrangement can also be implemented and arranged in multiple implementations and/or arrangements separately or in any suitable subcombination. Moreover, although features may be described above as acting in certain combinations and even initially claimed as such, one or more features from a claimed combination can in some cases be excised from the combination, and the claimed combination may be directed to a subcombination or variation of a subcombination. Additionally, features described with respect to particular headings may be utilized with respect to and/or in combination with illustrative arrangement described under other headings; headings, where provided, are included solely for the purpose of readability and should not be construed as limiting any features provided with respect to such headings. Similarly, while operations are depicted in the drawings in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order, or that all illustrated operations be performed, to achieve desirable results. In some cases, the actions recited in the claims can be performed in a different order and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In certain circumstances, multitasking and parallel processing may be advantageous. Moreover, the separation of various system components in the implementations and/or arrangements described above should not be understood as requiring such separation in all implementations and/or arrangements, and it should be understood that the described program components and systems can generally be integrated together in a single software product or packaged into multiple software products. Having now described some illustrative implementations, implementations, illustrative arrangements, and arrangements it is apparent that the foregoing is illustrative and not limiting, having been presented by way of example. In particular, although many of the examples presented herein involve specific combinations of method acts or system elements, those acts, and those elements may be combined in other ways to accomplish the same objectives. Acts, elements and features discussed only in connection with one implementation and/or arrangement are not intended to be excluded from a similar role in other implementations or arrangements. The phraseology and terminology used herein is for the purpose of description and should not be regarded as limiting. 
The use of “including” “including” “having” “containing” “involving” “characterized by” “characterized in that” and variations thereof herein, is meant to encompass the items listed thereafter, equivalents thereof, and additional items, as well as alternate implementations and/or arrangements consisting of the items listed thereafter exclusively. In one arrangement, the systems and methods described herein consist of one, each combination of more than one, or all of the described elements, acts, or components. Any references to implementations, arrangements, or elements or acts of the systems and methods herein referred to in the singular may also embrace implementations and/or arrangements including a plurality of these elements, and any references in plural to any implementation, arrangement, or element or act herein may also embrace implementations and/or arrangements including only a single element. References in the singular or plural form are not intended to limit the presently disclosed systems or methods, their components, acts, or elements to single or plural configurations. References to any act or element being based on any information, act or element may include implementations and/or arrangements where the act or element is based at least in part on any information, act, or element. Any implementation disclosed herein may be combined with any other implementation, and references to “an implementation,” “some implementations,” “an alternate implementation,” “various implementation,” “one implementation” or the like are not necessarily mutually exclusive and are intended to indicate that a particular feature, structure, or characteristic described in connection with the implementation may be included in at least one implementation. Such terms as used herein are not necessarily all referring to the same implementation. Any implementation may be combined with any other implementation, inclusively or exclusively, in any manner consistent with the aspects and implementations disclosed herein. Any arrangement disclosed herein may be combined with any other arrangement, and references to “an arrangement,” “some arrangements,” “an alternate arrangement,” “various arrangements,” “one arrangement” or the like are not necessarily mutually exclusive and are intended to indicate that a particular feature, structure, or characteristic described in connection with the arrangement may be included in at least one arrangement. Such terms as used herein are not necessarily all referring to the same arrangement. Any arrangement may be combined with any other arrangement, inclusively or exclusively, in any manner consistent with the aspects and arrangements disclosed herein. References to “or” may be construed as inclusive so that any terms described using “or” may indicate any of a single, more than one, and all of the described terms. Where technical features in the drawings, detailed description or any claim are followed by reference signs, the reference signs have been included for the sole purpose of increasing the intelligibility of the drawings, detailed description, and claims. Accordingly, neither the reference signs nor their absence have any limiting effect on the scope of any claim elements. The systems and methods described herein may be embodied in other specific forms without departing from the characteristics thereof. 
Although the examples provided herein relate to controlling the display of content of information resources, the systems and methods described herein can be applied to other environments. The foregoing implementations and/or arrangements are illustrative rather than limiting of the described systems and methods. The scope of the systems and methods described herein is thus indicated by the appended claims, rather than the foregoing description, and changes that come within the meaning and range of equivalency of the claims are embraced therein.
11943255 | DETAILED DESCRIPTION The present disclosure relates in general to a method, apparatus, and system to validate communications in an open architecture system and, in particular, to predicting responses of client device to identify malicious applications attempting to interfere with communications between servers and the client devices. Briefly, in an example embodiment, a system is provided that detects malicious errors in a communication channel between a server and a client device. Normally, communication errors between a server and a client device are a result of random channel noise. For instance, communications received by server-client endpoints fall outside of a set of prior selected, recognizable, messages or codewords. Channel errors are usually corrected by existing error correction schemes and internet protocols. The end user is typically unaware that a transmission error has occurred and has been corrected. Malicious applications typically evade error correcting schemes in two ways: first by altering an original message into an alternative message, and second by creating noise in a segment of a channel where traditional error correction schemes do not operate. In the first way, a malicious application alters an original message into an alternative message that is already in a codeword set of an error correction mechanism. The malicious application may also provide additional messages that are included within the codeword set. As a result, an error correction algorithm is unaware that an error has even taken place and thereby makes no attempt to correct for the error. In the second way, a malicious application creates noise in a segment of a channel where traditional error correction schemes do not operate. For example, once a packet successfully traverses the Internet and arrives at a network interface of a receiving device, a bit stream of the packet is processed by an application stack under an assumption that no further transmission noise sources will occur. As a result, the application stack does not anticipate errors to occur in the bit stream after processing and thereby makes no attempt to correct for any errors from this channel noise. Malicious applications create targeted malicious noise configured to interfere with communications between a client device and a server. This channel noise is guided by a deliberate purpose of the malicious application to alter, access, or hijack data and/or content that is being communicated across a client-server connection. Oftentimes, the noise alters communications from original and authentic information to substitute authentic-appearing information. The noise is often induced in a segment of the (extended) channel that is poorly defended or entirely undefended by error correction algorithms. As a result, a malicious application is able to use channel noise to direct a server and/or a client device to perform actions that the client device or server did not originally intend. In an example, a client device may be connected to an application server configured to facilitate banking transactions. During a transaction, the server requests the client device to provide authentication information (e.g., a username and a password) to access an account. A malicious application detects the connection and inserts malicious noise that causes the client device to display a security question in addition to the username and password prompts (e.g., client baiting). 
A user of the client, believing the server provided the security question, enters the answer to the security question with the username and password. The malicious application monitors the response from the client device so as to use malicious noise to remove the answer to the security question before the response reaches the server. The malicious application may then use the newly acquired answer to the security question to later illegally access the account associated with the client device to improperly withdraw funds. In this example, the server is unable to detect the presence of the malicious application because the server receives a proper response to the authentication, namely the username and password. The client device also cannot detect the malicious application because the client device believes the server provided the security question. As a result, the malicious application is able to use channel noise to acquire sensitive information from the client device without being detected by the server or the client. This client baiting is not the only method used by malicious applications. In other examples, malicious applications may use channel noise to add data transactions between a client device and a server (e.g., add banking transactions). For instance, a client device may specify three bill payment transactions and a malicious application may insert a fourth transaction. In further examples, malicious applications may use channel noise to remove, substitute, or acquire data transmitted between a server and a client, modify data flow between a server and a client, inject graphics or advertisements into webpages, add data fields to forms, or impersonate a client device or a server. The example method, apparatus, and system disclosed herein overcome at least some of these issues caused by malicious noise by detecting malicious applications through estimated, predicted, or anticipated responses from a client device. The example method, apparatus, and system disclosed herein detect malicious applications by varying soft information describing how hard information is to be displayed by a client device. During any client-server connection, a server provides hard information and soft information. The hard information includes data, text, and other information that is important for carrying out a transaction with a client. The soft information specifies how the hard information is to be rendered and displayed by the client device. A server uses hard and soft messaging to transmit the hard and soft information to a client device. In some instances, the soft and hard information can be combined into messages before transmission. In other examples, the soft and hard information can be transmitted to a client device in separate messages. As used herein, soft messaging refers to the transmission of soft information to a client device in separate or combined soft/hard messages and hard messaging refers to the transmission of hard information to a client device in separate or combined soft/hard messages. The example method, apparatus, and system disclosed herein use variations in soft information to form a best guess (e.g., a prediction or estimation) as to how hard information is displayed by a client device. The example method, apparatus, and system disclosed herein then compare a response from the client device to the best guess.
If the information included within the response does not match or is not close enough to the prediction, the example method, apparatus, and system disclosed herein determine that a malicious application is affecting communications between a server and a client or, alternatively, provide an indication that a malicious application is affecting communications. As a result of this detection, the example method, apparatus, and system disclosed herein implement fail safe procedures to reduce the effects of the malicious application. The example method, apparatus, and system disclosed herein use soft information and messaging as a signaling language to detect malicious applications. In other words, the example method, apparatus, and system disclosed herein create an extended set of codewords for use with a user of a client device to validate that a malicious application is not interfering with communications. The created codeword set installs or uses soft messaging techniques including dynamically linked and/or static libraries, frameworks, browser helper objects, protocol filters, etc. The goal of these soft messaging techniques is to perturb the created communication channel such that the soft information cannot be reverse engineered by the malicious application but is known by the client device and the server. For instance,FIG.17shows diagrams comparing messaging without the example method, apparatus, and system disclosed herein and messaging using the example method, apparatus, and system disclosed herein. Diagram1700shows that in the absence of the example method, apparatus, and system disclosed herein, a set of legitimate codewords (denoted by circles) is fixed. Malicious applications know how these codewords are fixed and use malicious noise (denoted by the arrow) to transform a first valid codeword into a second valid codeword. The transformation is undetected by a receiving client device and the sending server. In contrast, diagram1710shows that the example method, apparatus, and system disclosed herein use variability in soft information and messaging to extend the dimensionality of the codeword set. This variability is unknown by the malicious application. Thus, an error occurs when the malicious noise combines with an intended codeword. As shown in diagram1710, the resulting altered codeword (denoted by an “X”) does not match the set of anticipated recognized codewords, which enables the malicious noise to be detected. The example method, apparatus, and system disclosed herein are accordingly able to use this soft information and messaging variability to detect malicious noise. As used herein, hard messaging and hard information is transactional text and/or data displayed by a client device. The transactional text, data, pictures, and/or images can be instructional, informational, functional, etc. in nature. The hard information also includes textual options that are selectable by a client. Hard information is accordingly principal information of a transaction or service provided by a server and presented to a client by a client device. The hard information includes any type of text and/or data needed by a server to perform a transaction or service on behalf of a client. For instance, hard information of a webpage of an account log-in screen includes text providing instructions to a client as to the nature of the webpage, text for a username field, and text for a password field.
After a client has logged into the account, the hard information includes transaction numbers, transaction dates, transaction details, an account balance, and account identifying information. Hard information may be financial (e.g. on-line banking), material (e.g., flow control of raw material in manufacturing processes), or related to data management (e.g., encryption, decryption, addition to or removal from shared storage, copying, deletion, etc.). As used herein, soft messaging and soft information is presentation information describing how hard information is to be displayed by a client device. Soft information pertains to the installation and/or system usage of dynamically linked and/or static libraries, frameworks, browser helper objects, protocol filters, javascript, plug-ins, etc. that are used to display hard information without interrupting the communication of the hard portion of the message between a client device and a server. The soft portion of the message includes information based on a server's selection of protocol, formatting, positioning, encoding, presentation, and style of a fully rendered version of hard information to be displayed at the client device endpoint. The soft information can also include preferences (e.g., character sets, language, font size, etc.) of clients as to how hard information is to be displayed. The precise details of the manner or method in which the direct, client device initiated, response information returns to the server is also a soft component of the communication and may be varied or manipulated without detracting from an ability of the server and client device to conduct e-business, e-banking, etc. The hard part of the message is constrained, for example, by business utility (e.g., there must be a mechanism for a client device to enter intended account and transaction information and return it to the server) while the soft part of the message has fewer constraints. For example, the order in which a client device enters an account number and a transaction amount usually is not important to the overall transaction. To achieve the business purpose a server only has to receive both pieces of information. In the client baiting example described above, the example method, apparatus, and system disclosed herein cause the server to transmit to the client device in one or more soft messages code that causes the client device to return coordinates of a mouse click of a ‘submit’ button. These soft messages are included with the other soft messages describing how the authentication information is to be displayed by the client. The server also determines a prediction as to what the coordinates should be based on knowing how the particular client device will render and display the information. When the malicious application uses malicious noise to insert the security question, the malicious application has to move the ‘submit’ button lower on a webpage. Otherwise, the security question would appear out of place on the webpage in relation to the username and password fields. When a user of the client device uses a mouse to select the ‘submit’ button, the client device transmits the coordinates of the mouse click to the server. The server compares the received coordinates with the coordinates of the prediction and determines that the difference is greater than a standard deviation threshold, which indicates the presence of a malicious application. 
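Under illustrative assumptions about the coordinate units and the deviation threshold, the submit-button example can be captured by the following comparison between the server's prediction and the coordinates reported by the client device.

```python
import math
from typing import Tuple

def click_is_consistent(predicted: Tuple[float, float],
                        reported: Tuple[float, float],
                        threshold_px: float = 12.0) -> bool:
    """Return True if the reported click falls within the expected deviation."""
    distance = math.dist(predicted, reported)
    return distance <= threshold_px

predicted_submit = (412.0, 530.0)   # derived from the soft (rendering) information
reported_click = (414.0, 604.0)     # page shifted down by an injected security question

if not click_is_consistent(predicted_submit, reported_click):
    print("possible malicious noise detected; initiating fail safe procedures")
```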
In response to detecting the malicious application, the server can initiate fail safe procedures to remedy the situation, including, for example, requiring the client device to create new authentication information or restricting access to the account associated with the client device. As can be appreciated from this example, the example method, apparatus, and system disclosed herein provide server-client communication channel validation. By knowing how a client device is to display information, the example method, apparatus, and system disclosed herein enable a server to identify remotely located malicious applications that mask their activities in hard-to-detect channel noise. As a result, servers are able to safeguard client data and transactions from some of the hardest-to-detect malicious third party methods of acquiring information and credentials. This allows service providers that use the example method, apparatus, and system disclosed herein to provide security assurances to customers and other users of their systems. Throughout the disclosure, reference is made to malicious applications (e.g., malware), which can include any computer virus, counterfeit hardware component, unauthorized third party access, computer worm, Trojan horse, rootkit, spyware, adware, or any other malicious or unwanted software that interferes with communications between client devices and servers. Malicious applications can interfere with communications of a live session between a server and a client device by, for example, acquiring credentials from a client device or server, using a client device to instruct the server to move resources (e.g., money) to a location associated with the malicious application, injecting information into a form, injecting information into a webpage, capturing data displayed to a client, manipulating data flow between a client device and a server, or impersonating a client device using stolen credentials to acquire client device resources. Additionally, throughout the disclosure, reference is made to client devices, which can include any cellphone, smartphone, personal digital assistant ("PDA"), mobile device, tablet computer, computer, laptop, server, processor, console, gaming system, multimedia receiver, or any other computing device. While this disclosure refers to a connection between a single client device and a server, the example method, apparatus, and system disclosed herein can be applied to multiple client devices connected to one or more servers. Examples in this disclosure describe client devices and servers performing banking transactions. However, the example method, apparatus, and system disclosed herein can be applied to any type of transaction or controlled usage of resources between a server and a client device including, but not limited to, online purchases of goods or services, point of sale purchases of goods or services (e.g., using Near Field Communication), medical applications (e.g., intravenous medication as dispensed by an infusion pump under the control of a computer at a nurses' station or medication as delivered to a home address specified in a webpage), manufacturing processes (e.g., remote manufacturing monitoring and control), infrastructure components (e.g., monitoring and control of the flow of electricity, oil, or flow of information in data networks), transmission of information with a social network, or transmission of sensitive and confidential information.
The present system may be readily realized in a network communications system. A high level block diagram of an example network communications system100is illustrated inFIG.1. The illustrated system100includes one or more client devices102, one or more application servers104, and one or more database servers106connected to one or more databases108. Each of these devices may communicate with each other via a connection to one or more communication channels in a network110. The network110can include, for example, the Internet or some other data network, including, but not limited to, any suitable wide area network or local area network. It should be appreciated that any of the devices described herein may be directly connected to each other and/or connected through the network110. The network110may also support wireless communication with wireless client devices102. The client devices102access data, services, media content, and any other type of information located on the servers104and106. The client devices102may include any type of operating system and perform any function capable of being performed by a processor. For instance, the client devices102may access, read, and/or write information corresponding to services hosted by the servers104and106. Typically, servers104and106process one or more of a plurality of files, programs, data structures, databases, and/or web pages in one or more memories for use by the client devices102, and/or other servers104and106. The application servers104may provide services accessible to the client devices102while the database servers106provide a framework for the client devices102to access data stored in the database108. The servers104and106may be configured according to their particular operating system, applications, memory, hardware, etc., and may provide various options for managing the execution of the programs and applications, as well as various administrative tasks. A server104,106may interact via one or more networks with one or more other servers104and106, which may be operated independently. The example servers104and106provide data and services to the client devices102. The servers104and106may be managed by one or more service providers, which control the information and types of services offered. These service providers also determine qualifications as to which client devices102are authorized to access the servers104and106. The servers104and106can provide, for example, banking services, online retail services, social media content, multimedia services, government services, educational services, etc. Additionally, the servers104and106may provide control to processes within a facility, such as a process control system. In these instances, the servers104and106provide the client devices102access to read, write, or subscribe to data and information associated with specific processes. For example, the application servers104may provide information and control to the client devices102for an oil refinery or a manufacturing plant. In this example, a user of the client device102can access an application server104to view statuses of equipment within the plant or to set controls for the equipment within the plant. While the servers104and106are shown as individual entities, each server104and106may be partitioned or distributed within a network. For instance, each server104and106may be implemented within a cloud computing network with different processes and data stored at different servers or processors.
Additionally, multiple servers or processors located at different geographic locations may be grouped together as servers104and106. In this instance, network routers determine which client device102connects to which processor within the application server104. In the illustrated example ofFIG.1, each of the servers104and106includes a security processor112. The security processor112monitors communications between the client devices102and the respective servers104and106for suspicious activity. The monitoring may include detecting errors in a communication channel between a client device102and a server104using hard and soft messages, as described herein. In some embodiments, the security processor112may be configured to only detect channel errors that are of strategic importance. This is because malicious applications generally only target communications that convey high value information (e.g., banking information). As a result, using the security processor112for important communications helps reduce processing so that the security processor112does not validate communications that are relatively insignificant (e.g., browsing a webpage). These important communications can include authentication information, refinements to types of requested services, or details on desired allocation of resources under a client's control. These resources may be financial (e.g., on-line banking), material (e.g., flow control of raw material in manufacturing processes), or related to data management (e.g., encryption, decryption, addition to or removal from shared storage, copying, deletion, etc.). In an example embodiment, a client device102requests to access data or services hosted by a server104. In response, the server104determines hard information that corresponds to the request and identifies soft information compatible with the hard information. In some instances, the server104may use device characteristics or information of the client device102to select the soft messaging. Upon selecting the soft and hard messages, the security processor112selects how the messages are combined into transmission packets and instructs the server104to transmit the packets to the client device102. To make the packets undecipherable by malicious applications, the security processor112may combine hard and soft information, rearrange the order of information transmission, or mix different layers of information. The unperturbed location of any input boxes or buttons selected by the security processor112for soft messaging may vary, subtly, from session to session, without being observable by a client device102or a malicious application. For example, the absolute and relative positioning of page elements may be obscured by the incorporation of operating system and browser bugz and further obscured by seemingly routine use of byte code and javascript. The security processor112may also use redundant measures for determining rendered page geometry and activity so that information returned from the client device102may be further verified. For instance, benign "pop-up windows" featuring yes/no button messages such as: "would you have time to take our brief customer survey?" may be made to appear or not appear depending on actual cursor or mouse locations when a 'submit' button is pressed at the client device102.
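A minimal sketch of that redundant pop-up measure, assuming the appearance rule can be expressed as a cursor-position region (the region boundaries, field names, and function names are illustrative assumptions, not the patented rule):

```python
# Illustrative sketch: whether a benign survey pop-up appears depends on the
# cursor position at the moment the 'submit' button is pressed, and the server
# checks the client's report against the rule it chose for this session.
POPUP_REGION = {"x_min": 0, "x_max": 500, "y_min": 300, "y_max": 600}

def popup_should_appear(cursor_x, cursor_y):
    return (POPUP_REGION["x_min"] <= cursor_x <= POPUP_REGION["x_max"]
            and POPUP_REGION["y_min"] <= cursor_y <= POPUP_REGION["y_max"])

def report_is_consistent(cursor, popup_reported):
    # The client reports the cursor position and whether the pop-up appeared;
    # a mismatch with the session rule suggests the page geometry was altered.
    return popup_should_appear(*cursor) == popup_reported

print(report_is_consistent((240, 410), popup_reported=True))    # True
print(report_is_consistent((240, 410), popup_reported=False))   # False
```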
Additionally, the security processor112may use generic geometrical and content related soft-variations (absolute and relative locations of input boxes and buttons, the appearance or lack of appearance of benign "pop-up" boxes, buttons, advertisements or images) to validate communications with a client device102. In other words, the security processor112may use soft information provided by client devices102to also validate a communication channel. After selecting which soft and hard information to send to the client device102, the security processor112makes a prediction, in this example, as to a location of a 'Submit' icon on a fully rendered webpage displayed on client device102. This icon is part of a banking website provided by application server104. The security processor112may also use backscattered information received from routing components in the network110to form the prediction. This backscattered information provides, for example, how the soft and hard information in the transmitted message(s) are processed, routed, and rendered. The security processor112then monitors a response by the client device102to identify coordinates of a mouse click of the 'Submit' icon. The security processor112determines that a malicious application is affecting communications if the prediction does not match the reported coordinates of the mouse click on the icon. In response to detecting a malicious application, the security processor112attempts to prevent the malicious application from further affecting communications with the affected client devices102. In some embodiments, the security processor112instructs the servers104and106to alter normal operation and enter into a safe operations mode. In other embodiments, the security processor112restricts activities of the affected client devices102or requests the client devices102to re-authenticate or establish a more secure connection. The security processor112may also store a record of the incident for processing and analysis. In further embodiments, the security processor112may transmit an alert and/or an alarm to the affected client devices102, personnel associated with the servers104and106, and/or operators of the security processor112. While each server104and106is shown as including a security processor112, in other embodiments the security processor112may be remotely located from the servers104and106(e.g., the security processor112may be cloud-based). In these embodiments, the security processor112is communicatively coupled to the servers104and106and remotely monitors for suspicious activity of malicious applications. For instance, the security processor112may provide soft information to the servers104and106. The security processor112may also receive client device response messages from the servers104and106. In instances when the security processor112detects a malicious application, the security processor112remotely instructs the servers104and106how to remedy the situation. A detailed block diagram of electrical systems of an example computing device (e.g., a client device102, an application server104, or a database server106) is illustrated inFIG.2. In this example, the computing device102,104,106includes a main unit202which preferably includes one or more processors204communicatively coupled by an address/data bus206to one or more memory devices208, other computer circuitry210, and one or more interface circuits212. The processor204may be any suitable processor, such as a microprocessor from the INTEL PENTIUM® or CORE™ family of microprocessors.
The memory208preferably includes volatile memory and non-volatile memory. Preferably, the memory208stores a software program that interacts with the other devices in the system100, as described below. This program may be executed by the processor204in any suitable manner. In an example embodiment, memory208may be part of a "cloud" such that cloud computing may be utilized by computing devices102,104,106. The memory208may also store digital data indicative of documents, files, programs, web pages, etc. retrieved from computing device102,104,106and/or loaded via an input device214. The example memory devices208store software instructions223, webpages224, user interface features, permissions, protocols, configurations, and/or preference information226. The memory devices208also may store network or system interface features, permissions, protocols, configuration, and/or preference information228for use by the computing devices102,104,106. It will be appreciated that many other data fields and records may be stored in the memory device208to facilitate implementation of the methods and apparatus disclosed herein. In addition, it will be appreciated that any type of suitable data structure (e.g., a flat file data structure, a relational database, a tree data structure, etc.) may be used to facilitate implementation of the methods and apparatus disclosed herein. The interface circuit212may be implemented using any suitable interface standard, such as an Ethernet interface and/or a Universal Serial Bus (USB) interface. One or more input devices214may be connected to the interface circuit212for entering data and commands into the main unit202. For example, the input device214may be a keyboard, mouse, touch screen, track pad, track ball, isopoint, image sensor, character recognition, barcode scanner, microphone, and/or a speech or voice recognition system. One or more displays, printers, speakers, and/or other output devices216may also be connected to the main unit202via the interface circuit212. The display may be a cathode ray tube (CRT), a liquid crystal display (LCD), or any other type of display. The display generates visual displays during operation of the computing device102,104,106. For example, the display may provide a user interface and may display one or more webpages received from a computing device102,104,106. A user interface may include prompts for human input from a user of a client device102including links, buttons, tabs, checkboxes, thumbnails, text fields, drop down boxes, etc., and may provide various outputs in response to the user inputs, such as text, still images, videos, audio, and animations. One or more storage devices218may also be connected to the main unit202via the interface circuit212. For example, a hard drive, CD drive, DVD drive, and/or other storage devices may be connected to the main unit202. The storage devices218may store any type of data, such as pricing data, transaction data, operations data, inventory data, commission data, manufacturing data, marketing data, distribution data, consumer data, mapping data, image data, video data, audio data, tagging data, historical access or usage data, statistical data, security data, etc., which may be used by the computing device102,104,106. The computing device102,104,106may also exchange data with other network devices220via a connection to the network110or a wireless transceiver222connected to the network110.
Network devices220may include one or more servers (e.g., the application servers104or the database servers106), which may be used to store certain types of data, and particularly large volumes of data which may be stored in one or more data repositories. A server may include any kind of data including databases, programs, files, libraries, pricing data, transaction data, operations data, inventory data, commission data, manufacturing data, marketing data, distribution data, consumer data, mapping data, configuration data, index or tagging data, historical access or usage data, statistical data, security data, etc. A server may store and operate various applications relating to receiving, transmitting, processing, and storing the large volumes of data. It should be appreciated that various configurations of one or more servers may be used to support and maintain the system100. For example, servers may be operated by various different entities, including sellers, retailers, manufacturers, distributors, service providers, marketers, information services, etc. Also, certain data may be stored in a client device102which is also stored on a server, either temporarily or permanently, for example in memory208or storage device218. The network connection may be any type of network connection, such as an Ethernet connection, digital subscriber line (DSL), telephone line, coaxial cable, wireless connection, etc. Access to a computing device102,104,106can be controlled by appropriate security software or security measures. An individual user's access can be defined by the computing device102,104,106and limited to certain data and/or actions. Accordingly, users of the system100may be required to register with one or more computing devices102,104,106.

The Client-Server Communication Channel

FIG.3shows a diagram of a communication session300between a client device102and an application server104. The communication session300occurs over a communication channel302, which is included in the network110ofFIG.1. The communication channel302includes hardware and software components that convey, relay, shape and forward information between the server104and the client device102. The hardware components include network node devices such as routers, mobile switching center components, base switching center components, data storage, caches, device proxies and firewalls. The hardware components can also include client device specific endpoints, computer architecture, processor types, mobile device chipsets, SIM cards and memory. The software components of the channel include network or endpoint device platforms, instruction sets, operating systems, operating system versions, application programming interfaces ("api"), and libraries. The software components can also include client device endpoint software, user interfaces, browser types, browser versions, cascading style sheets, scripts, document object models, java code, byte script, etc. In the communication channel302, information transmitted by the server104(e.g., soft/hard information included within soft/hard messages) is acted upon, processed, forwarded, and rendered by the various intervening hardware and software channel components. The processing is performed by hardware and software components residing on both network and client device endpoints. The client device102is the ultimate recipient of the fully realized, completely processed version of the information transmitted by the server104.
The client device102is stimulated by the received (processed) information into prompting a user for decision(s) and/or performing one or more actions. Once a user inputs a decision, the client device102communicates a response message to the server104through the channel302. WhileFIG.3shows one communication channel302, other communication channels can include different components and corresponding behavioral characteristics that vary from one server-client device connection to another. The behavioral characteristics identify ways in which information is acted upon, processed, forwarded and rendered by the hardware and software components of the channel302. The security processor112uses these behavioral characteristics to help form a prediction of a response from the client device102. Once a server-client device connection is established across a channel302and the primary, intended function of that communication is initiated (e.g., the type of transaction that is to occur across the channel302), secondary characteristics and observables are generated in the channel302as a consequence. There are two types of secondary characteristics and observables: “global” (involving many or all channel components) and “local” (involving a single, pair, or triple of channel components). The “global” channel's temporal secondary characteristics are applied across many or all hardware/software components and layers in, for example, the network110and include: i) number and size of discrete transmissions, ii) density of discrete transmissions, iii) frequency and other spectral content (e.g., content obtained by discrete Fourier transform, wavelet transform, etc. of an observed time series), and iv) geo-spatial density. These characteristics are derived from observables (e.g., from observation of information flow between client device102and server104) that include, for example, i) delivery times, ii) delivery rates, iii) transmission requests (as reports on errors or inefficiencies), and iv) sequencing or permutations in an order of information processing events. These observables are dependent on a number of factors including, for example, hardware type, software type, and current state (e.g., traffic load, internal queue lengths, etc.) of components that comprise the channel302. “Local” observables may also be generated on a per client device basis or per layer basis in the channel302ofFIG.3by server104and/or client device102initiated stimuli. The variations between client devices or layers are a result of a client's or layer's internal, device specific, information processing prioritization rulesets/protocols, inter-component signaling rulesets, and/or protocols that use hardware or software-based signaling components. The local observables may indicate, for example, a browser type used by the client device102, an operating system of the client device102, or device coding schemes used by the client device102. In the example embodiment ofFIG.3, the security processor112structures the hard and soft messaging output by the server104so that the secondary characteristics and observables function as a secondary means of communication between the client device102and server104. At the same time, the security processor112structures the hard and soft messaging output by the server104in a manner consistent with the original purpose of the connection with the client device102. 
As a result, the secondary means of communication between the server104and the client device102over the channel302is configured to not interfere with the primary, intended function of the server-client device interaction. The security processor112thus uses the channel302to vary soft information without changing the nature of the intended transaction between the server104and the client device102. InFIG.3, the channel302is constructed for universal use (e.g., an open architecture channel). That is, the component and collections of component technologies of the channel302are designed to enable a rich variety of server types, client device types, and purposeful communications between the different server and client device types. This enables the security processor112to use a variety of different soft messaging methods to achieve the original, intended purpose of the server-client transaction. However, each soft messaging method sets into motion a different set of (global and local) channel characteristic signals and observables. The security processor112is accordingly able to establish a secondary communication language between the server104and the client device102across the channel302using the association between variations in soft messaging methods (global, local) and corresponding channel characteristic responses. The communication session300ofFIG.3also includes malicious applications304, which are configured to interfere with client-server communications while allowing the primary, intended function of the server-client device interaction to occur. However, in accomplishing and creating this perturbation of the primary, intended communication between the server104and the client device102, the malicious applications304effectively become an "additional component" of the channel302, thereby unknowingly affecting the secondary communications. As shown inFIG.3, the malicious applications304can insert information into the channel302and/or extract information from the channel302using engineered channel noise. The example security processor112detects these malicious applications304by monitoring how generated malicious channel noise impacts the consistently crafted client-server secondary communications. FIG.4shows a diagram of backscattered channel information402during the communication session300between the server104and the client device102using the communication channel302ofFIG.3. From the point of view of the server104(or a trusted proxy), a complete communication with the client device102includes two distinct segments: information sent to the client device102and information received from the client device102in response to the information sent. Information402regarding the progress of channel components in processing, realizing, and rendering information and inter-device signaling events scatters back to the server104. If the server104, via the security processor112, subtly varies the content that it sends to the client device102through soft messaging, the effects of the changes will be detectable in the echoed information returning back to the server104from the various components and processing layers of the channel302.
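One way such echoed timing information might be summarized, shown here only as a sketch (the sample values and the use of a plain discrete Fourier transform are illustrative assumptions, one of several transforms the text mentions), is as a spectral signature of observed delivery times that can be compared from session to session:

```python
import cmath

# Illustrative sketch: derive a "global" spectral signature from observed
# packet delivery times, one of the secondary channel characteristics that
# scatter back to the server. The numbers are made up for illustration.
delivery_times = [0.00, 0.11, 0.19, 0.32, 0.40, 0.51, 0.61, 0.72]   # seconds
inter_arrivals = [b - a for a, b in zip(delivery_times, delivery_times[1:])]

def dft_magnitudes(samples):
    """Plain discrete Fourier transform of a short observable time series."""
    n = len(samples)
    return [abs(sum(x * cmath.exp(-2j * cmath.pi * k * i / n)
                    for i, x in enumerate(samples)))
            for k in range(n)]

# The resulting spectrum is one component of the channel signature that can be
# compared against the signature of earlier, unperturbed sessions.
print([round(m, 3) for m in dft_magnitudes(inter_arrivals)])
```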
The a priori knowledge of the information transmitted by the server104(the information and stimuli actually sent into the channel302to the client device102) together with the global and local backscatter information402from the components and layers of the channel302, permits the server104(or a trusted proxy) to form a prediction as to the condition of the final, post-processing, fully rendered version of the information displayed by the client device102. Additionally, direct, client device initiated, response messages to the server104(e.g., mouse clicks or user supplied account information) constitute a means for the security processor112to determine a prediction as to the fully rendered version of the information displayed by the client device102. The information in the response from the client device102can be entered by a user using a mouse, keyboard, touchscreen, an infrared ID tag scanner, etc. For example, information of a returned mouse click informs the security processor112that a selectable box was 1) rendered, 2) selected, and 3) the click was performed at (x,y) pixel coordinates. The security processor112determines discrepancies between the prediction and the direct, client device102initiated responses of the fully rendered information to detect and identify errors (e.g., malicious applications304) in the channel302. The detection and identification of channel error causes the security processor112to alter normal operations of the server104. In some embodiments, the security processor112may cause the server104to enter a safe operations mode, restrict authorized client device activities, and/or generate an alert and/or alarm.

The Use of Soft Messaging for Channel Verification

As discussed above, the security processor112can use different types and variations of soft messaging and information to help identify malicious applications. This variation helps prevent malicious applications from reverse engineering the soft messaging and circumventing the approaches described herein. As described below, the variation can include changes to font size, changes to web page arrangement of hard information and graphics, addition of characters to user inputs, changes to function definitions, requests for user prompts through banners and pop-up windows, or implementations of bugz. The variation can also include changing an order in which hard and soft information is sent from a server104or a client device102. The order in which information arrives at a server104or client device102is not relevant for business purposes. The inclusion of additional information, for example the pixel location of a mouse click, cursor, or scroll bar (e.g., soft information) in addition to account information (e.g., hard information) does not affect the business purpose. The method of encoding information and, within reasonable bounds, the amount of time information spends in transmission over the channel302have a generally neutral impact on business purposes. "Soft" choices consistent with the "hard" business purpose exist at many layers of the channel302ranging from the choice(s) of physical method(s) used, transmission encoding method(s) used on the physical layer(s), to aesthetic details of information presentation and user interactions with a presented webpage. The choice of soft messaging by the server104(or its trusted proxy) corresponding to given hard information is a many-to-one mapping.
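That many-to-one mapping can be illustrated with a minimal sketch, assuming the soft dimensions can be reduced to a couple of presentation choices (the field names, options, and storage shown are illustrative assumptions, not the patented message format): several different soft presentations carry the same hard log-in prompt, and the per-session choice is recorded so a prediction can be formed later.

```python
import random

# Illustrative sketch: one hard message, many possible soft presentations.
HARD_INFORMATION = {"prompt": "Enter your username and password"}

SOFT_CHOICES = {
    "font_size": [12, 13, 14],
    "field_order": [("username", "password"), ("password", "username")],
}

recorded_soft = {}      # session id -> soft choice made before transmission

def compose_message(session_id):
    soft = {name: random.choice(options) for name, options in SOFT_CHOICES.items()}
    recorded_soft[session_id] = soft          # recorded a priori as the prediction
    return {**HARD_INFORMATION, **soft}       # composite (hard plus soft) message

# Every composed message carries the same hard information, so the business
# purpose is unchanged even though the soft presentation varies per session.
print(compose_message("session-1"))
print(compose_message("session-2"))
print(recorded_soft)
```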
In a similar way, the local, specialized function and contribution of each network and client device specific hardware and software channel component is decomposable into hard and soft elements consistent with achieving the overall, global intent of the interaction of the server104with the client device102. The security processor112accordingly maintains hard functionality of the server-client device connection (e.g., the session300) while varying the soft information. Soft information variations are recorded a priori by the security processor112or the server104(or its trusted proxy) in a data structure to create a large set of composite (hard and soft) messages to be transmitted together. In other embodiments, the server104may transmit the hard messages separate from the soft messages. The soft variations are constrained by the fact that the final presentation at the client device102must be intelligible, not garbled. Further, the soft variations must be of sufficient complexity that the malicious applications304are faced with a time-consuming reverse engineering problem in deciphering the accumulated impact of the soft message changes throughout the channel302.

The Use of Bugz in Soft Messaging for Channel Verification

As mentioned above, the security processor112may use implementations of bugz in soft information variation. Bugz are anomalous, device, software, protocol and/or physical communication medium specific interpretations of input instructions that produce consistent although unexpected output. Bugz are inherent in many components of the channel302and are generally undetectable by malicious applications304without significant processing and analysis. The use of bugz helps enhance the complexity of soft messaging by enabling the security processor112to craft soft information so that the soft degrees of freedom within and between hardware and software based components of the channel302are combined in a multiplicative fashion. While four examples of bugz are described below, the security processor112can implement any type of bugz in soft messaging. One type of bugz is based on different operating systems of client devices102processing the same incoming packet streams differently. As a result of this bugz, the security processor112can create soft messaging packet streams intended to induce certain known behaviors in an operating system to display hard information. Another type of bugz is based on different operating systems of client devices102interpreting the same portion of Extensible Markup Language ("xml") code differently. Prior to initializing its service to a client device102, a server104or security processor112selects from a variety of ways that a portion of xml code may be written and selects from a variety of ways to order, time delay, and geographically position the way the packets containing that code are transmitted into the channel302. Yet another type of bugz is based on HyperText Markup Language ("html") code and cascading style sheet instructions that can be written and combined in contrasting and confusing fashion by a server104or the security processor112. The server104can also use different layers of the style sheet in opposition to each other. For example, the security processor112could instruct a server104to randomize which portions of a webpage are sent in style sheet instructions at sequential times. As a result, a malicious application304is unable to easily determine which style sheet instruction corresponds to which portion of the webpage.
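A minimal sketch of that randomization follows; the selectors, fragment contents, and seeding scheme are illustrative assumptions rather than the patented instruction set, and the recorded order would serve as part of the prediction.

```python
import random

# Illustrative sketch: randomize which portions of a webpage's styling are
# delivered at sequential times, so an observer cannot easily map a style
# sheet instruction to the page element it controls.
STYLE_FRAGMENTS = {
    "username_field": "#user { font-size: 12px; }",
    "password_field": "#pass { font-size: 13px; }",
    "submit_button":  "#submit { margin-top: 24px; }",
}

def delivery_schedule(session_seed):
    """Return the per-session order in which style fragments are transmitted."""
    order = list(STYLE_FRAGMENTS)
    random.Random(session_seed).shuffle(order)
    return order      # recorded by the security processor as part of the prediction

print(delivery_schedule(session_seed=7))
print(delivery_schedule(session_seed=8))   # a different session, a different order
```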
A further type of bugz is based on code libraries that are internally re-arranged by the security processor112so that functions that use the code libraries on client devices102behave in contrast to the expected performance under the usage conventions of the standard library. For example, the security processor112can use this type of bugz to swap the definitions of the 'add' and 'multiply' functions. As a result of this swap, the client device102performs the intended function while a malicious application304incorrectly determines that a different function is being performed. As a result, the security processor112can determine if a malicious application304attempts to change a result of the function or transaction. Often the ultimate resolution of the purposefully mis-engineered "spaghetti" code applied by the security processor112in soft messaging depends on a browser type and version at the client device102. Java script and bytecode, for example, may be similarly obfuscated by the security processor112without detracting from run time performance or the ability of the server104and client device102to conduct business. The effects of the examples described above may be enhanced by incorporating operating system and browser bugz into the instructions. The result of this incorporation is a soft formatting and presentation style at a client device endpoint that makes it difficult for malicious applications304to predict and/or automatically interpret the soft information. This makes the soft information difficult for the malicious applications304to alter, replace, or counterfeit in real time. Although this encoding is difficult to interpret in real time, it may be easily tested experimentally, a priori by a server104(or its trusted proxy). It is this a priori knowledge of the unperturbed and fully implemented rendering of the instruction set at the client device102that forms the basis of the prediction determination made by the security processor112of the formatting at the client device endpoint. The example security processor112creates the variation among the soft messages to increase the differences between the prediction and direct versions of the fully rendered information displayed by the client device102.

A Comparative Example of Channel Verification with and without the Security Processor

FIGS.5and6show diagrams representative of a malicious application304affecting the communication session300between the client device102and server104. In particular,FIG.5shows the effects of the malicious application304when the server104does not include a security processor112andFIG.6shows the effects of the malicious application304when the server104includes the security processor112. It should be noted thatFIGS.5and6are only one example of communications between a client device102and server104. Other examples can include additional effects of malicious applications304and/or different types of transactions performed between the server104and client device102. InFIG.5, the server104intends to communicate a deliberate, pre-determined datagram402to the client device102. Here, the datagram402is a webpage that prompts a user to provide a username and password. The pre-determined datagram402is represented in binary form for transmission purposes, shown inFIG.5as the number "0" in data transmission404.
The final, fully rendered, intended, client device intelligible and/or useable form of the data transmission404is known to the server104(or its proxy) at and/or before the time of the data transmission to the client device102. The pre-determined, intended data transmission404progresses through and/or is processed by the various hardware and/or software based components, layers, and protocols of channel302. The sequence of "0's" represents the original intent of the server104and is represented inFIG.5as a sequence of "0's" progressing through a sequence of rectangles in the direction of the dashed, horizontal arrows406. The upper arrow represents the sequence of processing events experienced by the "hard" portion of the data and the lower arrow represents the sequence of processing events experienced by the "soft" data. The soft and hard data transmission paths may or may not be the same and may or may not entail identical processing events. As transmitted data404progresses through and/or is processed by the channel302with the original intent of the server104intact, secondary information408generated by the routing and processing of the data404is scattered back through the channel302to the server104. The secondary information408can include, for example, an operating system of the client device102, a browser type used by the client device102, a cascading style sheet type used to display the soft/hard information, java script information, byte code data, etc. In other instances, the secondary information408may be reported by the client device102as device information after initiating the communication session300with the server104. The secondary information408is generated, for example, from Transmission Control Protocol/Internet Protocol ("TCP/IP") negotiation, Hypertext Transfer Protocol ("HTTP") requests and confirmations, and/or rendering information. In other examples, the secondary information408can be generated through other channel302backscattering routing and/or processing. During transmission of the data404to the client device102, the malicious application304creates channel noise410, which alters the data404. The channel noise410causes an intelligent modification of the data404to be realized at the client device102instead of the original pre-determined datagram402. This alteration is represented inFIG.5as the number "1" and may incorporate hard and/or soft information. The client device102receives the final, fully rendered, client device intelligible form of the data as altered by the malicious application304and displays this data as datagram412. Here, the channel noise410adds a security question to the webpage and moves the location of a 'submit' button to accommodate the security question. As a result of this channel noise410, the server104believes the client device102is viewing datagram402when in fact the client device102is viewing altered datagram412. Further, a user of the client device102has no reason to be suspicious of the datagram412because the maliciously inserted security question appears to coincide with the remainder of the datagram412. When the client device102returns a response message to the server104, the malicious application304detects the response and uses channel noise410to remove the answer to the security question. This is represented by the transition of the data404from "1" to "0" before the data reaches the server104. As a result, the server104receives a response from the client device102that only includes the username and password.
The server104never receives an indication that the client device102provided a response to a security question and, accordingly, never detects the presence of the malicious application304. The malicious application304remains hidden to carry out further stealthy compromises of account security. FIG.6shows how the security processor112can validate communications between the server104and the client device102during the same communication session300described in conjunction withFIG.5. Similar toFIG.5, the server104inFIG.6is to transmit a request for a username and password to access an account. However, unlike inFIG.5, the security processor112inFIG.6specifically creates the soft content of a deliberate, pre-determined datagram502before transmission to the client device102. FIG.6shows soft information504and hard information506transmitted by the server104. The security processor112varies the soft data504from one client-server connection to the next to prevent the client device102or the malicious application304from knowing the components of the soft information504beforehand. The soft information504is, however, fully understood by the server104(or its trusted proxy) by the time of transmission to the client device102. The server104stores the soft information504to a data structure to help form a prediction as to a response from the client device102. The soft and hard information504,506initiated by the server104is shown as "0's" in the blocks. During the communication session300, the propagation of the soft and hard information504,506through the channel302causes secondary information508to be generated. The secondary information508is scattered back to the server104and the security processor112. The security processor112uses the secondary information508in conjunction with the soft information504to form a datagram510of the prediction.FIG.7shows an enlarged image of the datagram510including the request for the username and password. The server104uses the datagram510to predict how the client device102will process, render, and display datagram502. In other embodiments, the security processor112stores the secondary information508in conjunction with the soft information504in a data structure rather than rendering datagram510. Similar toFIG.5, the malicious application304uses channel noise512to alter the soft and/or hard data504,506, which is shown inFIG.6as the number "1." As before, the alteration includes the addition of a security question and the movement of the 'submit' button. The client device102then receives, processes, renders, and displays the altered data. A rendered datagram514, as displayed by the client device102, is displayed inFIG.8. This datagram514shows a security question prompt below the prompts for the username and the password. In addition, the 'submit' button and corresponding text have been lowered in the datagram514to make room for the security question. As a result, the security question appears to be genuine to a user of the client device102. After displaying the datagram514, the client device102transmits a response, which also includes hard and soft information. Similar toFIG.5, the malicious application304uses channel noise512to remove the response to the security question, which is shown inFIG.6in the transition of the hard information506from "1" to "0." However, while the malicious application304removed the hard information506, the malicious application304is not concerned with the mouse click information, and accordingly does not alter the soft information504.
InFIG.6, the server104and the security processor112receive the response from the client device102, including the hard and soft information504,506. The security processor112compares the soft information504to the prediction and is able to determine that the communication session300has been compromised. In other words, the security processor112detects the malicious application304by determining that the coordinates of the mouse click on the 'submit' button do not match the coordinates of the 'submit' button in the prediction. FIG.9shows a diagram of a comparison datagram516representative of the comparison made by the security processor112to determine if a malicious application is affecting communications between the server104and the client device102. The comparison datagram516includes the prediction datagram510formed by the security processor112and a construction of the datagram514based on the soft and hard information received from the client device102. For visual effect, the prediction datagram510is superimposed upon the datagram514from the client device102. As shown inFIG.9, the geometry of the datagram514is altered, in particular the position of the 'submit' button, as a result of the space needed to reformat the page and accommodate the additional bogus security question. In addition to the location of the data fields, the datagram514includes soft information504such as a position of a mouse click associated with the 'submit' button. In this example, the server104requests that the client device102report the mouse click as soft information, for example, by relying on a "hidden in the clear" communication protocol. In some examples, the server104or security processor112may embed the authentication form in a "trendy" image so that the relative coordinates of the mouse clicks are returned as a matter of routine and not detected by the malicious application. In this authentication page example, by comparing the predicted position of the 'submit' button with the directly reported position, the security processor112detects whether an error has occurred during the communication session300. Here, the security processor112detects that the datagram514does not align with the datagram510, and accordingly determines that the malicious application304is affecting communications. In some embodiments, the security processor112may determine an allowable deviation or threshold for datagram510. Thus, as long as, for example, the 'submit' button is located within the allowable deviation, the security processor112determines that communications are not being affected by malicious applications. The security processor112may determine what an allowable deviation is for the datagram510based on, for example, secondary information508, characteristics of the client device102, or history information of how the datagram510has been displayed by other client devices.

Examples of Channel Verification Using Different Types of Soft Messaging

As disclosed, the security processor112uses different types and variations of soft information and soft messaging to validate communication channels between servers104and client devices102. The types of soft information and messaging can include changes to font size, changes to web page arrangement of hard information and graphics, addition of characters to user inputs, changes to function definitions, requests for user prompts through banners and pop-up windows, or implementations of bugz.
The following sections describe how the security processor112uses different types of soft information and messaging.

Soft Messaging Using Text Size and Font Variations

FIG.10shows a diagram of a datagram1000that includes a code section1002and a result section1004. The datagram1000illustrates how soft information can be selected or created by the security processor112in code section1002. The datagram1000also shows how the soft information would be displayed on a client device102in the result section1004. FIG.10shows that character sets, font types and point sizes may be varied by the security processor112from session to session. These variations are in addition to the geometrical and content related soft-variations described in the previous comparative example. In the code section1002, keyboard and mouse functionality may be made functions of a number of characters typed or x,y coordinates of text boxes. These modifications may be subtle and may also be made session dependent. The security processor112may invoke changes using any seemingly contrasting combination of coding instructions via html, xml, CSS, javascript, byte code, etc. The security processor112may also invoke changes by altering or restricting elements available for coding instructions to draw from, for example, available character sets. For example, in the datagram1000, the security processor112is subject to a ruleset based on the hard information that is required to be transmitted (e.g., the prompt for a username and password). Here, the security processor112selects soft information or message variation such that the fully processed and rendered information presented to the client device102is structured so that the username transaction field is to be rendered by the client device102in a font size of 12, the first password field is to be rendered in a font size of 13, and the second password field is to be rendered in a font size of 14. In other examples, the security processor112may also vary a font type, font color, font weight, or any other text variation allowable for the corresponding hard information. The variation among the font sizes is used by the security processor112to form a prediction. For instance, the name provided by the client device102is to be in 12 point font while the first password is to be in 13 point font. If a malicious application uses channel noise to alter the username or password responses or add a second transaction, the security processor112is able to detect the modification by the malicious application if the returned font size does not match the prediction. If the malicious application is more sophisticated and processes the soft information returned from the client device102to determine the font size, the extra time spent processing the information provides an indication to the security processor112that a malicious application is affecting communications. As a result, the soft messaging makes it relatively difficult for a malicious application to go undetected by the security processor112.

Soft Messaging Using Programmed Keystrokes

In another embodiment, the code section1002may include code that instructs a client device102to programmatically generate keystrokes based on keystrokes provided by a user. The security processor112uses the algorithm for the programmatically generated keystrokes to form a prediction. The security processor112transmits the algorithm for the programmatically generated keystrokes through xml code, java code, etc.
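A minimal sketch of such an algorithm, and of how the server side might strip and verify the generated keystrokes, is shown below; the insertion rules mirror the example described in the next paragraph, while the function names and return values are illustrative assumptions rather than the patented code.

```python
# Illustrative sketch: the soft message tells the client device to insert
# extra characters after certain user keystrokes; the server later checks for
# and strips those insertions to recover the user's intended input.
INSERT_AFTER = {"b": "e", "1": "4"}

def apply_generated_keystrokes(user_text):
    out = []
    for ch in user_text:
        out.append(ch)
        if ch in INSERT_AFTER:
            out.append(INSERT_AFTER[ch])
    return "".join(out)

def strip_and_verify(returned_text):
    """Recover the user's input; report False if the expected pattern is absent."""
    original = []
    i = 0
    while i < len(returned_text):
        ch = returned_text[i]
        original.append(ch)
        if ch in INSERT_AFTER:
            if i + 1 >= len(returned_text) or returned_text[i + 1] != INSERT_AFTER[ch]:
                return None, False      # injected or altered text detected
            i += 1
        i += 1
    return "".join(original), True

returned = apply_generated_keystrokes("b1rd")     # client transmits "be14rd"
print(strip_and_verify(returned))                 # ('b1rd', True)
print(strip_and_verify("b1rd"))                   # (None, False) -> text present
                                                  # without the expected keystrokes
```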
The security processor112may also use the programmatically generated keystrokes in Document Object Models ("DOMs") of hidden form fields. Upon receiving the code, the client device102applies the algorithm to the specified data fields. For example, one algorithm may specify that the letter 'e' is to be applied after a user types the letter 'b' and the number '4' is to be applied after a user types the number '1.' When the user submits the entered text, the client device102transmits the user provided text combined with the programmatically generated keystrokes in a response message. For instance, in the result section1004ofFIG.10, the client device102may add keystrokes to the user provided username or password. A malicious application that uses channel noise may attempt to alter text, inject text, or inject additional data fields into the response from the client device102. However, the security processor112is able to identify which text was affected by the malicious application based upon which portions of the received text do not match the algorithm-based keystroke prediction. As a result, the security processor112is able to detect the malicious application.

Soft Messaging Using Function Modification

In a further embodiment, the code section1002may include code that changes a library definition of one or more functions. For example, the code section1002could specify that a function named 'add' is to perform division and that a function named 'subtract' is to perform addition. The security processor112uses the library definitions to form a prediction of a response from a client device102. The security processor112transmits the library definition through, for example, xml code, java code, etc. Upon receiving the code, the client device102applies the changed library definitions to the specified data fields in, for example, the result section1004ofFIG.10. In one instance, the client device102may prompt a user to enter a result of a mathematical equation as part of an authentication process or when entering a number of related transactions. A malicious application, attempting to alter the authentication or inject additional transactions, examines the response from the client device102. The malicious application only sees, at most, the name of the function performed, not the definition of the function. As a result, the malicious application alters the data or applies transactions consistent with the name of the function. However, the security processor112is able to detect the malicious application because the received altered response would not be consistent with the function definitions stored in the prediction.

Soft Messaging Using Un-Rendered Page Elements

FIG.11shows a client device102including rendered information1102and un-rendered information1104as a variation of soft messaging. The rendered information1102is displayed to a user by the client device102while the un-rendered information1104is not displayed but instead is included within source code of soft information for a document. The security processor112uses the un-rendered information1104to determine if a malicious application is affecting communications with the client device102. For instance, the security processor112detects a malicious application if an altered response from the client includes reference to the un-rendered information1104or accommodates the un-rendered information1104.
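A minimal sketch of that check, assuming the un-rendered elements can be tracked as a set of decoy field names (the names and the dictionary-based response format are illustrative assumptions):

```python
# Illustrative sketch: the machine-readable source contains decoy, un-rendered
# elements that a legitimate user never sees or fills in. Any response that
# references or completes them indicates source-level manipulation.
UNRENDERED_DECOYS = {"backup_password", "payee_after_online_poker", "hidden_title"}

def response_touches_decoys(response_fields):
    return any(name in UNRENDERED_DECOYS and value not in ("", None)
               for name, value in response_fields.items())

# A legitimate response only fills in rendered fields.
print(response_touches_decoys({"payee": "Electric Co.", "amount": "64.30"}))  # False

# A malicious application works from the un-rendered source and also completes
# a decoy field, revealing that it processed the wrong edition of the page.
print(response_touches_decoys({"payee": "Electric Co.", "amount": "64.30",
                               "payee_after_online_poker": "Attacker LLC"}))  # True
```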
WhileFIG.11shows the un-rendered information1104as titles, the un-rendered information1104may also include redundant/multiple passwords, redundant/multiple forms, or redundant/multiple logical structures in DOM. Generally, malicious applications use un-rendered, machine-readable source code to perform functions instead of the rendered version of the code. The reason is that rendering the code takes additional time and resources that may expose the malicious application. In the example shown, soft information applied to the source code by the security processor112enables the introduction of title and tag variations, redundancies, substitutions, embedded requests for data downloads from arbitrary locations, logical obfuscations, piecewise delivery of a final edition of machine-readable source code, transformations of the machine-readable source code based on features of previous or currently rendered pages, transformations of the machine-readable source code based on intended client interactions with previous or currently rendered pages, etc. in the machine source code version of the page. The soft modifications applied by the security processor112to the machine-readable source code produce a consistent, useable, non-varied rendered page to the intended user while producing a different, varied page to the malicious application. In this manner, the intended user interacts freely with the rendered page while the attempts of the malicious application to interact with the un-rendered, machine-readable source result in a failure to interact with the source code. The un-rendered information1104may also cause the malicious application to experience excessively long task completion times. Any modifications or alterations performed by a malicious application result in the activation of placeholder source page elements, which are processed and returned to the security processor112as indications that the returned information is based on an edition of the machine source code that was not the final edition intended for the end user. Additionally, the security processor112is able to detect that a malicious application altered a response from the client device102when the received information includes data with geographic locations or bogus data fields that correspond to the soft information of the un-rendered information1104. For instance, the security processor112detects a malicious application if the response from the client device102includes a payee after the 'Online Poker' payee. In addition to using data fields of un-rendered information1104, the security processor112can also use behind-the-scenes, un-rendered, machine-readable source code used to generate communications. The security processor112may also use decision process interfaces for the intended client device102in technologies where the communications occur via physical media and protocols other than HTTP traffic traveling through the network110. Some of these communication examples include Short Message Service ("SMS") messaging, manufacturing control process signals and protocols (e.g., Foundation Fieldbus, Profibus, or Hart Communication Protocol), and/or infrared or Bluetooth-based communications. The soft messaging techniques may be used by the security processor112when the delivery mechanism is not Internet/HTTP based as a way to differentiate between the end user presentation and interface level and the machine source level of response and/or interaction with delivered content or information.
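As a purely illustrative sketch of how the same per-session soft variation could be carried onto a non-HTTP channel such as SMS (the message templates, reply forms, and function names are assumptions for illustration, not text from this disclosure), the wording of a confirmation prompt, and the reply form it implies, could be chosen per session and recorded as the prediction:

```python
import random

# Illustrative sketch: per-session choice of an SMS confirmation template.
TEMPLATES = [
    ("Reply YES {code} to confirm", "YES {code}"),
    ("Reply {code} OK to confirm", "{code} OK"),
]

def send_prompt(code):
    template, reply_form = random.choice(TEMPLATES)      # soft choice, recorded
    return template.format(code=code), reply_form.format(code=code)

prompt, expected_reply = send_prompt("4821")

def reply_is_consistent(reply):
    # Only a reply matching this session's chosen form is accepted; a reply
    # built from the other, unchosen template indicates channel interference.
    return reply.strip() == expected_reply

print(prompt)
print(reply_is_consistent(expected_reply))   # True
print(reply_is_consistent("CONFIRM 4821"))   # False
```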
In instances when a malicious application uses the interactions and/or input of a legitimate user via a client device102as a means to guide itself through the logical flow of the obfuscated, machine-readable source code, the security processor112may use soft information that includes the creation of additional “user” input events by the system. Examples of these user input events can include, but are not limited to, keyboard events, user focus events, mouse clicks, mouse rollovers, cursor movements, etc. The specific details of these soft-messaging-generated user events are known in advance by the security processor112as part of the prediction and may later be removed by the server104or the security processor112to recover the legitimate client device102and/or end user's intent. Additionally, in instances when a malicious application exports machine-readable source code to be rendered for processing and/or navigation by a substitute recipient, the security processor112can use soft messaging variations among an operating system, a layout engine, a browser, Cascading Style Sheets (“CSS”), JavaScript, bugs, and/or peculiarities acting individually or in combination so that the exported source code compiles and/or renders differently for the substitute client than it does for the originally intended end user. The just-in-time nature of the delivery of the final edition of the machine-readable source code to the intended client device102also differentiates between page versions, content versions compiled, and/or rendered at the communicating client device102. The communicating client device102may be the original, intended client or a substitute of the malicious application. The substitute client device may be a computer program and/or technology that replicates the intended end user's powers of observation, recognition and/or understanding. Soft Messaging Using Graphical Elements FIG.12shows a client device102conducting a transaction with a server104. The transaction is displayed in datagram1202and includes three separate transactions totaling an amount of 268.55. In this example, a malicious application304intercepts the transmission of the datagram1202from the client device102to the server104. The malicious application304uses channel noise to add a fourth transaction and a new balance of 332.89 to the datagram1202. As a result, the server104receives four transactions and the correctly appearing balance of 332.89. To prevent such fraud, the security processor112uses graphical elements1204as soft information to verify the data transmitted by the client102. The use of graphical elements1204enables the security processor112to validate channel communications when a client device102is the originator of hard and soft information. In other words, the security processor112uses graphical elements1204to confirm communications with the client device102when the security processor112may not be able to form a prediction because the client device is the originator of soft and/or hard information. The graphical elements1204may be presented to the user of the client device102as, for example, a banner, background, image, part of an advertisement, or a video. In some examples, the security processor112can use variations in graphical elements1204as soft information in conjunction with other soft messaging techniques discussed above. In the illustrated example ofFIG.12, the security processor112transmits the graphical element1204to the client device102.
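The system-generated “user” input events mentioned above can be pictured with the following hypothetical sketch: the security processor records the identifiers of the events it injected and later strips them from the event stream returned by the client device to recover the legitimate end user's intent. The event format and identifiers are assumptions.

```python
# Hypothetical sketch: tag injected "user" input events with known identifiers
# and strip them from the recorded event stream returned by the client device.
import uuid

def make_synthetic_events(count: int) -> list:
    """Create decoy input events whose identifiers the security processor records."""
    return [{"id": uuid.uuid4().hex, "type": "mouse_click"} for _ in range(count)]

def recover_user_events(recorded: list, synthetic_ids: set) -> list:
    """Drop the known synthetic events, leaving only genuine user input."""
    return [event for event in recorded if event["id"] not in synthetic_ids]

synthetic = make_synthetic_events(3)
known_ids = {event["id"] for event in synthetic}
recorded_stream = synthetic + [{"id": "user-event-1", "type": "keyboard"}]
assert recover_user_events(recorded_stream, known_ids) == [{"id": "user-event-1", "type": "keyboard"}]
```

A recorded stream containing events that are neither in the known synthetic set nor consistent with the prediction would likewise be suspect.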
The security processor112embeds the balance information as code included within the graphic, which helps prevent the malicious application304from detecting and using channel noise to alter the balance to the amount provided originally by the client device102. The client device102accordingly displays the graphical element1204including the balance received by the server104. The user can then compare the balances and provide feedback that the balances do not match by, for instance, selecting the graphical element1204. In response, the security processor112instructs the server104to disregard the datagram1202. In an alternative embodiment, the security processor112enables the client device102to supply comparison information. For example, a ‘submit these transactions’ button may be presented by the client device102as an active, account-balance-indexed grid. A user of the device102is expected to activate that portion of the button corresponding to the traditionally displayed account balance. As in the previous examples, the details of this button may be session dependent. In another example, the client device102may be enabled by the security processor112to send a screen capture of the account information in the datagram1202to the server104for automated comparison by the security processor112. The background and other features of the screen capture may be session dependent to prevent counterfeiting. For example, the security processor112may specify in soft messaging whether the client device102is to create and forward a snapshot of the top ⅔ of an account balance or the lower ⅔ of the account balance and/or a blank image followed by the account balance. Multiple Predictions for a Single Session Embodiment FIG.13shows an illustration of two different configurations of a client device102that can be accounted for by the security processor112to create multiple predictions in some embodiments. In this example, the security processor112creates two different predictions based on an orientation of the client device102. The first prediction corresponds to the client device102being in a vertical orientation1302and the second prediction corresponds to the client device being in a horizontal orientation1304. Oftentimes, smartphones and tablet computers display information based on how the device is oriented. However, the orientation of the device is generally not reported back to a server104through backscattered secondary information. As a result, the server104does not know the orientation of the device when the hard information is displayed. To compensate for this lack of information, the security processor112creates two different predictions. In some embodiments, the security processor112may generate, by default, multiple predictions regardless of a type of client device102to account for different screen sizes, orientations, etc. In other embodiments, the security processor112may generate a second prediction only after receiving backscatter information that indicates the client device102corresponds to a type of device that can have more than one orientation. In the illustrated example ofFIG.13, the security processor112creates a first prediction as to how the hard information (e.g., username, password, and ‘submit’ button) is displayed based on the received soft information. The security processor112determines that coordinates of the features displayed by the client device102have to fit within the vertical orientation1302of the client device102.
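Returning to the session-dependent, account-balance-indexed ‘submit these transactions’ grid described above, a minimal hypothetical sketch of the comparison might look as follows; the grid layout and balance values are assumptions for illustration only.

```python
# Hypothetical sketch: map a click on the session-dependent, balance-indexed
# grid back to a balance and compare it with the balance in the datagram.
GRID_CELLS = {           # assumed session-dependent mapping: cell -> displayed balance
    (0, 0): 268.55,
    (0, 1): 332.89,
    (1, 0): 154.10,
    (1, 1): 47.25,
}

def balance_confirmed(clicked_cell: tuple, datagram_balance: float) -> bool:
    """True when the cell the user activated matches the datagram's balance."""
    expected = GRID_CELLS.get(clicked_cell)
    return expected is not None and abs(expected - datagram_balance) < 0.005

# The user saw three transactions totaling 268.55 and clicks that cell, but the
# tampered datagram claims 332.89, so the server is told to disregard it.
assert not balance_confirmed((0, 0), 332.89)
assert balance_confirmed((0, 0), 268.55)
```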
Similarly, the security processor112creates a second prediction as to how the hard information will be displayed within the horizontal orientation1304. The differences between the orientations1302,1304can include spacing between data fields, sizes of the data fields, location of the ‘submit’ button, and a location of the trademark. The security processor112then compares a response from the client device102to each of the predictions to determine if a malicious application is affecting communications. Prediction Data Structure FIG.14shows a diagram of a data structure1400of a prediction formed by the security processor112based on soft information and secondary information acquired from global and local observable temporal channel information. The data structure1400is representative of information used by the security processor112to form the prediction. In other embodiments, the security processor112may render a webpage based on the soft and secondary information, similar to the datagram510ofFIGS.5and7. The example security processor112uses the information in the data structure1400to determine if a response from a client device102is indicative of a malicious application affecting communications. The security processor112creates the data structure1400by storing soft information used in soft messaging by a server104. The security processor112supplements the data structure1400with secondary information received as backscatter information. As mentioned before, the soft information describes how hard information is displayed or presented while the secondary information provides indications of how the soft and hard information are to be displayed on a client device102. In the illustrative example ofFIG.14, the soft information includes font type, font size, and positioning of three text fields. The soft information also includes coordinates of a ‘submit’ button including an allowable deviation or predetermined threshold. The soft information further includes programmed text to be generated automatically in the text fields and a location of a banner graphical element. In addition, the soft information includes un-rendered text at specified coordinates. Also in the data structure1400ofFIG.14, the secondary information includes a browser type and operating system of the client device102. The secondary information also includes an indication that JavaScript is enabled. The security processor112uses the secondary information to modify the soft information as needed. For example, upon receiving an indication that a client device102is using an OPPS browser, the security processor112updates coordinates of the text fields and ‘submit’ button to reflect how the OPPS browser is known to format and render text and graphics. In this manner, the secondary information is used by the security processor112to refine or alter the initial prediction made when the soft information was initially transmitted to the client device102. Flowchart of the Example Process FIGS.15and16are a flow diagram showing example procedures1500,1530, and1560to validate a communication channel, according to an example embodiment of the present invention. Although the procedures1500,1530, and1560are described with reference to the flow diagram illustrated inFIGS.15and16, it will be appreciated that many other methods of performing the acts associated with the procedures1500,1530, and1560may be used.
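As a rough, hypothetical illustration of the prediction data structure ofFIG.14, the sketch below models the soft information (font, coordinates, and allowable deviation) as a small record, refines it with secondary information such as the reported browser type, and checks returned coordinates against the allowable deviation. All field names, coordinates, and the browser-specific offset are assumptions rather than values from this disclosure.

```python
# Hypothetical sketch of a prediction record and its refinement with
# secondary (backscatter) information.
from dataclasses import dataclass

@dataclass
class Prediction:
    font_type: str = "Arial"
    font_size: int = 12
    submit_button_xy: tuple = (120, 340)
    allowable_deviation: int = 5        # pixels
    browser: str = None                 # filled in from backscatter information

    def refine(self, secondary: dict) -> None:
        """Adjust expected coordinates for how the reported browser lays out pages."""
        self.browser = secondary.get("browser")
        if self.browser == "ExampleBrowser":   # assumed rendering offset
            x, y = self.submit_button_xy
            self.submit_button_xy = (x + 2, y - 3)

    def matches(self, reported_xy: tuple) -> bool:
        """True when the reported coordinates fall within the allowable deviation."""
        dx = abs(reported_xy[0] - self.submit_button_xy[0])
        dy = abs(reported_xy[1] - self.submit_button_xy[1])
        return dx <= self.allowable_deviation and dy <= self.allowable_deviation

prediction = Prediction()
prediction.refine({"browser": "ExampleBrowser", "os": "ExampleOS", "javascript": True})
assert prediction.matches((123, 336))       # within tolerance -> session validated
assert not prediction.matches((180, 400))   # deviates -> possible malicious application
```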
For example, the order of many of the blocks may be changed, certain blocks may be combined with other blocks, and many of the blocks described are optional. The example procedure1500operates on, for example, the client device102ofFIGS.1to6. The procedure1500begins when the client device102transmits a connection request to a server104(block1502). The connection request can include a website address or IP address that is routed by the network110to the appropriate server104. The connection request can also include device information identifying secondary characteristics or information associated with the client device102. After receiving a connection response, the client device102requests to engage in a data transaction with the server104(block1504). The request can include a specification of information that the client device102desires to read from or write to a database stored in or managed by the server104. The request can also include one or more transactions that the client device102desires to complete with the server104. Some time after transmitting the request, the client device102receives hard and soft information1507corresponding to the requested transaction (block1506). The hard and soft information1507can be received in separate messages or combined together in one or more messages. The client device102uses the soft information to determine how the hard information is to be rendered and displayed (block1508). After displaying the hard information, the client device102transmits a response message1509provided by a user (block1510). At this point, the example procedure1500ends when the client device102and server104stop exchanging communications (e.g., terminate a communication session). Additionally, in some embodiments, the client device102may receive an indication from the server104that a malicious application has affected at least the information in the response message1509. As a result, the client device102could re-authenticate communications with the server104or enter a failsafe mode. The example procedure1530ofFIG.15operates on, for example, the application server104ofFIGS.1to6. The procedure begins when the server104receives a connection request from a client device102(block1532). In instances in which the connection request includes device information, the server104transmits the device information to a communicatively coupled security processor112. The server104then transmits a connection response to the client device102, thereby initiating a communication session (block1534). Some time later, the server104receives from the client device102a request to process a data transaction (block1536). The server104then determines hard information1537associated with the requested data transaction (block1538). For example, a request to access an account causes the server104to identify account log-in information. In another example, a request to perform a banking transaction causes the server104to identify account information and available banking options for the account. The server104then transmits the determined hard information1537to a security processor112. In some embodiments, the security processor112may be instantiated within the server104. In other embodiments, the security processor112may be remote from the server104. Responsive to receiving hard and soft information1507from the security processor112, the server104formats and transmits the information1507to the client device102(block1540).
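The server-side flow of blocks1532through1540can be pictured with the short, hypothetical sketch below; the SecurityProcessor and Client stubs and the message shapes are assumptions used only to show the order of operations.

```python
# Hypothetical sketch of the server-side flow (blocks 1532-1540). The
# SecurityProcessor and Client stubs stand in for the real components.
class SecurityProcessor:
    def receive_device_info(self, info):          # hand-off from block 1532
        self.device_info = info

    def combine_with_soft_info(self, hard_info):  # soft information is assumed
        return {"hard": hard_info, "soft": {"font": "Arial", "submit_xy": (120, 340)}}

class Client:
    def send(self, message):
        print("to client:", message)

def lookup_hard_information(transaction):
    # e.g., account log-in fields for an account-access request (assumed shape)
    return {"fields": ["username", "password"], "transaction": transaction}

def handle_transaction_request(request, security_processor, client):
    if "device_info" in request:
        security_processor.receive_device_info(request["device_info"])
    client.send({"type": "connection_response"})                      # block 1534
    hard_info = lookup_hard_information(request["transaction"])       # blocks 1536-1538
    combined = security_processor.combine_with_soft_info(hard_info)
    client.send({"type": "hard_and_soft_info", "payload": combined})  # block 1540

handle_transaction_request(
    {"device_info": {"browser": "ExampleBrowser"}, "transaction": "account_access"},
    SecurityProcessor(), Client())
```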
In some embodiments, the server104receives messages with combined hard and soft information. In these embodiments, the server104formats the messages (e.g., structures the messages into data packets) for transmission. In other embodiments, the server104receives the hard and soft information separately. In these other embodiments, the server104combines the hard and soft information into one or more messages and formats these messages for transmission. The server104accordingly provides the client device102with hard and soft messaging. After transmitting the hard and soft information1507, the server104ofFIG.16receives backscattered information1543from channel components used to process, route, and render the information1507(block1542). The server104transmits this backscattered information1543to the security processor112. In some instances, the server104transmits the information1543as the information is received. In other instances, the server104transmits the information1543periodically or after receiving an indication that the soft and hard information1507has been received and processed by the client device102. The server104then receives the response message1509from the client device102including information responding to the hard information (block1544). The server104subsequently transmits the response message1509to the security processor112. After the security processor112has compared information in the response message1509to a prediction, the server104determines whether the communication session with the client device has been validated (block1546). If the security processor112does not provide an indication of a malicious application, the server104determines that the communication session with the client device102is validated. The server104continues communications with the client device102and continues to validate communications until the communication session is ended. However, responsive to the security processor112providing an indication of a malicious application, the server104enters a failsafe mode (block1548). The failsafe mode can include the server104informing the client device102of the malicious application, requesting that the client device102re-authenticate, restricting access to the data transactions associated with the client device102, transmitting an alarm or alert to appropriate personnel, and/or applying a routine or algorithm to remove or restrict further attempts by the malicious application to affect communications. Regardless of which failsafe operation is performed, the example procedure1530ends when the communication session with the client device102is terminated or when the effects of the malicious application have been remedied. Returning toFIG.15, the example procedure1560operates on, for example, the security processor112ofFIGS.1to6. The procedure1560begins when the security processor112receives device information from the server104(block1562). This step can be skipped in instances where a connection request does not include device information. The security processor112then receives hard information1537from the server104and identifies compatible soft information (block1564). For instance, hard information has a limited number of ways that it can be correctly displayed. The security processor112uses this relationship to identify which soft information is compatible with the hard information. After identifying the compatible soft information, the security processor112selects a variation of the soft information (block1566).
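To illustrate block1566, here is a small, hypothetical sketch of selecting a per-session variation of compatible soft information so that the soft messaging differs from one client device-server connection to the next; the candidate variations are invented for the example.

```python
# Hypothetical sketch: pick a different soft-information variation per session
# so the soft messaging cannot easily be reverse engineered (block 1566).
import random

COMPATIBLE_VARIATIONS = [   # assumed variations that all display the hard information correctly
    {"font": "Arial", "font_size": 12, "submit_xy": (120, 340)},
    {"font": "Georgia", "font_size": 11, "submit_xy": (128, 352)},
    {"font": "Verdana", "font_size": 12, "submit_xy": (116, 330)},
]

def select_soft_variation(session_id: str) -> dict:
    """Choose a variation per session; seeding with the session identifier lets
    the security processor rebuild the same choice when forming its prediction."""
    rng = random.Random(session_id)
    return rng.choice(COMPATIBLE_VARIATIONS)

assert select_soft_variation("session-42") == select_soft_variation("session-42")
assert select_soft_variation("session-42") in COMPATIBLE_VARIATIONS
```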
The security processor112may select a different variation of soft information for each client device-server connection. As described before, this variation prevents malicious applications from reverse engineering the soft messaging used to validate communications. The security processor112then combines the hard information and the selected soft information1507into one or more messages and transmits the combined information1507to the server104, which then transmits the information1507to the client device102(block1568). The security processor112also forms a prediction as to how the client device102will render and display the hard information based on the soft information (block1570). InFIG.16, the security processor112receives the backscattered information1543from the server104and determines corresponding secondary information or characteristics (block1572). The security processor112then updates or modifies the prediction based on the secondary information (block1574). Responsive to receiving the response message1509from the client device102, the security processor112compares the information in the response to the prediction (block1576). The comparison includes determining if soft information returned by the client device102matches or is within an allowable deviation of corresponding soft information in the prediction (e.g., matching coordinates of graphics or data fields, matching programmatically entered characters, or matching font information) (block1578). Responsive to determining that the information in the response matches the prediction, the security processor112validates the communication session between the server104and the client device102(block1580). The security processor112then continues to validate the communication session for additional communications between the server104and the client device102until the communication session is ended. Responsive to determining that the information in the response deviates from the prediction, the security processor112provides an indication of a malicious application (block1582). The security processor112may also remedy the effects of the malicious application or take steps to prevent the malicious application from affecting further communications between the client device102and the server104. The security processor112then continues to validate the communication session for additional communications between the server104and the client device102until the communication session is ended. It will be appreciated that all of the disclosed methods and procedures described herein can be implemented using one or more computer programs or components. These components may be provided as a series of computer instructions on any conventional computer-readable medium, including RAM, ROM, flash memory, magnetic or optical disks, optical memory, or other storage media. The instructions may be configured to be executed by a processor, which when executing the series of computer instructions performs or facilitates the performance of all or part of the disclosed methods and procedures. It should be understood that various changes and modifications to the example embodiments described herein will be apparent to those skilled in the art. Such changes and modifications can be made without departing from the spirit and scope of the present subject matter and without diminishing its intended advantages. It is therefore intended that such changes and modifications be covered by the appended claims.
DESCRIPTION OF EMBODIMENTS To explain the objectives, technical solutions, and advantages of this disclosure, the following further describes implementations of this disclosure in detail with reference to the accompanying drawings. In the related art, because an original link (e.g., Uniform Resource Locator or URL) corresponding to a webpage is relatively long and is not conducive to promotion, a service provider of the original link usually generates a corresponding short link (e.g., shortened URL) for the original link through other service providers. As the name suggests, a short link is relatively short in form and can replace a long original link to access content corresponding to the original link. In this case, a user can only query a registered subject of a short link but cannot query a registered subject of an original link corresponding to the short link, and therefore cannot determine the security of the short link. The exemplary embodiments are described herein in detail, and examples of the embodiments are shown in the accompanying drawings. When the following description involves the accompanying drawings, unless otherwise indicated, the same numerals in different accompanying drawings represent the same or similar elements. The implementations described in the following exemplary embodiments do not represent all implementations that are consistent with this disclosure. On the contrary, the implementations are merely examples of apparatuses and methods that are described in detail in the appended claims and that are consistent with some aspects of this disclosure. Terms used in the embodiments of this disclosure are explained in the following. A blockchain is a new application mode implementing computer technologies such as distributed data storage, point-to-point transmission, a consensus mechanism, and an encryption algorithm. The blockchain is essentially a decentralized database and is a string of data blocks generated through association by using a cryptographic method. Each data block includes information of a batch of network transactions, the information being used for verifying the validity of information of the data block (anti-counterfeiting) and generating a next data block. The blockchain may include an underlying blockchain platform, a platform product service layer, and an application service layer. The underlying blockchain platform may include processing modules such as a user management module, a basic service module, a smart contract module, and an operation monitoring module. The user management module is responsible for identity information management of all blockchain participants, including maintenance of public key and private key generation (account management), key management, and maintenance of a relationship between a user's real identity and a blockchain address (authority management), or the like, and, in the case of authorization, supervising and auditing some real-identity transactions, and providing rule configuration of risk control (risk control audit). The basic service module is deployed on all blockchain node devices to verify the validity of a service request and record the service request in storage after a consensus on a valid request is reached.
For a new service request, the basic service module first performs interface adaptation analysis and authentication processing (interface adaptation), then encrypts service information through a consensus algorithm (consensus management), transmits the service information to a shared ledger (network communication) after encryption, and stores and records the service information. The smart contract module is responsible for contract registration and issuance as well as contract triggering and contract execution. Developers may define contract logic through a programming language, publish the contract logic on a blockchain (contract registration), and call, according to the logic of the contract terms, a private key or another event to trigger execution to complete the contract logic, and further provide a function of upgrading and canceling the contract. The operation monitoring module is mainly responsible for deployment, configuration modification, contract configuration, cloud adaptation, and visual output of real-time status during product operation in a product release process, such as: alarms, monitoring network conditions, monitoring health status of a node device, or the like. The platform product service layer provides basic capabilities and implementation frameworks for typical applications. Based on the basic capabilities, the developers may superimpose service features to complete a blockchain implementation of the service logic. The application service layer provides a blockchain solution-based application service for use by a service participant. The consensus mechanism is a mathematical algorithm for building trust and obtaining rights and interests between different nodes in a blockchain system. In a blockchain system, a transaction can be verified and confirmed in a short time through voting of special node devices. For a transaction, if several node devices that are independent of each other in terms of interests can reach a consensus, it can be considered that all node devices in the system can also reach a consensus. A smart contract is a computer agreement designed to distribute, verify, or execute a contract in an information-based way. A contract program that each node device in the blockchain system automatically executes based on specific conditions may perform an operation on data stored on a chain and is an important path through which a user interacts with a blockchain and implements service logic by using the blockchain. An objective of the smart contract is to provide a better security method than a conventional contract and reduce other transaction costs related to a contract, and the smart contract permits trusted transactions to be performed without a third party, and these transactions are traceable and irreversible. The smart contract is triggered by transactions and can read, write, and calculate transaction data on a blockchain to support operations of various commercial applications. The smart contract can automatically execute a computerized program of contract terms, identify and determine data information obtained from the outside, and when conditions set by a central node and a consortium node are met, a system is then triggered to automatically execute corresponding contract terms, thereby completing transaction and transfer of general virtual resources/organization virtual resources in this embodiment of this disclosure. The smart contract in this embodiment of this disclosure includes a general part and a customized part.
The general part supports basic functions such as identity authentication, transaction, and virtual resource transfer. The customized part is used for coping with differentiated application scenarios of a third node device. The third node device may implement special functions by configuring an independent smart contract. A public key and a private key are a key pair (that is, one public key and one private key) obtained by using an algorithm. The public key is a public part in the key pair, and the private key is a non-public part in the key pair. The public key is usually used for encrypting data, authenticating a digital signature, or the like. The algorithm can ensure that the key pair obtained is unique. When using the key pair, if one of the keys is used to encrypt a piece of data, the other key needs to be used to decrypt the piece of data. For example, when the key pair is used, if data is encrypted by using the public key, the data needs to be decrypted by using the private key, or if data is encrypted by using the private key, the data needs to be decrypted by using the public key. Otherwise, the decryption fails. This embodiment of this disclosure provides a blockchain system100implemented based on the foregoing blockchain technologies, and a system architecture of the blockchain system is described below. In some embodiments, referring toFIG.1, the blockchain system includes a plurality of node devices101. In addition, the blockchain system further includes a client. The node devices101are computing devices of any form in a network, such as servers, hosts, or user terminals. Data can be shared between the node devices101. The node devices101may establish a peer-to-peer (P2P) network based on a P2P protocol. The P2P protocol is an application layer protocol that runs on top of the Transmission Control Protocol (TCP). In some embodiments, each node device101can receive input information and maintain shared data in the blockchain system based on the received input information. To ensure an information exchange in the blockchain system, information connections exist among all the node devices in the blockchain system, and information transmission may be performed among the node devices through the foregoing information connections. For example, when any node device in the blockchain system receives input information, another node device in the blockchain system obtains the input information based on a consensus algorithm, and stores the input information as data in shared data, so that data stored in all node devices in the blockchain system is consistent. In some embodiments, each node device in the blockchain system has a node device identifier corresponding to the node device. Each node device in the blockchain system stores node device identifiers of other node devices in the blockchain system, to help subsequently broadcast a generated block to the other node devices in the blockchain system based on the node device identifiers of the other node devices. Each node device may maintain a node device identifier list shown inFIG.1, and a node device name and a node device identifier are correspondingly stored in the node device identifier list. In some embodiments, a node device identifier is an Internet Protocol (IP) address or any other information that can be used for identifying the node device. In Table 1, description is made by using an IP address as an example.

TABLE 1
Node device name      Node device identifier
Node device 1         117.114.151.174
Node device 2         117.116.189.145
. . .                 . . .
Node device N         119.123.789.258
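The key-pair property described above (data encrypted with one key must be decrypted with the other) can be illustrated with the widely used Python cryptography package; this is a generic sketch rather than code from this disclosure.

```python
# Illustration of the key-pair property: data encrypted with the public key is
# decrypted with the private key, and a signature made with the private key is
# verified with the public key. Requires the third-party 'cryptography' package.
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)
ciphertext = public_key.encrypt(b"service request", oaep)
assert private_key.decrypt(ciphertext, oaep) == b"service request"

pss = padding.PSS(mgf=padding.MGF1(hashes.SHA256()), salt_length=padding.PSS.MAX_LENGTH)
signature = private_key.sign(b"recorded data", pss, hashes.SHA256())
public_key.verify(signature, b"recorded data", pss, hashes.SHA256())  # raises if invalid
```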
In some embodiments, each node device in the blockchain system stores the same blockchain. Referring toFIG.2, the blockchain includes a plurality of blocks. A genesis block includes a block header and a block body. The block header stores an input information feature value, a version number, a timestamp, and a difficulty value. The block body stores input information. A next block of the genesis block uses the genesis block as a parent block, and also includes a block header and a block body. The block header stores an input information feature value of a current block, a block header feature value of the parent block, a version number, a timestamp, and a difficulty value. By analogy, block data stored in each block in the blockchain is associated with block data stored in the parent block, thereby ensuring the security of the input information in the blocks. When blocks are generated in the blockchain, referring toFIG.3, a node device where the blockchain is located receives the inputted information, and the inputted information is verified. After the verification is completed, the inputted information is stored in a memory pool, and a hash tree used for recording the inputted information is updated. Next, the timestamp is updated to the time when the inputted information is received, different random numbers are tried, and feature value calculation is performed a plurality of times, so that the calculated feature value may satisfy the following formula: SHA256(SHA256(version+prev_hash+merkle_root+ntime+nbits+x))<TARGET, where SHA256 is a feature value algorithm used to calculate a feature value; version is version information of a relevant block protocol in the blockchain; prev_hash is a block header feature value of a parent block of a current block; merkle_root is a feature value of input information; ntime is the update time of the update timestamp; nbits is current difficulty, which is a fixed value for a period of time, and is determined again after a fixed period of time; x is a random number; and TARGET is a feature value threshold, which can be determined based on nbits. In this way, when a random number satisfying the above formula is obtained through calculation, information may be correspondingly stored, and a block header and a block body are generated, to obtain a current block. Subsequently, the node device where the blockchain is located transmits, based on the node identifier of another node device in the blockchain system, a newly generated block to the another node device in the blockchain system in which the node device is located, and the another node device verifies the newly generated block and adds the newly generated block after the verification to the blockchain stored in the another node device. A functional architecture of the node device101in the blockchain system is described below. Referring toFIG.4, the node device101may functionally include a hardware layer, an intermediate layer, an operating system layer, and an application layer. Specific functions involved may include a routing function. Routing is a basic function of a node device and is used for supporting communication between the node devices.
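The block-generation formula above amounts to a double-SHA256 search over the random number x. The following stand-alone sketch shows the idea with Python's hashlib; the field values and the TARGET threshold are toy assumptions, and a real block header encoding is more involved.

```python
# Toy proof-of-work sketch for the formula
# SHA256(SHA256(version + prev_hash + merkle_root + ntime + nbits + x)) < TARGET.
import hashlib

def double_sha256(data: bytes) -> int:
    return int.from_bytes(hashlib.sha256(hashlib.sha256(data).digest()).digest(), "big")

def find_nonce(version, prev_hash, merkle_root, ntime, nbits, target):
    header = f"{version}{prev_hash}{merkle_root}{ntime}{nbits}".encode()
    x = 0
    while True:                      # try different random numbers x
        if double_sha256(header + str(x).encode()) < target:
            return x
        x += 1

# Toy values; this TARGET keeps the search to a fraction of a second.
TARGET = 1 << 244
nonce = find_nonce(version=1, prev_hash="00ab" * 16, merkle_root="11cd" * 16,
                   ntime=1700000000, nbits="1d00ffff", target=TARGET)
print("found x =", nonce)
```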
In addition to the routing function, the node device may further have an application function deployed in a blockchain, and used for implementing a particular service according to an actual service requirement, recording data related to function implementation to form recorded data, adding a digital signature to the recorded data to indicate a source of task data, and transmitting the recorded data to another node device in the blockchain system, so that the another node device adds the recorded data to a temporary block when successfully verifying a source and integrity of the recorded data. For example, the application function may implement a wallet service used for providing a transaction function with electronic money, including transaction initiation (that is, a transaction record of a current transaction is transmitted to another node device in the blockchain system, and the another node device writes, after successfully verifying the transaction record, recorded data of the transaction to a temporary block in a blockchain in response to admitting that the transaction is valid). The wallet further supports querying for remaining electronic money in an electronic money address. For example, the application function may also implement a shared ledger service used for providing functions of operations such as storage, query, and modification of account data. Recorded data of the operations on the account data is transmitted to another node in the blockchain system. The another node device writes, after verifying that the account data is valid, the recorded data to a temporary block, and may further transmit an acknowledgement to a node device initiating the operations. For example, the application function may also implement a smart contract service, which is a computerized protocol used for executing conditions of a contract, and is implemented by using code that is deployed in the shared ledger and that is executed when a condition is satisfied. The code is used for completing, according to an actual service requirement, an automated transaction, for example, searching for a delivery status of goods purchased by a purchaser, and transferring electronic money of the purchaser to an address of a merchant after the purchaser signs for the goods. The smart contract is not limited only to a contract used for executing a transaction and may further be a contract used for processing received information. A blockchain includes a series of blocks that are consecutive in a chronological order of generation. Once a new block is added to the blockchain, the new block is not removed. The block records recorded data submitted by the node device in the blockchain system. The link detection method provided in this embodiment of this disclosure can be applied to related scenarios for link detection, for example, a scenario of detecting a link in a short message, a scenario of detecting a link in a Weibo post, a scenario of detecting a link in chat information of a social application, a scenario of detecting a link in a product search bar of a shopping application, or a scenario of detecting a link entered in a browser search box. Main steps of the link detection method provided in this embodiment of this disclosure are briefly described. First, an electronic device obtains first service provider information from a target text including a to-be-detected short link (i.e., a short link).
The target text may include the short link and the first service provider information. Then, the electronic device obtains, based on the first service provider information, at least one short link (i.e., at least one reference short link) that is generated by a second service provider for a first service provider indicated by the first service provider information and meets a target condition, and the second service provider is configured to provide a short link generation service. Finally, the electronic device determines, in response to a determination that the to-be-detected short link is not included in the at least one short link, that the to-be-detected short link is an insecure link. In the foregoing technical solution, at least one short link that is generated by the second service provider for the first service provider and meets the target condition is obtained, and then the at least one short link is compared with the to-be-detected short link in the target text to determine whether the to-be-detected short link is a short link generated for the first service provider, so as to determine whether the to-be-detected short link is secure. FIG.5is a structural block diagram of a link detection system500according to an embodiment of this disclosure. The link detection system500includes terminals510and a link detection platform520. Each terminal510is connected to the link detection platform520by a wireless network or a wired network. In some embodiments, the terminal510is at least one of a smartphone, a game console, a desktop computer, a tablet computer, an e-book reader, a laptop portable computer, or an in-vehicle computer. An application program that supports link detection is installed and run on the terminal510. In some embodiments, the application program is a communication application program, a social application program, a shopping application program, a browser application program, a security assistant application program, or the like. For example, the terminal510is a terminal used by a user, and a user account is logged in to the application program running on the terminal510. In some embodiments, the link detection platform520includes at least one of a server, a plurality of servers, a cloud computing platform, or a virtualization center. The link detection platform520is used for providing a backend service for the application program that supports link detection. In some embodiments, the link detection platform520undertakes main detection work, and the terminal510undertakes secondary detection work; or, the link detection platform520undertakes secondary detection work, and the terminal510undertakes main detection work; or, the link detection platform520or the terminal510may separately undertake detection work. In some embodiments, the link detection platform520includes an access server, a link detection server, and a database. The access server is configured to provide an access service to the terminal510. The link detection server is configured to provide a backend service related to link detection. There may be one or more link detection servers. When there are a plurality of link detection servers, at least two link detection servers are configured to provide different services, and/or at least two link detection servers are configured to provide the same service, for example, provide the same service in a load balancing manner, which is not limited in this embodiment of this disclosure.
In this embodiment of this disclosure, the link detection platform520is constructed based on the node device101in the foregoing blockchain system100. The terminal510may generally refer to one of a plurality of terminals. In this embodiment, the terminal510is merely used as an example for description. A person skilled in the art may understand that there may be more or fewer terminals. For example, there may be only one terminal, or there may be dozens, hundreds, or more terminals. In the present embodiment, the link detection system includes a plurality of terminals. The quantity and the device type of the terminals are not limited in the embodiments of this disclosure. The link detection method provided in this disclosure may be applied to a blockchain system or a non-blockchain system. When the link detection method is applied to the non-blockchain system, first link release information and second link release information involved in the link detection method, that is, information related to an original link and information related to a short link, may be stored in a database with high credibility to ensure the validity of the first link release information and the second link release information. A query service for data stored in the database with high credibility may be provided by a first service provider involved in the link detection method, or may be provided by a second service provider involved in the link detection method, or may be provided by a third service provider that is a third party, or may be jointly provided by the first service provider, the second service provider, and another service provider after forming a consortium. When the link detection method is applied to the blockchain system, because the blockchain system is a data storage system with high credibility, the first link release information and the second link release information involved in the link detection method can be stored in the blockchain system. Correspondingly, the electronic device that performs the link detection method is a node device in the blockchain system, or a device that can communicate with a node device in the blockchain system. For example, the electronic device is the terminal510inFIG.5, and the terminal510can communicate with the link detection platform520constructed based on the node device in the blockchain system to implement the link detection method. FIG.6is a flowchart of a link detection method according to an embodiment of this disclosure. As shown inFIG.6, in this embodiment of this disclosure, the link detection method is applied to a blockchain system, and an example in which the electronic device is a terminal that can communicate with the node device in the blockchain system is used for description. The link detection method includes the following steps. In step601, the terminal obtains first service provider information from a target text, the target text including a to-be-detected short link. In some embodiments, the terminal can receive the target text including the to-be-detected short link. Text content of the target text can be used for interpreting webpage content corresponding to the to-be-detected short link, or text content of the target text can be used for guiding a user to access a webpage corresponding to the to-be-detected short link. In some embodiments, the target text is a short message in a short message application, a chat message in a social networking application, or information content in an information application, or the like.
A field in a fixed position of the target text can be used for representing the first service provider information. In some embodiments, the first service provider information is information used for identifying the first service provider, such as a name of the first service provider or a service provider identifier of the first service provider. The first service provider is used for providing a service described by content of the target text. The first service provider is used for providing relevant information about a webpage corresponding to an original link. In some embodiments, the terminal can further perform punctuation detection on the target text, and the terminal uses, in response to detecting a target punctuation, a field indicated by the target punctuation as the first service provider information. In some embodiments, the target punctuation is quotation marks “ ” or brackets ( ), or the like, which are not limited in this embodiment of this disclosure. In addition, the terminal can further obtain the first service provider information by means of character identification, keyword matching, or the like, which is not limited in this disclosure. In some embodiments, the to-be-detected short link is associated with one of a text, a picture, a video, or a button in the target text. For example, the target text includes a text hyperlink, and a link address of the text hyperlink is the to-be-detected short link. In another example, an image hyperlink in the target text is a to-be-detected short link, and the terminal performs, in response to detecting a trigger operation on the image, a link jump step based on the to-be-detected short link. Descriptions are made by using an example in which the target text is a short message. Content of the short message is used for instructing a user of the terminal to modify the user's account information: “Your account information is expired. Please click url.cn.xxxxx to modify your account information to avoid affecting your normal gameplay “xx Game”. “xx Game” indicates the first service provider, and “url.cn.xxxxx” is the to-be-detected short link. The terminal performs punctuation detection on content of the short message to detect whether there are target punctuations “[” and “]” in the content of the short message. When the two target punctuations are detected, the terminal uses a field between the two target punctuations, that is, “xx Game”, as the first service provider information. The target text can be transmitted by the first service provider indicated by the first service provider information or by the second service provider that generates the to-be-detected short link, and may further be transmitted by another service provider that provides an information push service, which is not limited in this disclosure. In step602, the terminal obtains at least one short link based on the first service provider information, the short link being a short link that is generated by a second service provider for a first service provider and meets a target condition, the first service provider being a service provider indicated by the first service provider information, the second service provider being configured to provide a short link generation service. In some embodiments, after obtaining the first service provider information, the terminal can obtain second service provider information associated with the first service provider information based on the first service provider information. 
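A minimal, hypothetical sketch of the punctuation-detection step in step601might look like the following; the regular expression, the choice of square brackets as the target punctuation, and the sample message are assumptions for illustration.

```python
# Hypothetical sketch of step 601: pull the first service provider information
# out of a target text by locating a field between target punctuation marks.
import re

TARGET_PUNCTUATION = re.compile(r"\[(?P<provider>[^\]]+)\]")  # assumed brackets [ ]

def extract_first_service_provider(target_text: str):
    match = TARGET_PUNCTUATION.search(target_text)
    return match.group("provider") if match else None

message = ("Your account information is expired. Please click url.cn.xxxxx to "
           "modify your account information to avoid affecting your normal "
           "gameplay [xx Game].")
assert extract_first_service_provider(message) == "xx Game"
```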
The terminal obtains the at least one short link from a short link associated with the second service provider information based on the second service provider information. In some embodiments, the target condition is at least one of a short link generation time being within a target time period, the short link being not expired, or a service type to which the short link belongs being consistent with a target service type indicated by the target text. In some embodiments, the storage of the short link may be implemented in a plurality of different manners, for example, through blockchain-based storage. That is, in some embodiments, the terminal can obtain the second service provider information from the blockchain system. Correspondingly, the step of obtaining, by the terminal, second service provider information associated with the first service provider information based on the first service provider information may be: determining, by the terminal, a first block from a blockchain system based on the first service provider information, the first block being used for storing first link release information of the first service provider; and obtaining, by the terminal, the second service provider information from the first link release information of the first service provider stored in the first block. The first link release information is stored on the blockchain and therefore cannot be changed to avoid a case that a malicious user modifies the first link release information and provides incorrect second service provider information to the terminal, thereby ensuring that the short link obtained by the terminal is a secure and valid short link rather than a short link forged by the malicious user. Descriptions are made by using an example in which the target text is a short message. The first service provider is a game service provider and can provide game-related services. The second service provider is a short link service provider and can provide a service for generating a short link. The first service provider provides the user with a webpage for modifying account information. A link of the webpage is an original link a. The second service provider generates at least one short link for the original link a. One of the short links is a short link b, and the short message includes the short link b, that is, url.cn.xxxxx. The first service provider stores the first service provider information, the original link a, and the second service provider information in the first link release information, and uploads the first link release information to a chain, so that the first link release information is stored in the first block. Then, the terminal may obtain the second service provider information from the first link release information. If the first link release information is not stored on the blockchain, a malicious user can modify the first link release information to make the terminal obtain forged malicious service provider information, and a short link associated with the malicious service provider information is a short link forged by the malicious user. In the foregoing method, the first link release information is stored on the blockchain, so that the first link release information cannot be changed, thereby ensuring that the short link obtained by the terminal is a secure and valid short link rather than a short link forged by the malicious user. In some embodiments, the terminal can obtain the at least one short link from the blockchain system. 
Correspondingly, the step of obtaining, by the terminal based on the second service provider information, at least one short link that is generated by the second service provider for the first service provider and meets the target condition from the short link associated with the second service provider information may be: determining, by the terminal, a second block from a blockchain system based on the second service provider information, the second block being used for storing second link release information of the second service provider; and obtaining, by the terminal, at least one short link that is generated by the second service provider for the first service provider and meets the target condition from the second link release information of the second service provider stored in the second block. The second link release information is stored on the blockchain and therefore cannot be changed to avoid a case that a malicious user modifies the second link release information and provides a short link forged by the malicious user to the terminal, thereby ensuring that the short link obtained by the terminal is a secure and valid short link. Descriptions are still made by using an example in which the target text is a short message. The second service provider stores the second service provider information and the generated plurality of short links in the second link release information and uploads the second link release information to a chain, so that the second link release information is stored in the second block. The plurality of short links generated by the second service provider include a short link b generated for the first service provider. Then, the terminal can obtain at least one short link that is generated for the first service provider and meets the target condition from the second link release information. If the second link release information is not stored on the blockchain, the malicious user can add a forged short link to the second link release information by modifying the second link release information, to make the terminal obtain the forged short link. When a to-be-detected short link is a malicious short link, the terminal may falsely believe that the to-be-detected short link is secure, and there is a security risk as a result. The second link release information is stored on the blockchain, so that the second link release information cannot be changed, thereby ensuring that the short link obtained by the terminal is a secure and valid short link. In some embodiments, the terminal can use a generation time of a short link within a target time period as a target condition. Correspondingly, the step of obtaining, by the terminal, at least one short link that is generated by the second service provider for the first service provider and meets the target condition from the second link release information of the second service provider stored in the second block may be: determining, by the terminal, the target time period according to a receiving time of the target text. The terminal can filter a plurality of short links included in the second link release information of the second service provider stored in the second block, and obtain at least one short link generated by the second service provider for the first service provider within the target time period. 
The generation time of the short link is limited, so that during obtaining of the at least one short link, short links that are not within the target time period can be filtered out, thereby improving the obtaining efficiency. Descriptions are still made by using an example in which the target text is a short message. The second service provider needs to generate the short link b first before the terminal can receive a short message including the short link b. Therefore, a generation time of the short link is not later than a receiving time of the short message. In addition, to ensure the timeliness of information, under normal circumstances, a webpage needs to be used within 24 hours after being created, that is, the short link b corresponding to the original link a of the webpage is transmitted to the user. Therefore, a generation time of the short link is generally not more than 24 hours earlier than a receiving time of the short message. That is, the target time period is 24 hours before the short message is received. The target time period may be set to 36 hours or 48 hours before the short message is received, which is not limited in this embodiment of this disclosure. In some embodiments, the terminal can determine the at least one short link by using a service type as a target condition. Correspondingly, the second link release information further includes a service type corresponding to any short link generated by the second service provider, where one short link corresponds to one service type. The step of obtaining, by the terminal, at least one short link that is generated by the second service provider for the first service provider and meets the target condition from the second link release information of the second service provider stored in the second block may be: performing, by the terminal, content identification on the target text to determine a service type corresponding to the to-be-detected short link, and obtaining a target service type. The terminal obtains at least one short link that is generated by the second service provider for the first service provider and belongs to the target service type from the second link release information of the second service provider stored in the second block. The service type of the short link is limited, so that during obtaining of the at least one short link, short links that do not belong to the target service type can be filtered out, thereby improving the obtaining efficiency. Descriptions are still made by using an example in which the target text is a short message. A service type corresponding to the original link a generated by the first service provider is A. When the second service provider generates at least one short link for the original link a, a service type of the generated at least one short link can also be set to A. The service type A corresponds to a plurality of short links. The short link b corresponding to the original link a corresponds to only one service type. The second service provider stores a service type corresponding to the short link in the second link release information. The terminal performs content identification on content of the short message, so that it can be determined that the short message indicates a service of the account-information-modification type, and the account-information-modification type is used as the target service type. The terminal selects at least one short link whose service type is the target service type from the second link release information.
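The target-condition filtering discussed above (a generation time within the target time period and a matching service type) could be sketched as below; the record layout of the second link release information and the example values are assumptions.

```python
# Hypothetical sketch: filter reference short links from the second link release
# information by generation time (within the target time period) and service type.
from datetime import datetime, timedelta

def filter_reference_links(release_records, first_provider, received_at,
                           target_service_type, window_hours=24):
    earliest = received_at - timedelta(hours=window_hours)
    return [
        record["short_link"]
        for record in release_records
        if record["first_provider"] == first_provider
        and earliest <= record["generated_at"] <= received_at
        and record["service_type"] == target_service_type
    ]

now = datetime(2024, 1, 2, 12, 0)
records = [
    {"short_link": "url.cn/abc", "first_provider": "xx Game",
     "generated_at": now - timedelta(hours=3), "service_type": "modify_account_info"},
    {"short_link": "url.cn/old", "first_provider": "xx Game",
     "generated_at": now - timedelta(days=10), "service_type": "modify_account_info"},
]
assert filter_reference_links(records, "xx Game", now, "modify_account_info") == ["url.cn/abc"]
```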
In some embodiments, the terminal can determine the at least one short link by using a condition of whether the short link is expired as the target condition. Correspondingly, the second link release information further includes time limit information corresponding to any short link generated by the second service provider, and the time limit information is used for indicating whether the short link is expired. The step of obtaining, by the terminal, at least one short link that is generated by the second service provider for the first service provider and meets the target condition from the second link release information of the second service provider stored in the second block may be: obtaining, by the terminal, at least one piece of time limit information from the second link release information of the second service provider stored in the second block. The terminal obtains, based on the at least one piece of time limit information, at least one unexpired short link generated by the second service provider for the first service provider from the second link release information. The time limit information of the short link is obtained, so that when the at least one short link is obtained, expired short links can be filtered out, thereby improving the obtaining efficiency and also improving the validity of the obtained at least one short link. Descriptions are still made by using an example in which the target text is a short message. When generating the short link b, the second service provider can set time limit information of the short link b, and within a validity period indicated by the time limit information, the short link b can be triggered to jump to the original link a. In some embodiments, the short link b points to an original link c in January, points to an original link d in February, and currently points to the original link a. The second service provider may store the time limit information in the second link release information. This embodiment of this disclosure exemplarily shows a process of determining the at least one short link according to different target conditions. In fact, the at least one short link can further be determined according to two or more target conditions at the same time. The target conditions can be freely combined, which is not limited in this embodiment of this disclosure. The storage of the short link can be implemented in another manner, for example, through cloud storage. That is, information related to the short link and information related to the original link are stored by a plurality of service providers related to the short link in a database with high credibility, and the database with high credibility may be built by using cloud storage technology. The database with high credibility can provide an information query entry and set an access right to ensure data security. When obtaining the at least one short link, the terminal can transmit an obtaining request to a link detection server through a link detection client, and the at least one short link is obtained by the link detection server having an access right to the database with high credibility. A database with high credibility is set up, so that it is ensured that the information stored by the plurality of service providers related to the short link is not easily tampered with and the privacy of the information is also ensured.
For example, a security assistant may be installed on a mobile phone, and a backend server of the security assistant has access right to the database with high credibility, so that the information related to the short link and the information related to the original link can be obtained, so as to implement detection of the to-be-detected short link. A short message sender, a network service provider, and an information security supervision department may all have access rights to the database with high credibility. In step603, the terminal determines, in response to a determination that the to-be-detected short link is not included in the at least one short link, that the to-be-detected short link is an insecure link. In this embodiment of this disclosure, after obtaining the at least one short link, the terminal can compare the to-be-detected short link with the at least one short link one by one. If none of the at least one short link is the same as the to-be-detected short link, it is determined that the second service provider has not generated the to-be-detected short link. That is, the to-be-detected short link may be a malicious short link forged by a malicious user. Therefore, the terminal can determine that the to-be-detected short link is an insecure link. In some embodiments, the terminal can process the target text based on a detection result. Correspondingly, the step of processing the target text by the terminal may be: obtaining, by the terminal, a target processing manner corresponding to the target text, and processing, by the terminal, the target text based on the target processing manner, the target processing manner including at least one of isolation, deletion, labeling, or display of prompt information. By automatically processing a target text including an insecure short link, the terminal can reduce the risk that the user is deceived by a malicious user and protect the user's information security. Descriptions are still made by using an example in which the target text is a short message. After determining that the short link b included in the short message is an insecure link, the terminal displays prompt information, such as “url.cn.xxxxx is an insecure link. Please pay attention to information security”. The terminal can further isolate the short message. For example, the short message is marked as an intercepted message, and the short link in the intercepted message is in an unclickable state to prevent the user from clicking the insecure short link b by mistake. The terminal can further directly delete the short message when the user sets a delete right, and the deleted short message can be temporarily stored in a recycle bin to facilitate viewing by the user. The terminal can further prompt the short link b to be insecure in a labeling manner, such as highlighting the short message in red and labeling the short message with a word “insecure”. The foregoing example shows a processing process in which the target text is a short message, and the foregoing processing process is also applicable to a target text in another form, which is not limited in this embodiment of this disclosure. In some embodiments, the terminal can display a detection result to the user and process the target text based on an operation of the user. 
Correspondingly, the step of processing the target text by the terminal may be displaying, by the terminal, a target interface in response to receiving a selection operation of the user, the target interface including tabs such as delete, trust, and query. The terminal performs an operation corresponding to any tab in response to a trigger operation of the tab. Descriptions are still made by using an example in which the target text is a short message. In addition to displaying the detection result to the user, that is, displaying whether the short link is secure or not, the terminal can further display an owner of the original link corresponding to the to-be-detected short link to the user. If the owner displayed by the terminal is not the xx Game in the short message, the user determines that the link is not secure, and the user can choose to delete the short message. If the owner displayed by the terminal is the xx Game in the short message, it indicates that the terminal may have made an incorrect determination, the user can determine that the link is secure, and the user can also choose to trust the short message. If the user considers that the user should not have received the short message for modifying the account information, the user can choose to make a query, and the terminal obtains the short message from the first link release information and displays a transmission list of the short message to the user, so that the user can determine whether the received short message is appropriate. For the processing manner of the target text, a preset target processing manner can be used by the terminal for automatic implementation. That is, if it is detected that there is a security problem in the short link in the target text, the target processing manner is directly performed on the target text, thereby improving the security and the intelligence of security detection. In step604, the terminal determines, in response to a determination that the to-be-detected short link is included in the at least one short link, the security of the to-be-detected short link based on the first link release information of the first service provider and the second link release information of the second service provider. In this embodiment of this disclosure, if the to-be-detected short link is included in the at least one short link, it indicates that the short link is valid and is not a forged malicious short link, and the terminal may further verify the short link to determine whether the short link is secure. In some embodiments, the terminal can perform further detection on the short link based on the time limit information that is included in the second link release information and corresponds to any short link generated by the second service provider, and the time limit information is used for indicating whether the short link is expired. Correspondingly, the step of further verifying the short link by the terminal may be: obtaining, by the terminal, target time limit information in response to that the to-be-detected short link is included in the at least one short link, the target time limit information being time limit information corresponding to the to-be-detected short link obtained from the second link release information; and displaying, by the terminal, first prompt information in response to the target time limit information indicating that the to-be-detected short link is expired, the first prompt information being used for prompting that the to-be-detected short link is expired.
It is determined by using time limit information whether the to-be-detected short link is still valid currently, so that the user is prevented from accessing an invalid short link or a short link pointing to a malicious website, thereby ensuring the information security of the user. Descriptions are still made by using an example in which the target text is a short message. Time limit information corresponding to the short link b generated by the second service provider indicates that the short link b is expired. That is, the short link b currently no longer points to the original link a, but instead points to the original link d. A webpage corresponding to the original link d may be a webpage generated or hijacked by a malicious user and is used for defrauding the user of property information. In some embodiments, the terminal prompts the user by displaying prompt information like "The short link is expired, and there is a risk". In some embodiments, the terminal can perform, based on first content information that is included in the second link release information and corresponds to any short link generated by the second service provider, further detection on the short link. The first content information is used for indicating at least one of a page screenshot, a page feature value, or page parameter information corresponding to the short link. Correspondingly, the step of further verifying, by the terminal, the short link may be obtaining, by the terminal, target first content information in response to that the to-be-detected short link is included in the at least one short link, the target first content information being first content information corresponding to the to-be-detected short link obtained from the second link release information; and displaying, by the terminal, second prompt information in response to the target first content information being inconsistent with content information obtained by accessing the to-be-detected short link, the second prompt information being used for indicating that the to-be-detected short link is an insecure link. The target first content information corresponding to the short link stored in the blockchain is compared with the content information obtained through actual access, so that it can be determined whether there is a change in content of a webpage to which the to-be-detected short link actually jumps. The user is prompted when there is a change, thereby ensuring the information security of the user. Descriptions are still made by using an example in which the target text is a short message. After generating the short link b for the original link a, the second service provider further obtains content information corresponding to the short link b, for example, a screenshot of the short link b jumping to the original link a, a screenshot of a webpage corresponding to the original link a, a feature value of a webpage corresponding to the original link a, and a label parameter of the webpage corresponding to the original link a, and stores the content information in the second link release information. The screenshot may include screenshots of the webpage corresponding to the original link a displayed on different devices, for example, a screenshot on a computer and a screenshot on a mobile phone.
The terminal may verify a feature value in content information by at least one of the Message-Digest Algorithm 5 (MD5), the Term Frequency (TF) algorithm, the Document Frequency (DF) algorithm, or the Term Frequency-Inverse Document Frequency (TF-IDF) algorithm. The terminal can further compare the screenshot of the webpage in the content information through an image identification algorithm. The terminal may determine, based on a verification result and a comparison result, whether the target first content information is consistent with content information obtained through actual access. If not, it indicates that there is a change in content of the webpage to which the to-be-detected short link actually jumps, and there may be a risk of defrauding the user of information. In some embodiments, the terminal displays prompt information like "The content of the webpage is changed and there is a risk" to prompt the user. The target first content information and the content information obtained through actual access do not have to be 100% the same. If there is a similarity above a target threshold such as 90%, 85%, or 80%, the terminal can also determine that the target first content information is consistent with the content information obtained through actual access, and there is no risk. In some embodiments, the terminal can perform, based on an original link corresponding to any short link that is included in the second link release information and is generated by the second service provider and the second content information corresponding to the original link included in the first link release information, further detection on the short link. The second content information is used for indicating at least one of a page screenshot, a page feature value, or page parameter information corresponding to the original link. Correspondingly, the step of further verifying, by the terminal, the short link may be obtaining, by the terminal, a target original link in response to that the to-be-detected short link is included in the at least one short link, the target original link being an original link corresponding to the to-be-detected short link obtained from the second link release information; obtaining, by the terminal, target second content information, the target second content information being second content information corresponding to the target original link obtained from the first link release information; and displaying, by the terminal, third prompt information in response to that the target second content information is inconsistent with content information obtained by accessing the to-be-detected short link, the third prompt information being used for indicating that the to-be-detected short link is an insecure link. The target second content information corresponding to the short link stored in the blockchain is compared with the content information obtained through actual access, so that it can be determined whether there is a change in content of a webpage to which the to-be-detected short link actually jumps. The user is prompted when there is a change, thereby ensuring the information security of the user. Descriptions are still made by using an example in which the target text is a short message.
After creating a webpage corresponding to the original link a, the first service provider can store, in the first link release information, parameter information such as label information of the page corresponding to the original link a, and content information corresponding to the original link a, such as a feature value of the page corresponding to the original link a and a screenshot of the page corresponding to the original link a. The screenshot includes screenshots of the webpage corresponding to the original link a displayed on different devices, for example, a screenshot on a computer and a screenshot on a mobile phone. For a manner in which the terminal compares the content information, reference may be made to the content of the previous example, and details are not repeated herein. In some embodiments, the terminal can perform, based on an original link corresponding to any short link that is included in the second link release information and is generated by the second service provider and an account list corresponding to the original link included in the first link release information, further detection on the short link. The account list is used for indicating at least one account that receives the original link. Correspondingly, the step of further verifying, by the terminal, the short link may be obtaining, by the terminal, a target original link in response to that the to-be-detected short link is included in the at least one short link, the target original link being an original link corresponding to the to-be-detected short link obtained from the second link release information; obtaining, by the terminal, a target account list, the target account list being an account list corresponding to the target original link obtained from the first link release information; and displaying, by the terminal, fourth prompt information in response to that an account currently logged in by a terminal is not in the target account list, the fourth prompt information being used for indicating that the to-be-detected short link is an insecure link. It is found through comparison whether a currently logged-in account is in the target account list, so that it can be determined whether the target text is transmitted incorrectly, thereby avoiding a case in which the user blindly performs an operation based on the target text, the time of the user is wasted, and there is a risk of information leakage. Descriptions are still made by using an example in which the target text is a short message. The first service provider can set, for each original link, an account list for receiving the original link or a short link corresponding to the original link. In some embodiments, an account in the account list is a mobile phone number, an International Mobile Equipment Identity (IMEI), or the like. The terminal obtains an account list corresponding to the original link a. If an account currently logged in by the terminal is in the list, it indicates that the to-be-detected short link is indeed transmitted to the user of the terminal. If an account currently logged in by the terminal is not in the list, it indicates that the to-be-detected short link is not intended to be transmitted to the user of the terminal. The terminal determines that the to-be-detected short link has a certain risk, and in some embodiments, the terminal prompts the user by displaying prompt information like "An information transmission object is incorrect. Please pay attention to information security".
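For example, the further verification described in the foregoing examples can be sketched as follows. This is an illustrative Python sketch; the names page_feature_value, verify_content, and verify_account are hypothetical, and a simple token-overlap score stands in for the TF, DF, TF-IDF, or image identification comparisons described above, any of which may be used with any target threshold in practice.

import hashlib
from typing import List, Optional

def page_feature_value(page_content: bytes) -> str:
    # Feature value of the accessed page; an MD5 digest is used here for illustration.
    return hashlib.md5(page_content).hexdigest()

def verify_content(stored_feature: str,
                   accessed_content: bytes,
                   stored_text: Optional[str] = None,
                   accessed_text: Optional[str] = None,
                   target_threshold: float = 0.9) -> bool:
    # Return True when the stored content information is consistent with the
    # content information obtained by actually accessing the to-be-detected short link.
    if stored_feature == page_feature_value(accessed_content):
        return True
    if stored_text is not None and accessed_text is not None:
        stored_tokens = set(stored_text.split())
        accessed_tokens = set(accessed_text.split())
        union = stored_tokens | accessed_tokens
        if not union:
            return True
        similarity = len(stored_tokens & accessed_tokens) / len(union)
        return similarity >= target_threshold  # for example, 90%, 85%, or 80%
    return False

def verify_account(current_account: str, target_account_list: List[str]) -> bool:
    # Return True when the account currently logged in by the terminal is in the target account list.
    return current_account in target_account_list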
In addition to being used for performing detection on the to-be-detected short link, information such as first service provider information, second service provider information associated with the first service provider information, at least one original link, content information corresponding to any original link, and an account list corresponding to any original link included in the first link release information and information such as the second service provider information, at least one short link, content information corresponding to any short link, and time limit information corresponding to any short link included in the second link release information can further be used as deposit information. The terminal may receive a deposit instruction triggered by the user and generate deposit information in response to receiving the deposit instruction. The deposit information is used for indicating information associated with a process of accessing a page corresponding to the to-be-detected short link. The deposit information includes, but is not limited to, the first link release information and the second link release information. The deposit information may further include a channel parameter such as a short message delivery port number in the short message industry. The deposit information may further include device information collected with the user's consent, such as a browser version, a browser type, terminal device information, network parameters, or the like. The terminal device information may be an Internet Protocol (IP) address and a media access control (MAC) address. In some embodiments, after receiving the deposit instruction, the terminal transmits a report request to a forensic device, the report request instructing the forensic device to collect evidence on the to-be-detected short link. After the report request is transmitted, a forensic device of an impartial institution can automatically collect evidence, which avoids the problem that evidence fails to be collected because the impartial institution cannot collect evidence during non-working hours. For example, if a malicious user modifies content of a webpage corresponding to a short link to illegal content during the non-working hours of the impartial institution, such as midnight, while the content corresponding to the short link is normal content at other time periods, the impartial institution cannot rely on individuals to collect evidence on the short link. This problem can be resolved through triggering by a report request, so that evidence can be collected whenever a report is received. The foregoing steps601-604are exemplary implementations of the embodiments of this disclosure, and the link detection method provided in this disclosure can be implemented in another manner. For example, the first service provider stores at least one short link generated by the second service provider in the first link release information of the first service provider to omit the step of obtaining, by the terminal, the at least one short link, thereby saving time. Alternatively, information used in a link detection process can be stored in a non-blockchain system with high credibility, so that the link detection method does not rely on the blockchain system, thereby extending the application scope of the link detection method. This is not limited in this embodiment of this disclosure. The foregoing steps601-604exemplarily show a manner in which link detection is performed by the terminal.
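For example, the overall flow of steps601-604can be sketched as follows. This illustrative Python sketch reuses the hypothetical ShortLinkRecord, filter_candidate_links, and verify_account definitions from the sketches above; the 【 】 punctuation pattern, the url.cn link pattern, and the returned prompt strings are assumptions for illustration only and do not limit the target punctuation, the data source, or the prompt information described in this disclosure.

import re
from datetime import datetime
from typing import Dict, List

def detect_link(target_text: str,
                received_at: datetime,
                current_account: str,
                release_info: List["ShortLinkRecord"],
                account_lists: Dict[str, List[str]]) -> str:
    # Step 601: obtain the first service provider information from the target text,
    # here assumed to be a field enclosed in a target punctuation such as 【 】.
    provider_match = re.search(r"【(.+?)】", target_text)
    first_provider_id = provider_match.group(1) if provider_match else ""
    # A rough pattern for the to-be-detected short link, for illustration only.
    link_match = re.search(r"url\.cn\S+", target_text)
    to_be_detected = link_match.group(0) if link_match else ""

    # Step 602: obtain the candidate short links that meet the target condition.
    candidates = filter_candidate_links(release_info, first_provider_id, received_at)

    # Step 603: if the to-be-detected short link was never generated by the second
    # service provider for the first service provider, it is an insecure link.
    record = next((r for r in candidates if r.short_link == to_be_detected), None)
    if record is None:
        return "insecure link"

    # Step 604: otherwise perform further verification, for example the expiration
    # and account list checks; the content information check can be added likewise.
    if record.expired:
        return "the short link is expired, and there is a risk"
    if not verify_account(current_account, account_lists.get(record.original_link, [])):
        return "an information transmission object is incorrect"
    return "secure"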
Before the terminal receives the target text, the target text may further be reviewed by a plurality of intermediate service providers, such as a short message service provider, a network operator, and a service proxy. For any intermediate service provider, the intermediate service provider can use the link detection method provided in this disclosure to obtain the first link release information and the second link release information from the blockchain when receiving the target text including the to-be-detected short link and perform detection on the to-be-detected short link. In this embodiment of this disclosure, at least one short link that is generated by the second service provider for the first service provider and meets the target condition is obtained, and then the at least one short link is compared with the to-be-detected short link in the target text to determine whether the to-be-detected short link is a short link generated for the first service provider, so as to determine whether the to-be-detected short link is secure. FIG.7is a block diagram of a link detection apparatus according to an embodiment of this disclosure. The apparatus is configured to perform the steps when the foregoing link detection method is performed. Referring toFIG.7, the apparatus includes an obtaining module701and a determining module702. One or more modules of the apparatus can be implemented by processing circuitry, software, or a combination thereof, for example. The obtaining module701is configured to obtain first service provider information from a target text, the target text including a to-be-detected short link. The obtaining module701is further configured to obtain at least one short link based on the first service provider information, the short link being a short link that is generated by a second service provider for a first service provider and meets a target condition, the first service provider being a service provider indicated by the first service provider information, the second service provider being configured to provide a short link generation service. The determining module702is configured to determine, in response to that the to-be-detected short link is not included in the at least one short link, that the to-be-detected short link is an insecure link. In some embodiments, the obtaining module701is further configured to: obtain second service provider information associated with the first service provider information based on the first service provider information; and obtain at least one short link from a short link associated with the second service provider information based on the second service provider information. In some embodiments, the obtaining module701is further configured to: determine a first block from a blockchain system based on the first service provider information, the first block storing first link release information of the first service provider; and obtain the second service provider information from the first link release information of the first service provider stored in the first block. In some embodiments, the obtaining module701is further configured to: determine a second block from a blockchain system based on the second service provider information, the second block storing second link release information of the second service provider; and obtain the at least one short link from the second link release information of the second service provider stored in the second block. 
In some embodiments, the target condition is at least one of the following: a short link generation time being within a target time period; the short link being not expired; and a service type to which the short link belongs being consistent with a target service type indicated by the target text. In some embodiments, the obtaining module701is further configured to: determine the target time period according to a receiving time of the target text; and filter a plurality of short links included in the second link release information of the second service provider stored in the second block to obtain the at least one short link. In some embodiments, the second link release information further includes a service type corresponding to any short link generated by the second service provider, one short link corresponding to one service type; and the obtaining module701is further configured to: perform content identification on the target text to determine a target service type corresponding to the to-be-detected short link; and obtain at least one short link belonging to the target service type from the second link release information of the second service provider stored in the second block. In some embodiments, the second link release information further includes time limit information corresponding to any short link generated by the second service provider, and the time limit information is used for indicating whether the short link is expired; and the obtaining module701is further configured to: obtain at least one piece of time limit information from the second link release information of the second service provider stored in the second block; and obtain at least one unexpired short link from the second link release information based on the at least one piece of time limit information. In some embodiments, the second link release information further includes time limit information corresponding to any short link generated by the second service provider, and the time limit information is used for indicating whether the short link is expired. The apparatus further includes: the obtaining module701, further configured to obtain target time limit information in response to that the to-be-detected short link is included in the at least one short link, the target time limit information being time limit information corresponding to the to-be-detected short link obtained from the second link release information; and a first display module, configured to display first prompt information in response to the target time limit information indicating that the to-be-detected short link is expired, the first prompt information prompting that the to-be-detected short link is expired. In some embodiments, the second link release information further includes first content information corresponding to any short link generated by the second service provider, and the first content information is used for indicating at least one of a page screenshot, a page feature value, or page parameter information corresponding to the short link. 
The apparatus further includes: the obtaining module701, further configured to obtain target first content information in response to that the to-be-detected short link is included in the at least one short link, the target first content information being first content information corresponding to the to-be-detected short link obtained from the second link release information; and a second display module, configured to display second prompt information in response to the target first content information being inconsistent with content information obtained by accessing the to-be-detected short link, the second prompt information indicating that the to-be-detected short link is an insecure link. In some embodiments, the second link release information further includes an original link corresponding to any short link generated by the second service provider, the first link release information further includes second content information corresponding to the original link, and the second content information is used for indicating at least one of a page screenshot, a page feature value, or page parameter information corresponding to the original link. The apparatus further includes: the obtaining module701, further configured to obtain a target original link in response to that the to-be-detected short link is included in the at least one short link, the target original link being an original link corresponding to the to-be-detected short link obtained from the second link release information, and further configured to obtain target second content information, the target second content information being second content information corresponding to the target original link obtained from the first link release information; and a third display module, configured to display third prompt information in response to that the target second content information is inconsistent with content information obtained by accessing the to-be-detected short link, the third prompt information indicating that the to-be-detected short link is an insecure link. In some embodiments, the second link release information further includes an original link corresponding to any short link generated by the second service provider, the first link release information further includes an account list corresponding to the original link, and the account list is used for indicating at least one account that receives the original link. The apparatus further includes: the obtaining module701, further configured to obtain a target original link in response to that the to-be-detected short link is included in the at least one short link, the target original link being an original link corresponding to the to-be-detected short link obtained from the second link release information, and further configured to obtain a target account list, the target account list being an account list corresponding to the target original link obtained from the first link release information; and a fourth display module, configured to display fourth prompt information in response to that an account currently logged in by a terminal is not in the target account list, the fourth prompt information indicating that the to-be-detected short link is an insecure link. 
In some embodiments, the apparatus further includes: the obtaining module701, further configured to obtain a target processing manner corresponding to the target text, the target processing manner including at least one of isolation, deletion, labeling, or display of prompt information; and a text processing module, configured to process the target text based on the target processing manner. In some embodiments, the apparatus further includes: a generation module, configured to generate deposit information in response to receiving a deposit instruction, the deposit information indicating information associated with a process of accessing a page corresponding to the to-be-detected short link; and a submission module, configured to submit the deposit information. In some embodiments, the apparatus further includes: a request transmission module, configured to transmit a report request to a forensic device, the report request instructing the forensic device to collect evidence on the to-be-detected short link. In some embodiments, the obtaining module701is further configured to: perform punctuation detection on the target text including the to-be-detected short link; and use, in response to detecting a target punctuation, a field indicated by the target punctuation as the first service provider information. In some embodiments, the to-be-detected short link is associated with at least one of a text, a picture, a button, or a video in the target text. When the link detection apparatus provided in the foregoing embodiment runs an application program, division of the foregoing functional modules is merely an example for description. In a practical application, the foregoing functions may be assigned to and completed by different modules as needed, that is, the internal structure of the apparatus is divided into different functional modules to implement all or some of the functions described above. In addition, the link detection apparatus and link detection method embodiments provided in the foregoing embodiments belong to the same concept. For the specific implementation process, reference may be made to the method embodiments, and details are not described herein again. In this embodiment of this disclosure, an electronic device can be implemented as a terminal or a server. When the electronic device is implemented as a terminal, operations performed by the foregoing link detection method can be implemented by the terminal. When the electronic device is implemented as a server, the operations performed by the foregoing link detection method can be implemented by the server, and the operations performed by the foregoing link detection method can also be implemented by an interaction between the server and the terminal. The electronic device can be implemented as the terminal.FIG.8is a structural block diagram of a terminal800according to an exemplary embodiment of this disclosure. The terminal800may be a smart phone, a tablet computer, a moving picture experts group audio layer III (MP3) player, a moving picture experts group audio layer IV (MP4) player, a notebook computer, or a desktop computer. The terminal800may also be referred to as user equipment, a portable terminal, a laptop terminal, a desktop terminal, or by another name. Generally, the terminal800includes processing circuitry (e.g., a processor801) and a memory802(e.g., a non-transitory computer-readable storage medium).
The processor801may include one or more processing cores, for example, a 4-core processor or an 8-core processor. The processor801may be implemented by using at least one hardware form of a digital signal processor (DSP), a field-programmable gate array (FPGA), and a programmable logic array (PLA). The processor801may alternatively include a main processor and a coprocessor. The main processor is configured to process data in an active state, also referred to as a central processing unit (CPU). The coprocessor is a low-power processor configured to process data in a standby state. In some embodiments, the processor801may be integrated with a graphics processing unit (GPU). The GPU is configured to render and draw content that needs to be displayed on a display. In some embodiments, the processor801may further include an artificial intelligence (AI) processor. The AI processor is configured to process computing operations related to machine learning. The memory802may include one or more computer-readable storage media. The computer-readable storage medium may be non-transient. The memory802may further include a high-speed random access memory and a nonvolatile memory, for example, one or more disk storage devices or flash storage devices. In some embodiments, the non-transient computer-readable storage medium in the memory802is configured to store at least one instruction, and the at least one instruction is configured to be executed by the processor801to implement the following operations: obtaining first service provider information from a target text, the target text including a to-be-detected short link; obtaining at least one short link based on the first service provider information, the short link being a short link that is generated by a second service provider for a first service provider and meets a target condition, the first service provider being a service provider indicated by the first service provider information, the second service provider being configured to provide a short link generation service; and determining, in response to that the to-be-detected short link is not included in the at least one short link, that the to-be-detected short link is an insecure link. In some embodiments, the processor is further configured to perform the following operations: obtaining second service provider information associated with the first service provider information based on the first service provider information; and obtaining at least one short link from a short link associated with the second service provider information based on the second service provider information. In some embodiments, the processor is further configured to perform the following operations: determining a first block from a blockchain system based on the first service provider information, the first block storing first link release information of the first service provider; and obtaining the second service provider information from the first link release information of the first service provider stored in the first block. In some embodiments, the processor is further configured to perform the following operations: determining a second block from a blockchain system based on the second service provider information, the second block storing second link release information of the second service provider; and obtaining the at least one short link from the second link release information of the second service provider stored in the second block. 
In some embodiments, the target condition is at least one of the following: a short link generation time being within a target time period; the short link being not expired; and a service type to which the short link belongs being consistent with a target service type indicated by the target text. In some embodiments, the processor is further configured to perform the following operations: determining the target time period according to a receiving time of the target text; and filtering a plurality of short links included in the second link release information of the second service provider stored in the second block to obtain the at least one short link. In some embodiments, the second link release information further includes a service type corresponding to any short link generated by the second service provider, one short link corresponding to one service type; and the processor is further configured to perform the following operations: performing content identification on the target text to determine a target service type corresponding to the to-be-detected short link; and obtaining at least one short link belonging to the target service type from the second link release information of the second service provider stored in the second block. In some embodiments, the second link release information further includes time limit information corresponding to any short link generated by the second service provider, and the time limit information is used for indicating whether the short link is expired; and the processor is further configured to perform the following operations: obtaining at least one piece of time limit information from the second link release information of the second service provider stored in the second block; and obtaining at least one unexpired short link from the second link release information based on the at least one piece of time limit information. In some embodiments, the second link release information further includes time limit information corresponding to any short link generated by the second service provider, and the time limit information is used for indicating whether the short link is expired; and the processor is further configured to perform the following operations: obtaining target time limit information in response to that the to-be-detected short link is included in the at least one short link, the target time limit information being time limit information corresponding to the to-be-detected short link obtained from the second link release information; and displaying first prompt information in response to the target time limit information indicating that the to-be-detected short link is expired, the first prompt information prompting that the to-be-detected short link is expired. 
In some embodiments, the second link release information further includes first content information corresponding to any short link generated by the second service provider, and the first content information is used for indicating at least one of a page screenshot, a page feature value, or page parameter information corresponding to the short link; and the processor is further configured to perform the following operations: obtaining target first content information in response to that the to-be-detected short link is included in the at least one short link, the target first content information being first content information corresponding to the to-be-detected short link obtained from the second link release information; and displaying second prompt information in response to the target first content information being inconsistent with content information obtained by accessing the to-be-detected short link, the second prompt information indicating that the to-be-detected short link is an insecure link. In some embodiments, the second link release information further includes an original link corresponding to any short link generated by the second service provider, the first link release information further includes second content information corresponding to the original link, and the second content information is used for indicating at least one of a page screenshot, a page feature value, or page parameter information corresponding to the original link; and the processor is further configured to perform the following operations: obtaining a target original link in response to that the to-be-detected short link is included in the at least one short link, the target original link being an original link corresponding to the to-be-detected short link obtained from the second link release information; obtaining target second content information, the target second content information being second content information corresponding to the target original link obtained from the first link release information; and displaying third prompt information in response to that the target second content information is inconsistent with content information obtained by accessing the to-be-detected short link, the third prompt information indicating that the to-be-detected short link is an insecure link. In some embodiments, the second link release information further includes an original link corresponding to any short link generated by the second service provider, the first link release information further includes an account list corresponding to the original link, and the account list is used for indicating at least one account that receives the original link; and the processor is further configured to perform the following operations: obtaining a target original link in response to that the to-be-detected short link is included in the at least one short link, the target original link being an original link corresponding to the to-be-detected short link obtained from the second link release information; obtaining a target account list, the target account list being an account list corresponding to the target original link obtained from the first link release information; and displaying fourth prompt information in response to that an account currently logged in by a terminal is not in the target account list, the fourth prompt information indicating that the to-be-detected short link is an insecure link. 
In some embodiments, the processor is further configured to perform the following operations: generating deposit information in response to receiving a deposit instruction, the deposit information indicating information associated with a process of accessing a page corresponding to the to-be-detected short link; and submitting the deposit information. In some embodiments, the processor is further configured to perform the following operation: transmitting a report request to a forensic device, the report request instructing the forensic device to collect evidence on the to-be-detected short link. In some embodiments, the processor is further configured to perform the following operations: obtaining a target processing manner corresponding to the target text, the target processing manner including at least one of isolation, deletion, labeling, or display of prompt information; and processing the target text based on the target processing manner. In some embodiments, the processor is further configured to perform the following operations: performing punctuation detection on the target text; and using, in response to detecting a target punctuation, a field indicated by the target punctuation as the first service provider information. In some embodiments, the to-be-detected short link is associated with at least one of a text, a picture, a button, or a video in the target text. In some embodiments, the terminal800may include a peripheral interface803and at least one peripheral. The processor801, the memory802, and the peripheral interface803may be connected by a bus or a signal line. Each peripheral device may be connected to the peripheral interface803by a bus, a signal line, or a circuit board. Specifically, the peripheral devices include: at least one of a radio frequency (RF) circuit804, a touch display screen805, a camera assembly806, an audio circuit807, a positioning component808, and a power supply809. The peripheral interface803may be configured to connect the at least one peripheral related to input/output (I/O) to the processor801and the memory802. In some embodiments, the processor801, the memory802, and the peripheral interface803are integrated on the same chip or the same circuit board. In some other embodiments, any one or two of the processor801, the memory802, and the peripheral interface803may be implemented on an independent chip or circuit board, which is not limited in this embodiment. The RF circuit804is configured to receive and transmit an RF signal, also referred to as an electromagnetic signal. The radio frequency circuit804communicates with a communication network and other communication devices through the electromagnetic signal. The radio frequency circuit804converts an electrical signal into an electromagnetic signal for transmission or converts a received electromagnetic signal into an electrical signal. The RF circuit804may include: an antenna system, an RF transceiver, one or more amplifiers, a tuner, an oscillator, a DSP, a codec chip set, and a subscriber identity module card. The RF circuit804may communicate with another terminal by using at least one wireless communication protocol. The wireless communication protocol includes, but is not limited to, a metropolitan area network, different generations of mobile communication networks (2G, 3G, 4G, and 5G), a wireless local area network, and/or a wireless fidelity (Wi-Fi) network. In some embodiments, the RF circuit804may further include a circuit related to near-field communication (NFC), which is not limited in this disclosure.
The display screen805is configured to display a user interface (UI). The UI may include a graph, a text, an icon, a video, and any combination thereof. When the display screen805is a touch display screen, the display screen805is further capable of collecting touch signals on or above a surface of the display screen805. The touch signal may be inputted, as a control signal, to the processor801for processing. In this case, the display screen805may be further configured to provide a virtual button and/or a virtual keyboard, which is also referred to as a soft button and/or a soft keyboard. In some embodiments, there may be one display screen805, disposed on a front panel of the terminal800; in other embodiments, there may be at least two display screens805that are respectively disposed on different surfaces of the terminal800or folded. In still other embodiments, the display screen805may be a flexible display screen disposed on a curved surface or a folded surface of the terminal800. The display screen805may even be set to a non-rectangular irregular pattern, that is, a special-shaped screen. The display screen805may be made of materials such as a liquid crystal display (LCD), an organic light-emitting diode (OLED), or the like. The camera assembly806is configured to collect images or videos. The camera assembly806may include a front-facing camera and a rear-facing camera. Generally, the front-facing camera is disposed on the front panel of the terminal, and the rear-facing camera is disposed on a back surface of the terminal. In some embodiments, there are at least two rear-facing cameras, which are respectively any of a main camera, a depth-of-field camera, a wide-angle camera, and a telephoto camera, to achieve a background blurring function through fusion of the main camera and the depth-of-field camera, panoramic photographing and virtual reality (VR) photographing through fusion of the main camera and the wide-angle camera, or other fusion photographing functions. In some embodiments, the camera assembly806may further include a flash. The flash may be a single color temperature flash or a double color temperature flash. The double color temperature flash refers to a combination of a warm light flash and a cold light flash and may be used for light compensation under different color temperatures. The audio circuit807may include a microphone and a loudspeaker. The microphone is configured to acquire sound waves of a user and an environment, and convert the sound waves into electrical signals and input the electrical signals into the processor801for processing, or input the electrical signals into the radio frequency circuit804to implement voice communication. For the purpose of stereo acquisition or noise reduction, there may be a plurality of microphones disposed at different portions of the terminal800. The microphone may be further an array microphone or an omnidirectional microphone. The speaker is configured to convert electric signals from the processor801or the radio frequency circuit804into sound waves. The speaker may be a conventional thin-film speaker or a piezoelectric ceramic speaker. When the speaker is the piezoelectric ceramic speaker, the speaker can not only convert an electric signal into sound waves audible to a human being, but also convert an electric signal into sound waves inaudible to the human being for ranging and other purposes. In some embodiments, the audio circuit807may further include an earphone jack.
The positioning component808is configured to determine a current geographic location of the terminal800, to implement navigation or a location-based service (LBS). The positioning component808may be a positioning component based on the Global Positioning System (GPS) of the United States, the BeiDou system of China, the GLONASS System of Russia, or the GALILEO System of the European Union. The power supply809is configured to supply power to components in the terminal800. The power supply809may be an alternating-current power supply, a direct-current power supply, a disposable battery, or a rechargeable battery. In a case that the power supply809includes the rechargeable battery, the rechargeable battery may support wired charging or wireless charging. The rechargeable battery may be further configured to support a fast charge technology. In some embodiments, the terminal800may further include one or more sensors810. The one or more sensors810include but are not limited to an acceleration sensor811, a gyroscope sensor812, a pressure sensor813, a fingerprint sensor814, an optical sensor815, and a proximity sensor816. The acceleration sensor811may detect the magnitude of acceleration on three coordinate axes of a coordinate system established by the terminal800. For example, the acceleration sensor811may be configured to detect components of gravity acceleration on the three coordinate axes. The processor801may control, according to a gravity acceleration signal collected by the acceleration sensor811, the display screen805to display the user interface in a landscape view or a portrait view. The acceleration sensor811may be further configured to acquire motion data of a game or a user. The gyroscope sensor812may detect a body direction and a rotation angle of the terminal800and may work with the acceleration sensor811to acquire a 3D action performed by the user on the terminal800. The processor801may implement the following functions according to the data collected by the gyroscope sensor812: motion sensing (for example, change of the UI based on a tilt operation of the user), image stabilization during photographing, game control, and inertial navigation. The pressure sensor813may be disposed at a side frame of the terminal800and/or a lower layer of the display screen805. When the pressure sensor813is disposed at the side frame of the terminal800, a holding signal of the user on the terminal800may be detected, and the processor801performs left/right hand identification or a quick operation according to the holding signal collected by the pressure sensor813. When the pressure sensor813is disposed on the lower layer of the display screen805, the processor801controls, according to a pressure operation of the user on the display screen805, an operable control on the UI. The operable control includes at least one of a button control, a scroll-bar control, an icon control, and a menu control. The fingerprint sensor814is configured to collect a user's fingerprint, and the processor801identifies a user's identity according to the fingerprint collected by the fingerprint sensor814, or the fingerprint sensor814identifies a user's identity according to the collected fingerprint. When the identity of the user is identified as a trusted identity, the processor801authorizes the user to perform related sensitive operations. The sensitive operations may include unlocking a screen, viewing encrypted information, downloading software, paying, changing a setting, and the like.
The fingerprint sensor814may be disposed on a front surface, a rear surface, or a side surface of the terminal800. When a physical button or a vendor logo is disposed on the terminal800, the fingerprint sensor814may be integrated with the physical button or the vendor logo. The optical sensor815is configured to acquire ambient light intensity. In an embodiment, the processor801may control display luminance of the display screen805according to the ambient light intensity collected by the optical sensor815. Specifically, in a case that the ambient light intensity is relatively high, the display luminance of the display screen805is increased; and in a case that the ambient light intensity is relatively low, the display luminance of the display screen805is reduced. In another embodiment, the processor801may further dynamically adjust a camera parameter of the camera assembly806according to the ambient light intensity collected by the optical sensor815. The proximity sensor816, also referred to as a distance sensor, is usually disposed on a front panel of the terminal800. The proximity sensor816is configured to collect a distance between a user and the front surface of the terminal800. In an embodiment, when the proximity sensor816detects that the distance between the user and the front surface of the terminal800gradually decreases, the touch display screen805is controlled by the processor801to switch from a screen-on state to a screen-off state; and when the proximity sensor816detects that the distance between the user and the front surface of the terminal800gradually increases, the touch display screen805is controlled by the processor801to switch from the screen-off state to the screen-on state. A person skilled in the art may understand that the structure shown inFIG.8does not constitute a limitation on the terminal800and that the terminal may include more or fewer assemblies than those shown in the figure, a combination of some assemblies, or different assembly arrangements. The electronic device may be implemented as a server.FIG.9is a schematic structural diagram of a server according to an embodiment of this disclosure. The server900may vary greatly in configuration or performance, and may include one or more processors (such as central processing units (CPUs))901and one or more memories902. The memory902stores at least one instruction, the at least one instruction being loaded and executed by the processor901to implement the methods provided in the foregoing method embodiments. The server may further include a wired or wireless network interface, a keyboard, an input/output interface, and other components to facilitate input/output. The server may also include other components for implementing device functions. Details are not described herein again. The embodiments of this disclosure further provide a computer-readable storage medium.
The computer-readable storage medium is applied to an electronic device, and at least one piece of program code (i.e., computer-readable instructions) is stored in the computer-readable storage medium, the at least one piece of program code being configured to be executed by a processor to implement the following operations: obtaining first service provider information from a target text, the target text including a to-be-detected short link; obtaining at least one short link based on the first service provider information, the short link being a short link that is generated by a second service provider for a first service provider and meets a target condition, the first service provider being a service provider indicated by the first service provider information, the second service provider being configured to provide a short link generation service; and determining, in response to that the to-be-detected short link is not included in the at least one short link, that the to-be-detected short link is an insecure link. In some embodiments, the processor is further configured to perform the following operations: obtaining second service provider information associated with the first service provider information based on the first service provider information; and obtaining at least one short link from a short link associated with the second service provider information based on the second service provider information. In some embodiments, the processor is further configured to perform the following operations: determining a first block from a blockchain system based on the first service provider information, the first block storing first link release information of the first service provider; and obtaining the second service provider information from the first link release information of the first service provider stored in the first block. In some embodiments, the processor is further configured to perform the following operations: determining a second block from a blockchain system based on the second service provider information, the second block storing second link release information of the second service provider; and obtaining the at least one short link from the second link release information of the second service provider stored in the second block. In some embodiments, the target condition is at least one of the following: a short link generation time being within a target time period; the short link being not expired; and a service type to which the short link belongs being consistent with a target service type indicated by the target text. In some embodiments, the processor is further configured to perform the following operations: determining the target time period according to a receiving time of the target text; and filtering a plurality of short links included in the second link release information of the second service provider stored in the second block to obtain the at least one short link. 
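To make the verification flow above concrete, the following minimal Python sketch checks whether a to-be-detected short link appears among the short links released by the second service provider for the named first service provider. The record structure, provider names, and example links are illustrative assumptions rather than anything fixed by this disclosure, and a real implementation would read the release information from the blockchain system rather than from an in-memory list.

from dataclasses import dataclass, field
from typing import List, Set

@dataclass
class LinkReleaseInfo:
    # Hypothetical record of the short links a second service provider
    # (a short link generation service) has released for a first service provider.
    first_provider: str
    second_provider: str
    short_links: Set[str] = field(default_factory=set)

def is_insecure_link(to_be_detected: str,
                     release_records: List[LinkReleaseInfo],
                     first_provider: str) -> bool:
    """Return True when the to-be-detected short link is NOT among the
    short links released for the named first service provider."""
    released = set()
    for record in release_records:
        if record.first_provider == first_provider:
            released |= record.short_links
    return to_be_detected not in released

# Usage: a link absent from the release records is treated as insecure.
records = [LinkReleaseInfo("ExampleBank", "shortener.example",
                           {"https://sl.example/a1B2c3", "https://sl.example/Z9y8X7"})]
print(is_insecure_link("https://sl.example/evil01", records, "ExampleBank"))  # True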
In some embodiments, the second link release information further includes a service type corresponding to any short link generated by the second service provider, one short link corresponding to one service type; and the processor is further configured to perform the following operations: performing content identification on the target text to determine a target service type corresponding to the to-be-detected short link; and obtaining at least one short link belonging to the target service type from the second link release information of the second service provider stored in the second block. In some embodiments, the second link release information further includes time limit information corresponding to any short link generated by the second service provider, and the time limit information is used for indicating whether the short link is expired; and the processor is further configured to perform the following operations: obtaining at least one piece of time limit information from the second link release information of the second service provider stored in the second block; and obtaining at least one unexpired short link from the second link release information based on the at least one piece of time limit information. In some embodiments, the second link release information further includes time limit information corresponding to any short link generated by the second service provider, and the time limit information is used for indicating whether the short link is expired; and the processor is further configured to perform the following operations: obtaining target time limit information in response to that the to-be-detected short link is included in the at least one short link, the target time limit information being time limit information corresponding to the to-be-detected short link obtained from the second link release information; and displaying first prompt information in response to the target time limit information indicating that the to-be-detected short link is expired, the first prompt information prompting that the to-be-detected short link is expired. In some embodiments, the second link release information further includes first content information corresponding to any short link generated by the second service provider, and the first content information is used for indicating at least one of a page screenshot, a page feature value, or page parameter information corresponding to the short link; and the processor is further configured to perform the following operations: obtaining target first content information in response to that the to-be-detected short link is included in the at least one short link, the target first content information being first content information corresponding to the to-be-detected short link obtained from the second link release information; and displaying second prompt information in response to the target first content information being inconsistent with content information obtained by accessing the to-be-detected short link, the second prompt information indicating that the to-be-detected short link is an insecure link. 
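The target-condition filtering described above can be sketched as follows; the entry fields (generation time, expiry, service type), the 30-day window, and the helper name are assumptions chosen only to illustrate how a receiving time and a target service type narrow the candidate short links.

from datetime import datetime, timedelta
from typing import List, Optional

class ShortLinkEntry:
    # Hypothetical representation of one short link entry in the second
    # link release information; field names are illustrative only.
    def __init__(self, url, generated_at, expires_at=None, service_type=None):
        self.url = url
        self.generated_at = generated_at
        self.expires_at = expires_at          # None means no expiry recorded
        self.service_type = service_type      # e.g. "billing", "promotion"

def filter_by_target_condition(entries: List[ShortLinkEntry],
                               receiving_time: datetime,
                               window: timedelta = timedelta(days=30),
                               target_service_type: Optional[str] = None) -> List[str]:
    """Keep links generated inside the target time period, not yet expired,
    and (if requested) matching the service type inferred from the target text."""
    start = receiving_time - window
    kept = []
    for e in entries:
        if not (start <= e.generated_at <= receiving_time):
            continue                                  # outside the target time period
        if e.expires_at is not None and e.expires_at < receiving_time:
            continue                                  # expired
        if target_service_type and e.service_type != target_service_type:
            continue                                  # wrong service type
        kept.append(e.url)
    return kept

now = datetime(2024, 3, 1, 12, 0)
entries = [ShortLinkEntry("https://sl.example/a1B2c3", datetime(2024, 2, 20), None, "billing"),
           ShortLinkEntry("https://sl.example/old001", datetime(2023, 1, 1), None, "billing")]
print(filter_by_target_condition(entries, now, target_service_type="billing"))
# ['https://sl.example/a1B2c3']  (the second link falls outside the target time period)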
In some embodiments, the second link release information further includes an original link corresponding to any short link generated by the second service provider, the first link release information further includes second content information corresponding to the original link, and the second content information is used for indicating at least one of a page screenshot, a page feature value, or page parameter information corresponding to the original link; and the processor is further configured to perform the following operations: obtaining a target original link in response to that the to-be-detected short link is included in the at least one short link, the target original link being an original link corresponding to the to-be-detected short link obtained from the second link release information; obtaining target second content information, the target second content information being second content information corresponding to the target original link obtained from the first link release information; and displaying third prompt information in response to that the target second content information is inconsistent with content information obtained by accessing the to-be-detected short link, the third prompt information indicating that the to-be-detected short link is an insecure link. In some embodiments, the second link release information further includes an original link corresponding to any short link generated by the second service provider, the first link release information further includes an account list corresponding to the original link, and the account list is used for indicating at least one account that receives the original link; and the processor is further configured to perform the following operations: obtaining a target original link in response to that the to-be-detected short link is included in the at least one short link, the target original link being an original link corresponding to the to-be-detected short link obtained from the second link release information; obtaining a target account list, the target account list being an account list corresponding to the target original link obtained from the first link release information; and displaying fourth prompt information in response to that an account currently logged in by a terminal is not in the target account list, the fourth prompt information indicating that the to-be-detected short link is an insecure link. In some embodiments, the processor is further configured to perform the following operations: generating deposit information in response to receiving a deposit instruction, the deposit information indicating information associated with a process of accessing a page corresponding to the to-be-detected short link; and submitting the deposit information. In some embodiments, the processor is further configured to perform the following operation: transmitting a report request to a forensic device, the report request instructing the forensic device to collect evidence on the to-be-detected short link. In some embodiments, the processor is further configured to perform the following operations: obtaining a target processing manner corresponding to the target text, the target processing manner including at least one of isolation, deletion, labeling, or display of prompt information; and processing the target text based on the target processing manner. 
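As a rough illustration of the content-consistency checks above (comparing content information recorded in the link release information against content obtained by actually accessing the to-be-detected short link), the sketch below reduces a fetched page to a hash-based feature value. The disclosure leaves the page feature computation open, so the SHA-256 digest and the fetching approach here are assumptions; a real check might instead compare page screenshots or page parameter information.

import hashlib
import urllib.request

def page_feature_value(url: str, timeout: int = 10) -> str:
    """Fetch the page behind a link and reduce it to a feature value.
    A SHA-256 digest of the response body is used purely for illustration."""
    with urllib.request.urlopen(url, timeout=timeout) as resp:
        return hashlib.sha256(resp.read()).hexdigest()

def content_is_consistent(recorded_feature_value: str, url: str) -> bool:
    """Compare the feature value recorded in the link release information
    with the value computed from the page actually reached via the link."""
    try:
        return page_feature_value(url) == recorded_feature_value
    except OSError:
        # If the page cannot be fetched at all, treat the check as failed.
        return False

# When the values differ, the platform would display the prompt information
# indicating that the to-be-detected short link is an insecure link.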
In some embodiments, the processor is further configured to perform the following operations: performing punctuation detection on the target text; and using, in response to detecting a target punctuation, a field indicated by the target punctuation as the first service provider information. In some embodiments, the to-be-detected short link is associated with at least one of a text, a picture, a button, or a video in the target text. A person of ordinary skill in the art may understand that all or some of the steps of the foregoing embodiments may be implemented by hardware or may be implemented by a program instructing relevant hardware. The program may be stored in a computer-readable storage medium. The storage medium may be a read-only memory, a magnetic disk, an optical disc, or the like. The foregoing descriptions are merely exemplary embodiments of this disclosure but are not intended to limit this disclosure. Any modification, equivalent replacement, or improvement made within the spirit and principle of this disclosure shall fall within the scope of this disclosure. | 109,045 |
11943257 | DETAILED DESCRIPTION The invention can be implemented in numerous ways, including as a process; an apparatus; a system; a composition of matter; a computer program product embodied on a computer readable storage medium; and/or a processor, such as a processor configured to execute instructions stored on and/or provided by a memory coupled to the processor. In this specification, these implementations, or any other form that the invention may take, may be referred to as techniques. In general, the order of the steps of disclosed processes may be altered within the scope of the invention. Unless stated otherwise, a component such as a processor or a memory described as being configured to perform a task may be implemented as a general component that is temporarily configured to perform the task at a given time or a specific component that is manufactured to perform the task. As used herein, the term ‘processor’ refers to one or more devices, circuits, and/or processing cores configured to process data, such as computer program instructions. A detailed description of one or more embodiments of the invention is provided below along with accompanying figures that illustrate the principles of the invention. The invention is described in connection with such embodiments, but the invention is not limited to any embodiment. The scope of the invention is limited only by the claims and the invention encompasses numerous alternatives, modifications and equivalents. Numerous specific details are set forth in the following description in order to provide a thorough understanding of the invention. These details are provided for the purpose of example and the invention may be practiced according to the claims without some or all of these specific details. For the purpose of clarity, technical material that is known in the technical fields related to the invention has not been described in detail so that the invention is not unnecessarily obscured. I. Threat Detection Platform FIG.1Adepicts an example of a threat detection platform that is configured to examine the digital conduct of accounts associated with employees to detect threats to the security of an enterprise in accordance with various embodiments. The threat detection platform can apply one or more models to incoming emails to quantify the risk posed to the security of an enterprise in near real time. These models can examine the context and content of incoming emails to determine whether any of those emails are representative of a business email compromise (BEC), phishing, spamming, etc. In order to protect against novel threats, the models can be consistently trained (and re-trained) using up-to-date information. This information can include insights gained by the threat detection platform and/or insights provided by other entities, such as third party services or individuals. Examples of these individuals include researchers, security operations center (SOC) analysts, information technology (IT) professionals, and end users (e.g., reporting false negatives to the threat detection platform). Regardless of its source, the information can be collected and then provided, as input, to a pipeline for retraining models (e.g., rapidly retraining an NLP pipeline). 
The term “pipeline,” as used herein, generally refers to a series of steps that—when performed in sequence—automate a retraining process by processing, filtering, extracting, or transforming the information and then using the information to retrain the models used by the threat detection platform. Embodiments are described herein with reference to certain types of attacks. The features of those embodiments can similarly be applied to other types of attacks. As an example, while embodiments may be described in the context of retraining models to identify never-before-seen phishing attacks, a threat detection platform can also rapidly train its models to identify never-before-seen spamming attacks or spoofing attacks. Moreover, embodiments may be described in the context of certain types of digital activities. The features of those embodiments can similarly be applied to other types of digital activities. Thus, while an embodiment may be described in the context of examining emails, a threat detection platform can additionally or alternatively be configured to examine messages, mail filters, sign-in events, etc. While embodiments may be described in the context of computer-executable instructions, aspects of the technology described herein can be implemented via hardware, firmware, or software. As an example, aspects of the threat detection platform can be embodied as instruction sets that are executable by a computer program that offers support for discovering, classifying, and then remediating threats to the security of an enterprise. References in this description to “an embodiment” or “one embodiment” mean that the feature, function, structure, or characteristic being described is included in at least one embodiment of the technology. Occurrences of such phrases do not necessarily refer to the same embodiment, nor are they necessarily referring to alternative embodiments that are mutually exclusive of one another. Unless the context clearly requires otherwise, the terms “comprise,” “comprising,” and “comprised of” are to be construed in an inclusive sense rather than an exclusive or exhaustive sense (i.e., in the sense of “including but not limited to”). The term “based on” is also to be construed in an inclusive sense rather than an exclusive or exhaustive sense. Thus, unless otherwise noted, the term “based on” is intended to mean “based at least in part on.” The terms “connected,” “coupled,” and/or variants thereof are intended to include any connection or coupling between two or more elements, either direct or indirect. The connection/coupling can be physical, logical, or a combination thereof. For example, objects may be electrically or communicatively coupled to one another despite not sharing a physical connection. The term “module” refers to software components, firmware components, or hardware components. Modules are typically functional components that generate one or more outputs based on one or more inputs. As an example, a computer program may include multiple modules responsible for completing different tasks or a single module responsible for completing all tasks. Unless otherwise specified, an example way of implementing a module referred to herein is as a set of one or more python scripts which may make use of various publicly available libraries, toolkits, etc. 
When used in reference to a list of multiple items, the term “or” is intended to cover all of the following interpretations: any of the items in the list, all of the items in the list, and any combination of items in the list. The sequences of steps performed in any of the processes described here are exemplary. However, unless contrary to physical possibility, the steps may be performed in various sequences and combinations. For example, steps could be added to, or removed from, the processes described here. Similarly, steps could be replaced or reordered. Thus, descriptions of any processes are intended to be open ended. Accounts are digital profiles with which employees can engage in digital activities. These digital profiles are typically used to perform activities such as exchanging emails and messages, and thus may also be referred to as “email accounts” or “messaging accounts” herein. Further, various of the techniques described herein can be extended to other types of accounts as applicable (e.g., accounts on productivity/collaboration platforms, etc.) “Digital conduct” refers to the digital activities that are performed with those accounts. Examples of digital activities include transmitting and receiving communications; creating, modifying, and deleting filters to be applied to incoming communications; initiating sign-in activities; and the like. Examples of communications include emails and messages. As shown inFIG.1A, threat detection platform100includes a profile generator102, a training module104, a monitoring module106, a scoring module108, a reporting module110, and a remediation module116. Some embodiments of threat detection platform100include a subset of these components, while other embodiments of the threat detection platform100include additional components that are not shown inFIG.1A. At a high level, threat detection platform100can acquire data related to digital conduct of accounts associated with employees and then determine, based on an analysis of the data, how to handle security threats in a targeted manner. Examples of such data include information related to emails, messages, mail filters, and sign-in activities. These data are not necessarily obtained from the same source. As an example, data related to emails can be acquired from an email service (e.g., Microsoft Office365) while data related to messages may be acquired from a messaging service (e.g., Slack). Threat detection platform100can identify security threats based on an analysis of incoming emails (e.g., the content of the body, the email address of the sender, etc.), metadata accompanying the incoming emails (e.g., information regarding the sender, recipient, origin, time of transmission, etc.), attachments, links, and/or other suitable data. Threat detection platform100can be implemented, partially or entirely, within an enterprise network112, a remote computing environment (e.g., through which the data regarding digital conduct is routed for analysis), a gateway, or another suitable location or combinations thereof. The remote computing environment can belong to, or be managed by, the enterprise or another entity. In some embodiments, threat detection platform100is integrated into the enterprise's email system (e.g., at a secure email gateway (SEG)) as part of an inline deployment. In other embodiments, threat detection platform100is integrated into the enterprise's email system via an application programming interface (API) such as the Microsoft Outlook API. 
In such embodiments, threat detection platform100can obtain data via the API. Threat detection platform100can thus supplement and/or supplant other security products employed by the enterprise. In some embodiments, threat detection platform100is maintained by a threat service (also referred to herein as a “security service”) that has access to multiple enterprises' data. In this scenario, threat detection platform100can route data that is, for example, related to incoming emails to a computing environment managed by the security service. An example of such a computing environment is as one or more instances on Amazon Web Services (AWS). Threat detection platform100can maintain one or more databases for each enterprise it services that include, for example, organizational charts (and/or other user/group identifiers/memberships, indicating information such as “Alice is a member of the Engineering group” and “Bob is a member of the Marketing group”), attribute baselines, communication patterns, etc. Additionally or alternatively, threat detection platform100can maintain federated databases that are shared among multiple entities. Examples of federated databases include databases specifying vendors and/or individuals who have been deemed fraudulent, domains from which incoming emails determined to represent security threats originated, etc. The security service can maintain different instances of threat detection platform100for different enterprises, or the security service may maintain a single instance of threat detection platform100for multiple enterprises, as applicable. The data hosted in these instances can be obfuscated, encrypted, hashed, depersonalized (e.g., by removing personal identifying information), or otherwise secured or secreted as applicable. Accordingly, in various embodiments, each instance of threat detection platform100is able to access/process data related to the accounts associated with the corresponding enterprise(s). In some embodiments, threat detection platform100is maintained by the enterprise whose accounts are being monitored—either remotely or on premises. In this scenario, all relevant data is hosted by the enterprise itself, and any information to be shared across multiple enterprises (if applicable) can be transmitted to a computing system that is maintained by the security service or a third party, as applicable. As shown inFIG.1A, profile generator102, training module104, monitoring module106, scoring module108, reporting module110, and remediation module116can be integral parts of the threat detection platform100. Alternatively, these components can be implemented individually, or in various combinations, while operating “alongside” threat detection platform100. For example, reporting module110can be implemented in a remote computing environment to which the threat detection platform100is communicatively connected across a network. As mentioned above, threat detection platform100can be implemented by a security service on behalf of an enterprise or the enterprise itself. In some embodiments, aspects of threat detection platform100are provided by a web-accessible computer program operating on a computer server or a distributed computing system. For example, an individual can interface with threat detection platform100through a web browser that is executing on an electronic computing device (also referred to herein as an “electronic device” or “computing device”114). 
Enterprise network112can be a mobile network, wired network, wireless network, or some other communication network (or multiple of any/all of such networks) maintained by the enterprise or an operator on behalf of the enterprise. As noted above, the enterprise can use a security service to examine communications (among other things) to discover potential security threats. The enterprise may grant permission to the security service to monitor the enterprise network112by examining emails (e.g., incoming emails and/or outgoing emails) and then handling those emails that represent security threats. For example, threat detection platform100can be permitted to remediate threats posed by those emails (e.g., by using an API made available by an email service provider such as a cloud-based email service provider to move or delete such messages), or the threat detection platform100may be permitted to surface notifications regarding the threats posed by those emails (and/or that a recipient or sender account is likely to have been compromised, etc.), or combinations thereof. In some embodiments, the enterprise further grants permission to the security service to obtain data regarding other digital activities involving the enterprise (and, more specifically, employees of the enterprise) in order to build a profile that specifies communication patterns, behavioral traits, normal context of emails, normal content of emails, etc. For example, threat detection platform100may identify the filters that have been created and/or destroyed by each employee to infer whether any significant variations in behavior have occurred. Such filters may comprise rules manually specified by the user (e.g., by the user explicitly interacting with tools made available by an email service) and/or may also be inferred based on users' interactions with their mail (e.g., by obtaining from the email service log data indicating which messages the user has moved from an inbox to a folder, or vice versa). As another example, threat detection platform100may examine the emails or messages received by a given employee to establish the characteristics of normal communications (and thus be able to identify abnormal communications). Threat detection platform100can manage one or more databases in which data can be stored. Examples of such data include enterprise data (e.g., email data, message data, sign-in data, and mail filter data), remediation policies, communication patterns, behavioral traits, etc. The data stored in the database(s) can be determined by threat detection platform100(e.g., learned from data available on enterprise network112), provided by the enterprise, or retrieved from an external database (e.g., associated with LinkedIn, Microsoft Office365, or G Suite) as applicable. Threat detection platform100can also store outputs produced by the various modules, including machine- and human-readable information regarding insights into threats and any remediation actions that were taken. As shown inFIG.1A, threat detection platform100includes a profile generator102that is responsible for generating one or more profiles for the enterprise. For example, profile generator102can generate a separate profile for each account associated with an employee of the enterprise based on sign-in data, message data, email data, and/or mail filter data, etc. Additionally or alternatively, profiles can be generated for business groups, organizational groups, or the enterprise as a whole. 
By examining data obtained from enterprise network112, profile generator102can discover organizational information (e.g., employees, titles, and hierarchy), employee behavioral traits (e.g., based on historical emails, messages, and historical mail filters), normal content of incoming or outgoing emails, behavioral patterns (e.g., when each employee normally logs in), communication patterns (e.g., who each employee communicates with internally and externally, when each employee normally communicates, etc.), etc. This information can be populated into the profiles so that each profile can be used as a baseline for what constitutes normal activity by the corresponding account (or group of accounts). An example profile includes a number of behavioral traits associated with a given corresponding account. For example, profile generator102can determine behavioral traits based on sign-in data, message data, email data, and/or mail filter data obtained from enterprise network112or another source (e.g., a collaboration suite via an API). Email data can include information on the senders of past emails received by a given email account, content of those past emails, frequency of those past emails, temporal patterns of those past emails, topics of those past emails, geographical locations from which those past emails originated, formatting characteristics (e.g., usage of HTML, fonts, styles, etc.), and more. Profile generator102can use the aforementioned information to build a profile for each email account that represents a model of normal behavior of the corresponding employee. The profiles can be helpful in identifying the digital activities and communications that indicate that a security threat may exist. Monitoring module106is responsible for monitoring communications (e.g., messages and emails) handled by the enterprise network112. These communications can include incoming emails (e.g., external and internal emails) received by accounts associated with employees of the enterprise, outgoing emails (e.g., external and internal emails) transmitted by those accounts, and messages exchanged between those accounts. In some embodiments, monitoring module106is able to monitor incoming emails in near real time so that appropriate action can be taken, in a timely fashion, if a malicious email is discovered. For example, if an incoming email is determined to be representative of a phishing attack (e.g., based on an output produced by scoring module108), the incoming email can be prevented from reaching its intended destination by the monitoring module106or another applicable component or set of components. In some embodiments, monitoring module106is able to monitor communications only upon threat detection platform100being granted permission by the enterprise (and thus given access to enterprise network112). Scoring module108is responsible for examining digital activities and communications to determine the likelihood that a security threat exists. For example, scoring module108can examine each incoming email to determine how its characteristics compare to past emails sent by the sender and/or received by the intended recipient. In various embodiments, scoring module108may determine whether characteristics such as timing, formatting, and location of origination (e.g., in terms of sender email address or geographical location) match a pattern of past emails that have been determined to be non-malicious. 
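A minimal sketch of the kind of per-account baseline profile generator102is described as building appears below; the record fields and the particular statistics kept (known senders, active hours, formatting habit) are illustrative assumptions, not the platform's actual feature set.

from collections import Counter
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class EmailRecord:
    # Minimal stand-in for the email data the profile generator ingests.
    sender: str
    recipient: str
    hour_sent: int          # 0-23, local to the recipient
    uses_html: bool

def build_profile(history: List[EmailRecord], account: str) -> Dict:
    """Summarize historical mail for one account into a baseline that later
    scoring can compare new messages against."""
    received = [m for m in history if m.recipient == account]
    return {
        "known_senders": Counter(m.sender for m in received),
        "active_hours": Counter(m.hour_sent for m in received),
        "html_ratio": (sum(m.uses_html for m in received) / len(received))
                      if received else 0.0,
    }

def is_unknown_sender(profile: Dict, sender: str) -> bool:
    # A simple baseline comparison: has this account ever heard from the sender?
    return profile["known_senders"][sender] == 0

history = [EmailRecord("pat@vendor.test", "alice@acme.test", 10, True)]
profile = build_profile(history, "alice@acme.test")
print(is_unknown_sender(profile, "attacker@evil.test"))  # True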
For example, scoring module108may determine that an email is likely to be malicious if the sender email address (“[email protected]”) differs from an email address (“[email protected]”) that is known to be associated with the alleged sender (“John Doe”). As another example, scoring module108may determine that an account may have been compromised if the account performs a sign-in activity that is impossible or improbable given its most recent sign-in activity (e.g., the user logs in from Germany ten minutes after having logged in from California, or a user that typically accesses email from 9 am-5 pm on weekdays begins accessing email on weekends at 3 am). Scoring module108can make use of heuristics, rules, neural networks, or other trained machine learning (ML) approaches such as decision trees (e.g., gradient-boosted decision trees), logistic regression, linear regression, or other appropriate techniques. A variety of packages can be used to generate the models, including PyTorch and TensorFlow. Scoring module108can output discrete outputs or continuous outputs, such as a probability metric (e.g., specifying the likelihood that an incoming email is malicious), a binary output (e.g., malicious or not malicious), or a sub-classification (e.g., specifying the type of malicious email). Further, scoring module108can rank or otherwise generate a prioritized list of the top features, facets, or combinations thereof that result in a particular message being identified as posing a security threat. In various embodiments, scoring module108executes a topic inference module. The topic inference module can be used to identify topics of digital communications. Assume, for example, that scoring module108is tasked with quantifying risk posed by an incoming email. In that situation, the topic inference module may identify one or more topics based on an analysis of the incoming email, its metadata, or information derived by the scoring module. These topics may be helpful in conveying the risk and relevance of the incoming email and for other purposes. Reporting module110is responsible for reporting insights derived from outputs produced by scoring module108in various embodiments (e.g., as a notification summarizing types of threats discovered or other applicable output). For example, reporting module110can provide a summary of the threats discovered by scoring module108to an electronic device114. Electronic device114may be managed by the employee associated with the account under examination, an individual associated with the enterprise (e.g., a member of the information technology (IT) department), or an individual associated with a security service, etc. Reporting module110can surface these insights in a human-readable format for display on an interface accessible via the electronic device114. Such insights can be used to improve the overall security position of the enterprise, by providing specific, concrete reasons why particular communications are problematic to security personnel (or other appropriate individuals, such as end users). Remediation module116can perform one or more remediation actions in response to scoring module108determining that an incoming email is likely representative of a threat. The types of remediation that can be taken can be based on the nature of the threat (e.g., its severity, the type of threat posed, the user(s) implicated in the threat, etc.), policies implemented by the enterprise, etc. 
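The impossible-travel example above can be expressed as a small heuristic; the 900 km/h ceiling and the tuple layout for sign-in events are assumptions, and a production scoring module would combine many such signals with trained models rather than rely on one rule.

import math
from datetime import datetime

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two sign-in locations, in kilometers."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2
    return 2 * 6371.0 * math.asin(math.sqrt(a))

def impossible_travel(prev_signin, new_signin, max_kmh=900.0) -> bool:
    """Flag a sign-in whose implied travel speed exceeds what is physically
    plausible (the 900 km/h ceiling is an assumed airliner-speed bound)."""
    (t1, lat1, lon1), (t2, lat2, lon2) = prev_signin, new_signin
    hours = abs((t2 - t1).total_seconds()) / 3600.0
    if hours == 0:
        return True
    return haversine_km(lat1, lon1, lat2, lon2) / hours > max_kmh

# Germany ten minutes after California is flagged.
prev = (datetime(2023, 5, 1, 9, 0), 37.77, -122.42)   # San Francisco
new = (datetime(2023, 5, 1, 9, 10), 52.52, 13.40)     # Berlin
print(impossible_travel(prev, new))  # True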
Such policies can be predefined or dynamically generated based on inference, analysis, and/or the data obtained from enterprise network112. Additionally or alternatively, remediation action(s) can be based on the outputs produced by the models employed by the various modules. Examples of remediation actions include transferring suspect emails to another folder such as a quarantine folder, revising the email (e.g., by modifying content included in the email such as an included link), generating an alert (e.g., to an administrator or to the user), etc. Various embodiments of threat detection platform100include a training module104that operates to train the models employed by other modules. As an example, training module104can train the models applied by the scoring module108to the sign-in data, message data, email data, and/or mail filter data, etc., by feeding training data into those models. Example training data includes emails that have been labeled as malicious or non-malicious, policies related to attributes of emails (e.g., specifying that emails originating from certain domains should not be considered malicious), etc. The training data can be employee, group, enterprise, industry, or nationality specific so that the model(s) are able to perform personalized analysis. In some embodiments, the training data ingested by the model(s) includes emails that are known to be representative of malicious emails sent as part of an attack campaign. These emails may have been labeled as such during a training process, or these emails may have been labeled as such by other employees. Training module104can implement a retraining pipeline (also referred to herein as a “pipeline”) in order to protect against novel threats as further discussed below. At a high level, the pipeline corresponds to a series of steps that, when executed by the training module104, cause the models employed by the scoring module108to be retrained. By consistently training models using up-to-date information, the threat detection platform100can protect against novel threats that would otherwise escape detection. Unlike conventional email filtering services, a threat detection platform (e.g., threat detection platform100) can be completely integrated within an enterprise environment. For example, threat detection platform100can receive input indicative of an approval by an individual (e.g., an administrator associated with the enterprise or an administrator of the email service employed by the enterprise) to access email, active directory, mail groups, identity security events, risk events, documents, etc. The approval can be given through an interface generated by the threat detection platform. For example, the individual can access the interface generated by the threat detection platform and then approve access to these resources as part of a registration process. Upon receiving the input, the threat detection platform can establish a connection with storage medium(s) that include these resources via application programming interface(s) (APIs). For example, the threat detection platform may establish, via an API, a connection with a computer server managed by the enterprise or some other entity on behalf of the enterprise. The threat detection platform can then download resources from the storage medium(s) to build an ML model that can be used to identify email-based security threats. 
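Returning to remediation module116, the policy-driven selection of actions described above can be illustrated with a small dispatch table; the action names, verdict labels, and the mapping itself are invented for illustration, and a real deployment would invoke the email provider's API rather than print.

from typing import Dict

# Hypothetical remediation actions standing in for provider API calls.
def quarantine(msg):    print(f"moving {msg['id']} to quarantine folder")
def rewrite_links(msg): print(f"rewriting suspicious links in {msg['id']}")
def notify_admin(msg):  print(f"alerting administrator about {msg['id']}")

# An illustrative policy table keyed by the verdict produced by scoring.
REMEDIATION_POLICY: Dict[str, list] = {
    "bad":        [quarantine, notify_admin],
    "suspicious": [rewrite_links],
    "good":       [],
}

def remediate(msg: dict, verdict: str) -> None:
    """Apply every action the enterprise policy associates with the verdict."""
    for action in REMEDIATION_POLICY.get(verdict, []):
        action(msg)

remediate({"id": "msg-123"}, "suspicious")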
The threat detection platform can build an ML model based on retrospective information in order to better identify security threats in real time as emails are received. For example, threat detection platform100can ingest incoming emails and/or outgoing emails corresponding to the last six months, and then build an ML model that understands the norms of communication with internal contacts (e.g., other employees) and/or external contacts (e.g., vendors) for the enterprise. Such an approach allows the threat detection platform to employ an effective ML model nearly immediately upon receiving approval from the enterprise to deploy it. Most standard integration solutions, such as anti-spam filters, will only have access going forward in time (i.e., after receiving the approval). Here, however, threat detection platform100can employ a backward-looking approach to develop personalized ML model(s) that are effective immediately. Moreover, such an approach allows the threat detection platform to go through a repository of past emails to identify security threats residing in employees' inboxes. The aforementioned API-based approach provides a consistent, standard way of looking at all email handled by an enterprise (or another entity, such as an email service, on behalf of the enterprise). This includes internal-to-internal email that is invisible from standard integration solutions. An SEG integration, for example, that occurs through the mail exchanger (MX) record will only be able to see incoming email arriving from an external source. The only way to make email arriving from an internal source visible to the SEG integration would be to externally reroute the email through the gateway. FIG.1Bdepicts an example of a threat detection platform in accordance with various embodiments. In various embodiments, the infrastructure used by threat detection platform100is located across multiple data centers and makes use of various orchestration technologies (e.g., Amazon ECS or Kubernetes on Azure). As illustrated inFIG.1B, an end user (e.g., “Alice,” an employee of “ACME Corporation”) accesses one or more enterprise accounts (e.g., on Microsoft Office365) via a client device120. Threat detection platform100programmatically monitors for the occurrence of events associated with such accounts (e.g., via one or more APIs). An example way for threat detection platform100to integrate with such APIs is via an API manager122that coordinates periodically/continuously requesting notifications for each employee of each tenant included in an employee database124. API manager122can similarly facilitate communications between other providers (e.g., G-Suite, Slack, etc.) and threat detection platform100. One example of an event is the receipt of an incoming email (e.g., from Bob to Alice). One task performed by threat detection platform100is an entity resolution procedure in order to identify the entities involved in the event. In some embodiments, the entity resolution procedure is a multi-step process. First, threat detection platform100will acquire information regarding the event. For example, if the event is the receipt of an incoming email, threat detection platform100can examine the incoming email to identify the origin, sender identity, sender email address, recipient identity, recipient email address, subject, header(s), body content, etc. Moreover, threat detection platform100can determine whether the incoming email includes any links, attachments, etc. 
Second, threat detection platform100will resolve the entities involved in the event by examining the acquired information.FIG.1Cdepicts how information obtained from an email can be used to establish different entities (also referred to as “features” or “attributes” of the incoming email). Some information may correspond directly to an entity. In the example ofFIG.1C, for example, the identity of the sender (or purported sender) may be established based on the origin or sender name. Other information may correspond indirectly to an entity. In the example ofFIG.1C, for example, the identity of the sender (or purported sender) may be established by applying a natural language processing (NLP) algorithm and/or computer vision (CV) algorithm to the subject, body content, etc. Entities may be established based on the incoming email, information derived from the incoming email, and/or metadata accompanying the incoming email. In some embodiments, threat detection platform100will augment the acquired information with human-curated content. For example, feature(s) of an entity may be extracted from human-curated datasets of well-known brands, domains, etc. These human-curated datasets can be used to augment information gleaned from the enterprise's own datasets. Additionally or alternatively, humans can be responsible for labeling entities in some situations. For example, a human may be responsible for labeling landing pages and/or Uniform Resource Locators (URLs) of links found in incoming emails. Human involvement may be useful when quality control is a priority, when comprehensive labeling of evaluation metrics is desired, etc. For example, a human may actively select which data/entities should be used for training the ML model(s) used by the threat detection platform. Returning toFIG.1B, when a new message arrives, feature extraction service126processes it. One way of implementing feature extraction is as a set of Python services that receive messages via a load balancer and perform applicable tasks. A notification received from Office365includes information about the message being delivered. The information included with the notification can be used to perform top level filtering, which can help reduce infrastructure costs. If sufficient information is provided that indicates that the message is safe, further processing can be omitted. Otherwise, the message content can be downloaded into threat detection platform100for parsing (128). For performance reasons, at this stage, metadata about any attachments (e.g., file size, file type, file name) can also be obtained for analysis (without needing to download the attachment itself at this point). Feature extraction service126creates a message object and creates primary attributes. It calls a prediction service using the message body text. The prediction service provides information about the content (e.g., whether it contains “urgent” vocabulary words, whether it is a solicitation, key topics in the message, etc.). An example way of implementing the prediction service is as a python service (that can make use of one or more heuristics/models/etc. in evaluating message content). Example techniques for determining topics are described below. Feature extraction service126optionally makes use of a link crawling service180and a file processing service182(e.g., based on whether a given message includes links or attachments) to gain insight into links and attachments, respectively. 
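A minimal sketch of primary-attribute extraction of the sort feature extraction service126performs is shown below; the field names, the urgency vocabulary, and the chosen attributes are assumptions standing in for the platform's actual feature definitions.

import re
from typing import Dict, List

URGENT_TERMS = {"urgent", "immediately", "wire transfer", "past due"}  # illustrative

def extract_primary_attributes(parsed_msg: Dict) -> Dict:
    """Derive primary attributes directly from the parsed message; secondary
    attributes (e.g., 'unknown sender for this recipient') are added later
    during hydration against the employee and behavior databases."""
    body = parsed_msg.get("body_text", "").lower()
    links: List[str] = re.findall(r"https?://\S+", body)
    attachments = parsed_msg.get("attachments", [])
    return {
        "sender_domain": parsed_msg["from"].rsplit("@", 1)[-1],
        "num_links": len(links),
        "has_urgent_language": any(t in body for t in URGENT_TERMS),
        "attachment_types": [a.get("file_type") for a in attachments],
        "largest_attachment_bytes": max((a.get("file_size", 0) for a in attachments),
                                        default=0),
    }

print(extract_primary_attributes({
    "from": "ceo@examp1e-corp.test",
    "body_text": "URGENT: please handle the wire transfer immediately http://pay.examp1e-corp.test",
    "attachments": [{"file_type": "pdf", "file_size": 130_000}],
}))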
In an example embodiment, the link crawling service is a python process that, given a link, follows the link (and traverses any redirects) to determine the actual destination of the target link. Such redirects might be due to legitimate security systems (e.g., Microsoft Safelinks URLs) or attempts to maliciously conceal a destination from an end user (or other reasons, e.g., load balancing, advertisement tracking, etc.). Domain information about the actual destination can also be obtained (e.g., by querying domain database170). Link crawling service180can also crawl the landing page of the target link, e.g., to evaluate whether other links are present on the page, evaluate whether the landing page is likely to be a phishing site (e.g., based on page content purporting to belong to a Microsoft login page, a bank login page, etc., but for which the domain does not match the content), etc. If link crawling service180(or another appropriate component of threat detection platform100, such as one or more models134) determines that a link is malicious, that information can be used to flag the message containing the link as malicious. If link crawling service180(or another appropriate component of threat detection platform100) determines that a link is suspicious, an entry can be added into URL database188indicating that a particular message includes a particular suspicious URL that should be rewritten by remediation service162(via URL rewriter164). An example way of implementing URL database188is using Postgres. An example schema for storing information in URL database188is: messageID, recipientID, URL. File processing service182(which can also be implemented in python), given an attachment (e.g., downloaded from Office365or G-Suite), can perform a variety of tasks. As an example, if the attachment is a PDF, file processing service182can parse the PDF and look for links (and, as applicable, provide those links to link crawling service180for examination). If the attachment is a Word document, file processing service182can determine whether any macros are present, and if so, whether they make service calls outside. File processing service182can also take more resource intensive actions, such as opening attachments in sandboxes and determining, for example, whether attempts are made to modify registry entries. Information extracted from payloads (whether attachments or links) can be used as primary attributes (e.g., “includes a link to a GoogleDoc,” “has an attachment of a PDF over 125 k in size,” or “has an attachment that is an Excel document with a macro”). Derived attributes are also referred to herein as secondary attributes. A file processor is a computer program that is designed to systematically browse documents attached to communications to establish, infer, or otherwise obtain information regarding the content of those documents. For example, threat detection platform100can employ a file processor to establish whether a document attached to a communication includes any links or forms. Any such attachments can be processed by file processing service182that can perform static analysis on a variety of different types of files (e.g., to further build out a set of features associated with the message). A variety of policies can be used by threat detection platform100in determining whether to perform additional (supplemental) file processing. Generally (e.g., to minimize resource usage), a goal is to only scan files (or portions of files) that could pose security risks. 
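A rough sketch of the redirect-following behavior attributed to link crawling service180appears below; it assumes the third-party requests library is available and stands in for the platform's own crawler, which is not described in implementation detail here.

import requests  # assumed available; the platform's actual crawler is not public
from urllib.parse import urlparse

def resolve_link(url: str, timeout: int = 10) -> dict:
    """Follow a link through any redirects and report where it actually lands."""
    resp = requests.get(url, allow_redirects=True, timeout=timeout)
    chain = [r.url for r in resp.history] + [resp.url]
    return {
        "requested": url,
        "redirect_chain": chain,
        "final_url": resp.url,
        "final_domain": urlparse(resp.url).netloc,  # could be looked up in the domain database
        "status": resp.status_code,
    }

# Example (any redirecting URL works):
# resolve_link("http://redirector.example/abc")
#   -> {"requested": ..., "redirect_chain": [...], "final_url": ..., "final_domain": ..., "status": 200}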
To that end, limits can be placed on file processing at a high level (e.g., for a maximum file size and maximum number of files processed). For a given file type (e.g., PDF, HTML, DOC, XLS, etc.), one approach is to process all such attachments that pass a file size filter criterion. For images (and/or for images included within other filetypes, which can be used, for example, in phishing related attacks), optical character recognition (OCR) can be used. Unfortunately, performing OCR is resource intensive. Policies can be used to specify limits on how much processing is performed (e.g., limiting OCR or other supplemental analysis based on message body length, file size, number of attachments included in a message, etc.). As another example, only the first sheet of a multi-page spreadsheet could be subjected to analysis. Further, such policies can be flexible, e.g., adjusting up or down based on attributes of the message (e.g., with a more suspicious message having a higher size threshold for analysis than a less suspicious message). A link crawler (also referred to as a “Web crawler”) is a computer program that is designed to systematically browse the Internet to establish, infer, or otherwise obtain information about the websites to which links lead. Assume, for example, that threat detection platform100discovers that a link is included in either the body or payload of a communication. In such a scenario, threat detection platform100can employ a link crawler (provided by link crawling service180) to establish the landing website to which the link ultimately leads, whether there are any intermediary websites through which the link crawler is routed before accessing the landing website, etc. Determinations can then be made, for example, about whether the landing page (or a subsequent page) inappropriately solicits credentials (e.g., using topic analysis or other approaches), etc., and any such issues can be imputed to the message (e.g., to flag the message as related to credential phishing). While the attachment and link crawlers perform similar tasks, those normally correspond to separate computer programs that can be individually executed by the threat detection platform. This approach is beneficial in several respects. As an example, this approach to recording signatures of payloads deemed malicious allows threat detection platform100to readily detect malicious payloads through comparison to the signature store. The term “signature” may refer to the payload itself, malicious content (e.g., malware) included in the payload, or information that is extracted or derived from the payload. This approach allows malicious payloads to be discovered more quickly because threat detection platform100can autonomously generate signatures without human confirmation. Moreover, these signatures can be shared across a broader community. For example, the signature store may be representative of a federated database that is shared across multiple enterprises. In such a scenario, threat detection platform100can generate a signature for a communication directed to an employee of a first enterprise and then use the signature to identify malicious payloads amongst communications directed to a second enterprise. A variety of policies can be used by threat detection platform100in determining whether to crawl links. The following are examples:
Follow the first link in a message if that link has not been seen before and the message is from a previously unseen sender. (Crawling only the first link, or the first few links, saves resources over crawling all links.)
Crawl young domains (e.g., those registered less than 3 months ago).
Follow links in attachments.
Follow links that are in a set list (e.g., file sharing domains, redirect domains).
Do *not* crawl links that have sensitive phrases such as “unsubscribe.”
Do *not* crawl links that have anchor text that may be stateful.
Each of the policies can be applied with different link crawlers that crawl a page in different ways (e.g., HTTP.HEAD, HTTP.GET, as well as a screenshot service that renders a page in a headless browser and screenshots the page). In some embodiments, each policy defines which crawler(s) to apply and emits an allow, deny, or no decision. A link crawler can be configured to only crawl a particular URL if the applicable policies include at least one “allow” and no “denies.” In general, this approach favors crawling links that could be harmful (and thus aspects such as link rarity, domain location, domain age, and anchor text can all be factors), and not crawling links that could be stateful (e.g., inadvertently triggering automatic “unsubscribe” actions by crawling). As an example, suppose threat detection platform100is processing a message that includes three links (A, B, C). Two policies (X, Y) are applicable to the links—specifically, for link crawler HTTP.GET, crawling link A is allowed by policy X, crawling link B is allowed by policy X, but crawling link B is denied by policy Y. Crawling link C is not actioned by either policy. In this scenario, the link crawler would crawl as follows (based on the policies):
Link A: [Allow, No action]→crawl
Link B: [Allow, Deny]→do not crawl
Link C: [No action, no action]→do not crawl
After parsing, feature extraction service126performs hydration (130), by looking up information in various databases (e.g., employee database124or domain database170) to determine entity attributes and determine secondary attributes (e.g., cross products such as “is this an unknown sender for this recipient” or “how many times has this recipient received a message from this sender”). Behavior database190is an aggregates database that includes various information for predefined aggregate keys. Examples of predefined cross products (or singular keys) include: information about a sender across a company ID, information about a sender across an IP address, etc., and counts pivoted by good and bad messages. So, for a given sender, information such as “how many times have we seen a message from this sender that was determined to be malicious” can readily be determined during hydration (and also used for other purposes, such as model training and evaluation). An example way of implementing behavior database190is to download information about messages on a daily basis, and then have a behavior profiling module178look back over a 30-day period to generate the aggregate data (e.g., as a batch pipeline that runs on top of Spark). Once the aggregate information is collected, a flat S3 file structure can be generated which has the various aggregates as key value pairs. The key value pairs can be loaded into memory for realtime lookup, e.g., using RocksDB. RocksDB can also be used for employee database124and domain database170. One way of populating employee database124is for threat detection platform100to make API calls (e.g., to Office365) to obtain directory and other information (e.g., user name, display name, job title, department, etc.).
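Returning to the link crawling policies, the allow/deny/no-decision resolution used in the link A/B/C example above can be sketched as follows; the individual policy functions and the message context fields are illustrative assumptions, while the final rule (crawl only when at least one applicable policy allows and none denies) follows the description.

from typing import Callable, List, Optional

# Each policy inspects a link in the context of its message and emits
# "allow", "deny", or None (no decision); the policies below are illustrative.
Policy = Callable[[str, dict], Optional[str]]

def first_link_from_new_sender(link: str, ctx: dict) -> Optional[str]:
    if ctx.get("is_first_link") and ctx.get("sender_previously_unseen"):
        return "allow"
    return None

def skip_stateful_anchors(link: str, ctx: dict) -> Optional[str]:
    if "unsubscribe" in ctx.get("anchor_text", "").lower():
        return "deny"
    return None

def young_domain(link: str, ctx: dict) -> Optional[str]:
    if ctx.get("domain_age_days", 10_000) < 90:
        return "allow"
    return None

POLICIES: List[Policy] = [first_link_from_new_sender, skip_stateful_anchors, young_domain]

def should_crawl(link: str, ctx: dict) -> bool:
    """Crawl only if at least one policy allows and no policy denies."""
    decisions = [p(link, ctx) for p in POLICIES]
    return "allow" in decisions and "deny" not in decisions

# Analogous to link B above: allowed by one policy, denied by another, so not crawled.
print(should_crawl("https://landing.example/b",
                   {"is_first_link": True, "sender_previously_unseen": True,
                    "anchor_text": "Unsubscribe here"}))  # False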
Additional information can also be inferred from the obtained information (e.g., whether the employee is a VIP) or, as needed, manually specified (e.g., with an enterprise providing a list of employees who should be flagged as VIP). Behavioral information about a given user (e.g., “Alice typically writes emails in HTML” instead of plaintext or “Alice typically signs messages with ‘best regards’” instead of “cheers”) can be stored in behavior database190. Domain database170stores various information about domains (e.g., obtainable through one or more third party services and/or by performing domain-related lookups). Examples include information such as the age of the domain, the identity of the registrar, a cache of whois information, last time changes were made to the domain, the associated top level domain (e.g., .com vs. .cn), etc. Domain database170can be manifested using RocksDB and also as an online Postgres table which can be queried in realtime and updated when it is determined (e.g., by message scorer132) that information is missing. Each message processed by threat detection platform100has a unique identifier assigned to it. The identifier gets persisted into messages database140and various information about the message, including metadata (such as the subject, from address, to address, and other details) and actions taken with the message (e.g., tracking whether it is moved from one folder to another) is stored in a variety of tables. One example way of implementing messages database140is as a MySQL database. Most entries in the messages database will have an entry into a key value store154(implemented, for example, as a Postgres database). Various datasets can be extracted from messages database140(e.g., as a view on top of messages database140and/or as a batch extraction that can be joined with other datasets). One example is master attack dataset158. Another example is a set of messages which received erroneous verdicts from threat detection platform100(e.g., false negatives or false positives reported by end users, enterprises, etc.). Another example is a rescoring dataset, which can be used to determine (e.g., by rescoring engine160) how scores for particular messages change based on, for example, updates made to given models and manifested in master attack dataset158. Message scorer132(which can also be implemented as a set of Python services) makes use of a variety of ML models (134) created by ML model training/evaluating module136. Some of the messages with entries in messages database140are hand labeled by humans (or have had their automatically generated labels confirmed by humans). Threat detection platform100includes a golden label dataset138which is a batch dataset of messages which also have all of their corresponding features (e.g., message content) extracted and stored. This is in contrast to other messages represented in messages database140whose complete set of features are not extracted/stored (e.g., for efficiency and/or privacy reasons which would disfavor downloading and persisting all content of all messages—particularly for benign message content). The golden label dataset can be used as true positive samples for model training purposes (e.g., to create new models, update models, and evaluate the efficacy of models at detecting various types of messages). One way of generating the golden label dataset is by joining information in messages database140with label dataset142and logs144. 
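As a concrete illustration of the join described above, the following is a minimal sketch, assuming pandas and illustrative column names (message_id, label, verdict), of how a golden label dataset could be assembled from a messages extract, a label dataset, and scorer logs. It is a sketch of the general approach, not the platform's actual batch pipeline.

```python
# Minimal sketch of building a "golden label" dataset by joining a messages
# extract with human-confirmed labels and daily scorer logs. Column names
# (message_id, label, verdict) are illustrative assumptions.
import pandas as pd

def build_golden_label_dataset(messages_csv, labels_csv, logs_csv):
    messages = pd.read_csv(messages_csv)   # one row per message (metadata plus extracted features)
    labels = pd.read_csv(labels_csv)       # human-confirmed labels keyed by message_id
    logs = pd.read_csv(logs_csv)           # daily scorer logs keyed by message_id

    # Keep only messages that have a human-confirmed label.
    golden = messages.merge(labels, on="message_id", how="inner")

    # Attach the verdicts that were logged when each message was scored.
    golden = golden.merge(logs[["message_id", "verdict"]], on="message_id", how="left")

    # The labeled rows can now serve as true positive samples for model training.
    return golden

if __name__ == "__main__":
    dataset = build_golden_label_dataset("messages.csv", "labels.csv", "scorer_logs.csv")
    print(dataset.head())
```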
In various embodiments, when message scorer132determines a verdict for a message, the verdict is written to an S3 firehose, which is manifested as a set of S3 logs. The S3 logs are written out on an hourly cadence and rolled up into daily message scorer logs. Logs144represent these logs, optimized for querying in a variety of ways (e.g., including using Apache Parquet, Amazon Athena, etc.). One way of providing ML models134to message scorer132is by persisting them to S3 files which are loaded via an infrastructure layer process which allows files to be loaded into memory. The models are serialized using Apache Thrift. Each model has its own scored attribute146that indicates, for a given message, a score determined by a given model. Decisioning layer148can consider both individual scored attributes as well as an ensemble of scored attributes when evaluating a message (with the ensemble model score being another example of a scored attribute). As an example, one scored attribute is “suspicious link” (does a message have a suspicious link or not) and another scored attribute is “spam” (is a message spam or not). Decisioning layer148can form a verdict (e.g., “good,” “suspicious,” or “bad”) based on a combination of those two scored attributes (along with others, as applicable). Or, if a “business email compromise” model reports a sufficiently high score, the decisioning layer can label the message purely based on that single model's score. As another example, one ML model can provide a verdict of whether a given message is “business critical” or not (which might be omitted from the ensemble). Further, scored attributes can also be provided by applying various signatures150and/or heuristics/rules152to messages. An example of a rule is: “a message having an unknown sender” and also “includes a link to a file hosting server” (e.g., Google Drive) should be flagged. One way to implement such rules is using a domain specific language and storing them in a .JSON file in S3, which can then be loaded by message scorer132. For efficiency, if any of the scored attributes (e.g., whether obtained from ML models134, signatures150, or heuristics/rules152) indicate that a particular message is bad (e.g., above a particular threshold), decisioning layer148can terminate additional evaluation of the message. For example, if a message is flagged by a signature, evaluation of the message by one or more models134can be cancelled (or not initiated, etc.). One way to implement decisioning layer148is in Python. In some embodiments, threat detection platform100maintains separate signature stores for different types of entities. As an example, a first signature store for attachments can be maintained, along with a second signature store for links. In some embodiments, the various message attributes (e.g., primary attributes, entity attributes, secondary attributes, and/or scored attributes) are represented as Thrift objects. This allows for them to be stored as a big blob in key value store154, and is also useful in debugging (e.g., if a message is determined to have had an incorrect verdict, an examination can be made as to how each of the various models performed). Realtime signatures database156is based on a realtime pipeline that works off of Kafka and feeds into a Redis store. During an attack, initially only some messages may be identified as being harmful (e.g., those with particular language or particular attachments). 
However, once a threshold number of messages have been identified as bad (in realtime), e.g., a particular sender is actively perpetrating an attack, message scorer132can use information from realtime signatures database156to automatically block any messages from that sender (even if individual messages may not otherwise be flagged, e.g., by models134). As an example, if the same sender using the same IP address has had a threshold number of messages flagged as bad within a particular time frame, all subsequent messages from that sender and address can be automatically blocked by the inclusion of particular attributes (e.g., sender identity and IP address) inserted as a signature into realtime signatures database156. Once message scorer132arrives at a particular verdict for a message (e.g., “good,” “bad,” or “suspicious”), remediation service162can take an appropriate action (e.g., based on a configuration provided by an enterprise). For example, ACME could configure a preference that threat detection platform100move messages determined to be “bad” to a quarantine folder or a hidden folder, but move messages determined to be “suspicious” to a Junk folder. In addition to forming verdicts for entire messages, message scorer132can also determine that certain parts of a message are problematic, and instruct remediation service162to take more granular actions as well. As one example, message scorer132may determine that a link included in a message is suspicious, but the message itself is not otherwise problematic. Message scorer132can add an entry for the message (enumerating any suspicious URLs) in URL database188. When message scorer132instructs remediation service162that the message needs remediation, one task remediation service162will perform is checking URL database188for the message ID (e.g., based on a flag of “check URL database188” included in the instruction from message scorer132to remediation service162), and then rewriting any noted URLs (via URL rewriter164) but otherwise leave the message alone. As another example, message scorer132could have a low confidence in its verdict and add the message to a human review queue (e.g., managed by platform166) as a false positive mitigation. In some embodiments, decisions regarding particular messages are stored in decisions table168(stored, e.g., in messages database140). In some embodiments, as applicable, only decisions for messages ultimately deemed to be bad, suspicious, or for sampling purposes are stored in messages database140(in decisions table168). In some embodiments, remediation service162uses asynchronous workers (e.g., based on Celery) to take appropriate actions as needed based on the contents of decisions table168. Another component of threat detection platform100is vendor database172. This database is built from known lists of common vendors (e.g., payroll services, business supply providers, shipping companies, etc.) and also by evaluating emails (e.g., to determine which enterprises make use of which vendors). Behavioral information about the vendors (e.g., whether they have been compromised, the frequency with which they send messages, etc.) is stored in behavior database190. Threat detection platform100also includes a search index174which can be used as a real time lookup index on all messages. In some embodiments, search index174is built on top of Elasticsearch. As messages are scored (e.g., by message scorer132), they are indexed (e.g., via Kafka) into search index174. 
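The realtime blocking behavior described above can be sketched as follows, assuming a Redis-backed store for realtime signatures; the key format, threshold, and time window are illustrative assumptions rather than the platform's actual values.

```python
# Minimal sketch of realtime signature blocking: once a threshold number of
# messages from the same (sender, IP) pair are flagged bad within a time
# window, subsequent messages matching that signature are blocked.
# The key format, threshold, and window are illustrative assumptions.
import redis

THRESHOLD = 5            # bad messages required before auto-blocking
WINDOW_SECONDS = 3600    # time window for the count

r = redis.Redis()

def record_bad_message(sender: str, ip: str) -> None:
    key = f"badcount:{sender}:{ip}"
    count = r.incr(key)
    if count == 1:
        r.expire(key, WINDOW_SECONDS)
    if count >= THRESHOLD:
        # Insert a realtime signature so later messages can be blocked outright.
        r.set(f"signature:{sender}:{ip}", "block", ex=WINDOW_SECONDS)

def is_blocked(sender: str, ip: str) -> bool:
    return r.exists(f"signature:{sender}:{ip}") == 1
```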
The search index can be used for a variety of purposes, one of which is determining engagements. For example, if a message is determined to be malicious, a determination can be made as to whether any users responded to the message (e.g., replied to the message) so that remedial actions can be taken. As another example, if a report of a false negative is received (e.g., by a user forwarding a suspicious message to a reporting email such as [email protected] or [email protected]), the search index can be used to identify the message's unique identifier within threat detection platform100. Search index174is also used by portal176to deliver reporting and other information in a web interface (e.g., to users such as Alice, administrators, etc.). In some embodiments, portal176also makes available administrative tools to enterprises (as a frontend to SOC tools platform166), e.g., allowing them to search for a particular message or set of messages and remove them from user mailboxes. In addition to accessing reports and other information via portal176, enterprises can also access information on threat detection platform100via an API service186(e.g., after being issued a token). An example way of implementing API service186is using a Django-based API service, and an example way of implementing portal176is using React and JavaScript. Also included in threat detection platform100is a URL redirect service184. One way of implementing URL redirect service184is as a Django application. As mentioned above, one task that can be performed by remediation service162is (as applicable) the rewriting of suspicious URLs (via URL rewriter164). Both URL rewriter164and URL redirect service184can make use of a URL wrapper library (e.g., authored in python) to wrap and unwrap (rewrite and un-rewrite) URLs, sign/verify that wrapped (rewritten) URLs were generated by threat detection platform100, and support user identifier query parameters for click tracking purposes. When a user opens a message that includes such a rewritten URL and clicks on the rewritten link, they will be directed to threat detection platform100(or another appropriate location that notifies threat detection platform100that the rewritten link has been clicked on). URL redirect service184provides an appropriate landing page for the user which can alert the user that the URL has been rewritten, provide reasons why the URL was rewritten (e.g., alerting the user to the reason(s) the URL was deemed suspicious), and ask the user to confirm whether they wish to proceed (to the original destination). Tracking information can be obtained (e.g., noting whether the user landed on the landing page, noting whether the user opted to click through to the original destination, etc.). II. Techniques for Deriving Topics for Messages Threat detection platform100can characterize digital communications along several dimensions. These dimensions are also referred to herein as “facets.” Facets are useful in several respects. As a first example, the facets can be used by an individual to resolve the types of attacks employed against an enterprise, as well as to create datasets that are useful for training, introspection, etc. The individual may be a member of the IT department of the enterprise, or the individual may be employed by a security service responsible for monitoring the security of the enterprise. As a second example, facets can be used as a way to divide data internally to allow teams to work on specific subsections of email attacks. 
These teams can then improve detection of the email attacks by training models on subset data and improve scoring module108. As a third example, the facets can be provided as input to security operations center (SOC) tools that may be used to filter data, generate reports, etc. An incoming email may be associated with one or more of the following example facets:
Attack Type: This facet indicates whether the incoming email is indicative of business email compromise (BEC), phishing, spoofing, spam, etc. It is derived based on combinations of the following five facets.
Attack Strategy: This facet indicates whether the incoming email qualifies as name impersonation, internal account compromise, external account compromise, a spoofed message, a message originating from an unknown sender, etc.
Impersonated Party: This facet indicates who, if anyone, the incoming email intended to impersonate. Examples include very important persons (VIPs) such as c-suite executives, assistants, employees, contractors, partners, vendors, internal automated systems, external automated systems, or no one in particular.
Attacked Party: This facet indicates who was the target of the attack carried out by the incoming email. Examples include VIPs, assistants, employees, and external recipients such as vendors, contractors, and the like. In some embodiments, this facet may further identify the group or department under attack (e.g., the accounting department, human resources department, etc.).
Attack Goal: This facet indicates the goal of the attack carried out by the incoming email. Examples include invoice fraud, payment fraud, credential theft, ransom, malware, gift card fraud, and the like.
Attack Vector: This facet indicates how the attack is actually carried out, for example, by specifying whether the risk is posed by text, links, or attachments included in the incoming email.
The above example facets can be used as the "building blocks" for describing the nature of communication-based attacks, for example, to enterprises. Together, these facets can be used to characterize an attack along predetermined dimensions. For example, incoming emails can be characterized using one, some, or all of the above facets. A layer of configuration can be used over facets to define, establish, or otherwise determine the nature of an attack. For example, if threat detection platform100determines that, for an incoming email, (i) the attack goal is invoice fraud and (ii) the impersonated party is a known partner, then the threat detection platform can define the incoming email as an instance of "external invoice fraud." Consequently, these facets can flow into other functionality provided by threat detection platform100such as: (i) internal metrics indicating how the threat detection platform is managing different attack types, (ii) reporting to enterprises, and (iii) filtering for different attack types. The above facets can be augmented to more completely/accurately represent the nature of a malicious communication. In particular, information regarding the topics mentioned in such communications can be used. Assume, for example, that several incoming emails related to different merger and acquisition scenarios are determined to be representative of phishing attacks. While each of the incoming emails has the same attack goal—that is, scamming the recipients—each incoming email is rather specific in its content.
In such a situation, it would be useful to provide information about the actual content of the incoming emails to those individuals responsible for managing the threat posed by those incoming emails. Furthermore, some scenarios call for a more fluid approach to characterizing threats that allows threat detection platform100to more quickly surface new attack types. Historically, it has been difficult to measure, characterize, and report new attack types until sufficient training data regarding those new attack types has been provided to the appropriate models. Note that characterizing threats along a greater number of dimensions also lessens the likelihood of different communications being characterized as similar or identical. As an example, an email inquiring about invoices and an email requesting a quote may both be classified as instances of payment fraud if those emails are characterized along a limited number of dimensions. While those emails may have the same attack goal, the content of those messages is different (and that may be useful information in determining how to discover or remediate future instances of similar emails). An example of two messages sharing the same topic but two different attack goals is a shared topic of “invoice,” with the first message having an attack goal of credential phishing (“click here to sign into your account and make a payment or update your payment information”) and the second message having an attack goal of payment fraud (“your account is overdue, please send a check to pay your outstanding balance”). An example of two messages sharing the same attack goal but two different topics is a shared attack goal of “credential phishing,” with the first message having a topic of “debit account detail updates” (“set up your new direct debit by clicking here”) and the second message having a topic of “COVID-19” (“due to COVID-19 we have a new policy, click here to access our client portal and find out more”). Described herein are techniques for characterizing digital communications along a type of dimension referred to as “topics.” Upon receiving a digital communication, threat detection platform100can apply one or more models in order to establish one or more topics of the digital communication. The term “topic” refers to a subject that is mentioned (either directly or indirectly) in content of the digital communication. As with the facets mentioned above, a given digital communication can be associated with multiple topics. Various combinations of topics, if present in a given message, can also be assigned/associated with more human meaningful descriptions (e.g., that can then be used to describe the message content instead of/in addition to each of the individual topics). Topics can be derived by threat detection platform100regardless of whether the digital communication is deemed to be representative of an attack or not. In the event that the threat detection platform determines a digital communication is representative of an attack, the threat detection platform can generate and then surface a report that specifies an attack goal and topic(s) of the digital communication. Together, these pieces of information allow greater insight to be gained by the individual responsible for reviewing the report into the actual threat posed by the digital communication. FIG.2Aillustrates an example of how topics of a digital communication can be discovered. Topics are designed to be fluid, and thus can be as expansive or specific as desired. 
Some enterprises may wish for more detailed information regarding the subjects discussed in malicious emails (e.g., “mergers and acquisitions” vs. “IPOs” vs. “bankruptcy”), in which case more topics may be available for classifying emails. Other enterprises may wish for less detailed information regarding the subjects discussed in malicious emails (e.g., “financial”), in which case fewer topics may be available for classifying emails. Further, enterprises can customize topics of particular relevance/importance to them (e.g., an engineering firm defining a set of topics around research and development vs. a shipping company defining a set of topics around transit, supply chains, etc.), instead of/in addition to topics of broad applicability (e.g., invoices). As applicable, enterprises can provide examples of labeled messages to threat detection platform100so that custom models/rules for identifying topics in accordance with those labels can be built/deployed. If needed, a larger data set can be constructed, e.g., using techniques such as nearest neighbor, text augmentation, etc. In various embodiments, topics are hierarchical/multi-class, e.g., with several different subtopics/related topics grouped together (e.g., using multinomial prediction). In an example implementation, a topic is: (i) a potential subject of text included in an email, (ii) inferable by a human and machine, and (iii) independent of malicious intent. Accordingly, topics can be defined for all emails examined by the threat detection platform, irrespective of whether those emails are representative of attacks. Note that, in some embodiments, topics are defined with sufficient granularity that a given email is labeled as pertaining to multiple topics. This can be done to increase the likelihood that different emails with similar attack goals, such as those mentioned above, are distinguishable from one another. To create a new topic, the topic is added to configurator202by an administrator (e.g., of threat detection platform100). As shown inFIG.2A, phrase types (204) and label types (206) may initially be provided to configurator202as input. The phrase types can be used by configurator202to generate phrase definitions (208), and the label types and phrase definitions can be used by configurator202to generate topic definitions (210), mapping topics to different phrase definitions and locations. Topics defined within configurator202can then be persisted through to other components and/or layers of threat detection platform100. As an example, topic definitions210can be provided to a topic inference module212of a facet inference extractor214. As shown inFIG.2A, in some embodiments, facet inference extractor214is executed by a real-time scoring module (e.g., an embodiment of scoring module108) that is configured to quantify the risk posed by incoming emails as discussed above. Topic inference module212is configured to infer, based on outputs produced by scoring module108, one or more appropriate topics for the email. In some embodiments, a given email will have two sets of topics associated with it by threat detection platform100. The first set of topics corresponds to topics inferred by threat detection platform100. The second set of topics corresponds to topics explicitly defined or curated by a user of the threat detection platform (e.g., an analyst or administrator of threat detection platform100, or a representative of an enterprise). 
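To make the configurator output described above more concrete, the following sketch shows one way a topic definition (a label together with phrase definitions and the locations where they are checked) might be represented and matched against message text. The data layout and function names are assumptions for illustration, not the platform's actual schema.

```python
# Illustrative sketch of topic definitions (label plus phrase definitions) and
# a simple phrase-based inference pass. Structure and names are assumptions.
from dataclasses import dataclass, field

@dataclass
class TopicDefinition:
    label: str                                     # e.g., "merger_and_acquisition"
    display_name: str                              # how the topic is shown to a user
    phrases: list = field(default_factory=list)    # phrase definitions used as filters
    locations: tuple = ("subject", "body")         # where the phrases are checked

TOPIC_DEFINITIONS = [
    TopicDefinition(
        label="merger_and_acquisition",
        display_name="Merger & Acquisition",
        phrases=["merger and acquisition", "tender offer", "purchase of assets"],
    ),
]

def infer_topics(message: dict) -> list:
    """Return labels of all topic definitions whose phrases appear in the message."""
    matched = []
    for topic in TOPIC_DEFINITIONS:
        text = " ".join(message.get(loc, "") for loc in topic.locations).lower()
        if any(phrase in text for phrase in topic.phrases):
            matched.append(topic.label)
    return matched

# Example: a message can be associated with multiple topics.
print(infer_topics({"subject": "Tender offer enclosed", "body": "Please review the purchase of assets."}))
```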
Applicable topics are associated with a given email, e.g., in an appropriate storage location. For example, topic inference module212can append labels that are representative of the topics to the email itself, e.g., by using an API provided by an email provider to edit the message (e.g., stored within email store216) to include the topics (e.g., as one or more X-headers or other metadata). As another example, topic inference module212can populate a data structure with information regarding the labels. This data structure can be stored in a database in which email-topic relationships are formalized (e.g., database218). In an example of how threat detection platform100can be used, suppose a particular type of attack makes use of a malicious email that discusses a merger and acquisition scenario. Configurator202can be used to create an appropriate topic so that similar emails can be identified in the future. In particular, configurator202creates an appropriate label (e.g., "merger & acquisition" or "M&A") for the topic and then associates with that label, a set of phrases (e.g., "merger and acquisition," "merger/acquisition," "tender offer," "purchase of assets," etc.) that can be used (e.g., as filters) to identify messages to be associated with the label. The topic definition (comprising a label and corresponding phrases) can then be provided to other portions of threat detection platform100, such as a data object usable by topic inference module212(and, e.g., stored in topic framework database220). New topics can be automatically learned by/added to threat detection platform100based on an analysis of incoming emails and/or outgoing emails. Additionally or alternatively, individuals (e.g., an administrator of threat detection platform100) can be permitted to manually create topics (e.g., by accessing an administrative console provided by threat detection platform100). Any human-labeled topics can be altered or deleted by threat detection platform100as applicable, based on, for example, whether the manually added topics are actually present in emails (i.e., do any messages match the topic), whether those manually added topics align or overlap with existing topics, etc. The attack goal facet attempts to characterize an end goal of a given email. As such, the attack goal facet has malicious intent associated with it. Conversely, the topic facet refers to the subjects that are raised in, or related to, the content of an email or other communication (without regard to maliciousness). Table I includes examples of emails with corresponding topics and attack goals.
TABLE I
Examples of emails and corresponding topics and attack goals.
Email Description | Possible Topic | Possible Attack Goal
Credential theft message in the context of file sharing a link to an invoice | File Sharing, Invoice | Credential Theft
Fraud message in the context of external invoice | Bank Account Information, Call to Action/Engagement, Invoice Payment | Invoice Fraud
Merger and Acquisition Scam | Mergers and Acquisition | Scam
Cryptocurrency Engage Message | Call to Action/Engagement, Cryptocurrency | Engage
Reconnaissance Message | None | Spam
Payment Fraud Message that uses COVID-19 as Pretense | COVID-19, Request for Quote (RFQ) | Payment Fraud
As can be seen in Table I, it is possible for topics and attack goals to overlap in some instances. For each email, threat detection platform100may introduce a many-to-many relationship between the email and the topic labels in which a topic can be associated with more than one email and an email can be associated with more than one topic.
Such an approach allows the threat detection platform to support several possible queries, including:
The ability to filter emails by topic or combination of topics;
The ability to count the number of emails associated with a given topic; and
The ability to modify the topics associated with an email, as well as create labels for those topics.
Tables II-IV illustrate various examples of schemas that can be used by embodiments of threat detection platform100to associate emails with topics (e.g., in database218).
TABLE II
Example schema for topics.
Column Name | Data Type | Column Metadata
Topic_ID | Integer | Primary Key
Topic_Name | str/varchar(255) | Indexed, unique, fixed
Date_Created | Date, Time |
Topic_Display_Name | str/varchar(255) | How topic is shown to user
TABLE III
Example schema for storing human-confirmed topics.
Column Name | Data Type | Column Metadata
Topic_ID | Integer | Primary Key
Message_ID | Integer | Foreign Key
Human_Labeled | Boolean |
Date_Created | Date, Time |
TABLE IV
Example schema for storing inferences for measurement.
Column Name | Data Type | Column Metadata
Topic_ID | Integer | Primary Key
Message_ID | Integer | Foreign Key
Date_Created | Date, Time |
In some embodiments, threat detection platform100uses a domain specific language (DSL) to match against messages and their attributes. The DSL allows for the dynamic addition of different rules to assign messages topics, based on static features of the message (e.g., does it contain particular pre-defined phrases) or more dynamic features (e.g., using one or more models to score a message and derive topic information from the score(s)). One benefit of the lightweight nature of topic specification is that time-sensitive topics can be readily added to threat detection platform100. As an example, attackers often make use of current/world events to lend legitimacy to their attacks (e.g., an attacker referencing a recent fire or other natural disaster as a reason that an email recipient should take an action, such as logging into a payment system). Such topics can efficiently be added to threat detection platform100to help identify attacks. Below are examples of topics and corresponding DSL to identify when a given message matches a topic:
Example Topic: Cryptocurrency
"topic_cryptocurrency": [{"sec:HAS_BITCOIN_ADDRESS": true}, {"sec:HAS_BTC_RANSOMWARE_LINGO": true}, {"feat_attr:CRYPTO_TOPIC_MODEL/gt": 0.7}]
The above DSL states that a message can be classified as having a "cryptocurrency" topic if any of the following is true: (1) it includes a bitcoin address, (2) it uses commonly found bitcoin ransomware expressions, or (3) a trained cryptocurrency topic model scores the content higher than 0.7.
Example Topic: Document Sharing
"topic_document_sharing": [{"sec:SUBJECT_HAS_DOCUMENT_SHARING_VOCAB": true, "feat_attr:DOCUMENT_SHARE_TOPIC_MODEL/gt": 0.9}, {"sec:BODY_HAS_DOCUMENT_SHARING_VOCAB": true, "feat_attr:DOCUMENT_SHARE_MODEL/gt": 0.8}]
The above DSL states that a message can be classified as having a "document sharing" topic if either of the following is true: (1) it has document sharing vocabulary in its subject line and the topic model gives it a score of higher than 0.9, or (2) it has a document sharing vocabulary in its body and the topic model gives it a score of higher than 0.8.
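Each DSL rule above can be read as a disjunction of clauses, where every condition within a clause must hold and the topic applies if any clause matches, with a "/gt" suffix indicating that a scored attribute must exceed the given value. The following is a minimal sketch of such an evaluator; the attribute dictionary and helper names are illustrative assumptions, not the platform's actual DSL parser.

```python
# Sketch of evaluating the topic DSL shown above: a topic rule is a list of
# clauses; each clause is a dict of conditions that must all hold; the topic
# matches if any clause matches. "/gt" means the attribute must exceed a value.
# The evaluator itself is illustrative, not the platform's actual parser.
TOPIC_RULES = {
    "topic_cryptocurrency": [
        {"sec:HAS_BITCOIN_ADDRESS": True},
        {"sec:HAS_BTC_RANSOMWARE_LINGO": True},
        {"feat_attr:CRYPTO_TOPIC_MODEL/gt": 0.7},
    ],
}

def clause_matches(clause: dict, attributes: dict) -> bool:
    for key, expected in clause.items():
        if key.endswith("/gt"):
            actual = attributes.get(key[: -len("/gt")], 0.0)
            if not actual > expected:
                return False
        else:
            if attributes.get(key) != expected:
                return False
    return True

def topics_for_message(attributes: dict, rules: dict = TOPIC_RULES) -> list:
    return [topic for topic, clauses in rules.items()
            if any(clause_matches(clause, attributes) for clause in clauses)]

# Example: a message whose cryptocurrency topic model score is 0.85.
print(topics_for_message({"feat_attr:CRYPTO_TOPIC_MODEL": 0.85}))
```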
As shown, message252(e.g., an email message retrieved by threat detection platform100from an email provider) has various features254(e.g., a body text256, a subject text258, payload features260, and behavioral features262) that are extracted by threat detection platform100and provided to scoring module108. Scoring module108includes a variety of attack models264(e.g., assessing whether a particular message is likely to be a phishing attack or a payment fraud attack), and topic models266(e.g., assessing whether a particular message discusses a corresponding topic). The extracted features are consumed by both the attack models and the topic models. In various embodiments, scoring module108(and subcomponents thereof) is implemented using a set of python scripts. As previously discussed, topic inference module212infers one or more topics to associate with message252and stores the message/topic(s) mapping in database218. As applicable, if the topics assigned by topic inference module212are determined to be incorrect (e.g., as reported by an end user/message recipient), they can be changed (e.g., by an analyst) and any such mislabeled messages can be used to retrain topic models266. In various embodiments, attack models264include one or more models related to payloads. A first example of such a model is an attachment text model which can be used to determine whether text included in a payload is potentially harmful (e.g., an attachment includes language referencing a ransomware attack). A second example of such a model is a web page model which can be used to determine whether a link included in a message is directly (or indirectly) potentially harmful. A third example of such a model is an attachment model which can be used to determine whether the attachment includes a malicious mechanism (e.g., an Excel document with problematic macros as contrasted with a plain text document or simple Microsoft Word document that does not contain any links). IV. URL Rewriting A. Introduction Described herein are various techniques related to the rewriting of URLs in emails. In an example embodiment, threat detection platform100rewrites URLs in messages classified as suspicious (or as otherwise configured, e.g., links on messages moved to the Junk folder), redirecting end user clicks through threat detection platform100. As an example, URLs can be rewritten to include a “urlprotect.com” prefix in which the original URL is still identifiable (e.g., by hovering over the URL or optionally inserted to the side of the URL). When an end user clicks on a message with a rewritten URL, the user is presented with an interstitial web page that provides a warning identifying the suspiciousness/potential maliciousness of the destination webpage and an option to proceed to the destination. This creates a deterrent, and also provides an extra step that aids security and awareness around cybersecurity, since additional content can be added over time. In a Dashboard (e.g., served by portal176), enterprises are able to see which users clicked on links in suspicious messages. This can be especially useful if one of the suspicious messages is indeed malicious, and thus a false negative, since enterprises can use the information to follow up directly with their end users as needed (e.g., for password reset or further training). Various aspects and benefits of approaches described herein in accordance with various embodiments are provided below. They include (but are not limited to): 1. 
Not all URLs present in messages are rewritten by threat detection platform100. Known good URLs need not be rewritten, and known bad URLs will be completely blocked. Only those URLs deemed potentially problematic (e.g., suspicious) are rewritten (or as applicable, removed). Further, not all links within a single message need be rewritten. Instead, granular rewriting of only suspicious URLs allows the user to interact with aspects of the message determined to be benign, including benign links, while protecting the user should they choose to click on suspicious links. This maximizes user experience by preserving aspects of the message that are not suspicious (minimizing handling of their emails unnecessarily). 2. URLs are rewritten in a way that still allows for a degree of human readability. For example, if an original link directs to a particular domain (e.g., google.com or example.com), the rewritten URL includes an indication that allows for an inference of the original (un-rewritten) domain. 3. Which users clicked a given rewritten URL, and also, which users chose to click through to the original link (e.g., after being shown an interstitial warning) are tracked and made available to enterprises (e.g., via a portal). Additional reports can also be made available, e.g., aggregating information to indicate the top n users of an enterprise who receive messages for which URLs are rewritten, the top n users of the enterprise that click through to rewritten URLs, etc. 4. Assessment of the destination page can be performed “just-in-time” (at “time-of-click”) to help catch attacks in which a malicious individual sends a link to an initially benign domain and then subsequently changes the page content. 5. User feedback (e.g., confirming whether the suspicious URL is benign or malicious) can be solicited via the interstitial page at time of click. B. Example Implementation Suppose ACME Corporation would like to configure threat detection platform100to automatically rewrite suspicious URLs in its users' messages (e.g., messages received by employees such as Alice). An administrator of ACME can access an interface such as is shown inFIG.3(e.g., served by portal176) and turn on the URL-rewrite feature by selecting the option shown in region302. If desired, the administrator can configure the text shown to users (e.g., the interstitial text) by interacting with region304. The administrator can also customize safe domains (e.g., by adding additional domains (e.g., acmecorporation.com) or removing default safe domains such as netflix.com, youtube.com, fedex.com, etc.). Another configuration an administrator can make is whether or not to rewrite URLs in signed messages (e.g., messages signed using S/MIME, DKIM, PGP). In some embodiments, if a URL has already been modified by a third party (e.g., a given URL was previously modified by SafeLinks), the third party modified URL will be rewritten by threat detection platform100. In other embodiments, threat detection platform100determines the original URL (e.g., by visiting the SafeLinks URL) and rewrites the original URL. Once the feature is turned on, end users (such as Alice) will notice that messages classified as suspicious (e.g., appearing in the Junk folder) may have at least some of their URLs rewritten (e.g., will notice that at least some URLs in such messages will start with the domain, urlprotect.com). 
In some embodiments, additional indications are also provided (e.g., a banner is inserted into the message, the subject line is modified, etc., to indicate that a URL in the message has been altered). Unlike in other approaches, URLs rewritten using techniques in accordance with embodiments described herein are at least partially human readable, and preserve at least a portion of the original URL so that an end user (e.g., Alice) can have some sense of what the original URL was prior to rewriting. An example format for rewriting URLs is as follows:
https://urlprotect.com/v1/w?u=<original_url>&m=<encoded_message_ID>&r=<encoded_recipient>&s=<encoded_signature>
In the example format, the domain of the rewritten URL is "urlprotect.com." The query parameters are:
u: the original URL, percent encoded. For example, "http://google.com" would become "http%3A%2F%2Fgoogle.com"
s: an encoded signature. This is built from a hash of the query parameters message_ID, recipient_ID, URL, and version
r: encoded recipient
m: encoded message identifier
In an example implementation, suppose that Alice receives a message that includes a suspicious link (i.e., message scorer132forms such a verdict (using decisioning layer148and in cooperation with link crawling service180)). Message scorer132inserts the suspicious URL(s) into URL database188and notifies remediation service162that the message needs remediation. Remediation service162examines URL database188and (using URL rewriter164) modifies the message in Alice's mailbox (e.g., using an API provided by Office365to modify the message body) to replace the original suspicious URL(s) with rewritten ones. Any non-suspicious URLs in the message will remain unmodified. In an example implementation, URL rewriter164takes as input the parameters described above and uses that information to make a unique rewritten URL (e.g., using the format described above). If another user (e.g., Charlie) receives the same message with the same suspicious URL, if a determination is made to modify his message (e.g., to also replace the URL with a rewritten URL), the resulting rewritten URL for Charlie will be different from Alice's (because Charlie's message will have a different messageID and Charlie will have a different userID, etc.). Uniquely modifying the URLs in this manner allows threat detection platform100to track which user clicks on the suspicious URL and also allows threat detection platform100to un-rewrite the URL if the user wishes to further interact with the URL, as applicable. Further, in various embodiments, user context is taken into account when message scorer132determines whether or not a URL needs to be rewritten (e.g., during remediation). As an example, if the recipient routinely receives messages from the sender of the message, that may weigh against marking the URL as suspicious as compared to the same message (with the same URL) being received by a recipient who has never communicated with the sender. Accordingly, threat detection platform100might determine that a URL should be rewritten for one user of an enterprise (e.g., Alice) and not another at the same enterprise (e.g., Charlie). Rewritten URLs will redirect through threat detection platform100(e.g., via URL redirect service184), ensuring that the clicks are tracked and made visible to ACME's administrators. When an end user clicks on a rewritten link included in a message, an interstitial web page will be displayed. An example of an interstitial page is shown inFIG.4.
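The following is a minimal sketch of how a rewritten URL in the format above could be constructed. The signature scheme (HMAC-SHA256 over the message identifier, recipient identifier, URL, and version), the signing key handling, and the base64 encoding of the identifiers are assumptions for illustration; the text above only states that the signature is built from a hash of those parameters.

```python
# Minimal sketch of constructing a rewritten ("wrapped") URL in the format
# described above. The signature scheme (HMAC-SHA256), key handling, and
# base64 encoding choices are illustrative assumptions.
import base64
import hashlib
import hmac
from urllib.parse import urlencode

SECRET_KEY = b"replace-with-a-real-secret"   # assumption: a per-deployment signing key
VERSION = "v1"

def sign(message_id: str, recipient_id: str, url: str) -> str:
    payload = "|".join([VERSION, message_id, recipient_id, url]).encode()
    digest = hmac.new(SECRET_KEY, payload, hashlib.sha256).digest()
    return base64.urlsafe_b64encode(digest).decode().rstrip("=")

def wrap_url(original_url: str, message_id: str, recipient_id: str) -> str:
    params = {
        "u": original_url,                # urlencode() percent-encodes this value
        "m": base64.urlsafe_b64encode(message_id.encode()).decode(),
        "r": base64.urlsafe_b64encode(recipient_id.encode()).decode(),
        "s": sign(message_id, recipient_id, original_url),
    }
    return f"https://urlprotect.com/{VERSION}/w?" + urlencode(params)

# Example: the same suspicious URL wraps differently for Alice and Charlie,
# because the message and recipient identifiers differ.
print(wrap_url("http://evilcompany.com/login", "msg-123", "alice@acme"))
print(wrap_url("http://evilcompany.com/login", "msg-456", "charlie@acme"))
```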
In this example, suppose that Alice has received a suspicious message. That is, the message was not deemed either safe, or malicious, by threat detection platform100, but received a verdict (e.g., from message scorer132) in between. The message included at least one link that was rewritten by threat detection platform100(e.g., by URL rewriter164). As mentioned above, only suspicious links are rewritten by URL rewriter164. If, for example, the suspicious message includes a link to www.amazon.com or www.wikipedia.org, those links will remain untouched by threat detection platform100. If the suspicious message includes a suspicious link, that link will be rewritten. In some embodiments, when a user clicks on a rewritten URL (e.g., Alice clicks on a rewritten URL in a message in her inbox), in connection with being shown the interstitial depicted inFIG.4, threat detection platform100repeats its analysis of the original link (e.g., via link crawling service180) to see if its status should be updated (e.g., from suspicious to benign or to malicious). Also referred to herein as just-in-time (and time-of-click) checking, this approach helps save resources (i.e., no need to repeatedly check a suspicious site if no user intends to click on the link), while also helping protect users against attacks where attackers replace initially benign content with subsequently malicious content. In this example, Alice has opened the message and has clicked on one of the rewritten links. She is taken to the interstitial shown inFIG.4(served by threat detection platform100). In this example, the interstitial includes explanations as to why the link that was rewritten (and that Alice clicked on) was deemed suspicious by threat detection platform100. In particular, the original link comes from an unusual location (an IP address corresponding to Nigeria) and also the link (evilcompany.com) does not match the alleged sender's domain (e.g., company.com). As another example (not shown), Alice can be warned that the rewritten link leads to a “possible phishing link.” Alice can choose to click through to the original link destination by clicking region402. Or, Alice can simply close the window (without clicking further). She can also help confirm the link as malicious by clicking region404(which can be reported to an administrator, added to a review queue of platform166, etc., as applicable). Alice's feedback can be used to help classify the URL (and thus the message) as malicious, potentially preventing it from being distributed to other users of threat detection platform100(and/or helping prevent them from accessing the link). Any/all of Alice's clicks can be recorded by threat detection platform100(e.g., added as counts to the record for the URL in URL database188or in another appropriate location) and appropriate actions taken in response as applicable. Also, in the event Alice does not click further, this information can be used, for example, to determine that no further actions need to be taken with respect to Alice—such as requiring that she change a password or undergo training (as contrasted with other users who click through). Further, if Alice does choose to click through, additional actions can be taken, such as alerting an administrator. C. Portal Examples FIG.5illustrates an example of an administrative interface. In this example, an ACME administrator is viewing (via portal176) high level information on attacks taken against ACME in the last 30 days. 
The administrator can scroll down through the interface to see various, more detailed information about attacks. As one example, the administrator can see a breakdown of the different ways by which different attacks have been perpetrated. As shown inFIG.6at602, in the last 30 days, 512 attacks that make use of links have been attempted against ACME. As shown inFIG.7, the administrator can also see a list of the employees who, in the last 30 days, have clicked through on the most rewritten URLs. In the example shown inFIG.7, user Terry Field (the CFO of ACME) has clicked 21 times on rewritten links across 21 messages (i.e., she has been presented with an interstitial such as is shown inFIG.4a total of 21 times). Of those rewritten links, she clicked through to two original URLs. User Randall Jones has clicked 232 times on rewritten links across 12 messages. Of those rewritten links, he clicked through to four original URLs. Domains corresponding to the original URLs clicked on by Terry and Randall (and others) are shown in column702. FIG.8illustrates an example of an interface an administrator can use to obtain more information about a domain. Of note, for the suspicious domain evil.com, three ACME employees have received messages that include evil.com domains. Of those three recipients, two clicked on rewritten URLs, and one clicked through to the original domain. As mentioned above, threat detection platform100includes a search index174which can be used, for example, by an administrator to perform searches (e.g., via portal176or via API service186). Examples of searches that can be performed in connection with URL rewriting are:
given a message that included a suspicious link, which users clicked through?
what are the commonly clicked suspicious URLs?
what are the commonly clicked through suspicious URLs?
D. Example Flows
FIG.9depicts an example of a URL rewriting flow in accordance with various embodiments. One example table included in URL database188is urlrewrite_request. It is used to store basic metadata about a message required to fulfill the rewrite and also information about the rewrite task itself. There is one entry per request. Example schema is as follows:
Field | Description
id | Incremental id per data entry
date_created | Timestamp when entry was created
raw_message_id |
native_user_id |
abnormal_message_id | Message_id if message was persisted in the ORM
date_rewritten | Timestamp at which rewrite completed
request_status | NOT_STARTED, IN_PROGRESS, DONE
Another example table included in URL database188is urlrewrite_urls. It is used to store information on URLs for which rewriting was performed for each rewrite request. There is one entry per URL per message. For example, if there are three URLs to rewrite within a given message, there will be three entries mapping to a single request_id. Example schema is as follows:
Field | Use
id | Incremental id per data entry
request_id | ID of urlrewrite_request
date_created | Timestamp when entry was created
unwrapped_url | Unique unwrapped (original) URL to rewrite
rewrite_status | NOT_STARTED, IN_PROGRESS, SUCCESS, FAILURE
rewrite_failure_reason | Reason for failure (e.g., URL not found, etc.)
One task performed by scorer132is unwrapping rewritten URLs (i.e., URLs rewritten by URL rewriter164) for scoring. Scorer132outputs URL decisions (e.g., malicious, suspicious, etc.). The URL decisions are stored in key value store154in an inMessageWithAttributes object, along with other model scores and decisions.
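To make the two schemas above concrete, the following sketch defines equivalent tables using SQLite; the column types are simplified assumptions and the actual backing store of URL database188is not specified here.

```python
# Sketch of the two tables described above, created with SQLite for
# illustration; column types are simplified and the actual backing store
# of URL database 188 is not specified in the text.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE urlrewrite_request (
    id                   INTEGER PRIMARY KEY AUTOINCREMENT,  -- incremental id per data entry
    date_created         TEXT,     -- timestamp when entry was created
    raw_message_id       TEXT,
    native_user_id       TEXT,
    abnormal_message_id  TEXT,     -- message_id if message was persisted in the ORM
    date_rewritten       TEXT,     -- timestamp at which rewrite completed
    request_status       TEXT      -- NOT_STARTED, IN_PROGRESS, DONE
);

CREATE TABLE urlrewrite_urls (
    id                      INTEGER PRIMARY KEY AUTOINCREMENT,  -- incremental id per data entry
    request_id              INTEGER REFERENCES urlrewrite_request(id),
    date_created            TEXT,
    unwrapped_url           TEXT,  -- unique unwrapped (original) URL to rewrite
    rewrite_status          TEXT,  -- NOT_STARTED, IN_PROGRESS, SUCCESS, FAILURE
    rewrite_failure_reason  TEXT   -- reason for failure (e.g., URL not found)
);
""")
conn.commit()
```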
A task is created that stores the metadata about the message and which URLs to rewrite (e.g., in URL database188). When a request to remediation service162includes a call to the URL rewrite task, a URL rewrite request entry is created in URL database188and a call is made to the URL rewrite task. The URL rewrite task is an asynchronous task. It wraps suspicious URLs using the URL wrapper library. It also updates the message body (e.g., in Office365or G-Suite) with the wrapped URL(s). Examples of such rewriting actions include:
Rewriting URLs in body anchor tags (HTML).
"Unremediation": URLs that have been wrapped once before are not unwrapped.
Rewrite already-wrapped URLs: If the URL is already wrapped by threat detection platform100or a third party, do not wrap it.
If a message can be eligible for banner injection remediation (if the enterprise has this feature enabled), reject the URL rewriting request.
Add user identifiers into wrapped URLs.
For URLs wrapped by third party services, unwrap to the original URL and then rewrite in accordance with techniques described herein.
Rewrite URLs in plain text.
Rewrite URLs in attachments.
FIG.10is a sequence diagram illustrating the logging and storing of URL clicks. In various embodiments, URL redirect service184supports two types of requests. The first is illustrated at (1a), where the redirect service processes GET requests at a wrapped (rewritten) URL with the following format:
https://<url watch domain>/v1/${k}?u=${u}&t=${t}&r=${r}&m=${m}&s=${s}
Then, (1b) after verification of the wrapped URL, the redirect service will (1c) render the interstitial page at this wrapped URL. After this, (1d) a log will be sent. The second is illustrated at (2a), where the client clicks on the "proceed" button on the interstitial page, which contains a link to a new endpoint with an additional "c" query param:
/v1/${k}/click?u=${u}&t=${t}&r=${r}&m=${m}&s=${s}&c=2
This sends a GET request with the same query parameters from the wrapped URL with an additional "c" query parameter to indicate the click type (e.g., c=2, implying proceed). The query parameter gives flexibility to track other types of clicks in future, e.g., different client application clicks. Then, similar to (1b), wrapped URL verification step (2b) is done, before (2c) the service redirects the user to the destination URL. After this, (2d) a log is sent. Clicks are logged after successful verification and responses are returned (1e-1g, and 2e-2g). In various embodiments, only verified wrapped URLs are logged:
Clicks are logged only after a successful response has been provided to the client. I.e., after the interstitial page html response is sent to the client in (1c) and after returning an HTTP redirect response to the client in (2c).
The wrapped URL is verified using the URL wrapper library.
E. Example Process
FIG.11illustrates an example of a process for rewriting a URL. In various embodiments, process1100is performed by a threat detection platform, such as threat detection platform100. The process begins at1102when an indication that a message has arrived at a user message box is received. The indication can be triggered either manually or automatically (e.g., as part of a daily batch process). Office365sending threat detection platform100a notification that Alice has received a new message is an example of such an indication being received at1102. At1104, a determination is made that the message includes a first link to a first resource.
As an example, feature extraction service126can determine that the message (e.g., sent to Alice) includes one or more links. And, at1106, the first link is analyzed to determine whether the first link is classified as a non-rewrite link. Various examples of non-rewrite links in accordance with various embodiments are as follows: links included in verdict=good messages; links included in verdict=bad messages; links to safe list domains (e.g., Netflix.com). As mentioned above, URL rewriting is targeted at suspicious URLs (i.e., those for which the good or bad confidence is low). At1108, in response to determining that the first link is not classified as a non-rewrite link, a replacement link for the first link is generated. As mentioned above, one way of accomplishing this is for message scorer132to insert any such URLs into URL database188and inform remediation service162that remediation of the message is needed. Remediation service162(via URL rewriter164) can then rewrite the URL (and edit the message to include the rewritten URL). Although the foregoing embodiments have been described in some detail for purposes of clarity of understanding, the invention is not limited to the details provided. There are many alternative ways of implementing the invention. The disclosed embodiments are illustrative and not restrictive. | 98,544 |
11943258 | DETAILED DESCRIPTION Various embodiments will be described in detail with reference to the accompanying drawings. Wherever possible, the same reference numbers will be used throughout the drawings to refer to the same or like parts. References made to particular examples and embodiments are for illustrative purposes, and are not intended to limit the scope of the claims. The term “computing device” is used herein to refer to any one or all of network elements such as servers, routers, set top boxes, head-end devices, and other similar network elements, cellular telephones, smartphones, portable computing devices, personal or mobile multi-media players, laptop computers, tablet computers, smartbooks, ultrabooks, palmtop computers, wireless electronic mail receivers, multimedia Internet-enabled cellular telephones, cordless phones, network-connected displays (such as advertisement screens, news screens, and the like), wireless local loop (WLL) station, entertainment devices (for example, a music or video device, or a satellite radio), gaming devices, wireless gaming controllers, cameras, medical devices or equipment, biometric sensors/devices, wearable devices (such as smart watches, smart clothing, smart glasses, smart wrist bands, smart jewelry (for example, smart ring, smart bracelet)), smart meters/sensors, industrial manufacturing equipment, router devices, appliances, global positioning system devices, wireless-network enabled Internet of Things (IoT) devices including large and small machinery and appliances for home or enterprise use, wireless communication elements within autonomous and semiautonomous vehicles, a vehicular component or sensor, wireless devices affixed to or incorporated into various mobile platforms, and similar electronic devices that include a memory, wireless communication components and a programmable processor, or that is configured to communicate via a wireless or wired medium. DNS queries are typically sent in unencrypted plaintext (e.g., in the clear), which enables another party to observe the content of DNS queries. DoH provides a protocol for sending DNS queries and responses using HTTPS, which enables the transport of DNS queries and responses via an encrypted communication channel. However, DoH also bypasses or renders non-functional DNS-based policies and services that rely on such DNS-based policies. For example, DoH may bypass DNS-based parental controls. DNS-based parental controls rely on a DNS query including plaintext, which allows an intermediate device to view the domain requested in the DNS query, determine that the requested domain is subject to parental control blocking, and replace the response IP address with a “blocked” response (e.g., a blocked IP address). Since DoH transports DNS queries and responses via an encrypted communication channel, DoH disables DNS services that involve the inspection and alteration of DNS requests or responses, including lists permitting or blocking certain IP addresses, DNS ad blocking, and cross-site tracking. Various embodiments enable a network computing device to perform operations for DoH that enable the application of DNS-based policies to DNS queries and/or responses and provide communication security for DNS queries and/or responses. Various embodiments enable the management of client-specific DNS-based policies in a DoH system. 
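As a rough illustration of the DNS-based parental control mechanism described above (the kind of in-path inspection that DoH's encrypted transport prevents), the following sketch shows the core substitution an intermediary could perform on a plaintext query; the blocked domain list and the "blocked" response address are illustrative assumptions.

```python
# Illustrative sketch of the DNS-based parental control check described above:
# an intermediary that can see the plaintext query name either returns the
# real answer or substitutes a "blocked" response. Domain list and blocked
# address are assumptions for illustration.
import socket

BLOCKED_DOMAINS = {"example-gambling.test", "example-adult.test"}
BLOCKED_RESPONSE_IP = "203.0.113.1"   # address of a "this site is blocked" page

def resolve_with_parental_controls(domain: str) -> str:
    if domain.lower().rstrip(".") in BLOCKED_DOMAINS:
        return BLOCKED_RESPONSE_IP
    return socket.gethostbyname(domain)   # ordinary resolution otherwise

# With DoH, the query name travels inside an encrypted HTTPS exchange, so an
# on-path device cannot perform this substitution.
print(resolve_with_parental_controls("example.com"))
```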
As used herein, the term “client-specific” means per-client computing device, per-user computing device, or per-account (which may include one or more computing devices). As used herein, the term “client-specific policy” means an information management process, procedure, or technique as applied in a technological and client-specific manner. The terms “client-specific” and “client-specific policy” refer to technological entities and processes, and not to means of organizing human activity or other non-technological interpretations of those terms. Various embodiments may enable a network computing device to be configured to apply DNS policies and provide DNS features on a client-specific basis. In some embodiments, a client computing device may receive an input (e.g., from an end user) to configure DNS preferences via a local or network-based application. In some embodiments, a network operator or network access provider may use the DNS preferences to configure and/or apply DNS policies or DNS features to one or more client computing devices (such as on an account basis, an individual basis, and/or the like). In some embodiments, a network operator or network access provider may configure a client computing device to refer DNS requests to a network computing device that enables such network operator or access provider to manage DNS requests over HTTPS (i.e., DoH), and to apply a feature set on a client-specific basis. In some embodiments, the network computing device may be configured to perform a configuration process for a client device or client account. In some embodiments, as part of the configuration process, the network computing device may receive from the client computing device a public key certificate (a public certificate) associated with a client identifier (e.g., a unique identifier that identifies a client computing device or client account). The network computing device may generate a fingerprint of the public certificate, and may generate the association between the fingerprint of the public certificate and the client-specific DoH policy. The network computing device may store in a data structure (such as a database or another suitable data structure) the association between the fingerprint of the public certificate and the client-specific DoH policy. In some embodiments, the network computing device may store the client-specific DoH policy in or with the data structure, or in another member location or memory device. In some embodiments, the network computing device may configure the client computing device (e.g., by providing configuration settings to the client computing device) to send DoH requests to the network computing device, which enables the network computing device to apply one or more feature sets and/or DNS policies to DoH requests from the client computing device. As noted above, various embodiments enable the network computing device to configure and apply a DoH policy on a client-specific basis. In some embodiments, the network computing device may receive from the client computing device policy parameters. The network computing device may configure the client-specific DoH policy based on the received policy parameters. In such embodiments, for example, following the receipt of a DoH request from a client computing device, the network computing device may obtain the client-specific DoH policy that is configured based on the received policy parameters. 
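A minimal sketch of the configuration step described above follows: computing a fingerprint of the client's public certificate and recording the association between that fingerprint and a client-specific DoH policy, then using the same fingerprint to look the policy up when a later DoH request arrives. The use of SHA-256 over the DER-encoded certificate and the in-memory mapping are assumptions for illustration.

```python
# Sketch of the configuration step described above: fingerprint the client's
# public certificate and associate the fingerprint with a client-specific DoH
# policy. SHA-256 over the DER bytes and the in-memory mapping are assumptions.
import hashlib

policy_by_fingerprint = {}   # stands in for the data structure/database in the text

def certificate_fingerprint(cert_der: bytes) -> str:
    return hashlib.sha256(cert_der).hexdigest()

def register_client(cert_der: bytes, client_specific_policy: dict) -> str:
    fp = certificate_fingerprint(cert_der)
    policy_by_fingerprint[fp] = client_specific_policy
    return fp

def policy_for_request(cert_der: bytes) -> dict:
    """On a later DoH request carrying the same certificate, look up the policy."""
    return policy_by_fingerprint.get(certificate_fingerprint(cert_der), {})

# Example: associate a parental-control policy with one client's certificate.
fp = register_client(b"...DER bytes of client certificate...", {"parental_controls": True})
print(fp, policy_for_request(b"...DER bytes of client certificate..."))
```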
In some embodiments, a network computing device may receive from the client computing device a DoH request including the public certificate that is associated with the client identifier. A DoH request includes a DNS request that is transported using the DoH protocol. The network computing device may generate a fingerprint of the public certificate received with the DoH request. The network computing device may obtain (e.g., from a memory accessible by the network computing device) the client-specific DoH policy based on an association between the fingerprint of the public certificate and the client-specific DoH policy. The network computing device may apply the client-specific DoH policy to the DoH request to formulate a response to the DoH request. In some embodiments, the network computing device may apply the DoH policy to the DoH request to perform a DNS-based advertisement management operation. In such embodiments, the network computing device may formulate the response to the DoH request using an output of the DNS-based advertisement management operation. In some embodiments, the network computing device may apply the DoH policy to the DoH request to perform a DNS-based parental control operation. In such embodiments, the network computing device may formulate the response to the DoH request using an output of the DNS-based parental control operation. In some embodiments, the network computing device may apply the DoH policy to the DoH request to perform a DNS-based content filtering operation. In such embodiments, the network computing device may formulate the response to the DoH request using an output of the DNS-based content filtering operation. Various embodiments improve the operation of a network computing system and a communication network by enabling the network computing device to maintain and apply client-specific DNS policies in a system employing DoH. Various embodiments improve the operation of a network computing system and a client computing device by enabling the network computing device to provide client-specific services in a system employing DoH. Various embodiments improve the operation of a network computing system and a client computing device by enabling the network computing device to apply DNS-based policies in a system employing DoH. Various embodiments may be implemented within a variety of communication systems100, an example of which is illustrated inFIG.1. With reference toFIG.1, the communication system100may include various user equipment (UE) such as a set top box (STB)102, a mobile device104, a computer106. In addition, the communication system100may include network elements such as network computing devices110,112, and114, and a communication network150. The STB102, the mobile device104, the computer106, and the network computing devices110,112, and114may communicate with the communication network150via a respective wired or wireless communication link120,122,124,126,128and132. The network computing device110may communicate with a data store110avia a wired or wireless communication link130. The network computing device112may communicate with a data store112avia a wired or wireless communication link134. The STB102may include customer premises equipment, which may be implemented as a set top box, a router, a modem, or another suitable device configured to provide functions of an STB. 
The mobile device104may include any of a variety of portable computing platforms and communication platforms, such as cell phones, smart phones, Internet access devices, and the like. The computer106may include any of a variety of personal computers, desktop computers, laptop computers, and the like. The STB102, the mobile device104, and the computer106may each include a processor or processing device that may execute one or more client applications (e.g., client application104a). The client application104amay be configured to send a DoH request and to receive a DoH response. The network computing devices110and112may be configured to perform operations related to management of DoH. In some embodiments, the network computing device110may be configured to perform operations as a DoH edge server. In some embodiments, the computing device110may be part of an edge of the communication network150, which may be physically located relatively proximate to one or more of the STB102, the mobile device104, and the computer106. In some embodiments, the network computing device112may be configured to perform operations as a DoH policy server. In some embodiments, the computing device112may manage and/or provide one or more DoH policies to the network computing device110, such as for use in operations related to providing one or more DNS services for the STB102, the mobile device104, or the computer106. In some embodiments, execution of a task or service may require data or information stored in the data store110aand/or112a. In some embodiments, the functions of the computing devices110and112may be combined in a single network computing device. In some embodiments, the functions of the network computing devices110and112may be separated into a plurality of network computing devices. The network computing device114may be configured to function, for example, as an application server or web server, to provide one or more resources, services, information, and the like. In some embodiments, a resource, service, or information provided by the network computing device114may be the subject of a DoH request sent by one of the STB102, the mobile device104, and the computer106to the network computing device110. The communication network150may support wired and/or wireless communication among the STB102, the mobile device104, the computer106, and the computing devices110,112, and114. The communication network150may include one or more additional network elements, such as servers and other similar devices (not illustrated). The communication system100may include additional network elements to facilitate communication among the STB102, the mobile device104, the computer106, and the computing devices110,112, and114. The communication links120,122,124,126,128,130,132, and134may include wired and/or wireless communication links. Wired communication links may include coaxial cable, optical fiber, and other similar communication links, including combinations thereof (for example, in an HFC network). Wireless communication links may include a plurality of carrier signals, frequencies, or frequency bands, each of which may include a plurality of logical channels. 
Wired communication protocols may use a variety of wired networks (e.g., Ethernet, TV cable, telephony, fiber optic and other forms of physical network connections) that may use one or more wired communication protocols, such as Data Over Cable Service Interface Specification (DOCSIS), Ethernet, Point-To-Point protocol, High-Level Data Link Control (HDLC), Advanced Data Communication Control Protocol (ADCCP), and Transmission Control Protocol/Internet Protocol (TCP/IP), or another suitable wired communication protocol. The wireless and/or wired communication links120,122,124,126,128,130,132, and134may include a plurality of carrier signals, frequencies, or frequency bands, each of which may include a plurality of logical channels. Each of the wireless communication links may utilize one or more radio access technologies (RATs). Examples of RATs that may be used in one or more of the various wireless communication links120,122,124,126,128,130,132, and134include an Institute of Electrical and Electronics Engineers (IEEE) 802.15.4 protocol (such as Thread, ZigBee, and Z-Wave), any of the Institute of Electrical and Electronics Engineers (IEEE) 16.11 standards, or any of the IEEE 802.11 standards, the Bluetooth standard, Bluetooth Low Energy (BLE), 6LoWPAN, LTE Machine-Type Communication (LTE MTC), Narrow Band LTE (NB-LTE), Cellular IoT (CIoT), Narrow Band IoT (NB-IoT), BT Smart, Wi-Fi, LTE-U, LTE-Direct, MuLTEfire, as well as relatively extended-range wide area physical layer interfaces (PHYs) such as Random Phase Multiple Access (RPMA), Ultra Narrow Band (UNB), Low Power Long Range (LoRa), Low Power Long Range Wide Area Network (LoRaWAN), and Weightless. Further examples of RATs that may be used in one or more of the various wireless communication links within the communication system100include 3GPP Long Term Evolution (LTE), 3G, 4G, 5G, Global System for Mobility (GSM), GSM/General Packet Radio Service (GPRS), Enhanced Data GSM Environment (EDGE), Code Division Multiple Access (CDMA), frequency division multiple access (FDMA), time division multiple access (TDMA), Wideband Code Division Multiple Access (W-CDMA), Worldwide Interoperability for Microwave Access (WiMAX), Time Division Multiple Access (TDMA), and other mobile telephony communication technologies cellular RATs, Terrestrial Trunked Radio (TETRA), Evolution Data Optimized (EV-DO), 1×EV-DO, EV-DO Rev A, EV-DO Rev B, High Speed Packet Access (HSPA), High Speed Downlink Packet Access (HSDPA), High Speed Uplink Packet Access (HSUPA), Evolved High Speed Packet Access (HSPA+), Long Term Evolution (LTE), AMPS, and other mobile telephony communication technologies cellular RATs or other signals that are used to communicate within a wireless, cellular or Internet of Things (IoT) network or further implementations thereof. Various embodiments may use a network computing device as a server, router, or another suitable element of a communication network. Such network elements may typically include at least the components illustrated inFIG.2, which illustrates an example network computing device200. With reference toFIGS.1and2, the network computing device200(e.g., the network computing devices110,112, and114) may include a processor201coupled to volatile memory202and a large capacity nonvolatile memory, such as a disk drive203. The network computing device200may also include a peripheral memory access device such as a floppy disc drive, compact disc (CD) or digital video disc (DVD) drive204coupled to the processor201. 
The network computing device200may also include network access ports206(or interfaces) coupled to the processor201for establishing data connections with a network, such as the Internet and/or a local area network coupled to other system computers and servers. Similarly, the network computing device200may include additional access ports, such as USB, Firewire, Thunderbolt, and the like for coupling to peripherals, external memory, or other devices. FIG.3is a message flow diagram illustrating a method300for managing DoH according to various embodiments.FIG.3illustrates a client computing device360(e.g., the STB102, the mobile device104, and the computer106), a DoH edge server362(e.g., the network computing device110), a DoH policy server364(e.g., the network computing device112), and a network server366(e.g., the network computing device114). For conciseness, the DoH edge server362and the DoH policy server364are illustrated as separate entities, but this is not intended to limit the scope or functions described to two network entities. As noted above, the functions of the DoH edge server362and the DoH policy server364may be combined in a single network computing device or distributed among a plurality of network computing devices, as signified by a dashed box. With reference toFIGS.1-3, in various embodiments, the client computing device360may send to the DoH edge server362a message302that may include parameters for configuring a client-specific DoH policy (“policy parameters”) related to the client computing device. The message302also may include a client identifier. In some embodiments, the policy parameters may pertain to a specific client computing device or a group of client computing devices, including one, some, or all computing devices that may be managed under a client account (which may be associated with or identifiable using the client identifier). In some embodiments, the message302also may include a public certificate that is associated with the client identifier. The DoH edge server362may receive the message302, and may send a message304to the DoH policy server364. The message304may include the client identifier, the policy parameters, and/or the public certificate associated with the client identifier. In some embodiments, the DoH policy server364may perform operations306. In some embodiments, the DoH policy server364may configure the client-specific DoH policy based on the received policy parameters. For example, the DoH policy server364may configure the client-specific DoH policy with parameters that enable the performance of a DNS-based advertisement management operation. As another example, the DoH policy server364may configure the client-specific DoH policy with parameters that enable the performance of a DNS-based parental control operation. As another example, the DoH policy server364may configure the client-specific DoH policy with parameters that enable the performance of a DNS-based content filtering operation. In some embodiments, the DoH edge server362may send to the client computing device360a message308that includes DNS configuration information for the client computing device. In some embodiments, such DNS configuration information may enable the client computing device360to be configured to direct a DoH request to the DoH edge server362. The client computing device360may perform configuration operations310to store and/or implement the DNS configuration information. 
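Operations306, in which the DoH policy server configures the client-specific DoH policy from the received policy parameters, might be sketched as follows. The parameter names are assumptions chosen only to mirror the three feature families named above (advertisement management, parental control, content filtering).

def configure_client_policy(client_id: str, policy_params: dict) -> dict:
    """Build a client-specific DoH policy from the parameters carried in message 304.

    The keys used here (ad_blocking, parental_block, content_categories) are
    illustrative; the description requires only that the resulting policy
    enable the corresponding DNS-based operations.
    """
    return {
        "client_id": client_id,
        "ad_blocking": bool(policy_params.get("ad_blocking", False)),
        "parental_block": set(policy_params.get("parental_block", [])),
        "content_categories": set(policy_params.get("content_categories", [])),
    }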
At a later time, the client computing device360may send a DoH request312to the DoH edge server362(e.g., in accordance with the DNS configuration implemented in the client computing device360). In various embodiments, the DoH request312is transported from the client computing device360to the DoH edge server362via an encrypted communication channel (e.g., using HTTPS). In some embodiments, the DoH request312may include a public certificate associated with a client identifier. In some embodiments, the DoH edge server362may generate314a fingerprint of the public certificate. In some embodiments, the fingerprint may include a unique digital pattern generated using a fingerprint-generating algorithm, and may include, for example, a digital fingerprint, a message digest, a checksum, a hash, or another unique digital pattern suitable for identifying the public certificate. In some embodiments, the DoH edge server362may send a request message316to the DoH policy server364. The request message316may include the fingerprint of the public certificate. The DoH policy server364may perform an operation318related to the client-specific DoH policy. In some embodiments, the DoH policy server364may obtain the client-specific DoH policy based on an association between the fingerprint of the public certificate and the client-specific DoH policy. In some embodiments, the DoH policy server364may obtain the client-specific DoH policy from a memory (e.g.,112a). In some embodiments, the DoH policy server364may apply the client-specific DoH policy to the DoH request312. The DoH policy server364may send a response message320to the DoH edge server362. In some embodiments, the response message320may include the client-specific DoH policy. In some embodiments, the DoH edge server362may store the client-specific DoH policy (e.g., in a memory, such as110a, which may include a cache memory), for example, for a period of time. In some embodiments, the response message320may include the result of the DoH policy server364applying the client-specific DoH policy to the DoH request312. In various embodiments, having received the client-specific DoH policy (or the result of the DoH policy server364applying the client-specific DoH policy to the DoH request), the DoH edge server362may perform various operations in different scenarios, indicated as scenarios300aand300b. Since the DoH request312is transported via an encrypted communication channel, an intermediate observer is typically unable to intercept and view the content of the DoH request312. However, because the DoH edge server362is on the terminating end of the encrypted communication channel between the client computing device360and the DoH edge server362, the DoH edge server362may access the content of the DoH request312. In various embodiments, the DoH edge server362may process the content of the DoH request312and may route the DoH request content (e.g., a DNS request, such as a DNS packet) to a DNS resolver that may provide a resolved IP address. The DoH edge server362may withhold information about the client computing device360, so that neither the DNS resolver nor any intermediate device (such as an eavesdropper) may determine an identity (or other information) about the client computing device360. In some embodiments, any DoH responses or DNS responses may be sent to the DoH edge server362rather than to the client computing device360. 
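The description does not fix the HTTP method used for the DoH request312. Assuming the RFC 8484 GET form, in which the DNS query travels base64url-encoded in a dns= parameter, the DoH edge server362(as the terminating end of the encrypted channel) could recover the queried name before applying a policy or forwarding the query to a resolver, roughly as sketched below.

import base64

def qname_from_doh_get(dns_param: str) -> str:
    """Extract the queried domain from the dns= parameter of a DoH GET request.

    Parses only the first (uncompressed) question name; this is a simplified
    illustration rather than a full DNS wire-format parser.
    """
    padded = dns_param + "=" * (-len(dns_param) % 4)  # restore stripped base64 padding
    wire = base64.urlsafe_b64decode(padded)
    labels, pos = [], 12                              # the question follows the 12-byte header
    while wire[pos] != 0:
        length = wire[pos]
        labels.append(wire[pos + 1 : pos + 1 + length].decode("ascii"))
        pos += 1 + length
    return ".".join(labels)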
Further, in some embodiments, the DoH edge server362may perform an operation related to the requested domain (i.e., the subject of the DoH request), such as applying the DoH policy to the DoH request to formulate a response to the DoH request. In some embodiments, the DoH edge server362may use the result of the DoH policy server364applying the client-specific DoH policy to the DoH request312to formulate a response to the DoH request312. In some embodiments, the DoH edge server362may alter a DoH or DNS response, such as changing a resolved IP address to a response indicating that the requested IP address or domain is blocked (e.g., for a parental control operation). In some embodiments, the DoH edge server362may block or drop the DoH response (e.g., for an advertisement management operation). In some embodiments, the DoH edge server362may return the requested IP address to the client computing device360(e.g., for a valid request). In various embodiments, the client-specific DoH policy may be applied based on a domain name, or based on the IP address provided in response to the DoH request (e.g., provided by a DNS resolver). Referring to scenario300a, in some embodiments, the DoH edge server362may perform operations322based on the DoH request312without passing on or sending a request for information or services to a network server (e.g.,114). For example, the DoH edge server362may apply the DoH policy to the DoH request312to perform a DNS-based parental control operation. For example, the DoH request may include a request for information or services from a website, domain, or resource location that is blocked based on the DoH policy. As another example, the DoH edge server362may apply the DoH policy to the DoH request312to perform a DNS-based content filtering operation. For example, the DoH request312may include a request for content from a website, domain, or resource location, or of a type, genre, nature, etc. that is blocked based on the DoH policy. In some embodiments, based on an output of the DNS-based operation(s)322, the DoH edge server362may formulate a response to the DoH request312. The DoH edge server362may send the response to the DoH request312in a response message324. In some embodiments, the DoH edge server362may replace an IP address for the requested (and not permitted, or blocked according to the DoH policy) website, domain, or resource location with a different (i.e., second, replacement) IP address for a different website, domain, or resource location. For example, the second IP address may point to a server that hosts a web page that presents a notice such as “Content Blocked due to User Policy” or the like. Referring to scenario300b, in some embodiments, the DoH edge server362may perform operations338based on the DoH request in response to receiving a secondary request for information (such as a secondary DoH request) from the client computing device360. In some embodiments, the secondary request may include a request that is triggered by, or instructed by, information received in response to a prior or earlier DoH request. For example, as described above, the DoH edge server362may send to the client computing device360a DNS response326that includes information responsive to the DoH request312, such as an IP address for the network server366. Using information in the DoH response326, the client computing device360may send a DoH request328to the network server366for information or services. 
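Scenario300a, as described above, amounts to a per-client lookup followed by a domain (or address) check. A minimal sketch of the domain-based case follows; the policy keys (parental_block, ad_domains, content_filter) and the notice-page address are illustrative assumptions.

BLOCKED_PAGE_IP = "203.0.113.10"   # hypothetical address of a page explaining the block

def apply_policy(policy: dict | None, requested_domain: str, resolved_ip: str) -> str:
    """Return the address to place in the DoH response after applying the policy."""
    if policy is None:
        return resolved_ip                            # no policy on file: pass through
    if requested_domain in policy.get("parental_block", set()):
        return BLOCKED_PAGE_IP                        # DNS-based parental control
    if requested_domain in policy.get("ad_domains", set()):
        return BLOCKED_PAGE_IP                        # DNS-based advertisement management
    if requested_domain in policy.get("content_filter", set()):
        return BLOCKED_PAGE_IP                        # DNS-based content filtering
    return resolved_ip

For instance, apply_policy({"ad_domains": {"ads.example.com"}}, "ads.example.com", "198.51.100.7") returns the notice-page address rather than the resolved one.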
Continuing with scenario300b, the client computing device360may receive from the network server366a message330including information or data related to a service. Then, the client computing device360may send, and the DoH edge server362may receive from the client computing device360, a DoH request332for additional information. For example, the request332may include a request for a website, domain, or resource location that provides advertising content, such as a network advertisement server or another such content provider. As another example, the request332may include malicious behavior triggered by information in the message330. Such malicious behavior may include a request to download malicious software that is sent to a source of such software known or suspected (from its IP address or range of IP addresses) to be a malicious actor. The message332may include information exfiltrated from the client computing device360according to a malicious instruction passed along from the network server in the message330, and may be sent to a network device known or suspected, from its IP address or range of IP addresses, to be a malicious actor. In some embodiments, the DoH edge server362may perform operations334responsive to the (secondary) DoH request332. In some embodiments, the DoH edge server362may apply the DoH policy to the DoH request336to perform a DNS-based advertisement management operation, parental control operation, and/or content filtering operation. In some embodiments, based on an output of the DNS-based operation(s)334, the DoH edge server362may formulate a response to the DoH request. The DoH edge server362may send the response to the DoH request in a response message342. For example, the response message340may include an indication that the website, domain, or resource location requested in the DoH request is not permitted (e.g., according to the DoH policy). In some embodiments, the DoH edge server362may not provide an indication to the client computing device of the output of the operations340(i.e., the DoH edge server362may apply the DoH policy to the DoH request336without notifying the client computing device360). For example, the DoH edge server362may drop or discard the DoH request332. In such embodiments, the message340is optional. In some embodiments, the DoH edge server362may replace an IP address for the requested (and not permitted, or blocked according to the DoH policy) website, domain, or resource location with a different (i.e., second, replacement) IP address for a different website, domain, or resource location. For example, the second IP address may point to a server that hosts a web page that presents a notice such as “Advertisement Blocked due to User Policy,” “Malicious Activity Blocked due to User Policy,” and/or the like. FIG.4is a process flow diagram illustrating a method400for managing DoH according to various embodiments. With reference toFIGS.1-4, the operations of the method400may be implemented in hardware components and/or software components of a network computing device (e.g., the network computing device110,112,200), the operation of which may be controlled by one or more processors (e.g., the processor201and/or the like), referred to herein as a “processor”. In block402, the processor may receive from a client computing device (e.g.,102,104,106,360) a DoH request comprising a public certificate associated with a client identifier. In some embodiments, the DoH request may be directed to a domain configured to receive client-specific DoH requests (e.g., www.charter.dns.client123.com). 
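Where the formulated response must actually carry the replacement address (as in the notice-page examples above), the rewriting could be sketched as below. The third-party dnspython package is an assumption, used here only for DNS wire-format handling; it is not part of the described system.

import dns.message
import dns.rrset

BLOCKED_PAGE_IP = "203.0.113.10"   # hypothetical notice-page address

def blocked_response(query_wire: bytes) -> bytes:
    """Answer the DNS query with an A record pointing at the notice page."""
    query = dns.message.from_wire(query_wire)
    response = dns.message.make_response(query)
    qname = query.question[0].name
    response.answer.append(
        dns.rrset.from_text(qname, 300, "IN", "A", BLOCKED_PAGE_IP)
    )
    return response.to_wire()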
Still with reference to block402, in some embodiments, the DoH request may be configured as an application programming interface (API) request. Other examples for configuring the DoH request are also possible. In block404, the processor may generate a fingerprint of the public certificate. In some embodiments, the fingerprint may include a unique digital pattern generated using a fingerprint-generating algorithm, and may include, for example, a digital fingerprint, a message digest, a checksum, a hash, or another unique digital pattern suitable for identifying the public certificate. In block406, the processor may obtain a client-specific DoH policy based on an association between the fingerprint of the public certificate and the client-specific DoH policy. For example, the processor may obtain the client-specific DoH policy from a network computing device (e.g., the DoH policy server364), or from a memory (e.g.,110a), or from another suitable device or memory location. In block408, the processor may apply the DoH policy to the DoH request to formulate a response to the DoH request. In some embodiments, the processor may determine an IP address or IP address range of a requested website, domain, resource location, or another suitable subject of the DoH request. The processor may determine whether the IP address or IP address range corresponds to a permitted or blocked IP address or IP address range in the DoH policy. The processor may formulate a response to the DoH request based on whether the IP address or IP address range corresponds to a permitted or blocked IP address or IP address range in the DoH policy. In some embodiments, the processor may apply the DoH policy to the DoH request to perform one or more DNS-based operations, such as an advertisement management operation, a parental control operation, a content filtering operation, and/or the like. FIGS.5A and5Billustrate process flow diagrams of operations500aand500bthat may be performed as part of the method400of managing DoH according to various embodiments. With reference toFIGS.1-5B, the operations500aand500bmay be implemented in hardware components and/or software components of a computing device (e.g., the computing device110), the operation of which may be controlled by one or more processors (e.g., the processor201and/or the like). With reference toFIG.5A, the processor may receive from the client computing device policy parameters in block502. For example, the processor may receive a message (e.g.,302) from the client computing device (e.g.,360) that may include policy parameters for configuring a client-specific DoH policy related to the client computing device. In block504, the processor may configure the client-specific DoH policy based on the received policy parameters. In block506, the processor may obtain the client-specific DoH policy that is configured based on the received policy parameters. In this manner, the processor may configure (and store) a client-specific DoH policy that includes user parameters input to the client computing device and provided to the processor. The processor may perform the operations of block402(FIG.4) as described. With reference toFIG.5B, the processor may, as part of a configuration process, receive from the client computing device the public certificate associated with the client identifier in block510. 
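As noted for block408, the check may be made against addresses rather than names. A minimal sketch of that permitted/blocked range comparison using the standard ipaddress module follows; the CIDR-list form of the policy is an assumption.

import ipaddress

def ip_is_blocked(resolved_ip: str, blocked_ranges: list[str]) -> bool:
    """Return True if the resolved address falls within any blocked CIDR range."""
    address = ipaddress.ip_address(resolved_ip)
    return any(address in ipaddress.ip_network(cidr) for cidr in blocked_ranges)

For example, ip_is_blocked("192.0.2.5", ["192.0.2.0/24"]) evaluates to True, in which case the processor would formulate a blocked response rather than return the resolved address.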
In block510, for example, the processor may receive from the client computing device (e.g.,360) a public certificate associated with a unique identifier that identifies a client computing device or client account. In block512, the processor may generate a fingerprint of the public certificate. Various techniques for generating a fingerprint are described above. In block514, the processor may generate the association between the fingerprint of the public certificate and the client-specific DoH policy. In block516, the processor may store in a data structure the association between the fingerprint of the public certificate and the client-specific DoH policy. In some embodiments, the processor may store the data structure in a memory (e.g.,110a). In some embodiments, the processor may send the data structure to another network computing device (e.g., the DoH policy server112,364) for storage (e.g., in the memory112a). Various embodiments illustrated and described are provided merely as examples to illustrate various features of the claims. However, features shown and described with respect to any given embodiment are not necessarily limited to the associated embodiment and may be used or combined with other embodiments that are shown and described. Further, the claims are not intended to be limited by any one example embodiment. For example, one or more of the operations of the methods400,500a, and500bmay be substituted for or combined with one or more operations of the methods400,500a, and500b, and vice versa. The foregoing method descriptions and the process flow diagrams are provided merely as illustrative examples and are not intended to require or imply that the operations of various embodiments must be performed in the order presented. As will be appreciated by one of skill in the art, the order of operations in the foregoing embodiments may be performed in any order. Words such as “thereafter,” “then,” “next,” etc. are not intended to limit the order of the operations; these words are used to guide the reader through the description of the methods. Further, any reference to claim elements in the singular, for example, using the articles “a,” “an,” or “the” is not to be construed as limiting the element to the singular. Various illustrative logical blocks, modules, circuits, and algorithm operations described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or combinations of both. To clearly illustrate this interchangeability of hardware and software, various illustrative components, blocks, modules, circuits, and operations have been described above generally in terms of their functionality. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the overall system. Skilled artisans may implement the described functionality in varying ways for each particular application, but such embodiment decisions should not be interpreted as causing a departure from the scope of the claims. 
The hardware used to implement various illustrative logics, logical blocks, modules, and circuits described in connection with the aspects disclosed herein may be implemented or performed with a general purpose processor, a digital signal processor (DSP), an application specific integrated circuit (ASIC), a field programmable gate array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described herein. A general-purpose processor may be a microprocessor, but, in the alternative, the processor may be any conventional processor, controller, microcontroller, or state machine. A processor may also be implemented as a combination of receiver smart objects, e.g., a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration. Alternatively, some operations or methods may be performed by circuitry that is specific to a given function. In one or more aspects, the functions described may be implemented in hardware, software, firmware, or any combination thereof. If implemented in software, the functions may be stored as one or more instructions or code on a non-transitory computer-readable storage medium or non-transitory processor-readable storage medium. The operations of a method or algorithm disclosed herein may be embodied in a processor-executable software module or processor-executable instructions, which may reside on a non-transitory computer-readable or processor-readable storage medium. Non-transitory computer-readable or processor-readable storage media may be any storage media that may be accessed by a computer or a processor. By way of example but not limitation, such non-transitory computer-readable or processor-readable storage media may include RAM, ROM, EEPROM, FLASH memory, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage smart objects, or any other medium that may be used to store desired program code in the form of instructions or data structures and that may be accessed by a computer. Disk and disc, as used herein, includes compact disc (CD), laser disc, optical disc, digital versatile disc (DVD), floppy disk, and Blu-ray disc where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Combinations of the above are also included within the scope of non-transitory computer-readable and processor-readable media. Additionally, the operations of a method or algorithm may reside as one or any combination or set of codes and/or instructions on a non-transitory processor-readable storage medium and/or computer-readable storage medium, which may be incorporated into a computer program product. The preceding description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the claims. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the claims. Thus, the present disclosure is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the following claims and the principles and novel features disclosed herein. | 40,763 |
11943259 | DETAILED DESCRIPTION This disclosure provides solutions to the aforementioned and other problems of previous technology through security management of application information and of a plurality of invalid interactions.FIG.1is a schematic diagram of an example system for security management of application information.FIG.2is a flow diagram illustrating an example operation of the system ofFIG.1.FIG.3is a schematic diagram of an example system for security management of a plurality of invalid interactions.FIG.4is a flow diagram illustrating an example operation of the system ofFIG.3. In a first particular embodiment described with reference toFIGS.1and2, a system for security management is used to screen for merchant data previously stored in a database operated by a bank. For example, a first merchant may attempt to establish an account or profile with the bank. The bank may request the first merchant to provide specific data, such as the first merchant's name, physical address, domain name, merchant category code (MCC), and the like in order to establish the account or profile. This process may entail the first merchant inputting the specific data through an online form or application via a computer to be verified by the bank. The bank may receive the specific data as well as information associated with the first merchant's computer, such as an Internet Protocol (IP) address, cookies, and the like. In this example, the first merchant may not be conducting or may not plan to conduct legitimate transactions with customers. To conceal the illegitimate transactions, the first merchant may attempt to operate using the identity of a second merchant. The bank may already have information associated with the second merchant stored in the database, and the bank may be able to process the specific data provided by the first merchant as well as the information associated with the first merchant's computer and designate the first merchant as suspicious if there is at least partial overlap between the information provided by the first merchant and the information of the second merchant. In this way, the system disclosed herein is able to screen merchant data points maintained by the bank to identify any merchants that do not have a legitimate business, or are attempting to commit fraud. In a second particular embodiment disclosed with reference toFIGS.3and4, a system for security management is used to monitor and evaluate the processing of credit card transactions by the bank in order to identify and thwart any attempts by an entity to conduct a scam or other unauthorized business activity against bank clients. For example, a first merchant may be conducting illegitimate transactions with a plurality of customers. The bank may receive data associated with each credit card transaction between the first merchant and the plurality of customers in order to allocate or distribute funds between the first merchant and each of the plurality of customers. The first merchant may operate the scam in order to avoid detection by the bank through conventional filtering processes. Conventional filtering processes may generally search for higher-value credit card transactions or a high volume of credit card transactions to determine suspicious activity by a given merchant. 
In the present embodiment, the bank may be operable to compare the number of invalid credit card transactions occurring with the first merchant to a threshold value to determine that the first merchant is suspicious even when the first merchant is conducting lower-value credit card transactions or a lower volume of credit card transactions. By identifying the first merchant as suspicious, the system disclosed herein allows the bank to identify merchants that are not legitimate businesses, even those that may be bank clients, and thereby help prevent scams from continuing to be run on legitimate clients. This system can also help a bank prevent money laundering by merchants. Example System for Security Management of Application Information FIG.1illustrates a schematic diagram of an example system100for security management of application information submitted by a first entity106, such as a merchant, to a server104of an organization, such as a bank. The system100may include a first entity device102associated with the first entity106and the server104. The system100may be communicatively coupled to a communication network108and may be operable to transmit data between the first entity device102and the server104through the communication network108. In general, the server104may perform an identification process with the first entity device102. For example, the identification process may entail the first entity106inputting specific data through an online form or application via a computer (or similar device) to be verified by the server104. In particular embodiments, this process utilizes application information110associated with the first entity106and entity device information112(for example, an IP address, browser cookies, and the like) associated with the first entity device102to verify that the first entity106is not operating as a second entity114associated with a second entity device116, thereby reducing suspicious activity. For example, in a particular embodiment, the first entity106may not be associated with the server104, which is associated with a particular organization (for example, a bank or vendor), at a first time period. In this example, the first entity106may attempt to participate in suspicious activity with one or more users once associated with the server104. In one or more embodiments, suspicious activity may be fraudulent activity. The server104may require application information110provided by an online application or form submitted by the first entity106in order to become associated with the first entity106(for example, to establish an account or profile for the first entity106). Without limitations, the application information110may comprise an entity name, a physical address of operation, an entity category code, a domain name registered to the first entity106, and any combinations thereof, where the entity category code may be used to classify an entity by the types of goods or services it provides. In certain embodiments, the first entity106may provide false or inaccurate information in order to facilitate the suspicious activity. In this particular example, the first entity106may submit an entity name of the second entity114, an entity category code of the second entity114, a domain name registered to the second entity114, and any combinations thereof as the application information110in an attempt to become associated to the server104as the second entity114. 
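The screening performed by the server104against this kind of submission can be pictured as a field-by-field comparison between the submitted application information110and each stored entity record. The field names below are assumptions standing in for the data fields of the application information110and the entity account data126.

def overlapping_fields(application_info: dict, entity_account_data: dict) -> set[str]:
    """Return the data fields the submitted application shares with a stored entity."""
    fields = ("entity_name", "entity_category_code", "domain_name", "physical_address")
    return {
        field
        for field in fields
        if application_info.get(field) is not None
        and application_info.get(field) == entity_account_data.get(field)
    }

A non-empty result for some stored entity corresponds to the determination, described in the operations that follow, that a portion of the application's data fields matches the entity account data126of another entity.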
If the server104associates the first entity106as the second entity114, the first entity106may conduct suspicious activity as the second entity114. The present disclosure provides security management of the application information110received and may transmit an alert118to the second entity114indicating that the first entity106has submitted information associated with the second entity114as application information110for the first entity106. The server104may further request verification from the second entity114and may inhibit association with the first entity106. The first entity device102may be any suitable device for initiating an interaction. For example, first entity device102may be a cash register, a tablet, a phone, a laptop, a personal computer, a payment terminal, a kiosk, etc. The first entity device102may be operable to receive information from a user and/or payment card when a purchase is requested. The first entity device102then may proceed to process the requested purchase. The first entity device102may include any appropriate device for communicating with components of system100over the communication network108. As an example and not by way of limitation, first entity device102may include a computer, a laptop, a wireless or cellular telephone, an electronic notebook, a personal digital assistant, a tablet, or any other device capable of receiving, processing, storing, and/or communicating information with other components of system100. This disclosure contemplates first entity device102being any appropriate device for sending and receiving communications over communication network108. The first entity device102may also include a user interface, such as a display, a microphone, keypad, or other appropriate terminal equipment usable by a user and/or the first entity106. In some embodiments, an application executed by first entity device102may perform the functions described herein. In one or more embodiments, the second entity device116may also be any suitable device for initiating an interaction. The second entity device116may be operable to perform similar functions as the first entity device102and may include similar components as discussed for the first entity device102. The first entity106and the second entity114may be clients of the same and/or different organizations. The organizations may enable first entity106and/or the second entity114to access their respective accounts, receive funds from one or more users, etc. For example, the organizations may generally facilitate the interactions of the first entity106and/or the second entity114(e.g., as a vendor). In a particular embodiment, first entity106and second entity114may be financial organizations, such as a bank. The server104is generally a suitable server (e.g., including a physical server and/or virtual server) operable to store data in a memory120and/or provide access to application(s) or other services. The server104may be a backend server associated with a particular organization, such as a bank, that facilitates conducting interactions between entities and one or more users. Details of the operations of the server104are described in conjunction withFIG.2. Memory120includes software instructions that, when executed by a processor122, cause the server104to perform one or more functions described herein. 
For example, the server104may be a database operable to receive a transmission124from the first entity device102comprising an application associated with the first entity106and entity device information112associated with the first entity device102, wherein the application comprises one or more data fields of application information110. Once the server104receives the transmission124from the first entity device102, the processor122, associated with the server104, may determine that a portion of one or more data fields of the application information110associated with the first entity106corresponds to a portion of data fields of entity account data126associated with the second entity114or with one or more additional entities. For example, entity account data126associated with a plurality of entities (for example, second entity114and one or more additional entities) may be stored in the memory120. In this example, the server104may not comprise entity account data for the first entity106at a first time period. The processor122may be communicatively coupled to the memory120and may access the memory120to determine whether a portion of one or more data fields of the application information110associated with the first entity106corresponds to a portion of data fields of entity account data126associated with the second entity114or with one or more additional entities. If there is a determination that a portion of one or more data fields of the application information110associated with the first entity106corresponds to a portion of data fields of entity account data126associated with the second entity114or with one or more additional entities, the first entity device102may be attempting to operate as that entity. The processor122may be operable to perform further functions to verify this determination, such as to: determine that a portion of the entity device information112associated with the first entity device102does not correspond to a portion of the entity device information112associated with the second entity device116that is associated with the second entity114; and transmit the alert118to the second entity114indicating that the first entity106is engaging in suspicious activity and requesting verification. Processor122comprises one or more processors operably coupled to the memory120. The processor122is any electronic circuitry including, but not limited to, state machines, one or more central processing unit (CPU) chips, logic units, cores (e.g. a multi-core processor), field-programmable gate array (FPGAs), application-specific integrated circuits (ASICs), or digital signal processors (DSPs). The processor122may be a programmable logic device, a microcontroller, a microprocessor, or any suitable combination of the preceding. The one or more processors are configured to process data and may be implemented in hardware or software. For example, the processor122may be 8-bit, 16-bit, 32-bit, 64-bit, or of any other suitable architecture. The processor122may include an arithmetic logic unit (ALU) for performing arithmetic and logic operations, processor registers that supply operands to the ALU and store the results of ALU operations, and a control unit that fetches instructions from memory and executes them by directing the coordinated operations of the ALU, registers and other components. The one or more processors are configured to implement various instructions. For example, the one or more processors are configured to execute software instructions. 
In this way, processor122may be a special-purpose computer designed to implement the functions disclosed herein. In an embodiment, the processor122is implemented using logic units, FPGAs, ASICs, DSPs, or any other suitable hardware. The processor122is configured to operate as described inFIGS.1-2. For example, the processor122may be configured to perform the steps of method200as described inFIG.2. Memory120may be volatile or non-volatile and may comprise a read-only memory (ROM), random-access memory (RAM), ternary content-addressable memory (TCAM), dynamic random-access memory (DRAM), and static random-access memory (SRAM). Memory120may be implemented using one or more disks, tape drives, solid-state drives, and/or the like. Memory120is operable to store software instructions, application information110, entity device information112, entity account data126, and/or any other data or instructions. The software instructions may comprise any suitable set of instructions, logic, rules, or code executable by the processor122. As illustrated, the server104may further comprise a network interface128. Network interface128is configured to enable wired and/or wireless communications (e.g., via communication network108). The network interface128is configured to communicate data between the server104and other devices (e.g., first entity device102), databases, systems, or domain(s). For example, the network interface128may comprise a WIFI interface, a local area network (LAN) interface, a wide area network (WAN) interface, a modem, a switch, or a router. The processor122is configured to send and receive data using the network interface128. The network interface128may be configured to use any suitable type of communication protocol as would be appreciated by one of ordinary skill in the art. The communication network108may facilitate communication within the system100. This disclosure contemplates the communication network108being any suitable network operable to facilitate communication between the first entity device102and the server104. Communication network108may include any interconnecting system capable of transmitting audio, video, signals, data, messages, or any combination of the preceding. Communication network108may include all or a portion of a public switched telephone network (PSTN), a public or private data network, a local area network (LAN), a metropolitan area network (MAN), a wide area network (WAN), a local, regional, or global communication or computer network, such as the Internet, a wireline or wireless network, an enterprise intranet, or any other suitable communication link, including combinations thereof, operable to facilitate communication between the components. In other embodiments, system100may not have all of the components listed and/or may have other elements instead of, or in addition to, those listed above. Example Operation of the System for Security Management of Application Information FIG.2is a flow diagram illustrating an example method200of the system100ofFIG.1. The method200may be implemented using the first entity device102, the server104, and the second entity device116ofFIG.1. 
The method200may begin at step202where the first entity106(referring toFIG.1) may send the transmission124(referring toFIG.1) from the first entity device102comprising an application associated with the first entity106and entity device information112(referring toFIG.1) associated with the first entity device102, wherein the application comprises one or more data fields of application information110(referring toFIG.1). Without limitation, the entity device information112may comprise at least one of an Internet Protocol address used by the first entity device102and a browser cookie. Once the server104receives the transmission124from the first entity device102, the processor122(referring toFIG.1), associated with the server104, may instruct the memory120(referring toFIG.1) to store the received application information110and entity device information112. At step204, the processor122of the server104may determine whether a portion of one or more data fields of the application information110associated with the first entity106corresponds to a portion of data fields of entity account data126(referring toFIG.1) associated with the second entity114or with one or more additional entities. If there is a determination that a portion of one or more data fields of the application information110associated with the first entity106corresponds to a portion of data fields of entity account data126associated with the second entity114or with one or more additional entities, the method200proceeds to step206. Otherwise, the method200proceeds to end. At step206, in response to determining that the portion of data fields of the application information110associated with the first entity106corresponds to the portion of data fields of the entity account data126associated with the second entity114or with one or more additional entities, the processor122of the server104may determine whether a portion of the entity device information112associated with the first entity device102corresponds to a portion of the entity device information112associated with the second entity device116(referring toFIG.1) that is associated with the second entity114. If there is a determination that a portion of the entity device information112associated with the first entity device102corresponds to a portion of the entity device information112associated with the second entity device116, the method200proceeds to step208. Otherwise, the method200proceeds to end. At step208, the processor122of the server104may determine that the first entity106is associated with suspicious indicators, wherein suspicious indicators may comprise a category within data classification utilized by the server104to categorize received data by the processor122. In embodiments, one of the suspicious indicators may correspond to a determination that the portion of data fields of the application information110associated with the first entity106corresponds to the portion of data fields of the entity account data126associated with the second entity114. Another one of the suspicious indicators may correspond to a determination that the portion of the entity device information112associated with the first entity device102corresponds to the portion of the entity device information112associated with the second entity device116. 
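The step206comparison of entity device information can be sketched in the same spirit as the field comparison shown earlier; the keys ip_address and cookies are assumptions standing in for the entity device information112.

def device_info_overlaps(first_device: dict, second_device: dict) -> bool:
    """Check whether two devices share identifying device information."""
    same_ip = (
        first_device.get("ip_address") is not None
        and first_device.get("ip_address") == second_device.get("ip_address")
    )
    shared_cookies = bool(
        set(first_device.get("cookies", [])) & set(second_device.get("cookies", []))
    )
    return same_ip or shared_cookies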
As an example of step208, the processor122may determine that the first entity106is associated with suspicious indicators if the application information110associated with the first entity106comprises a domain name and/or entity category code equivalent to those in the entity account data126associated with the second entity114. In these embodiments, suspicious indicators may signal to the server104that the first entity106may engage in suspicious activity. The server104may be further operable to verify whether suspicious activity has occurred. At step210, the processor122of the server104may transmit an alert118(referring toFIG.1) to the second entity device116indicating that the server104received application information110from the first entity106that is associated with the second entity114and that the first entity106is associated with suspicious indicators. The processor122may be further operable to send a request to the second entity114, via the second entity device116, to verify whether the second entity114is authorized to act on behalf of the first entity106. At step212, the processor122of the server104may receive a response signal from the second entity device116indicating whether or not the second entity114is authorized to act on behalf of the first entity106. The processor122may be operable to determine whether the second entity114was authorized to act on behalf of the first entity106. If the second entity114was authorized to act on behalf of the first entity106, the method200proceeds to step214. Otherwise, the method200proceeds to step216. At step214, the processor122of the server104may be operable to approve the application comprising the application information110submitted by the first entity106. The processor122may send a request to the memory120to store the received application information110as entity account data126associated with the first entity106after approving the application, wherein the entity account data126associated with the first entity106may include the entity account data126associated with the second entity114. Upon verification that the second entity114is authorized to act on behalf of the first entity106, the processor122of the server104may monitor further actions and operations of the second entity114to determine whether the second entity114is performing other unauthorized activities. In these embodiments, an indicator of suspicious activity may be when a given entity (for example, the second entity114) that has an existing profile or account with the server104(for example, entity account data126) attempts to create an additional profile or account with the server104as another entity (for example, the first entity106). Suspicious activity may be attempted to be conducted through the additional profile or account rather than through the existing profile or account. After identifying the second entity114as associated with an existing profile or account (for example, entity account data126associated with the second entity114), the processor122of the server104may be operable to monitor the second entity114based on the attempt to create an additional profile or account. In these embodiments, the processor122of the server104may monitor an Internet Protocol (IP) address, cookies, domain, log-in frequency, information associated with one or more interactions with users (for example, credit card transactions), digital footprint of the second entity device116, and any combinations thereof that are associated with the second entity114for suspicious activity. 
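One way to picture this monitoring is as a set of checks against a baseline maintained for the entity. The signal names and thresholds below are illustrative assumptions; the description lists the categories of information monitored but not how each is scored.

def monitoring_flags(observed: dict, baseline: dict) -> list[str]:
    """Evaluate a few monitored signals (log-in frequency, IP address, interaction volume)."""
    flags = []
    if observed.get("daily_logins", 0) > 5 * max(baseline.get("daily_logins", 1), 1):
        flags.append("login_frequency")
    ip_address = observed.get("ip_address")
    if ip_address is not None and ip_address not in baseline.get("known_ip_addresses", set()):
        flags.append("unrecognized_ip_address")
    if observed.get("daily_interactions", 0) > 5 * max(baseline.get("daily_interactions", 1), 1):
        flags.append("interaction_volume")
    return flags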
If the processor122determines that there is suspicious activity, the processor122may be operable to inhibit further actions or operations by the second entity114. The method200may then proceed to end. With reference back to step216, the processor122of the server104may send a request to the first entity device102to initiate an authentication session between the first entity device102and the server104. In the authentication session, the server104may receive, from the first entity device102, data associated with the first entity106that is not contained within the application information110associated with the first entity106. Once the server104receives the data from the first entity106, the authentication session may end. The processor122may be operable to compare the received data to entity account data126associated with the second entity114stored in the memory120. The processor122may be further operable to compare a remaining portion of data fields of the application information110associated with the first entity106to a remaining portion of data fields of the entity account data126associated with the second entity114. If the remaining portion of data fields of the application information110associated with the first entity106is equivalent to a remaining portion of data fields of the entity account data126associated with the second entity114and the received data from the first entity106does not match the entity account data126associated with the second entity114stored in the memory120, the processor122may determine that the first entity106is attempting suspicious activity. At step218, in response to a determination that the first entity106is attempting suspicious activity, the processor122of the server104may be operable to deny the application comprising the application information110submitted by the first entity106. The method200then proceeds to end. Example System for Security Management of a Plurality of Invalid Interactions FIG.3illustrates a schematic diagram of an example system300for security management of a plurality of interactions, such as credit card transactions, between an entity, such as a merchant, and an organization, such as a bank. The system300may include the first entity device102, the server104, and the communication network108, as previously described with reference toFIG.1. The system300may be communicatively coupled to the communication network108and may be operable to transmit data between the first entity device102and the server104through the communication network108. In general, the server104may perform an authentication process with the first entity device102. In particular embodiments, this process utilizes interaction information between the first entity device102and one or more users302to determine whether the first entity device102is engaging in or associated with suspicious activity. For example, in a particular embodiment, one or more users302may make purchases using the first entity device102. The one or more users302may present a payment card304, individually associated with each one of the one or more users302, to the first entity device102to make the purchase, wherein each payment card304provides information to authenticate the user302. In conventional processes, once this information is used to identify and authenticate one or more users302, the purchase is granted. However, identification and authentication based on this information may not be very reliable.
For example, the information on the card is static and does not indicate whether the one or more users302is the user identified by the information on payment card304. In this example, suspicious activity may ensue if the payment card304of one of the one or more users302, that is not authenticated, is used to complete a purchase or interaction with a large value and/or used to complete a greater number of purchases or interactions than the user302would typically complete. For example, suspicious activity may occur where the payment card304is used two hundred times within a day where, in contrast, the average number of interactions in which that user302normally uses the payment card304is two times per day. Detecting the anomaly between a high volume of payment card304transactions and the normal number of payment card304transactions may indicate suspicious activity. The present disclosure provides security management within the system300using information, such as interaction information between the first entity device102and the one or more users302, to determine suspicious activity by the first entity device102. The server104may be operable to monitor the values and volume of a plurality of interactions for one or more users302and transmit an alert306to the one or more users302indicating when there is a determination of suspicious activity by an entity (for example, the first entity106inFIG.1). As previously described, the first entity device102may be any suitable device for initiating an interaction. For example, first entity device102may be a cash register, a tablet, a phone, a laptop, a personal computer, a payment terminal, a kiosk, etc. associated with a first entity106, such as a merchant. The first entity device102may be operable to receive information from one or more users302via the payment card304when a purchase is requested. The first entity device102may then proceed to process the requested purchase. The first entity device102may include any appropriate device for communicating with components of system300over the communication network108. As an example and not by way of limitation, first entity device102may include a computer, a laptop, a wireless or cellular telephone, an electronic notebook, a personal digital assistant, a tablet, or any other device capable of receiving, processing, storing, and/or communicating information with other components of system300. This disclosure contemplates first entity device102being any appropriate device for sending and receiving communications over communication network108. The first entity device102may also include a user interface, such as a display, a microphone, keypad, or other appropriate terminal equipment usable by the one or more users302and/or an entity associated with the first entity device102(for example, the first entity106inFIG.1). In some embodiments, an application executed by first entity device102may perform the functions described herein. Payment card304may be any suitable card presented by the one or more users302to initiate and complete a purchase, such as for example, a credit or debit card. Payment card304may include information that is used to identify and authenticate the one or more users302. For example, payment card304may include a name of the user302and/or a unique card number. The server104is generally a suitable server (e.g., including a physical server and/or virtual server) operable to store data in the memory120and/or provide access to application(s) or other services.
The server104may be a backend server associated with a particular organization, such as a bank in one embodiment, that facilitates conducting interactions between entities and one or more users. Details of the operations of the server104of system300are described in conjunction withFIG.4. Memory120includes software instructions that, when executed by the processor122, cause the server104to perform one or more functions described herein. The processor122may be communicatively coupled to the memory120and may access the memory120. For example, the server104may be a database operable to receive a transmission308from the first entity device102comprising a plurality of interactions between the first entity device102and one or more users302, wherein each one of the plurality of interactions comprises interaction information between the first entity device102and that one of the one or more users302. Once the server104receives the transmission308from the first entity device102, the processor122, associated with the server104, may determine a threshold value of the plurality of interactions and a threshold volume of the plurality of interactions for each one or more users302. The processor122may be further operable to determine whether one of the values of the plurality of interactions is greater than the determined threshold value; whether the volume of interactions for one of the one or more users302is greater than the determined threshold volume; and whether the first entity device102is engaging in or associated with suspicious activity based on these determinations. Processor122comprises one or more processors operably coupled to the memory120. The processor122is any electronic circuitry including, but not limited to, state machines, one or more central processing unit (CPU) chips, logic units, cores (e.g. a multi-core processor), field-programmable gate array (FPGAs), application-specific integrated circuits (ASICs), or digital signal processors (DSPs). The processor122may be a programmable logic device, a microcontroller, a microprocessor, or any suitable combination of the preceding. The one or more processors are configured to process data and may be implemented in hardware or software. For example, the processor122may be 8-bit, 16-bit, 32-bit, 64-bit, or of any other suitable architecture. The processor122may include an arithmetic logic unit (ALU) for performing arithmetic and logic operations, processor registers that supply operands to the ALU and store the results of ALU operations, and a control unit that fetches instructions from memory and executes them by directing the coordinated operations of the ALU, registers and other components. The one or more processors are configured to implement various instructions. For example, the one or more processors are configured to execute software instructions. In this way, processor122may be a special-purpose computer designed to implement the functions disclosed herein. In an embodiment, the processor122is implemented using logic units, FPGAs, ASICs, DSPs, or any other suitable hardware. The processor122is configured to operate as described inFIGS.3-4. For example, the processor122may be configured to perform the steps of method400as described inFIG.4. Memory120may be volatile or non-volatile and may comprise a read-only memory (ROM), random-access memory (RAM), ternary content-addressable memory (TCAM), dynamic random-access memory (DRAM), and static random-access memory (SRAM). 
Memory120may be implemented using one or more disks, tape drives, solid-state drives, and/or the like. Memory120is operable to store software instructions, entity device information112, entity account data126, user account data310, and/or any other data or instructions. The software instructions may comprise any suitable set of instructions, logic, rules, or code operable to be executed by the processor122. As illustrated, the server104may further comprise the network interface128. Network interface128is configured to enable wired and/or wireless communications (e.g., via communication network108). The network interface128is configured to communicate data between the server104and other devices (e.g., first entity device102), databases, systems, or domain(s). For example, the network interface128may comprise a WIFI interface, a local area network (LAN) interface, a wide area network (WAN) interface, a modem, a switch, or a router. The processor122is configured to send and receive data using the network interface128. The network interface128may be configured to use any suitable type of communication protocol as would be appreciated by one of ordinary skill in the art. The communication network108may facilitate communication within the system300. This disclosure contemplates the communication network108being any suitable network operable to facilitate communication between the first entity device102and the server104. Communication network108may include any interconnecting system capable of transmitting audio, video, signals, data, messages, or any combination of the preceding. Communication network108may include all or a portion of a public switched telephone network (PSTN), a public or private data network, a local area network (LAN), a metropolitan area network (MAN), a wide area network (WAN), a local, regional, or global communication or computer network, such as the Internet, a wireline or wireless network, an enterprise intranet, or any other suitable communication link, including combinations thereof, operable to facilitate communication between the components. In other embodiments, system300may not have all of the components listed and/or may have other elements instead of, or in addition to, those listed above. Example Operation of the System for Security Management of a Plurality of Invalid Interactions FIG.4is a flow diagram illustrating an example method400of the system300ofFIG.3. The method400may be implemented using the first entity device102and the server104ofFIG.3. The method400may begin at step402where the transmission308(referring toFIG.3), comprising the plurality of interactions between the first entity device102and one or more users302(referring toFIG.3), may be sent from the first entity device102to the server104, wherein each interaction comprises interaction information associated with one of the one or more users302and the first entity device102. In embodiments, the interaction information may comprise at least a value of the interaction. At step404, once the server104receives the transmission308from the first entity device102, the processor122(referring toFIG.3), associated with the server104, may instruct the memory120(referring toFIG.3) of the server104to store the data provided by the received transmission308. In embodiments, the memory120may be operable to store a portion of the interaction information as user account data310(referring toFIG.3) for each of the one or more users302.
The memory120may be further operable to store a remaining portion of the interaction information as entity account data126that is associated with the first entity106(referring toFIG.1) that is associated with the first entity device102. The received transmission may further comprise entity device information112for the first entity device102, and the memory120may be operable to store the entity device information112received from the transmission308. At step406, the processor122of the server104may determine a threshold value of the plurality of interactions and a threshold volume of the plurality of interactions based on the interaction information from the received transmission308. At step408, the processor122of the server104may determine whether a value of each one of the received plurality of interactions from the transmission308has exceeded the threshold value of the plurality of interactions. If there is a determination that a value of one of the received plurality of interactions from the transmission308has exceeded the threshold value of the plurality of interactions, the method400proceeds to end. Otherwise, the method400proceeds to step410. At step410, in response to determining that a value of each of the received plurality of interactions has not exceeded the threshold value of the plurality of interactions, the processor122of the server104may determine whether the number of the plurality of interactions has exceeded the threshold volume of the plurality of interactions. If there is a determination that the number of the plurality of interactions has exceeded the threshold volume of the plurality of interactions, the method400proceeds to end. Otherwise, the method400proceeds to step412. At step412, the processor122of the server104may determine a number of instances wherein one or more of the plurality of interactions were invalid. In embodiments, an interaction may be invalid wherein the payment card304(referring toFIG.3) has been cancelled before the interaction, the interaction has been disputed, or combinations thereof. The memory120of the server104may be operable to store a threshold number of interactions that are invalid. At step414, the processor122of the server104may determine whether a number of instances wherein one or more of the plurality of interactions were invalid is greater than the threshold stored in the memory120. For example, there may be eighty instances of interactions that were invalid, and the threshold may be fifty. If there is a determination that the number of instances wherein one or more of the plurality of interactions were invalid is not greater than the threshold stored in the memory120, the method400proceeds to end. Otherwise, the method400proceeds to step416. At step416, the processor122of the server104may determine that the first entity device102is associated with suspicious indicators. As described above with respect toFIG.2, the suspicious indicators may comprise a category within data classification utilized by the server104to categorize received data by the processor122. In embodiments, suspicious indicators may correspond to a determination that the values of the received plurality of interactions from the transmission308have not exceeded the threshold value of the plurality of interactions and to a determination that the number of the plurality of interactions has not exceeded the threshold volume of the plurality of interactions.
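By way of a non-limiting illustration, the value, volume, and invalid-interaction checks of the preceding steps can be sketched as follows. The thresholds and record layout are hypothetical; the example simply mirrors the eighty-invalid-interactions-versus-a-threshold-of-fifty scenario discussed in this disclosure:

    # Hypothetical sketch of the value, volume, and invalid-count checks;
    # thresholds and the record layout are illustrative assumptions.
    interactions = [{"value": 25.0, "invalid": False} for _ in range(200)]
    for i in range(80):                      # eighty invalid interactions
        interactions[i]["invalid"] = True

    value_threshold = 500.0                  # threshold value of the interactions
    volume_threshold = 1000                  # threshold volume of the interactions
    invalid_threshold = 50                   # stored threshold for invalid interactions

    value_exceeded = any(tx["value"] > value_threshold for tx in interactions)
    volume_exceeded = len(interactions) > volume_threshold
    invalid_count = sum(tx["invalid"] for tx in interactions)

    if not value_exceeded and not volume_exceeded and invalid_count > invalid_threshold:
        print("suspicious indicators")
    else:
        print("no suspicious indicators")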
Another one of the suspicious indicators may correspond to a determination that the number of instances wherein one or more of the plurality of interactions were invalid is greater than the threshold stored in the memory120. For example, the processor122may determine that the first entity106is associated with suspicious indicators if there are eighty instances of interactions that were invalid, where the threshold is fifty, and the values of the received plurality of interactions from the transmission308have not exceeded the threshold value of the plurality of interactions and the number of the plurality of interactions has not exceeded the threshold volume of the plurality of interactions. In these embodiments, suspicious indicators may signal to the server104that the first entity106has engaged in suspicious activity with the one or more users302. The server104may be further operable to verify whether suspicious activity has occurred. At step418, the processor122of the server104may transmit the alert306(referring toFIG.3) to the one or more users302indicating that the server104received the plurality of interactions associated with the first entity device102and that the first entity device102is associated with suspicious indicators. At step420, the processor122of the server104may send a request to the first entity device102to initiate an authentication session between the first entity device102and the server104. In the authentication session, the server104may receive, from the first entity device102, entity device information112(referring toFIG.3) associated with the first entity device102. Once the server104receives the entity device information112, the authentication session may end. The processor122may be operable to analyze the received entity device information112to determine whether the first entity106, via the first entity device102, has engaged in suspicious activity. The processor122may further be operable to send a request to each one of the one or more users302to verify that one of the plurality of interactions was authorized by that one of the one or more users302and receive a response signal indicating that the one of the plurality of interactions was or was not authorized by that one of the one or more users302. If the received response signal indicates that the one of the plurality of interactions was authorized, the processor122may send a request to the memory120to store the received interaction information in the user account data310for that one of the one or more users302. If the received response signal indicates that the one of the plurality of interactions was not authorized, the processor122may be operable to determine a location of that one of the one or more users302and a location of the first entity device102. In embodiments, the processor122may be configured to determine the location based, at least in part, on the received entity device information112from the first entity device102during the authentication session. The processor122may be operable to determine that the location of that one of the one or more users302is not located within a distance threshold from the location of the first entity device102. For example, the processor122may determine that the first entity device102is associated with a physical storefront at a given location.
In this example, one of the one or more users302may be at a location three hundred miles away from the location of the first entity device102at about the time of the interaction between that user302and the first entity device102. In this example, the distance threshold between the first entity device102and one of the one or more users302may be twenty miles. As the distance between that one of the one or more users302and the first entity device102is greater than the distance threshold, the processor122may determine that the one of the plurality of interactions that was not authorized by this user302is invalid. The processor122may be operable to inhibit interactions associated with the first entity device102from processing. The method400then proceeds to end. While several embodiments have been provided in this disclosure, it should be understood that the disclosed system and method might be embodied in many other specific forms without departing from the spirit or scope of this disclosure. The present examples are to be considered as illustrative and not restrictive, and the intention is not to be limited to the details given herein. For example, the various elements or components may be combined or integrated in another system or certain features may be omitted, or not implemented. In addition, techniques, systems, subsystems, and methods described and illustrated in the various embodiments as discrete or separate may be combined or integrated with other systems, modules, techniques, or methods without departing from the scope of this disclosure. Other items shown or discussed as coupled or directly coupled or communicating with each other may be indirectly coupled or communicating through some interface, device, or intermediate component whether electrically, mechanically, or otherwise. Other examples of changes, substitutions, and alterations are ascertainable by one skilled in the art and could be made without departing from the spirit and scope disclosed herein. To aid the Patent Office, and any readers of any patent issued on this application in interpreting the claims appended hereto, applicants note that they do not intend any of the appended claims to invoke 35 U.S.C. § 112(f) as it exists on the date of filing hereof unless the words "means for" or "step for" are explicitly used in the particular claim. | 46,661 |
11943260 | DETAILED DESCRIPTION The following discussion is presented to enable any person skilled in the art to make and use the technology disclosed and is provided in the context of a particular application and its requirements. Various modifications to the disclosed implementations will be readily apparent to those skilled in the art, and the general principles defined herein may be applied to other implementations and applications without departing from the spirit and scope of the technology disclosed. Thus, the technology disclosed is not intended to be limited to the implementations shown but is to be accorded the widest scope consistent with the principles and features disclosed herein. The detailed description of various implementations will be better understood when read in conjunction with the appended drawings. To the extent that the figures illustrate diagrams of the functional blocks of the various implementations, the functional blocks are not necessarily indicative of the division between hardware circuitry. Thus, for example, one or more of the functional blocks (e.g., modules, processors, or memories) may be implemented in a single piece of hardware (e.g., a general purpose signal processor or a block of random access memory, hard disk, or the like) or multiple pieces of hardware. Similarly, the programs may be stand-alone programs, may be incorporated as subroutines in an operating system, may be functions in an installed software package, and the like. It should be understood that the various implementations are not limited to the arrangements and instrumentality shown in the drawings. The processing engines and databases of the figures, designated as modules, can be implemented in hardware or software, and need not be divided up in precisely the same blocks as shown in the figures. Some of the modules can also be implemented on different processors, computers, or servers, or spread among a number of different processors, computers, or servers. In addition, it will be appreciated that some of the modules can be combined, operated in parallel or in a different sequence than that shown in the figures without affecting the functions achieved. The modules in the figures can also be thought of as flowchart steps in a method. A module also need not necessarily have all its code disposed contiguously in memory; some parts of the code can be separated from other parts of the code with code from other modules or other functions disposed in between. Cloud Applications Cloud applications108are network services that can be web-based (e.g., accessed via a uniform resource locator (URL)) or native, such as sync clients. The cloud applications108can be cloud storage applications, cloud computing applications, hosted services, news websites, blogs, video streaming websites, social media websites, collaboration and messaging platforms, and customer relationship management (CRM) platforms. The cloud applications108can be provided as software-as-a-service (SaaS) offerings, platform-as-a-service (PaaS) offerings, and infrastructure-as-a-service (IaaS) offerings, as well as internal enterprise applications that are exposed via URLs. 
Examples of common cloud applications today include Box™, Dropbox™, Google Drive™, Amazon AWS™, Google Cloud Platform (GCP)™, Microsoft Azure™, Microsoft Office 365™, Google Workspace™, Workday™, Oracle on Demand™, Taleo™, Jive™, Concur™, YouTube™, Facebook™, Twitter™, Google™, LinkedIn™, Wikipedia™, Yahoo™, Baidu™, Amazon™, MSN™, Pinterest™, Taobao™, Instagram™ Tumblr™, eBay™, Hotmail™, Reddit™, IMDb™, Netflix™, PayPal™, Imgur™, Snapchat™, Yammer™, Skype™, Slack™, HipChat™, Confluence™, TeamDrive™, Taskworld™, Chatter™, Zoho™, ProsperWorks™, Gmail™, and Salesforce.com™. The cloud applications108provide functionality to users that is implemented in the cloud and that is the target of policies, e.g., logging in, editing documents, downloading bulk data, reading customer contact information, entering payables, deleting documents, in addition to the offerings of a simple website and ecommerce sites. Note that some consumer facing websites, e.g., Facebook™ and Twitter™, which offer social networks, are the type of cloud applications considered here. Some cloud applications, e.g., Gmail™, can be a hybrid with some free users using the application generally while other corporations use it as an enterprise subscription. Note that a cloud application can be supported by both web browser clients and application clients that use URL-based APIs (application programming interfaces). Thus, using Dropbox™ as an example, user activity on the Dropbox™ website, as well as activity of the Dropbox™ client on the computer could be monitored. The cloud applications108often publish their APIs to allow a third party to communicate with them and utilize their underlying data. An API refers to a packaged collection of code libraries, routines, protocols methods, and fields that belong to a set of classes, including its interface types. The API defines the way that developers and programmers can use the classes for their own software development, just by importing the relevant classes and writing statements that instantiate the classes and call their methods and fields. An API is a source code-based application intended to be used as an interface by software components to communicate with each other. An API can include applications for routines, data structures, object classes, and variables. Basically, an API provides an interface for developers and programmers to access the underlying data, platform capabilities, and features of web services. Non-exclusive examples of APIs include remote invocation of services to return data, web service APIs such as HTTP or HTTPs based APIs like SOAP, WSDL, Bulk, XML-RPC and JSON-RPC and REST APIs (e.g., Flickr™, Google Static Maps™, Google Geolocation™), web socket APIs, library-based APIs like JavaScript and TWAIN (e.g., Google Maps™ JavaScript API, Dropbox™ JavaScript Data store API, Twilio™ APIs, Oracle Call Interface (OCI)), class-based APIs like Java API and Android API (e.g., Google Maps™ Android API, MSDN Class Library for .NET Framework, Twilio™ APIs for Java and C#), OS functions and routines like access to file system and access to user interface, object remoting APIs like CORBA and .NET Remoting, and hardware APIs like video acceleration, hard disk drives, and PCI buses. 
Other examples of APIs used by the technology disclosed include Amazon EC2 API™, Box Content API™, Box Events API™, Microsoft Graph™, Dropbox API™, Dropbox API v2™, Dropbox Core API™, Dropbox Core API v2™, Facebook Graph API™, Foursquare API™, Geonames API™, Force.com API™, Force.com Metadata API™, Apex API™, Visualforce API™, Force.com Enterprise WSDL™, Salesforce.com Streaming API™, Salesforce.com Tooling API™, Google Drive API™, Drive REST API™, AccuWeather API™, and aggregated-single API like CloudRail™ API. Network Security System A network security system (NSS)104, also referred to herein as a policy enforcement system, intermediates network traffic that pass between clients102and the cloud applications108. The network security system104consolidates multiple types of security enforcements. Examples of the security enforcements include authentication, federated single sign-on (SSO), authorization, credential mapping, device profiling, encryption, tokenization, data leakage prevention (DLP), logging, alerting, and malware detection and prevention. Examples of the clients102include browsers, web apps, native apps, and hybrid apps. Examples of the network security system104include cloud access security brokers (CASBs), secure web gateways (SWGs), network firewalls, application firewalls, routing systems, load balancing systems, filtering systems, data planes, management planes, data loss prevention (DLP) systems, intrusion prevention systems (IPSs), zero trust network access (ZTNA), and secure access service edge (SASE). The network security system104can also be a network security stack that includes different security systems like the CASBs, the SWGs, the network firewalls, the application firewalls, the routing systems, the load balancing systems, the filtering systems, the data planes, the management planes, the DLP systems, and the IP systems. The network security system104can be implemented on-premises or can be cloud-based. Also, multiple geographically distributed points of presence of the network security system104can be implemented in a secure access service edge (SASE) network. Employees now rely on the cloud applications108to perform business-critical functions and routinely upload sensitive and regulated data to the web. The network security system104intercepts network traffic in real-time to prevent loss of sensitive data by inspecting data en route to or from the cloud applications108and data resident in the cloud applications108. The network security system104analyzes application layer traffic using APIs to deeply inspect cloud application transactions in real-time. The network security system104uses a combination of deep application programming interface inspection (DAPII), deep packet inspection (DPI), and log inspection to monitor user activity and perform data loss prevention (DLP). The network security system104uses DAPII to detect web transactions in real-time, including calls made to the cloud applications108. The cloud transactions are decomposed to identify the activity being performed and its associated parameters. In one implementation, the cloud transactions are represented as JSON (JavaScript Object Notation) objects, which identify a structure and format that allows the network security system104to both interpret what actions a user is performing in the web service as it is happening. 
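One way to picture such a decomposed transaction is as a small JSON document. The schema below is a hypothetical illustration and is not the actual format used by any particular product:

    # Hypothetical JSON-style decomposition of a single cloud transaction;
    # the field names are illustrative assumptions, not a vendor schema.
    import json

    transaction = {
        "application": "cloud_storage_app",
        "activity": "share",
        "user": "joe@example.com",
        "user_location": "JP",
        "object": "M&A directory",
        "shared_with": "investor@hedgefund.example",
        "timestamp": "2021-03-16T22:00:00Z",
    }
    print(json.dumps(transaction, indent=2))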
So, for example, the network security system104can detect for an organization that “Joe from Investment Banking, currently in Japan, shared his M&A directory with an investor at a hedge fund at 10 PM.” The network security system104achieves DLP by subjecting data packets to content inspection techniques like language-aware data identifier inspection, document fingerprinting, file type detection, keyword search, pattern matching, proximity search, regular expression lookup, exact data matching, metadata extraction, and language-agnostic double-byte character inspection. The network security system104inspects data that is encoded in network packets and/or higher order encodings of the network packets such as secure sockets layer (SSL) and/or transport layer security (TLS) handshakes and Hypertext Transfer Protocol (HTTP) transactions. In some implementations, the network security system104can run on server-side as a cloud resource. In other implementations, the network security system104can run on client-side as an endpoint agent. The network security system104is also referred to herein as a “proxy.” For additional information about the network security system104, reference can be made to, for example, commonly owned U.S. patent application Ser. Nos. 14/198,499; 14/198,508; 14/835,640; 14/835,632; and 62/307,305; Cheng, Ithal, Narayanaswamy, and Malmskog. Cloud Security For Dummies, Netskope Special Edition. John Wiley & Sons, Inc. 2015; “Netskope Introspection” by Netskope, Inc.; “Data Loss Prevention and Monitoring in the Cloud” by Netskope, Inc.; “Cloud Data Loss Prevention Reference Architecture” by Netskope, Inc.; “The 5 Steps to Cloud Confidence” by Netskope, Inc.; “The Netskope Active Platform” by Netskope, Inc.; “The Netskope Advantage: Three “Must-Have” Requirements for Cloud Access Security Brokers” by Netskope, Inc.; “The 15 Critical CASB Use Cases” by Netskope, Inc.; “Netskope Active Cloud DLP” by Netskope, Inc.; “Repave the Cloud-Data Breach Collision Course” by Netskope, Inc.; and “Netskope Cloud Confidence Index™” by Netskope, Inc., which are incorporated by reference for all purposes as if fully set forth herein. Application Session An “application session” refers to a series of related client requests that emanate from a same client during a certain time period (e.g., a duration of fifteen minutes) and are directed towards a same cloud application. An application session identifier (ID) is a unique number that an application server of the cloud application assigns a specific client the duration of that client's visit (session). The application session ID can be stored as a cookie, form field (e.g., the HTTP header field set-Cookie: JSESSIOID=ABAD1D; path/1), or a URL (Uniform Resource Locator). The network security system104can use the session ID or the URL parameter to lookup session information of that application session maintained by the application server which assigns the session ID. A sequence of events occurs in the context of an application session. The main events of note are: (a) login event—provide user credentials to a cloud application to authenticate the user; (b) application transactions—execute a set of application level transactions, e.g., upload documents, download documents, add leads, define new campaigns, etc.; and (c) time-out event—this event terminates the application session with an application server of the cloud application. In this context, the application session connects these interactions for the network security system104. 
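Because each application session is keyed by its session identifier, the proxy can group observed events under that identifier. A minimal sketch, assuming the standard JSESSIONID cookie name and a simplified cookie format, is:

    # Hypothetical sketch of recovering an application session ID from a
    # set-Cookie header and grouping subsequent events under it; the cookie
    # name and event labels are illustrative assumptions.
    from http.cookies import SimpleCookie

    def session_id_from_header(set_cookie_value):
        cookie = SimpleCookie()
        cookie.load(set_cookie_value)
        morsel = cookie.get("JSESSIONID")
        return morsel.value if morsel else None

    sessions = {}
    sid = session_id_from_header("JSESSIONID=ABAD1D; Path=/1")
    if sid:
        sessions.setdefault(sid, []).extend(["login", "upload_document", "time_out"])
    print(sessions)   # {'ABAD1D': ['login', 'upload_document', 'time_out']}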
Deep packet inspection logic of the network security system104can identify these events and link policy evaluations to each transaction boundary, enabling actions to be taken. In contrast, a "connection" refers to a high-level non-network construct (e.g., not a TCP/IP connection) but rather a series of multiple related network requests and responses. Thus, a series of requests and responses over the course of a day could be a single connection within an application, e.g., all use of Salesforce.com within a period without logging off. One definition of a connection is to look at the application session identifier, e.g., cookie or URL parameter, used by the cloud application so that each connection corresponds to a single session identifier. A connection can include a sequence of application sessions within the boundary of a login event and a log-out event, such that application sessions in the sequence of application sessions are partitioned by time-out events. Synthetic Request A "synthetic request" is a request generated by the network security system104during an application session and separately of client requests generated by a client during the application session. The synthetic requests can be generated by the network security system104, for example, on an ad hoc basis. That is, the synthetic requests can be dynamically constructed and issued "on-the-fly" to get information when the need arises. The synthetic requests may also be referred to as "synthetic URL requests." Web transactions are typically accompanied by access tokens (e.g., embedded as request parameters or cookies). By extracting an access token for a given transaction, the network security system104can synthetically issue new requests (or transactions) to the cloud applications108. The synthetic requests can be, for example, API calls that explicitly request the metadata-of-interest. Alternatively, the synthetic requests can trigger page requests that contain the metadata-of-interest. The synthetic requests can be configured to retrieve the metadata-of-interest from the cloud applications108or from another separate metadata store. Client requests, referred to herein as "incoming requests," emanate from the clients102and are directed towards the cloud applications108but are intercepted by the network security system104for policy enforcement. The synthetic requests are issued by the network security system104, directed towards the cloud applications108, and not subjected to policy enforcement by the network security system104. The synthetic requests are issued by the network security system104as network transactions of communications protocols (e.g., FTP, FTPS, GOPHER, HTTP, HTTPS, IMAP, IMAPS, LDAP, LDAPS, POP3, POP3S, RTMP, RTMPS, RTSP, SCP, SFTP, SMTP, SMTPS, SPDY and TFTP); for example, HTTP/HTTPS transactions specify a uniform resource identifier (URI) or URL of a resource on the cloud applications108. The synthetic requests can include different configurations of a request line, request header fields and values therefor, and request methods defined by the applicable communications protocols to indicate the desired action to be performed for a given resource.
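A minimal sketch of constructing such a synthetic request, reusing the cookie and user-agent observed in an intercepted client request, is shown below. The endpoint URL, header choices, and token value are illustrative assumptions and do not correspond to any particular cloud provider's API:

    # Hypothetical sketch of building a synthetic request from fields of an
    # intercepted client request; URL and headers are illustrative only.
    import urllib.request

    intercepted_headers = {
        "Cookie": "session=abc123",            # access token observed inline
        "User-Agent": "ExampleClient/1.0",
    }

    synthetic = urllib.request.Request(
        "https://cloudapp.example.com/api/v1/me",   # hypothetical metadata endpoint
        method="GET",
        headers={
            "Cookie": intercepted_headers["Cookie"],
            "User-Agent": intercepted_headers["User-Agent"],
            "Accept": "application/json",
        },
    )
    print(synthetic.get_method(), synthetic.full_url, dict(synthetic.header_items()))
    # urllib.request.urlopen(synthetic) would transmit it within the live session.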
Examples of request headers such as HTTP and HTTPS headers and header fields that can be used by the network security system104to construct the synthetic requests include cache-control, connection, content-encoding, content-length, content-type, date, pragma, trailer, transfer-encoding, upgrade, via, warning, accept, accept-charset, accept-encoding, accept-language, authorization, cookie, expect, from, host, if-match, if-modified-since, if-none-match, if-range, if-unmodified-since, max-forwards, proxy-authorization, range, referrer, TE, and user-agent. Additional examples and information about the request header fields can be found, e.g., at List of HTTP header fields, https://en.wikipedia.org/w/index.php?title=List_of_HTTP_header_fields&oldid=1012071227 (last visited Mar. 16, 2021), which is incorporated by reference for all purposes as if fully set forth herein. One example of request methods, HTTP and HTTPS request methods that can be used by the network security system104to transmit the synthetic requests include GET, HEAD, POST, PUT, DELETE, CONNECT, OPTIONS, TRACE, and PATCH. Additional information about the HTTP/HTTPS request methods can be found at Hypertext Transfer Protocol, https://en.wikipedia.org/w/index.php?title=Hypertext_Transfer_Protocol&oldid=1012415417 (last visited Mar. 16, 2021), which is incorporated by reference for all purposes as if fully set forth herein. The intended purpose of the synthetic requests can vary from use case-to-use case. However, opposite to the malicious intent of out-of-band requests triggered by middlemen who hijack application sessions, the synthetic requests are configured to enforce security policies, and thereby thwart data exfiltration and malicious attacks. For example, the synthetic requests can be configured to cause the cloud applications108to provide metadata to the network security system104. In another example, the synthetic requests can be configured to update a security posture of resources (e.g., files) stored at the cloud applications108. More examples follow. In the context of this application, injecting a synthetic request in an application session refers to the network security system104generating the synthetic request during an already established application session and transmitting the synthetic request to the cloud applications108within the context of the already established application session. The synthetic request injection can also include receiving a synthetic response to the synthetic request within the already established application session. Within an application session, multiple synthetic requests can be injected, in parallel or in sequence. The notion of synthetic request injection analogously applies to connections, such that synthetic requests can be injected in an already established connection across multiple application sessions. In some implementations, a synthetic request is constructed using fields, variables, events, and parameters that are part of the original client request (or incoming request). Examples of such fields, variables, events, and parameters include cookie data, fixed headers, custom headers, and other request header fields of the original client request. Synthetic Response A “synthetic response” is an answer that satisfies a corresponding synthetic request issued by the network security system104. 
In preferred implementations, a synthetic request is sent by the network security system104to the cloud applications108, and therefore the corresponding synthetic response is transmitted by the cloud applications108and received by the network security system104. Unlike typical server responses, synthetic responses are not subjected to policy enforcement by the network security system104. The synthetic responses are generated by the cloud applications108and received by the network security system104separately of the server responses that answer the client requests. Since the synthetic requests are network requests generated by the network security system104over a network protocol (e.g., HTTP, HTTPS), the synthetic responses can also be constructed in the same network protocols. Like a typical server response, a synthetic response can include different configurations of a status line, response header fields and values thereof, and content body. Examples of response header fields such as HTTP/HTTPS response header fields that can be found in the synthetic responses include cache-control, connection, accept-ranges, age, ETag, location, proxy-authenticate, retry-after, server, set-cookie, vary, WWW-authenticate, allow, content-disposition, content-encoding, content-language, content-length, content-location, content-MD5, content-range, content-type, expires, IM, link, pragma, preference-applied, public-key-pins, trailer, transfer-encoding, Tk, strict-transport-security, upgrade, X-frame-options, via, warning, and last-modified. Additional examples and information about the response header fields can be found at, e.g., List of HTTP header fields, https://en.wikipedia.org/w/index.php?title=List_of_HTTP_header_fields&oldid=1012071227 (last visited Mar. 16, 2021), which is incorporated by reference for all purposes as if fully set forth herein. Examples of Applicable Communication Protocols The disclosed synthetic request-response mechanism can be implemented using a variety of communication protocols. Communications protocols define the basic patterns of dialogue over a computer network through descriptions of digital and/or analog message formats as well as rules. The Synthetic Request and Synthetic Response can be implemented in the communication protocols capable of constructing request-response messaging patterns, for example, the HTTP and HTTPS protocols. The HTTP (Hypertext Transfer Protocol), HTTPS (HTTP secure) and subsequent revisions such as HTTP/2 and HTTP/3 are the common communication protocols which function as a request-response protocol in the client-server computing model. Protocols alternative to the HTTP and the variants include the GOPHER protocol, which was an earlier content delivery protocol but was displaced by HTTP in the 1990s. Another HTTP alternative is the SPDY protocol, which was developed by Google and superseded by HTTP/2. Other communication protocols which may support applications incorporating the use of the disclosed synthetic request-response mechanism include, but are not limited to, e.g., FTP, FTPS, IMAP, IMAPS, LDAP, LDAPS, POP3, POP3S, RTMP, RTMPS, RTSP, SCP, SFTP, SMTP, SMTPS, and TFTP. The communication protocols used to exchange files between computers on the Internet or a private network and implementable by the disclosed synthetic request-response mechanism include the FTP (File Transfer Protocol), FTPS (File Transfer Protocol Secure) and SFTP (SSH File Transfer Protocol). FTPS is also known as FTP-SSL.
FTP Secure is an extension to the commonly used FTP that adds support for the TLS (Transport Layer Security), and formerly the SSL (Secure Socket Layer). The SSH File Transfer Protocol (i.e., SFTP, also Secure File Transfer Protocol) is an extension of the secure shell (SSH) protocol that provides secure file transfer capabilities and is implementable by the disclosed synthetic request-response mechanism. Another file transfer protocol, the secure copy protocol (SCP), is a means of securely transferring electronic files between a local host and a remote host or between remote hosts and is implementable by the disclosed synthetic request-response mechanism. A client can send (upload) files to a server, optionally including their basic attributes (e.g., permissions, timestamps). A client can also request files or directories from a server (download). Like SFTP, SCP is also based on the Secure Shell (SSH) protocol, such that the application server has already authenticated the client and the identity of the client user is available to the protocol. SCP is, however, outdated and inflexible, such that a more modern protocol like SFTP is recommended for file transfer and is implementable by the disclosed synthetic request-response mechanism. The FTP and the like provide commands which, similar to the HTTP request methods, can be used by the network security system104to transmit the synthetic requests, including ACCT, ADAT, AUTH, CSID, DELE, EPRT, HOST, OPTS, QUIT, REST, SITE, and XSEM. Additional information about the FTP Commands can be found, e.g., at List of FTP commands, https://en.wikipedia.org/wiki/List_of_FTP_commands (last visited Mar. 24, 2021), which is incorporated by reference for all purposes as if fully set forth herein. A simple and lightweight file transfer protocol, the Trivial File Transfer Protocol (TFTP) allows clients to get a file from or put a file onto a remote host, which is typically an embedded device retrieving firmware, configuration, or a system image from a TFTP server during a boot process. In TFTP, a transfer is initiated by the client (tftp), which issues a request to read or write a file on the server. The client request can optionally include a set of parameters proposed by the client to negotiate the transfer. The tftp client supports some commands that vary by platform. A list of tftp commands similar to HTTP request methods such as CONNECT, GET, PUT, QUIT, TRACE can be found at https://www.ibm.com/support/knowledgecenter/ssw_aix_72/t_commands/tftp.html, which is incorporated by reference for all purposes as if fully set forth herein. The communication protocols used for retrieving email (i.e., electronic mail) messages from a mail server include the IMAP (Internet Message Access Protocol), IMAPS (secure IMAP over the TLS or former SSL to cryptographically protect IMAP connections) as well as the earlier POP3 (Post Office Protocol) and the secure variant POP3S. In addition to IMAP and POP3 which are the prevalent standard protocols for retrieving messages, other email protocols implemented for proprietary servers include the SMTP (Simple Mail Transfer Protocol). Like HTTP and FTP protocols, email protocols such as IMAP, POP3 and SMTP are based on the client-server model over a reliable data stream channel, typically a TCP connection.
An email retrieval session such as a SMTP session including 0 or more SMTP transactions consists of commands originated by a SMTP client and corresponding responses from the SMTP server, so that the session is opened, and parameters are exchanged. Like file transfer protocols, email protocols provide commands which, similar to the HTTP request methods, can be used by the network security system104to transmit the synthetic requests. Examples of the text-based commands include HELO, MAIL, RCPT, DATA, NOOP, RSET, SEND, VRFY and QUIT for SMTP protocol, and commonly used commands like USER, PASS, STAT, LIST RETR, DELE, RSET, TOP and QUIT for POP3 protocol. Additional information about email protocol commands can be found at Request for Comments (RFC Standard Track publications from the Internet Society, Internet Engineering Task Force (IETF)), e.g., RFC 2821 https://tools.ietf.org/html/rfc2821 for SMTP Commands; RFC 3501 https://tools.ietf.org/html/rfc3501for IMAP Commands; RFC 1939 https://tools.ietf.org/html/rfc1939 for POP3 (last visited Mar. 24, 2021), which are incorporated by reference for all purposes as if fully set forth herein. Another communication protocol which may support synthetic request-response paradigm is the Lightweight Directory Access Protocol (LDAP) and its secure variant LDAPS (i.e., LDAP over SSL). This communication protocol is an open, vendor neutral, industry standard application protocol for accessing and maintaining distributed directory information services over Internet network. A client starts an LDAP session by connecting to a LDAP server over a TCP/IP connection. The client then sends an operation request to the server which in turn sends a response in return. Analogous to HTTP request methods and FTP commands, a LDAP client may request from server the following operations: Bind, Search, Compare, Add, Delete, Modify, Modify DN, Unbind, Abandon, and Extended. Additional information about the LDAP protocol can be found at https://en.wikipedia.org/wiki/Lightweight_Directory_Access_Protocol, which is incorporated by reference for all purposes as if fully set forth herein. Real-Time Streaming Protocol (RTSP), Real-Time Messaging Protocol (RTMP) and its secure variant RTMPS (RTMP over TLS/SSL) are some proprietary protocols for real-time streaming audio, video and data over the Internet network that are implementable by the disclosed synthetic request-response mechanism. For example, the RTSP protocol is used for establishing and controlling media sessions between two endpoints. Similar in some ways to HTTP, RSTP defines control sequences (referred as commands, requests or protocol directives) useful in controlling multimedia playback. Clients of media server issue RTMP requests, such as PLAY, RECORD and PAUSE to facilitate real-time control of streaming from a client to a server (Voice Recording), while some commands travel from a server to a client (Video on Demand). Some typical HTTP requests, e.g., the OPTIONS request, are also available in RSTP and are implementable by the disclosed synthetic request-response mechanism. Additional information about the RTSP and its commands can be found at https://en.wikipedia.org/wiki/Real_Time_Streaming_Protocol; additional information about the RTMP/RTMPS can be found at https://en.wikipedia.org/wiki/Real-Time_Messaging_Protocol, which are incorporated by reference for all purposes as if fully set forth herein. 
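Across these non-HTTP protocols the request-response pattern is the same: the intermediary emits a protocol command and reads the corresponding reply. A minimal sketch of how an email-protocol command line, for example an IMAP command, is framed is given below; the tag, mailbox, and arguments are illustrative, and a real deployment would use a full protocol client inside an authenticated session:

    # Hypothetical sketch of framing an IMAP command line; an IMAP command is
    # a tagged text line terminated by CRLF, and the server answers with a
    # tagged status line. Tag and arguments here are illustrative only.
    def imap_command(tag, name, *args):
        parts = [tag, name, *args]
        return (" ".join(parts) + "\r\n").encode("ascii")

    print(imap_command("A001", "SELECT", "INBOX"))        # b'A001 SELECT INBOX\r\n'
    print(imap_command("A002", "FETCH", "1", "(FLAGS)"))  # b'A002 FETCH 1 (FLAGS)\r\n'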
In other implementations, the disclosed synthetic request-response mechanism can be implemented in real-time chat/instant messaging (IM) protocols like XMPP (Jabber), YMSG (Yahoo! Messenger), MSNP (Windows Live Messenger), Skype, IRC, etc. The disclosed synthetic request injection can be also implemented in collaborative apps such as Slack, Microsoft Teams, Cisco WebEx that also allow sharing information and messages in a channel for across companies, cross organizational collaboration. Some applications like Slack start out as HTTP/S and then fall into a web socket mode where they treat the connection as a TCP connection and use a non-HTTP/S protocol for communication. Slack, an example collaborative app, offers IRC-style (Internet Relay Chat) features in messaging protocols like XMPP, including persistent chat rooms (channels) organized by topic, private groups, and direct messaging. Content, including files, conversations, and people, is all searchable within Slack. Slack also provides an API for users to create applications and automate process, such as sending automatic notifications based on human input, sending alerts on specified conditions, and automatically creating internal support tickets. Slack is a collaboration hub, after all, designed to bring the right people together with the right information through features like search, teams, shared channels, apps and integration, and so on. Slack teams, one of Slack features, allow communities, groups, or teams to join a “workspace” via a specific URL or invitation sent by an administrator or owner of the team. Slack organizes conversations into dedicated spaces called channels. Slack's public channels, open to everyone in the workspace, allow team members to communicate without the use of email or group SMS (texting). Private channels allow for private conversations between smaller sub-groups. Direct messages allow users to send private messages to specific users rather than the whole group of the workspace. The disclosed synthetic request-response injection can be implemented in such instant messaging protocols and for collaboration apps. Additional information about such instant messaging protocols can be found at Comparison of instant messaging protocols, https://en.wikipedia.org/w/index.php?title=Comparison_of_instant_messaging_protocols&ol did=1013553466 (last visited Apr. 8, 2021), which is incorporated by reference for all purposes as if fully set forth herein. Overcoming Metadata Deficiency Portions of the specification may refer to “metadata-deficient transactions,” “metadata-deficient requests,” “metadata-deficient application sessions,” and “metadata-deficient connections.” Metadata deficiency in a transaction/request/application session/connection is characterized by the absence of target metadata required to make a policy determination, and thereby to enforce a policy. Consider, for example, a policy that requires that only corporate user credentials be used to access the cloud applications108and not private user credentials. When metadata (e.g., request header fields) that typically specify user credentials are missing from the transaction/request/application session/connection, then there exists a metadata deficiency that can be overcome by the disclosed synthetic request injection. The metadata deficiency may result for different reasons under different circumstances. 
Typically, CASB and SWG proxies operate in a passive mode—monitoring network traffic that pass through the proxies to extract metadata and annotate client requests that are proxied (i.e., rerouted, or intercepted). For example, some cloud applications support multiple login instances, for example, a corporate instance to access Google Drive™ like “[email protected]” and a private instance to access Google Drive™ like “[email protected].” In such cases, the proxies may want to annotate the client transactions, e.g., HTTP/HTTPS transactions, to Google Drive™ with a user instance ID used to initiate the client transactions. The proxies may be configured to persist the user instance ID for reporting purposes or to apply policies. In order to determine the user instance ID, the proxies need to process a login transaction that contains the user instance ID. If such a login transaction does not bypass the proxies and is intercepted, the proxies then need to persist the state of the login transaction, i.e., store the user instance ID extracted from the login transaction and build a mapping from the cookie and URL parameters set for that login with the instance information. Circumstances arise when metadata like the user instance ID is not accessible to the proxies. For example, the proxies may have missed the login transaction that establishes the metadata mapping. This happens, for example, when the clients102are already logged into the cloud applications108prior to rerouting of an application session to the proxies. As a result, the proxies do not capture the login transaction. Subsequent transactions, which follow the login transaction and are captured by the proxies, are not useful because they do not contain the required metadata. In another circumstance, some cloud applications, such as native mobile applications, the transaction that establishes the metadata mapping is sent once or very infrequently, and therefore sometimes missed by the proxies. The disclosed synthetic request injection enables the proxies to separately retrieve the otherwise missing metadata directly from the cloud applications108on an ad hoc basis. The proxies no longer need to be dependent on the metadata mapping transactions as the sole source of metadata. This makes the proxies self-sufficient and greatly expands their policy enforcement horizon. Actions Beyond obtaining metadata information, the disclosed synthetic request injection can also execute actions on the cloud applications108, for example, on an ad hoc basis. For example, the synthetic requests can be used by the proxies to perform actions against the cloud applications108using the original transaction's authority. In the case of inline CASBs, this can be used to implement real-time enforcement of actions without a prior authorization or a prior access grant, as required with out-of-band API CASBs. This also allows the inline CASBs to inject policy actions for unsanctioned applications for which the CASBs lack API connectors. The disclosed synthetic request injection can also execute actions on resources (e.g., objects, files, computing instances) of the cloud applications108. For example, the synthetic requests can retrieve objects from the cloud applications108. The synthetic requests can change security configuration of the objects on the cloud applications108. 
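A minimal sketch of one such action, tightening an object's share settings with a synthetic request, is shown below. The endpoint, payload fields, and token handling are illustrative assumptions rather than a specific vendor API:

    # Hypothetical sketch of a synthetic request that changes an object's
    # sharing configuration; endpoint and field names are illustrative only.
    import json
    import urllib.request

    access_token = "token-observed-in-the-proxied-session"   # placeholder value
    payload = json.dumps({"sharing": "internal_only"}).encode("utf-8")

    synthetic = urllib.request.Request(
        "https://cloudapp.example.com/api/v1/files/12345/permissions",  # hypothetical
        data=payload,
        method="PUT",
        headers={
            "Authorization": "Bearer " + access_token,
            "Content-Type": "application/json",
        },
    )
    print(synthetic.get_method(), synthetic.full_url, synthetic.data)
    # urllib.request.urlopen(synthetic) would apply the change in the live session.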
The synthetic requests can be used to modify the security posture of the objects, i.e., change the security configuration of the objects, either after uploading the objects to the cloud applications108, or after downloading the objects from the cloud applications108. For example, the synthetic requests can change the share settings of an object from “sharing allowed” to “sharing not allowed,” or from “sharing allowed externally” to “sharing allowed only internally.” The synthetic requests can move an object from one location to another location in the cloud applications108, or from one cloud application to another cloud application, for example, when there is an active session with another cloud application. The information generated/retrieved by the synthetic requests can be used to block transmission of the objects to and from the cloud applications108. The synthetic requests can encrypt the objects before or after the objects arrive at the cloud applications108. The synthetic requests can quarantine the objects before or after the objects arrive at the cloud applications108. The synthetic requests can extract metadata from another request or transaction, for example, to determine the activity being performed, to determine the user instance ID being used, and to determine the sensitivity tag of an object. The synthetic requests can also run inline DLP checks on the objects to determine their sensitivity in real-time, and responsively execute security actions like blocking, allowing, encrypting, quarantining, coaching, and seeking justification based on the determined sensitivity. In some implementations, transmission (or flow) of objects to or from cloud applications can be controlled/modulated (e.g., blocked) when the synthetic request/s is/are used to determine that the object being manipulated (e.g., being downloaded, moved, versioned etc.) is sensitive based on the retrieved sensitivity metadata and that the account-type from which the manipulation was initiated or attempted is an uncontrolled or private account (e.g., non-corporation instance) based on the retrieved login metadata. This way, a combination of login metadata and sensitivity metadata retrieved by use of one or more synthetic requests can be used for policy enforcement. Generally speaking, metadata of different types/formats/creation dates/creation sources/storage origins, retrieved by one or more synthetic requests or retrieved from one or more sources by one or more synthetic requests, can be analyzed in the aggregate or as a combination to make a policy enforcement decision on one or more objects. The disclosed synthetic request injection can also in turn cause the cloud applications108to execute actions. For example, the synthetic requests can cause the cloud applications108to crawl objects residing in the cloud applications108and generate an inventory of the objects and associated metadata (e.g., an audit of share settings, collaboration networks, user assignments, and sensitivity statuses of the objects). The inventory can then be provided to the network security system104by the corresponding synthetic response. The network security system104can then use the inventory for policy enforcement. Consider, for example, the Box™ storage cloud application which provides an administrative API called the Box Content API™ to provide visibility into an organization's accounts of its users. The synthetic requests can poll the administrative API to discover any changes made to any of the accounts. 
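One simplified way to poll such an administrative API is sketched below in Python; the endpoint path, query parameters, and event fields are assumptions rather than the Box Content API's actual interface.

    import time
    import requests

    def handle_account_change(event):
        # Placeholder: e.g., update the inventory or trigger policy enforcement.
        print("account change:", event)

    # Hypothetical polling loop against an administrative events endpoint.
    def poll_admin_events(api_base, admin_token, interval_seconds=60):
        cursor = None
        while True:
            resp = requests.get(
                f"{api_base}/admin/events",                  # assumed endpoint
                headers={"Authorization": f"Bearer {admin_token}"},
                params={"after": cursor} if cursor else None,
                timeout=10,
            )
            resp.raise_for_status()
            payload = resp.json()
            for event in payload.get("events", []):
                handle_account_change(event)
            cursor = payload.get("next_cursor", cursor)      # assumed field
            time.sleep(interval_seconds)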
Alternatively, the synthetic requests can register with the administrative API to inform the network security system104of any significant events. For example, the synthetic requests can use Microsoft Office365 Webhooks API™ to learn when a file has been shared externally. In other implementations, the disclosed synthetic request-response mechanism can interface with user APIs, in addition to/instead of administrative APIs. Retrieving metadata and executing actions are some examples of target network security objectives that can be achieved using the disclosed synthetic request injection. A person skilled in the art will appreciate that the application of the disclosed concept of configuring a network intermediary or middleware like the network security system104with network request-response (or request-reply) mechanism and methods to self-generate requests to satisfy a cloud security requirement may vary from use case-to-use case, architecture-to-architecture, and domain-to-domain. The request-response mechanism and methods can be implemented in various network protocols for inter-process communications like FTP, FTPS, GOPHER, HTTP, HTTPS, IMAP, IMAPS, LDAP, LDAPS, POP3, POP3S, RTMP, RTMPS, RTSP, SCP, SFTP, SMTP, SMTPS, SPDY and TFTP. Endpoint Devices An “unmanaged device” is referred to as a Bring Your Own Device (BYOD) and/or an off-network device whose traffic is not being tunneled through the network security system104. The network security system104analyzes the incoming traffic to determine whether the cloud application transactions are made within the confines of a corporate network and/or from a device with a security agent or security profile installed. A device can be classified as an unmanaged device or as a managed device based on certain device characteristics collected by an endpoint routing agent (ERC). Depending on the type of device, the ERC can be a virtual private network (VPN) such as VPN on demand or per-app-VPN that use certificate-based authentication. For example, for iOS™ devices, it can be a per-app-VPN or can be a set of domain-based VPN profiles. For Android™ devices, it can be a cloud director mobile app. For Windows™ devices, it can be a per-app-VPN or can be a set of domain-based VPN profiles. The ERC can also be an agent that is downloaded using email or silently installed using mass deployment tools like ConfigMgr™, Altris™, and Jamfr™. The ERC collects device information such as registry key, active directory (AD) membership, presence of a process, operating system type, presence of a file, AD domain, encryption check, OPSWAT check, media access control (MAC) address, IMEI number, and device serial number. Based on the collected device information, the ERC classifies the device as unmanaged or managed. Additional or different categories can be used to classify a device such as a semi-managed device category or an unknown device category. For additional information regarding how the network security system104determines whether the incoming traffic is routed from a managed device or an unmanaged device, reference can be made to, for example, commonly owned U.S. patent application Ser. Nos. 14/198,499; 14/198,508; 14/835,640; 14/835,632; and 62/307,305, which are incorporated by reference for all purposes as if fully set forth herein. Portions of the specification may make distinctions between two types of endpoint devices used by users to access the cloud applications108. 
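Before turning to that distinction, the classification performed by the ERC described above can be illustrated with a minimal Python sketch; the attribute names and the rules are hypothetical, since actual deployments use the collected device information and the categories configured by the organization.

    # Hypothetical classification of an endpoint from ERC-collected attributes.
    def classify_device(device_info):
        managed_signals = [
            device_info.get("ad_domain") == "corp.example.com",  # assumed domain
            device_info.get("registry_key_present", False),
            device_info.get("disk_encrypted", False),
            device_info.get("security_agent_running", False),
        ]
        if all(managed_signals):
            return "managed"
        if any(managed_signals):
            return "semi-managed"
        return "unmanaged"

    # Example: a device with no corporate signals is classified as unmanaged.
    print(classify_device({"ad_domain": "home.local"}))  # "unmanaged"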
The primary distinction is between the mechanisms for coupling the endpoint devices to the network security system104. In relation to endpoint devices, the term “computer” will refer to more open systems where the network security system104can more directly install software and modify the networking stack. Similarly, in relation to endpoint devices, the terms “mobile” or “tablet” will refer to more closed systems where the network security system options for modifying the network stack are more limited. This terminology mirrors the situation today where computer-endpoint devices running Mac OS X, Windows desktop versions, Android, and/or Linux can be more easily modified than mobile or tablet devices running iOS, and/or Windows Mobile. Thus, the terminology refers to how third-party operating system vendor limitations are addressed to provide access to the network security system as opposed to a fundamental technical difference between the types of endpoint devices. Further, if mobile OS vendors open their systems further, it is likely that the distinction could be eliminated with more classes of endpoint devices. Additionally, it can be the case that certain server computers and other computing devices within an organization can have the client installed to cover machine-to-machine communications. A closely related point is that some clients interface with the network security system104differently. The browser add-on clients or PAC (proxy auto-configuration) files, for example, redirect the browsers to an explicit proxy. Only the traffic needed to apply the policy to is rerouted and it is done so within the application. The traffic arriving at the network security system104can have the user identity embedded in the data or within the secure tunnel headers, e.g., additional headers or SSL client side certificates. Other clients redirect select network traffic through transparent proxies. For these connections, some traffic beyond exactly those requests needed by the policy can be routed to the network security system104. Further, the user identity information is generally not within the data itself, but rather established by the client in setting up a secure tunnel to the network security system104. The interconnection between the clients102, the network security system104, and the cloud applications108will now be described. A public network couples the clients102, the network security system104, and the cloud applications108, all in communication with each other. The actual communication path can be point-to-point over public and/or private networks. Some items, such as the ERC, might be delivered indirectly, e.g., via an application store. The communications can occur over a variety of networks, e.g., private networks, VPN, MPLS circuit, or Internet, and can use appropriate application programming interfaces (APIs) and data interchange formats, e.g., Representational State Transfer (REST), JavaScript Object Notation (JSON), Extensible Markup Language (XML), Simple Object Access Protocol (SOAP), Java Message Service (JMS), and/or Java Platform Module System. All of the communications can be encrypted. 
The communication is generally over a network such as a LAN (local area network), WAN (wide area network), telephone network (Public Switched Telephone Network (PSTN), Session Initiation Protocol (SIP)), wireless network, point-to-point network, star network, token ring network, hub network, or the Internet, inclusive of the mobile Internet, via protocols such as EDGE, 3G, 4G LTE, Wi-Fi, and WiMAX. Additionally, a variety of authorization and authentication techniques, such as username/password, Open Authorization (OAuth), Kerberos, SecureID, digital certificates and more, can be used to secure the communications. Policy The term "policy," sometimes also referred to as a policy definition or policy data or content policy, refers to a machine-readable representation of flow control and content control requirements for the cloud applications 108. Typically, a policy is defined by one or more administrators at a corporation, or other entity, and is enforced upon users within that corporation, or entity. It is possible for individuals to define policies for their own usage that are enforced upon them; however, corporate usage is the more common case. It is also possible for a policy to be enforced on visitors or customers of a cloud application, e.g., where a corporation hosts or subscribes to a cloud application and requires visiting customers, users, or employees to adhere to the policy for use. Of particular note is that the policies considered herein are capable of being sensitive to the semantics of a cloud application, which is to say a policy can differentiate between logging in to a cloud application and, say, editing documents on the cloud application. Context is important for understanding usage; for an entity, the collection of dozens or hundreds of individual policies (e.g., log bulk downloads, prohibit editing documents on the service, only allow bulk downloads for users who are in the "Vice President" group) is referred to singularly as one policy or one policy definition. Thus, a system supporting multiple entities will generally have one policy per entity, each made up of dozens or hundreds of individual flow control and content control policies. Similarly, the policy that is transferred to individual computers can be a subset of a full corporate policy, e.g., solely a machine-readable representation of the URLs of interest, as opposed to the full policy specification for each URL describing the flow control and/or content manipulations. A "multi-part policy" refers to a policy that specifies triggering of at least one security action when at least one condition about the transaction is met. A multi-part policy applies to a single transaction, but at least one policy condition of the multi-part policy requires evaluation of data or metadata not available in the single transaction. Also, a multi-part policy applies to a single transaction, but at least one policy condition of the multi-part policy requires evaluation of data or metadata available in an external data or metadata store. Further, a multi-part policy applies to a single transaction, but at least one policy condition of the multi-part policy requires evaluation of data or metadata generated by an external engine. A multi-part policy applies in real-time during active analysis, but at least one policy condition of the multi-part policy requires evaluation of data or metadata collected in deferred time or during non-real-time inspection.
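By way of non-limiting illustration, one possible machine-readable form of such a multi-part policy, together with a simplified evaluation routine, is sketched below in Python; the field names and the condition format are assumptions rather than a required policy syntax.

    # Hypothetical multi-part policy: block an upload when an external anomaly
    # detection engine reports that the user is at risk.
    multi_part_policy = {
        "name": "block-upload-by-risky-user",
        "activity": "upload",
        "conditions": [
            # Evaluable from the transaction itself.
            {"source": "transaction", "field": "activity", "op": "eq", "value": "upload"},
            # Requires metadata from an external engine or store (the multi-part aspect).
            {"source": "external", "field": "user_risk_score", "op": "gt", "value": 0.8},
        ],
        "action": "block",
    }

    def evaluate(policy, transaction, external_metadata):
        ops = {"eq": lambda a, b: a == b, "gt": lambda a, b: a > b}
        scopes = {"transaction": transaction, "external": external_metadata}
        for cond in policy["conditions"]:
            actual = scopes[cond["source"]].get(cond["field"])
            if actual is None or not ops[cond["op"]](actual, cond["value"]):
                return "allow"          # a condition is unmet; take no action
        return policy["action"]         # all conditions met; trigger the action

    # Example: the external risk score pushes the decision to "block".
    print(evaluate(multi_part_policy, {"activity": "upload"}, {"user_risk_score": 0.9}))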
Examples of multi-part policies include “prevent user from uploading/downloading, if user is at risk as indicated by anomaly detection,” “prevent sharing of a content, if the content is sensitive,” “prevent download of a file to a device, if the device is at risk as indicated by a malware detection engine,” “prevent deletion of a virtual machine, if the virtual machine is a critical server,” and others. Metadata Retrieval FIG.1shows one implementation of the network security system104issuing a synthetic request during an application session of a cloud application to retrieve metadata that is otherwise missing from the application session. InFIG.1, a client issues an authentication request122to log into a cloud application. The authentication request122provides a metadata mapping. The metadata mapping specifies a login instance (e.g., an email identified by “from-user” or “instance-id” information) used by the client to access the cloud application. The authentication request122also includes an authentication token that the client uses to access the cloud application. Successful authentication138results in the establishment114of an application session144. As illustrated inFIG.1, the authentication request122bypasses the network security system104. The bypass can be due to a variety of reasons, some of which are discussed above. More importantly, the bypass results in the network security system104not able to capture the metadata mapping. This presents itself as a technical problem because certain security policies are based on the “from-user” or “instance-id” information. Eventually though, the client is rerouted to the network security system104, by, for example, an endpoint routing client (ERC), such as a browser add-in or an agent running on the client. When this happens, from there onwards, the application session144is intermediated by the network security system104and subsequent requests from the client are intercepted by the network security system104. In other implementations, the application session144comes under the ambit of the network security system104when an incoming request152is sent by the client towards the cloud application and rerouted to the network security system104for policy enforcement. Most commonly, the incoming request152is a client request of a communication protocol, e.g., HTTP/HTTPS client request that attempts to execute an application activity transaction on the cloud application. The communication protocols such as HTTP, HTTPS, FTP, IMAP, SNMP and SNTP define the basic patterns of dialogue which support request-response messaging patterns and commonly operate in an application layer. Upon receiving the incoming request152during the application session144, the network security system104determines154whether it has access to metadata required to enforce a security policy on the incoming request152. In one implementation, this determination is made by inspecting a transaction header (e.g., HTTP and HTTPS headers) of the incoming request152and probing whether certain fields and variables supply the required metadata. In another implementation, this determination is made by looking up a metadata mapping store (e.g., a Redis in-memory cache) and inquiring whether there already exists a metadata mapping associated with an application session identifier (“app-session-ID”) of the application session144. 
In the scenario illustrated inFIG.1, both these evaluations would reveal that the required metadata is missing because the authentication request122that provided the metadata mapping was never captured by the network security system104. Accordingly, when the network security system104determines that it does not have access to the required metadata for the policy enforcement, it holds164the incoming request152and does not transmit it to the cloud application. Then, the network security system104generates a synthetic request168and injects the synthetic request168into the application session144to transmit the synthetic request168to the cloud application. The synthetic request168is configured to retrieve the missing metadata from the cloud application by inducing an application server of the cloud application to generate a response that includes the missing metadata. The network security system104configures the synthetic request168with the authentication token supplied by the incoming request152so that the synthetic request168can access the cloud application. The network security system104then receives a synthetic response176to the synthetic request168from the cloud application. The synthetic response176supplies the missing metadata178to the network security system104. The network security system104then uses the metadata178for metadata-based policy enforcement184. For example, if the metadata178specifies that the login instance was from a controlled account (e.g., a corporate email), then the network security system104releases the incoming request152and transmits186it to the cloud application. In contrast, if the metadata178specifies that the login instance was from an uncontrolled account (e.g., a private email), then the network security system104blocks the incoming request152and does not transmit it to the cloud application, or, in other implementations, alerts the end user that a policy enforcement has prevented the activity from being completed. The discussion now turns to some example implementations of how, in different scenarios, the network security system104constructs the synthetic requests and retrieves metadata from the synthetic responses. A person skilled in the art will appreciate that the disclosed synthetic request injection is not limited to these example implementations. There may exist, now and in future, other ways of implementing the disclosed synthetic request injection. This disclosure may not explicitly enumerate these other ways. Still, these other ways are within the scope of this disclosure because the intended purpose of the disclosed synthetic request injection is to enforce and improve network security. Variants of this intended purpose may be realized in different ways in different network architectures without deviating from the disclosed concept of configuring a network intermediary or middleware like the network security system104with network request-response (request-reply) mechanism and methods to self-generate requests to achieve a variety of network security objectives. Synthetic Listener Mode FIG.2depicts a synthetic listener mode of injecting a synthetic request in an application session of a cloud application and extracting metadata from a corresponding synthetic response in accordance with one implementation of the technology disclosed. 
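Before describing the listener mode in detail, the end-to-end flow of FIG. 1 (hold the incoming request, inject a synthetic request with the original authentication token, extract the metadata from the synthetic response, and enforce the policy) can be condensed into the following self-contained Python sketch; the function arguments, the endpoint, and the enforcement rule are hypothetical.

    # Condensed sketch of the FIG. 1 flow (all names are illustrative).
    def handle_incoming_request(incoming, session_metadata, send_synthetic, enforce):
        # incoming: dict with an "auth_token"; send_synthetic: callable that
        # transmits the injected request and returns a metadata dict.
        if "instance_id" not in session_metadata:
            # Hold the incoming request and retrieve the missing metadata.
            synthetic = {"method": "GET",
                         "path": "/account/session",             # assumed endpoint
                         "auth_token": incoming["auth_token"]}    # reuse authority
            session_metadata.update(send_synthetic(synthetic))
        if enforce(incoming, session_metadata) == "allow":
            return "transmit"   # release the held request to the cloud application
        return "block"          # or alert the end user of the policy action

    # Example: a corporate login instance is allowed, a private one is blocked.
    decision = handle_incoming_request(
        {"auth_token": "token-abc"},
        {},
        send_synthetic=lambda req: {"instance_id": "user@corp.example.com"},
        enforce=lambda req, md: "allow"
            if md["instance_id"].endswith("corp.example.com") else "block",
    )
    print(decision)  # "transmit"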
In some implementations of the synthetic listener mode, the network security system104uses application-specific parsers (or connectors) and synthetic templates to construct the synthetic requests and extract the metadata-of-interest from the synthetic responses. FIG.2shows many instances104A-N of the network security system104. A data flow logic202, running to the network security system104, injects the incoming request152to a processing path of a particular instance104C of the network security system104. In one implementation, the particular instance104C of the network security system104passes the incoming request152to a service thread204. The service thread204holds the incoming request152and instructs a client URL (cURL) utility thread206to initiate a cURL request (operation1). Additional information about the cURL requests can be found at, e.g., The Art Of Scripting HTTP Requests Using Curl, https://curl.se/docs/httpscripting.html (last visited Mar. 17, 2021) and CURL, https://en.wikipedia.org/w/index.php?title=CURL&oldid=1002535730 (last visited Mar. 18, 2021), which are incorporated by reference for all purposes as if fully set forth herein. The cURL utility thread206injects the cURL request into a synthetic listener mode214(operation2). The synthetic listener mode214selects an application-specific parser222that is specific to the cloud application targeted by the incoming request152(e.g., by identifying the cloud application as a resource from a URL parameter or an HTTP request header field of the incoming request152). The application-specific parser222specifies match conditions that are specific to request and response syntaxes defined for a particular application programming interface (API) of the cloud application. The application-specific parser222implements a DAPII (Deep Application Programming Interface Inspection), e.g., HTTP/DAPII request processing236, which uses a synthetic template to determine whether the metadata-of-interest is missing from the application session144. When the metadata-of-interest is found to be missing, the synthetic template constructs a synthetic request using headers, fields, values, and parameters defined with syntax that elicits the metadata-of-interest from an application server248of the cloud application. At operation3, the constructed synthetic request is sent to the application server248. At operation4, a synthetic response from the application server248is routed to the synthetic listener mode214. In some implementations, the synthetic response is processed by a service thread (not shown). The application-specific parser222implements an HTTP/DAPII response processing254to extract the metadata-of-interest from the synthetic response. The extracted metadata (e.g., the “from-user” or “instance-id” information) is stored in a metadata store264(at operation5) for use in policy enforcement. At operation6, a cURL response is sent back to the cURL utility thread206, which in turn sends back a response to the service thread204(at operation7). The service thread204then releases the held incoming request152. Synthetic Templates As discussed above, the synthetic requests are constructed by synthetic templates of application-specific parsers, according to one implementation of the technology disclosed. A particular application-specific parser of a cloud application can have a set of synthetic templates. 
Respective synthetic templates in the set of synthetic templates can correspond to respective activities, for example, one synthetic template for upload activities, synthetic template for download activities, and yet another synthetic template for share activities. The set of synthetic templates can include a default synthetic template for generic activities (e.g., logins, log-outs). The set of synthetic templates can also include a specialized synthetic template for native apps running on endpoints (e.g., mobiles, computers). FIG.3shows one implementation of a processing path that generates synthetic requests using application-specific parsers and synthetic templates. InFIG.3, a parser selection logic312selects a particular application-specific parser322C from a plurality of application-specific parsers322A-N. The particular application-specific parser322C is specific to the cloud application targeted by the incoming request152. The parser selection logic312determines that the incoming request152is directed to the cloud application because a resource of the cloud application is specified by a URL parameter or a request header field of the incoming request152of a communication protocol (e.g., FTP, FTPS, GOPHER, HTTP, HTTPS, IMAP, IMAPS, LDAP, LDAPS, POP3, POP3S, RTMP, RTMPS, RTSP, SCP, SFTP, SMTP, SMTPS, SPDY and TFTP). The parser selection logic312then invokes the particular application-specific parser322C. In some implementations, when no application-specific parser is available for a given cloud application, the parser selection logic312can select a universal parser that, for example, applies to an entire category of cloud applications (e.g., social media sites, messenger apps, blogs). A template selection logic332selects a particular synthetic template from a set of synthetic templates 0-N of the particular application-specific parser322C. In one implementation, the template selection logic332uses if-then-else rules and match conditions to select the particular synthetic template. For example, if the metadata-of-interest is available, then the synthetic listener mode is exited and normal processing of the incoming request152resumed. If the metadata-of-interest is missing and the activity is a generic activity (e.g., a login event), then a default synthetic template is used to construct the synthetic request. If the metadata-of-interest is missing and the activity is a particular activity (e.g., an upload event), then a specialized synthetic template that is specific to the particular activity is used to construct the synthetic request. If the metadata-of-interest is missing and the cloud application is a native app, then a specialized synthetic template that is specific to native apps is used to construct the synthetic request. Also, in some implementations, the synthetic templates are defined as JSON files. Once the particular synthetic template of the particular application-specific parser322C is selected, a metadata detection logic342of the particular synthetic template can determine whether relevant fields, variables, events, or parameters of the incoming request152or a metadata-mapping store contain the metadata-of-interest. If not, a synthetic request generation logic352of the particular synthetic template can construct the synthetic request by configuring those request header fields that elicit a response from the application server of the cloud application which supplies the metadata-of-interest. 
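A minimal Python sketch of this selection and construction step follows; the template contents, header names, and cookie handling are assumptions used only to illustrate the mechanism.

    # Hypothetical synthetic templates keyed by activity (contents are illustrative).
    TEMPLATES = {
        "default": {"method": "GET", "path": "/session/info"},
        "upload":  {"method": "GET", "path": "/files/pending"},
        "native":  {"method": "GET", "path": "/native/session"},
    }

    def select_template(activity, is_native_app):
        # If-then-else selection: native apps get a specialized template, known
        # activities get their own template, everything else gets the default.
        if is_native_app:
            return TEMPLATES["native"]
        return TEMPLATES.get(activity, TEMPLATES["default"])

    def build_synthetic_request(host, activity, is_native_app, incoming_cookies):
        template = select_template(activity, is_native_app)
        return {
            "method": template["method"],
            "url": f"https://{host}{template['path']}",
            "headers": {
                "User-Agent": "synthetic-injector/1.0",          # fixed header
                # Reuse the session cookies carried by the held incoming request so
                # the synthetic request runs with the original transaction's authority.
                "Cookie": "; ".join(f"{k}={v}" for k, v in incoming_cookies.items()),
            },
        }

    # Example: construct a synthetic request for an upload activity.
    print(build_synthetic_request("app.example.com", "upload", False, {"SID": "abc"}))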
In some implementations, the synthetic request generation logic 352 uses some of the fields, variables, events, or parameters that are part of the incoming request 152 to construct the synthetic request. A synthetic response parsing logic 362 of the particular synthetic template can parse response header fields and the content body of the synthetic response to extract the metadata-of-interest. Alternatively, the metadata extraction can be done directly by the particular application-specific parser 322C, without using the particular synthetic template. Processing Flow FIG. 4 shows one implementation of a processing flow that generates an example synthetic request. The steps illustrated in FIG. 4 can be executed on the server-side or the client-side. At step 402, a client transaction requesting interaction with a cloud application is intercepted. At step 412, an application-specific parser specific to the cloud application is invoked. At step 422, the application-specific parser processes the client transaction and determines an activity being performed by the client transaction. At step 432, a particular synthetic template is selected that is specific to the cloud application and the determined activity. At step 442, the particular synthetic template determines that the metadata-of-interest required to process the client transaction is missing. At step 452, the particular synthetic template is used to generate and inject a synthetic request 462. The synthetic request 462 uses a method, e.g., HTTP GET method 454, has a URL 456, has a fixed header 464 like user-agent, and has a custom header 466 with cookie data 468 like SID, SSID, HSID, and OSID. At step 472, a synthetic response is received to the synthetic request 462. At step 482, the metadata-of-interest 492 is retrieved from the synthetic response. At step 494, the retrieved metadata 492 is used for policy enforcement on the client transaction. Object-Based Storage Services Cloud-based storage, which has resources that store data, e.g., documents, is one of the fundamental technologies, alongside cloud compute and networking, required in a cloud environment. Cloud storage thus can also be referred to as cloud-based data stores or cloud-based resources in the context of this application. Cloud storage services provide users functionality to persist and operate on data in the cloud, and that functionality is the target of DLP policies in the context of this application, e.g., logging in, editing documents, downloading bulk data, reading customer contact information, entering payables, and deleting documents. Cloud storage services 108 can be a network service or application, or can be web-based (e.g., accessed via a URL) or native, such as sync clients. Examples of cloud applications which provide cloud storage services today include applications such as BOX™, DROPBOX™, GOOGLE DRIVE™, SALESFORCE.COM™, and MICROSOFT ONEDRIVE 365™, platforms such as APPLE ICLOUD DRIVE™, ORACLE ON DEMAND™, and ALIBABA CLOUD™, and infrastructure such as AMAZON AWS™, GOOGLE CLOUD PLATFORM™ (GCP), and MICROSOFT AZURE™. In FIG. 21, cloud applications 108 offer infrastructure-as-a-service (IaaS) that provides cloud-based computation, storage, and other functionality that enable organizations and individuals to deploy applications and services on an on-demand basis and at commodity prices. Consider three example cloud applications as IaaS: AMAZON WEB SERVICES™ (AWS) 108A, GOOGLE CLOUD PLATFORM™ (GCP) 108B, and MICROSOFT AZURE™ 108N with cloud storage services.
GCP as an example provides three main storage services for different types of storage: Persistent Disk™ for block storage, Filestore™ for network file storage, and Cloud Storage™ for object storage. However, it is understood that environment 100 can include any number of various cloud applications 108, and is not limited to these. Block storage and file storage are the traditional storage types. Block storage operates at a lower level—the raw storage device level—and manages data as a set of numbered, fixed-size blocks. File storage operates at a higher level—the operating system level—and manages data as a named hierarchy of files and folders. Block and file storage are often accessed over a network in the form of a Storage Area Network (SAN) for block storage, using protocols such as iSCSI or Fibre Channel, or as a Network Attached Storage (NAS) file server or "filer" for file storage, using protocols such as Common Internet File System (CIFS) or Network File System (NFS). Whether directly-attached or network-attached, block or file, this kind of storage is very closely associated with the server and the operating system that is using the storage. Object storage is a non-traditional data storage architecture for large stores of unstructured data. It designates each piece of data as an object, keeps it in a separate repository, and bundles it with metadata and a unique identifier for easy access and retrieval (see https://cloud.google.com/learn/what-is-object-storage). Object storage offers a range of benefits for managing unstructured data that does not fit easily into traditional databases. Unstructured data includes email, videos, photos, webpages, audio files, sensor data, and other types of web content. Common use cases include using object storage as a persistent data store for building or migrating to cloud-native applications, storing large amounts of any data type and performing big data analytics, managing machine-to-machine data efficiently, and reducing costs for storing and globally distributing rich media. To accommodate a variety of potential use cases, cloud storage services offer different storage choices, examples of which include memory, message queues, storage area network (SAN), direct-attached storage (DAS), network attached storage (NAS), databases, and backup and archive. Each of these storage options differs in performance, durability, and cost, as well as in their interfaces. Combinations of storage options form a hierarchy of data storage tiers. In FIG. 21, AMAZON WEB SERVICES™ (AWS) 108A offers multiple cloud-based storage tiers. Each tier has a unique combination of performance, durability, availability, cost, and interface, as well as other characteristics such as file systems and APIs. AWS 108A also offers an on-demand cloud computing platform called ELASTIC COMPUTE CLOUD™ (EC2), which allows AWS consumers to create and run compute instances on AWS 108A. EC2 instances use familiar operating systems like Linux, Windows, or OpenSolaris. Consumers can select an instance type based on the amount and type of memory and computing power needed for the application or software they plan to run on the EC2 instance. The different AWS 128A storage tiers are made accessible through EC2.
Examples of AWS 128A cloud storage services accessible via EC2 are Amazon SIMPLE STORAGE SERVICE™ (S3) (scalable storage in the cloud), AMAZON GLACIER™ (low-cost archive storage in the cloud), Amazon ELASTIC BLOCK STORAGE™ (EBS) (persistent block storage volumes for Amazon EC2 virtual machines), Amazon EC2 INSTANCE STORAGE™ (temporary block storage volumes for Amazon EC2 virtual machines), Amazon ELASTICACHE™ (in-memory caching service), AWS IMPORT/EXPORT™ (large volume data transfer), AWS STORAGE GATEWAY™ (on-premises connector to cloud storage), Amazon CLOUDFRONT™ (global content delivery network (CDN)), Amazon SQS™ (message queue service), Amazon RDS™ (managed relational database server for MySQL, Oracle, and Microsoft SQL Server), Amazon DYNAMODB™ (fast, predictable, highly-scalable NoSQL data store), Amazon REDSHIFT™ (fast, powerful, fully-managed, petabyte-scale data warehouse service), and databases on Amazon EC2™ (self-managed database on an Amazon EC2 instance). For additional information about different storage options and services offered by AWS cloud storage service 128A, reference can be made to J. Baron and S. Kotecha, "Storage options in the AWS cloud," Amazon Web Services, Washington D.C., Tech. Rep., October 2013, which is incorporated by reference for all purposes as if fully set forth herein. In FIG. 21, five example AWS 128A storage tiers are illustrated as blocks 141-145, i.e., volatile storage tier 141, solid-state drive (SSD) instance storage tier 142, rotating disk instance storage tier 143, reliable non-volatile storage tier 144, and highly reliable non-volatile storage tier 145. Volatile storage tier 141 represents the in-memory storage of an EC2 instance, such as file caches, object caches, in-memory databases, and random access memory (RAM) disks. Volatile storage tier 141 has a first native file system that is an in-memory file system suitable for providing rapid access to data. Examples of the first native file system are Apache Ignite™ and the temporary file storage facility (tmpfs). Volatile storage tier 141 improves the performance of cloud-based applications by allowing data retrieval from fast, managed, in-memory caches, instead of slower disk-based databases. Although volatile storage tier 141 is the fastest storage tier, it has the least durability and reliability of 99.9% (three nines), making it suitable for temporary storage such as scratch disks, buffers, queues, and caches. EC2 local instance store volumes, Amazon SQS™, and Amazon ElastiCache™ (Memcached or Redis) are some examples of AWS 128A offerings under the volatile storage tier 141. AWS 128A offers ephemeral storage, called instance storage, that is physically attached to an EC2 instance. The ephemeral storage uses either rotating disks or solid-state drives (SSDs). SSD volumes can be non-volatile memory express (NVMe) based or SATA based. Ephemeral storage can also be configured as a redundant array of independent disks (RAID) to improve performance. The illustrated SSD instance storage tier 142 is implemented as AWS ephemeral storage that uses SSDs as a storage medium and provides temporary block storage for an EC2 instance. This tier comprises a preconfigured and pre-attached block of disk storage on the same physical server that hosts the EC2 instance. SSD instance storage tier 142 has a fourth native file system that is very fast and typically best for sequential access.
SSD instance storage tier, to accommodate a variety of potential use cases, is optimized for high sequential input/output (I/O) performance across very large datasets. Example applications include NoSQL databases like Cassandra™ and MongoDB™, data warehouses, Hadoop™ storage nodes, seismic analysis, and cluster file systems. Rotating disk instance storage tier143is implemented as AWS ephemeral storage that uses hard disk drives (HDDs) as a storage medium and has a fifth native file system. Throughput-Optimized HDD™ and Cold HDD™ are examples of HDD volume type storage offered by AWS128A. Throughput-Optimized HDD™ volumes are low-cost HDD volumes designed for frequent-access, throughput-intensive workloads such as big data, data warehouses, and log processing. These volumes are significantly less expensive than SSD volumes. Cold HDD™ volumes are designed for less frequently accessed workloads such as colder data requiring fewer scans per day. Cold HDD™ volumes are significantly less expensive than Throughput-Optimized HDD™ volumes. Reliable non-volatile storage tier144is implemented as AWS Elastic Block Store™ (EBS) with a second native file system. This implementation provides block level storage volumes for use with EC2 instances. This implementation provides EBS volumes that are off-instance, network-attached storage (NAS) persisting separately from the running life of an EC2 instance. After an EBS volume is mounted to an EC2 instance, it can be used as a physical hard drive, typically by formatting it with the native file system of choice and using the file I/O interface provided by the EC2 instance operating system. There is no AWS data APIs for EBS. Instead, EBS presents a block-device interface to the EC2 instance. That is, to the EC2 instance, an EBS volume appears just like a local disk drive. To write to and read data from reliable non-volatile storage tier244, the native file system I/O interfaces of the chosen operating system are used. Highly reliable non-volatile storage tier145depicts an example AWS Amazon Simple Storage Service™ (S3) with a third native file system. This tier provides object storage with a web service interface to store and retrieve huge amounts of data at very low costs and high latency. It delivers the highest level of rated durability of 99.999999999% (eleven nines), approximately. Amazon S3 provides standards-based REST and SOAP web services APIs for both management and data operations. These APIs allow access of S3 objects (files) to be stored in uniquely-named buckets (top-level folders). Buckets are a simple flat folder with no file system hierarchy. Each object can have a unique object key (file name) that serves as an identifier for the object within that bucket. The third native file system of S3 is an object-based file system that operates on the whole object at once, instead of incrementally updating portions of the objects. The third native file system uses a PUT command to write objects into S3, a GET command to read objects from S3, a DELETE command to delete objects, a POST command to add objects using HyperText Markup Language (HTML) forms, and a HEAD command to return an object's metadata but not the data itself. In other implementations, a file system hierarchy (e.g., folder1/folder2/file) can also be emulated in S3 by creating object key names that correspond to the full path name of each file. FIG.21also shows four examples of Google Cloud Platform™ (GCP)128B storage tiers as blocks151-154. 
This includes volatile storage tier 151, reliable non-volatile storage tier 152 with a first storage medium, reliable non-volatile storage tier 153 with a second storage medium, and highly reliable non-volatile storage tier 154. GCP 128B allows consumers to create scalable virtual machines. Each virtual machine has access to memory in volatile storage tier 151 hosting a first native file system. The reliable non-volatile storage tier 152 offers persistent storage of data on a first storage medium (e.g., NVMe SSDs). This storage tier hosts a second native file system. The reliable non-volatile storage tier 153 also hosts the second native file system but offers persistent storage of data on a second storage medium (Seq. HDD). The highly reliable non-volatile storage tier 154 is an object store hosting a third native file system. FIG. 21 further illustrates three example Microsoft Azure™ (Azure) 128C storage tiers as blocks 161-163, i.e., volatile storage tier 161, reliable non-volatile storage tier 162, and highly reliable non-volatile storage tier 163. For online transactional processing (OLTP), online analytical processing (OLAP), and hybrid transaction/analytical processing (HTAP), Azure 128C allows consumers to optimize performance using in-memory storage of volatile storage tier 161 that hosts a first native file system. The reliable non-volatile storage tier 162 provides persistent storage of data using a block storage scheme and hosts a second native file system. The highly reliable non-volatile storage tier 163 provides object storage by storing data as blobs inside containers and hosts a third native file system. Recent developments in cloud-based object storage enable it to store any amount of data on the cloud with rich functionalities and optimized performance. Cloud service providers like AMAZON WEB SERVICES™ (AWS), GOOGLE CLOUD PLATFORM™ (GCP), and MICROSOFT AZURE™ offering infrastructure as a service (IaaS) all provide object storage solutions for the cloud. For example, Amazon Simple Storage Service™ (S3) is depicted in highly reliable non-volatile storage tier 145 in FIG. 1 with a third native file system. This AWS tier provides object storage with a web service interface to store and retrieve large amounts of data. Google Cloud Platform™ also has highly reliable non-volatile storage tier 154 in FIG. 1, which is an object store hosting a third native file system. The highly reliable non-volatile storage tier 163 provided by Microsoft Azure™ offers object storage by storing data as blobs inside containers and hosts a third native file system. Amazon S3, for example, is durable and scalable cloud object storage that is optimized for reads and is built with an intentionally minimalistic feature set. It provides a simple and robust abstraction for file storage that frees one from many underlying details that one normally deals with in traditional storage. Instead of being closely associated with a server, Amazon S3 storage is separate from a server and is accessed over the Internet. Instead of managing data as blocks or files using SCSI, CIFS, or NFS protocols, data is managed as objects using an Application Program Interface (API) built on standard HTTP verbs. Common use cases for Amazon S3 storage include backup and archive for on-premises or cloud data; content, media, and software storage and distribution; big data analytics; static website hosting; cloud-native mobile and Internet application hosting; and disaster recovery.
To support these use cases and many more, Amazon S3 offers a range of storage classes designed for various generic use cases: general purpose, infrequent access, and archive. To help manage data through its lifecycle, Amazon S3 offers configurable lifecycle policies. By using lifecycle policies, one can have their data automatically migrate to the most appropriate storage class, without modifying their application code. In order to control who has access to their data, Amazon S3 provides a rich set of permissions, access controls, and encryption options. With Amazon S3, one does not have to worry about device or file system storage limits and capacity planning—a single bucket can store an unlimited number of files. One also does not need to worry about data durability or replication across availability zones—Amazon S3 objects are automatically replicated on multiple devices in multiple facilities within a region. The same with scalability—if their request rate grows steadily, Amazon S3 automatically partitions buckets to support very high request rates and simultaneous access by many clients. Each Amazon S3 object consists of data (the file itself) and metadata (data about the file). The data portion of an Amazon S3 object is opaque to Amazon S3. This means that an object's data is treated as simply a stream of bytes—Amazon S3 does not know or care what type of data one is storing, and the service doesn't act differently for text data versus binary data. The metadata associated with an Amazon S3 object is a set of name/value pairs that describe the object. There are two types of metadata: system metadata and user metadata. System metadata is created and used by Amazon S3 itself, and it includes things like the date last modified, object size, MD5 digest, and HTTP Content-Type. User metadata is optional, and it can be specified at the time an object is created. One can use custom metadata to tag their data with attributes that are meaningful. Objects are stored in containers called buckets, and each object is identified by a unique user-specified key (filename) that serves as an identifier for the object within that bucket. An object can store virtually any kind of data in any format. Objects can range in size from 0 bytes up to 5 TB, and a single bucket can store an unlimited number of objects. This means that Amazon S3 can store a virtually unlimited amount of data. Buckets are generally used for organizing objects in Amazon S3. A bucket is associated with an AWS account that is responsible for storing and retrieving data on the bucket. The account, which owns the bucket, is charged for data transfer. Buckets are a simple flat folder (top-level folders) with no file system hierarchy, i.e., one cannot have a sub-bucket within a bucket. Buckets form the top-level namespace for Amazon S3, and bucket names are global. This means that their bucket names must be unique across all AWS accounts, much like Domain Name System (DNS) domain names, not just within their own account. Bucket names can contain up to 63 lowercase letters, numbers, hyphens, and periods. One can create and use multiple buckets; one can have up to 100 per account by default. It is a best practice to use bucket names that contain their domain name and conform to the rules for DNS names. This ensures that their bucket names are their own, can be used in all regions, and can host static websites. Buckets play a vital role in access control and pave the way for creating usage reports on S3. 
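As a brief illustration of object data and user metadata, the following Python sketch uses the boto3 client to store an object with custom metadata and then read the metadata back without downloading the data; the bucket name, key, and metadata values are hypothetical.

    import boto3

    s3 = boto3.client("s3")

    # Store an object together with user metadata (name/value pairs).
    s3.put_object(
        Bucket="example-corp-reports",                 # hypothetical bucket
        Key="2021/q1/summary.pdf",                     # key emulating a path
        Body=b"...",                                   # the opaque object data
        Metadata={"sensitivity": "confidential",       # user metadata
                  "owner": "finance-team"},
    )

    # HEAD returns the object's metadata but not the data itself.
    head = s3.head_object(Bucket="example-corp-reports", Key="2021/q1/summary.pdf")
    print(head["ContentLength"], head["Metadata"].get("sensitivity"))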
Even though the namespace for Amazon S3 buckets is global, each Amazon S3 bucket is created in a specific region that one chooses. This lets one control where their data is stored. One can create and use buckets that are located close to a particular set of end users or customers in order to minimize latency, located in a particular region to satisfy data locality and sovereignty concerns, or located far away from their primary facilities in order to satisfy disaster recovery and compliance needs. One can control the location of their data; data in an Amazon S3 bucket is stored in that region unless one explicitly copies it to another bucket located in a different region. Buckets and objects are also used in other cloud storage services such as MICROSOFT AZURE™, GOOGLE CLOUD PLATFORM™, and ALIBABA CLOUD STORAGE™. For example, Microsoft Azure provides software as a service (SaaS), platform as a service (PaaS), and infrastructure as a service (IaaS) like Amazon AWS. In the context of MICROSOFT AZURE™, the buckets correspond to blobs. Azure Blob storage, Microsoft's object storage solution for the cloud, is an object-based storage service made up of containers and objects. Containers are similar to prefixes in the world of Amazon S3. Azure Blob storage is optimized for storing massive amounts of unstructured data that does not adhere to a particular data model or definition (e.g., text or binary data). Users or client applications can access objects in Blob storage via HTTP/HTTPS from anywhere on the web. Objects in Azure Blob storage are accessible via the Azure Blob storage REST API, Azure PowerShell, Azure CLI, or an Azure Blob client library (see https://docs.microsoft.com/en-us/azure/storage/blobs/storage-blobs-introduction). GOOGLE CLOUD PLATFORM™ (GCP) is a suite of cloud computing services offered by Google as a part of Google Cloud™. Alongside a set of management tools, GCP provides a series of modular cloud services including computing, data storage, data analytics, and machine learning. Google's Cloud Storage™ is Google's object storage solution for the cloud, providing a service for storing your objects in Google Cloud™. An object is an immutable piece of data consisting of a file of any format. Objects are stored in containers called buckets. Buckets are the basic containers that hold data in Google's Cloud Storage™ (GCS). Buckets are used to organize your data and control access to your data. Everything that you store in GCS must be contained in a bucket. Unlike directories and folders, buckets cannot be nested. All buckets are associated with a project, which consists of a set of users, a set of APIs, billing, authentication, and monitoring settings for those APIs. While there is no limit to the number of buckets you can have in a project or location, there are limits to the rate at which you can create or delete buckets. You can have one project or multiple projects, and you can group your projects under an organization. A bucket in Google's Cloud Platform™, like buckets of Amazon S3, is created with a globally-unique name and a geographic location where the bucket and its contents are stored. Bucket names have more restrictions than object names and must be globally unique, because every bucket resides in a single Google Cloud Storage™ namespace. Also, bucket names can be used with a CNAME redirect, which means they need to conform to DNS naming conventions.
The name and location of the bucket cannot be changed after creation, though you can delete and re-create the bucket to achieve a similar result. There are also optional bucket settings that can be configured during bucket creation and be changed later. Bucket labels are key:value metadata pairs that allow you to group your buckets along with other Google Cloud™ resources such as virtual machine instances and persistent disks. For example, you can use labels (metadata pairs) to create a team key that has values alpha, beta, and delta, and apply the team:alpha, team:beta, and team:delta labels to different buckets in order to indicate which team is associated with those buckets. You can apply multiple labels to each bucket, with a maximum of 64 labels per bucket. As is generally the case for bucket metadata, bucket labels are not associated with individual objects or object metadata (see https://cloud.google.com/storage/docs/key-terms). Objects are the individual pieces of data stored in Google's Cloud Storage™ (GCS). There is no limit on the number of objects that can be created in a bucket. Objects have two components: object data and object metadata. Object data is typically a file that you want to store in GCS. Object metadata is a collection of name-value pairs that describe various object qualities. An object's name is treated as a piece of object metadata in GCS. Object names can contain any combination of Unicode characters (UTF-8 encoded), must be less than 1024 bytes in length, and must be unique within a bucket. Google's Cloud Storage™ (GCS) uses a flat namespace to store objects, which means that GCS sees all objects in a given bucket as separate with no hierarchical relationship. A common convention is to use the slash (/) character in object names as if the objects were stored in a virtual hierarchy. For example, you could name one object /europe/france/paris.jpg and another object /europe/france/cannes.jpg. When you list these objects, they appear to be in a hierarchical directory structure under the folders europe and france, though the objects are stored separately with no hierarchical relationship whatsoever. An object in GCS can have different versions: by default, when you overwrite an object, GCS deletes the old version and replaces it with a new version. Each object version is uniquely identified by its generation number, found in the object's metadata. When object versioning has created an older version of an object, you can use the generation number to refer to the older version. This allows you to restore an overwritten object in your bucket, or permanently delete older object versions that you no longer need. Generation numbers are also used when you include preconditions in your requests. A resource is an entity within Google Cloud™. Each storage entity like a project, bucket, and object in Google Cloud™ is a resource, as are compute entities such as compute engine instances. Each resource has a unique name that identifies it, much like a filename. Buckets have a resource name in the form of projects/_/buckets/[BUCKET_NAME], where [BUCKET_NAME] is the ID of the bucket. Objects have a resource name in the form of projects/_/buckets/[BUCKET_NAME]/objects/[OBJECT_NAME], where [OBJECT_NAME] is the ID of the object. In the context of GOOGLE CLOUD PLATFORM™ 108B as shown in FIG. 22, a resource can be a folder 2212 under the organization 2202 on or of a cloud storage service, a project 2222 on or of a cloud storage service, and/or a resource 2232 on or of a cloud storage service.
Furthermore, a resource can be a computing instance 2242, 2262 on or of a cloud storage service, a topic 2252 on or of a cloud storage service, a service 2272 on or of a cloud storage service, a bucket 2282, 2292 on or of a cloud storage service, and/or an object within a bucket 282, 392. Azure Blob™ storage offers three types of resources: the storage account, a container in the storage account, and a blob in a container. FIG. 23 is a diagram that shows the relationship between these resources. In the context of MICROSOFT AZURE™ 108C, a resource can be an account 2302 on or of a cloud storage service, a container 2312 on or of a cloud storage service, and/or a blob 2322 on or of a cloud storage service. A storage account provides a unique namespace in Azure for data. Every object stored in Azure Blob storage has an address that includes the unique account name. The combination of the account name and the Azure blob endpoint forms the base address for the objects in the storage account. For example, if your storage account is named mystorageaccount, then the default endpoint for Blob storage is http://mystorageaccount.blob.core.windows.net. A container organizes a set of blobs, similar to a directory in a file system. A storage account can include an unlimited number of containers, and a container can store an unlimited number of blobs. Azure Blob storage supports different types of blobs: page, block, and append. In simplest terms, block blobs store text and binary data and are made up of blocks of data that can be managed individually. Append blobs are made up of blocks like block blobs, but are optimized for append operations. Append blobs are ideal for scenarios such as logging data from virtual machines. Page blobs are used to house the virtual hard drive (VHD) files and serve as disks for Azure virtual machines. Blob storage provides programmatic access for the creation of containers and objects within the storage account. Blob storage inherits the availability and durability of the storage account it resides in. Blob storage is priced by storage consumption, data transfer, and various operations. The maximum size for an individual object is 4.7 TB for block and 8 TB for page. The maximum throughput for a single blob is 60 MB/s. FIG. 24 also illustrates one implementation of a storage hierarchy 2402 of Amazon S3 with an account 2412, a bucket 2422, and an object 2432. Also shown is a storage hierarchy 2404 of Azure with a subscription 2414, a storage account 2424, a blob container 2434, a block blob 2444, and a block 2454. All of the above and their constituent components and subcomponents can be considered a resource in the context of this application (see https://cloud.google.com/storage/docs/introduction). Object Metadata FIG. 12 shows one implementation of using synthetic requests to retrieve object metadata from cloud applications. In FIG. 12, an incoming request 1204 is intercepted by the network security system 104. The incoming request 1204 includes an object identifier 1206 of a target object residing in the cloud applications. The network security system 104 detects 1208 the object identifier 1206 from the incoming request 1204 (e.g., from the HTTP request header). The network security system 104 then configures a synthetic request with the object identifier 1206 and issues the synthetic request to the cloud application. The synthetic request is configured to retrieve object metadata 1214 about the target object from the cloud application using the object identifier 1206.
Examples of the object metadata1214include object name, object size, object type, and object sensitivity. Then, a synthetic response1212is received by the network security system104. The synthetic response1212supplies the object metadata1214to the network security system104. Then, the network security system104uses the supplied object metadata1214for policy enforcement1284on the held incoming request1204. In some implementations, the network security system104releases the flow1298to transmit the incoming request1204to the cloud application. In other implementations, the incoming request1204is blocked and not transmitted to the cloud application, preventing the incoming request1204from, for example, sharing sensitive data with unauthorized users. FIG.13shows one implementation of a succeeding synthetic request being issued to a client to convey object metadata generated by a preceding synthetic request. InFIG.13, the object metadata1214, generated by the preceding synthetic request1210, is sent by the network security system104to the client using a succeeding synthetic request1302. One example of the object metadata1214conveyed to the client by the succeeding synthetic request1302is a notification about completion of a transaction like an upload or a download. The notification serves as a confirmation, for example, via a GUI that the requested transaction was successful. FIG.14shows one implementation of using the synthetic requests to retrieve objects from the cloud applications108. In such an implementation, the synthetic response1212supplies the target object1402to the network security system104from the cloud application. Also, the policy enforcement1412is then performed on the object1402itself, for example, by running DLP checks on the object1402for sensitivity determination, in addition to or instead of being performed on the held incoming request1204. In some implementations, based on the results of the policy enforcement1412, the network security system104releases the flow1422to transmit the incoming request1204to the cloud application. In other implementations, the incoming request1204is blocked and not transmitted to the cloud application. Sensitive Metadata Data loss prevention (DLP) solutions provide capabilities to classify sensitive data in cloud apps, generally detecting sensitive data in documents and preventing unauthorized access, saving, or sharing of the sensitive data. Enterprise security teams spend an enormous amount of time honing DLP for data protection and for reducing false positives. Endpoint DLP disclosed in earlier relevant patent applications has the potential to tackle security issues such as exfiltration of sensitive data that resides on cloud applications but is "in-use" at the endpoints which have access to that sensitive data. Protecting such in-use sensitive data can be achieved by enforcing security policy by a Network Security System (NSS), which is interposed between client endpoints and cloud-based services, monitoring data movement between endpoints and cloud applications and among the applications. For example, cloud application admins can prohibit sensitive data from being shared with unauthorized users or moved to uncontrolled locations. To this end, the first action taken by data loss prevention (DLP) is to identify sensitive data that is mandated by central policies for protection of cloud-based services (e.g., data at rest).
DLP is a very resource intensive process, e.g., string evaluation for detecting sensitive data is computationally expensive, taking up extensive memory and CPU resources. As much of the collaboration among workers has moved to the cloud, a vast majority of documents are stored there. DLP can utilize the extensive CPU and memory resources of the cloud to perform the heavy lifting of centralized sensitivity classification for stored files. The sensitivity classification can be stored as and identified by the sensitivity metadata of the objects which have the files as data. The sensitivity classifications can also be stored in a centralized metadata store on a server. Such a centralized process for sensitivity classification is commonly referred to as the "content sensitivity scan" on the server side. The content sensitivity scan can also be applied to endpoint system memory to identify sensitive data in use by clients on endpoints. Regardless of the location where the data may be present, a content sensitivity scan demands deep inspection of the document(s) of the content and produces a sensitivity classification by subjecting the document(s) to content analysis techniques like language-aware data identifier inspection, document fingerprinting, file type detection, keyword search, pattern matching, proximity search, regular expression lookup, exact data matching, metadata extraction, and language-agnostic double-byte character inspection. Content-based analysis is computationally intensive and time consuming. Not all endpoints and Network Security Systems (NSS) have sufficient computational resources to perform content analysis ad hoc, which also impacts user experience. In such cases, the Network Security System (NSS) can enforce security policy by relying on previously generated sensitivity metadata, rather than by performing the computationally intensive and time consuming content sensitivity scan on the data in traffic. As used herein, phrases such as "previously generated", "proactively generated", "generated in advance of", and "generated prior to" refer to the sensitivity metadata being generated ahead of time in anticipation of its use in responding to the client request. For example, sensitivity metadata can be generated when the document is: (i) first stored on mass storage media like network mounted file servers and cloud storage services (known as data at rest), (ii) first transmitted over the network (known as data in motion), or (iii) first created by a user on the endpoint (known as data in use). Sensitivity metadata can be retrieved from a cloud-based metadata store populated by the content sensitivity scan. The cloud-based metadata store can be a distributed in-memory cache such as Amazon ElastiCache. Sensitivity metadata can also be retrieved from an on-premise metadata store. Alternatively, the sensitivity classification can be retrieved from the sensitivity metadata of objects residing in object-based storage services. Group Membership Metadata This application uses the terms "information" and "metadata" interchangeably, in some implementations. Many cloud applications such as apps, bots, and other integrations can work with channels and groups that have members from multiple workspaces and organizations. Collaborative apps such as Slack, Microsoft Teams, Cisco WebEx, Zoom, Google Meet, and so on, provide a workspace for a group of users to share information and messages for collaboration across companies and organizations.
Slack's shared channels, for example, allow users to connect across Slack workspaces, creating a direct line of communication between the user and users from another company or organization for collaboration. For example, one Slack feature called Slack Connect allows channel members to invite people from other companies to join the same Slack channel. For collaboration hubs like or similar to Slack, there exists a security issue for the cloud application: ensuring that the right information is shared with the right people. The security enforcement needs to evaluate possible data loss or exfiltration if the channel includes members who are unauthorized to be exposed to sensitive data in accordance with applicable security policy (e.g., members outside the organization's account on the cloud application). In some implementations, the disclosed synthetic request injection can be used for data loss prevention (DLP) in the context of sharing sensitive content with a user group that is a group of users managed as a unit by cloud service providers. A user group can be, e.g., a group of IAM users on AWS, a group of users on a network directory, a group of users sharing a folder on a cloud application, a public/private/shared channel having a group of members, a team having a group of members, or a social media group. A user group may include a mix of authorized and unauthorized users in accordance with security policy settings configured by subscribers of cloud service providers. In one implementation involving making posts to a user group, when the network security system104intercepts an incoming request that attempts to make a post (or notification) to a particular group, the network security system104can first analyze the request to determine2108whether the attempted post contains sensitive data (e.g., a sensitive object or link thereto, or sensitive text or images). FIG.25shows one implementation of using synthetic requests to retrieve group membership metadata for enforcing a security policy restricting users from sharing sensitive content with a user group that can be accessed by members outside the organization. InFIG.25, the network security system104intercepts an incoming request that is directed toward a user group hosted on a cloud application by an organization. The network security system104detects that the incoming request attempts to share content of a resource with members of the user group. The synthetic requests are used to retrieve, in a corresponding synthetic response2520, metadata about profiles of respective users of the particular user group and to determine whether the user group is accessible to users outside the organization. In one implementation, such a determination can be based on information about whether any members of the user group have non-corporate/private/uncontrolled emails or other login instances. In another implementation, the synthetic request can retrieve the group membership metadata by probing the data of the channel or conversation where the members thereof communicate and collaborate with each other. The network security system104then enforces one or more security policies (or DLP policies) based on an analysis of the retrieved group membership metadata. This can include executing a security action like blocking the posting request2598when it is determined that the group includes unauthorized users in accordance with a predefined policy2584.
The network security system104can block the client request to prevent posting of the sensitive content of the resource to the particular group2598, thereby preventing unauthorized users from having access to the sensitive content. If that is not the case (e.g., all the members in the particular group are authorized users), then following the predefined policy2184, the network security system104can allow the request and fulfill the posting of the sensitive content on the particular group2568. Other security actions include seeking user justification, notifying the user of the sensitive nature of the attempted post, encrypting the sensitive content, quarantining sensitive content, or coaching the user on the security policies. FIG.25also shows one implementation of determining whether the resource the request attempts to share with that particular user group is sensitive by using synthetic requests to retrieve sensitivity metadata of the resource hosted by the cloud application. The network security system104can separately determine the sensitivity of the resource to be shared by subjecting the content of the resource to content analysis techniques2554(e.g., string evaluation, sub-string inspection, or language-aware data identifier inspection). The generated sensitivity classification can then be encoded in sensitivity metadata stored in a cloud-based metadata store for DLP policy enforcement in the future. In other implementations, the sensitivity of the content may be inferred from a previously generated sensitivity classification stored as, and identified by, already available sensitivity metadata, without running a DLP analysis ad hoc. In one example, the sensitivity metadata can be retrieved from a cloud-based metadata store populated by inspection services. In another example, sensitivity metadata can be retrieved from the cloud application if the post content resides therein (i.e., data at rest). In another example, the sensitivity data can be available in the local metadata store at the endpoint if the post content is transmitted from the endpoint (i.e., data in motion or data in use). A person skilled in the art will appreciate that the scale of the content can range from a document to an entire networked storage media. A person skilled in the art will also appreciate that the type and form of the post content can range from text, images, and source code to audio or video, and so on. Upon determining that the content of the attempted post is sensitive, the network security system104then issues a synthetic request2510to retrieve metadata about the membership of the group of users. In one example, the group membership metadata can be probed to determine whether the group contains any members that are not supposed to have access to the sensitive content as mandated by the security policy of the cloud application. In one implementation, the synthetic request injection can determine the group membership based on the login instances used by the members to participate in the particular channel or conversation (e.g., corporate vs. personal emails). In another implementation, the group membership metadata can be determined by members' personal profiles maintained by an application server of the cloud application. In another implementation, the group membership can be determined by the metadata of a channel or a workspace, which is the dedicated space for the group of users to collaborate seamlessly.
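As a hedged illustration of the channel-metadata probing described above, the following Python sketch shows how a synthetic request might query a Slack-style conversations.info endpoint and inspect the externally-shared flag returned for the channel (the same flag appears in the response excerpt discussed below). The channel ID and token are hypothetical placeholders, and the fail-closed behavior is an assumption rather than a requirement of the disclosed implementation.

import requests

def group_is_externally_shared(channel_id, bearer_token):
    # Synthetic request: retrieve channel metadata from a Slack-style conversations API.
    resp = requests.get(
        "https://slack.com/api/conversations.info",
        params={"channel": channel_id},
        headers={"Authorization": f"Bearer {bearer_token}"},
        timeout=5,
    )
    data = resp.json()
    if not data.get("ok"):
        # Fail closed: if membership metadata cannot be retrieved, treat the
        # group as if it were accessible outside the organization.
        return True
    # is_ext_shared indicates the channel is shared with a remote organization.
    return bool(data.get("channel", {}).get("is_ext_shared"))

A policy engine could then block the held posting request when the content is sensitive and this check returns True, consistent with the enforcement flow of FIG.25.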
Collaborative apps such as Slack, Microsoft Teams, Cisco WebEx, and other integrations work with channels and workspaces for collaboration among a group of users that have members from across companies and organizations. In one example, the Slack app can detect when a channel has members from multiple workspaces or organizations. The following is an example of data from the Slack API that, or parts of which, can be retrieved by the disclosed synthetic request injection. In the example data, the "is_ext_shared": false field below is a flag indicating that this Slack channel is not shared externally with a remote organization. A synthetic request injection can be used to fetch the "false" value of the key-value pair, causing the network security system104to determine that the channel does not have any members who are external users of a remote organization: {"ok": true, "channel": {"id": "C012AB3CD", "name": "general", "is_channel": true, "is_group": false, "is_im": false, "created": 1449252889, "creator": "W012A3BCD", "is_archived": false, "is_general": true, "unlinked": 0, "name_normalized": "general", "is_read_only": false, "is_shared": false, "parent_conversation": null, "is_ext_shared": false, "is_org_shared": false, "pending_shared": [ ], "is_pending_ext_shared": false, "is_member": true, "is_private": false, "is_mpim": false, "last_read": "1502126650.228446", "topic": {"value": "For public discussion of generalities", "creator": "W012A3BCD", "last_set": 1449709364}, "purpose": {"value": "This part of the workspace is for fun. Make fun here.", "creator": "W012A3BCD", "last_set": 1449709364}, "previous_names": ["specifics", "abstractions", "etc"], "locale": "en-US"}} Additional information about the Slack API regarding conversations can be found at conversations.info, https://api.slack.com/methods/conversations.info (last visited Apr. 9, 2021), which is incorporated by reference for all purposes as if fully set forth herein. This notion applies analogously to using the synthetic request-response mechanism in place of the APIs of other cloud applications like Facebook Messenger, Zoom, and Google Chat. Bucket-Level Ownership Metadata An opportunity arises to make an improved DLP solution self-sufficient in separately retrieving metadata from cloud service providers that host some of their resources in object storage, while adhering to the demanding intermediation protocols of the cloud service providers. Improved security posture and reduced risk of data loss, exposure, and exfiltration across multi-cloud environments may result. All of the above-mentioned object storage entities, e.g., projects, containers, buckets, blobs, blocks, and objects, and their constituent components and subcomponents, are considered cloud-based resources, each of which has a unique name to identify it. Generally, a cloud-based resource can be identified by its name, uniform resource identifier (URI), uniform resource locator (URL), domain name, directory address, IP address, key, unique DNS-compliant name, region name, or any other identifier, alone or in combination. For object storage entities, the resource name is much like a file name in a single global namespace of the cloud service provider, i.e., the resource name within the cloud domain is a unique file path on the cloud. A resource's unique resource name coupled with the cloud service provider's base URI constitutes, in effect, a unique file path on the cloud. Such a unique URL is referred to as a cloud destination in the context of this application.
For example, buckets form the top-level namespace for Amazon S3, and bucket names are global within the AWS realm. One example of the resource identification is as follows: https://packtpub.s3.amazonaws.com/books/acda-guide.pdf. In this example, packtpub is the name of the S3 bucket and books/acda-guide.pdf is the key. When the resource being logged is an S3 bucket, the resource name includes "packtpub" as an entry. In Microsoft Azure™, a data container such as the storage account name provides a unique namespace in Azure for data. Every object stored in Azure Blob storage has an address that includes the unique account name. In one example, Azure's blob can be identified as follows:
The resource URL syntax assigns each resource a corresponding base URI, which refers to the resource itself.
For the storage account, the base URI includes the name of the account only: https://myaccount.blob.core.windows.net.
For a container, the base URI includes the name of the account and the name of the container: https://myaccount.blob.core.windows.net/mycontainer.
For a blob, the base URI includes the name of the account, the name of the container, and the name of the blob: https://myaccount.blob.core.windows.net/mycontainer/myblob.
A storage account may have a root container, a default container that can be omitted from the URI. A blob in the root container can be referenced without naming the container, or the root container can be explicitly referenced by its name ($root). The following URIs both refer to a blob in the root container: https://myaccount.blob.core.windows.net/myblob, https://myaccount.blob.core.windows.net/$root/myblob.
A snapshot is a read-only version of a blob stored as it was at the time the snapshot was created. You can use snapshots to create a backup or checkpoint of a blob. A snapshot blob name includes the base blob URI plus a date-time value that indicates when the snapshot was created. For example, assume that a blob has the following URI: https://myaccount.blob.core.windows.net/mycontainer/myblob. The URI for a snapshot of that blob is formed as follows: https://myaccount.blob.core.windows.net/mycontainer/myblob?snapshot=<DateTime>.
Each resource provided by Google Cloud Platform™ has a unique resource name as the resource's identifier. Buckets of Google Cloud™ have a resource name in the form of projects/_/buckets/[BUCKET_NAME], where [BUCKET_NAME] is the ID of the bucket. Objects have a resource name in the form of projects/_/buckets/[BUCKET_NAME]/objects/[OBJECT_NAME], where [OBJECT_NAME] is the ID of the object. An example of a full resource name format for a GCP storage bucket is //storage.googleapis.com/projects/_/buckets/[BUCKET_NAME], which is the unique bucket name prefixed with Google Cloud Storage's base URI //storage.googleapis.com. A #[NUMBER] appended to the end of the resource name indicates a specific generation of the object. #0 is a special identifier for the most recent version of an object. #0 is useful to add when the name of the object ends in a string that would otherwise be interpreted as a generation number. Thus, in one implementation, a resource is a bucket on or of a cloud storage service. In another implementation, a resource is a container on or of a cloud storage service. In another implementation, a resource is a project on or of a cloud storage service. In another implementation, a resource is a blob on or of a cloud storage service. In other implementations, a resource is an object on or of a cloud storage service.
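To illustrate the naming conventions just described, the following minimal Python sketch composes cloud destinations from a provider base URI and a resource name. The example names are taken from this section and serve as placeholders only.

def s3_destination(bucket, key):
    # e.g., s3_destination("packtpub", "books/acda-guide.pdf")
    #       -> "https://packtpub.s3.amazonaws.com/books/acda-guide.pdf"
    return f"https://{bucket}.s3.amazonaws.com/{key}"

def azure_blob_destination(account, container, blob):
    # e.g., azure_blob_destination("myaccount", "mycontainer", "myblob")
    #       -> "https://myaccount.blob.core.windows.net/mycontainer/myblob"
    return f"https://{account}.blob.core.windows.net/{container}/{blob}"

def gcs_bucket_resource_name(bucket):
    # e.g., gcs_bucket_resource_name("my-bucket-name")
    #       -> "//storage.googleapis.com/projects/_/buckets/my-bucket-name"
    return f"//storage.googleapis.com/projects/_/buckets/{bucket}"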
All the above resources can have respective unique resource names/file paths/URLs as destinations on the cloud. Bucket-level resources refer to resources which host a collection of objects or blocks of objects, such as Amazon S3 buckets, GCP buckets, and Azure blobs. As used herein, a "resource-level transaction" refers to an operation or action on the data itself that causes manipulation of data and data objects in a cloud-based resource by merely referencing the cloud-based resource. Some example transactions include copying, moving, or syncing a cloud-based resource from a source location to a destination location by merely naming the cloud-based resource. Another example includes copying, moving, or syncing a cloud-based resource from a source location to a destination location by merely referencing the cloud-based resource, via, e.g., a link or hyperlink of the cloud resource destination or URL. Bucket-level resources, in the context of this application, refer to, e.g., Amazon S3 buckets, GCP buckets, and Azure blobs that are high-level logical constructs within which data is assembled and organized in object-based storage. Bucket-level resources, e.g., Amazon S3 buckets, form the top-level namespace for Amazon S3, and bucket names are global across all AWS accounts. A bucket-level resource name combined with the base URI (uniform resource identifier) of the cloud service can form a unique cloud destination, like a uniform resource locator (URL), specifying the resource in the cloud. Resource-level transactions manipulate data stored in object-based storage without identifying data stored in the resources. This is because, for example, the data portion of an Amazon S3 object is opaque to Amazon S3. This means that an object's data is treated as simply a stream of bytes which range in size from 0 bytes up to 5 TB. Amazon S3 does not know or care what type of data is stored in the object, and the transactions don't act differently for text data versus binary data. For example, one can use a "cp" or "sync" command in AWS to move an S3 bucket from a corporate organization account to another organization account without identifying the objects or data of the S3 bucket. Additional details about the "cp" command for AWS can be found here: https://docs.aws.amazon.com/cli/latest/reference/s3/cp.html, which is incorporated herein by reference. Details about GCP's "cp" command can be found here: https://cloud.google.com/storage/docs/gsutil/commands/cp, which is incorporated herein by reference. In other implementations, Google Cloud Transfer Service can be used for data transfer, additional details about which can be found here: https://cloud.google.com/storage-transfer/docs/how-to, which is incorporated herein by reference. The following are resource-level transaction examples that use an AWS CLI (Command Line Interface) command to copy S3 buckets across AWS accounts:
aws s3 sync s3://SOURCE-BUCKET-NAME s3://DESTINATION-BUCKET-NAME --source-region SOURCE-REGION-NAME --region DESTINATION-REGION-NAME
aws s3 sync s3://sourcebucket s3://destinationbucket
The technical problem here is that when such resource-level transactions cause data objects to be propagated out of an organization's account, the transactions themselves do not contain any content on which data loss prevention (DLP) analysis can operate to prevent data egress that manipulates data on a cloud storage service against security policy.
As a result, such transactions are not detected by a DLP engine, which is configured to look for sensitive content in network traffic to and from the cloud storage services. In some implementations, the synthetic request injection mechanism can be used for data loss prevention (DLP) in the context of moving sensitive content from a controlled location to an uncontrolled location in the cloud. The technology disclosed teaches a network security system (NSS) using synthetic requests to retrieve resource-level metadata from cloud applications to prevent exfiltration of sensitive data resulting from user-made resource-level transactions (user-made transactions operating on the resources of a cloud application). Resource-level metadata refers to the properties, tags, or labels configured and stored as metadata associated with the respective cloud resource. Most resource-level metadata is created and configured at the time the resource is created or launched. Additional tags, like user-created tags, can be added during the lifecycle of the resource. Example resource-level metadata are the metadata associated with a cloud resource such as a bucket or object in object-based storage services. Object metadata of a GCP object is a collection of name-value pairs that describe various object qualities. In like fashion, metadata associated with an Amazon S3 object is a set of name/value pairs that describe the object. Buckets, the objects' containers, also have properties which are configured when they are created. The properties of an Amazon S3 bucket include settings for versioning, tags, default encryption, logging, notification, and more. For example, a tag is a key-value pair that represents a label assigned to the bucket. Default encryption is another example of resource-level metadata, which provides AWS consumers with automatic server-side encryption. Buckets created in Google's Cloud Storage™ have associated metadata which identifies properties of the bucket and specifies how the bucket should be handled when it's accessed. Some metadata exists as key: value pairs. For example, the name of a bucket is represented by the metadata entry name: my-bucket-name for the JSON API. The XML API presents such metadata as <elements></elements>, such as <LocationConstraint>US</LocationConstraint> for the bucket location. FIG.26shows one implementation where the network security system104, interposed between clients and cloud applications, intercepts a client request that attempts to make a resource-level transaction (e.g., move, copy, backup, clone, or version content residing in a cloud application) which causes content of a cloud application to be propagated from a first cloud resource location (referred to as a transmitting cloud destination, e.g., a first Amazon S3 bucket, GCP bucket, or Azure blob) to a second cloud resource location (referred to as a receiving cloud destination, e.g., a second Amazon S3 bucket, GCP bucket, or Azure blob). The transmitting cloud destination, in some implementations, hosts the resource which is subjected to the action of the incoming request (referred to as the target resource). The data/content of the resource can be a file, a folder, a node, or a cloud-based resource such as an object or the entirety of the first Amazon S3 bucket, GCP bucket, or Azure blob itself being moved to the second Amazon S3 bucket, GCP bucket, or Azure blob. A person skilled in the art will appreciate that the scale of cloud-based resources can range from a file to an entire network drive.
Also, a person skilled in the art will appreciate that the type and form of content can range from text, images, audio, and video to source code, a computing instance, and so on. Once the network security system104establishes that the content (data of the target resource) is sensitive2654, either by running a DLP inspection on the data or by inferring sensitivity from the properties of the target resource, the network security system104then determines whether the receiving cloud destination is a controlled location or an uncontrolled location based on resource-level metadata, such as the metadata describing the ownership of the resource. The NSS104uses a synthetic request to retrieve resource-level metadata from the cloud application. A controlled location can be, for example, a cloud resource that is owned by an organization which also owns the transmitting cloud destination, a cloud resource that is owned by the same organization to which a user making the client request also belongs, or a cloud resource that is generally under the ambit of policy enforcement by the network security system104. In one implementation, the synthetic requests can be configured to retrieve, in corresponding synthetic responses2620, information that identifies ownership of the receiving cloud destination (e.g., a destination bucket, a destination container). The following is an example of request syntax specific to the AWS HeadBucket API that can be used to construct a synthetic request to retrieve ownership information ("ExpectedBucketOwner") about a particular Amazon S3 bucket:
HEAD / HTTP/1.1
Host: Bucket.s3.amazonaws.com
x-amz-expected-bucket-owner: ExpectedBucketOwner
The corresponding resource metadata in the key: value pairs includes the x-amz-expected-bucket-owner header key and the x-amz-expected-source-bucket-owner header key, where the value thereof is the account ID of the expected bucket owner. Additional information about request syntaxes and parameters for verifying the bucket owner that can be included in the synthetic requests can be found at HeadBucket, https://docs.aws.amazon.com/AmazonS3/latest/APPAPI_HeadBucket.html (last visited Nov. 20, 2021), which is incorporated by reference for all purposes as if fully set forth herein. If the receiving cloud destination is a controlled location, then following a predefined policy2684, the network security system104can allow the request and fulfill transmission/delivery of the sensitive content to the receiving cloud destination. However, if the receiving cloud destination is an uncontrolled location that does not qualify as a controlled location, then following the predefined policy, the network security system104can block the request and prevent transmission/delivery of the sensitive content to the receiving cloud destination, thereby preventing/mitigating the risk of data exfiltration. In some implementations, the Network Security System (NSS)104uses a synthetic request to retrieve sensitivity metadata of the target resource to determine whether there is sensitive content subjected to the action of the request. In some implementations, the Network Security System (NSS)104uses a synthetic request to retrieve the target resource data and run a DLP inspection thereon to determine whether the content of the target resource is sensitive. In another implementation, the proxy can issue a synthetic request to add/edit/modify the sensitivity metadata of the resource according to the result of the sensitivity scan over the target resource.
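The following is a minimal Python sketch of such an ownership check, assuming the network security system holds AWS credentials permitting the HeadBucket operation and uses the boto3 SDK rather than raw HTTP. The bucket name and account ID are hypothetical placeholders, and treating any error as an uncontrolled location is an assumption made here for a fail-closed posture, not a requirement of the disclosed implementation.

import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")

def receiving_destination_is_controlled(bucket, expected_owner_account_id):
    try:
        # HeadBucket with ExpectedBucketOwner succeeds only if the bucket is
        # owned by the specified AWS account.
        s3.head_bucket(Bucket=bucket, ExpectedBucketOwner=expected_owner_account_id)
        return True
    except ClientError:
        # Ownership mismatch, missing bucket, or insufficient permission:
        # fail closed and treat the destination as an uncontrolled location.
        return False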
Bucket-Level Security Posture Metadata In other implementations, in addition to or instead of conditioning the transmission/delivery of the sensitive content to the receiving cloud destination on the receiving cloud destination being a controlled location, the transmission/delivery of the sensitive content to the receiving cloud destination can be further or alternatively contingent on the security configuration of the receiving cloud destination.FIG.27shows one implementation in which, following a predefined policy, the network security system104can allow an incoming request and fulfill transmission/delivery of the sensitive content to the receiving cloud destination if the receiving cloud destination is encrypted or configured to encrypt its hosted content (i.e., data at rest). However, if the receiving cloud destination is not set up for encryption of data at rest, then following the predefined policy2784, the network security system104can block2798the request and prevent transmission/delivery of the sensitive content to the receiving cloud destination, thereby mitigating the risk of data exfiltration. In one implementation, the synthetic requests can be configured to retrieve, in corresponding synthetic responses2720, bucket-level metadata that indicates the security posture of the receiving cloud destination. Example security posture metadata include, for example, metadata specifying whether the receiving cloud destination is configured to encrypt data at rest, metadata specifying whether the receiving cloud destination is configured to encrypt new objects by default, metadata specifying whether the receiving cloud destination is publicly accessible, and so on. Such security posture metadata is retrievable by the synthetic responses2720from cloud storage services like Amazon S3, Azure Blob™, Google's Cloud Storage™, and so on. In some examples, it may not be possible to obtain metadata associated with a bucket in the receiving cloud destination. For instance, the receiving cloud destination may not support the provision of bucket-level metadata, NSS104may not have permission to request bucket-level metadata, or an error may prevent the receiving cloud destination from providing the metadata. When it is not possible to obtain bucket-level metadata, request2704may be treated as a request that would cause data to leave the organization boundary. Policy2784may, therefore, direct NSS104to block request2704when it is not possible to obtain bucket-level metadata and thereby prevent exfiltration of data from the organization. The following is an example of request syntax specific to AWS that can be used to construct a synthetic request for retrieving security posture information/metadata of a particular Amazon S3 bucket:
GET /?encryption HTTP/1.1
Host: Bucket.s3.amazonaws.com
x-amz-expected-bucket-owner: ExpectedBucketOwner
Additional information about the URI request parameters and other similar request syntaxes that can be included in the synthetic requests can be found at GetBucketEncryption, https://docs.aws.amazon.com/AmazonS3/latest/APPAPI_GetBucketEncryption.html (last visited Nov. 9, 2021), which is incorporated by reference for all purposes as if fully set forth herein.
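As a hedged sketch of such an encryption-posture check, the following Python fragment uses the boto3 GetBucketEncryption call corresponding to the request syntax above. The bucket name is a placeholder, and returning False when the posture cannot be determined mirrors the fail-closed treatment of unobtainable bucket-level metadata discussed earlier in this section; it is an illustrative choice rather than the only possible policy.

import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")

def destination_encrypts_data_at_rest(bucket):
    try:
        cfg = s3.get_bucket_encryption(Bucket=bucket)
        rules = cfg["ServerSideEncryptionConfiguration"]["Rules"]
        # Any default-encryption rule means new objects are encrypted at rest.
        return len(rules) > 0
    except ClientError as err:
        code = err.response["Error"]["Code"]
        if code == "ServerSideEncryptionConfigurationNotFoundError":
            return False  # bucket has no default encryption configured
        # Metadata could not be obtained (permissions, errors): fail closed.
        return False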
Instance Metadata Cloud services like Amazon Web Services™ (AWS), Google Cloud Platform™ (GCP), Microsoft Azure™, and Alibaba Cloud™, as Infrastructure as a Service (IaaS), provide consumers with fundamental cloud computing resources with convenient, on-demand network access to a shared pool of configurable resources (e.g., networks, servers, storages, operating systems, applications and services). Each storage entity such as bucket and object provided in, e.g., Google Cloud™ and AWS is a resource in the context of this application. Likewise, compute entities providing cloud computing, such as AWS EC2 and other compute engine instances, are also resources in the cloud infrastructure, conventionally referred to as compute resources. InFIG.2, AWS128A also offers an on-demand cloud computing platform called ELASTIC COMPUTE CLOUD™ (EC2), which allows users202to create and run compute instances and access different storage tiers on AWS128A. EC2 of AWS, a type of compute resource, is a resource in the context of this application. Instance metadata is data about your instance that you (e.g., a corporate account admin) can use to configure or manage the running instance. Instance metadata is divided into categories, for example, host name, events, and security groups. You can also use instance metadata to access user data that you specified when launching your instance. For example, you can specify parameters for configuring your instance, or include a simple script. You can build generic AMIs (Amazon Machine Images) and use user data to modify the configuration files supplied at launch time. For example, if you run web servers for various small businesses, they can all use the same generic AMI and retrieve their content from the Amazon S3 bucket that you specify in the user data at launch. To add a new customer at any time, create a bucket for the customer, add their content, and launch your AMI with the unique bucket name provided to your code in the user data. EC2 instances can also include dynamic data, such as an instance identity document which provides information about the instance itself, e.g., the ID of the instance, the instance type of the instance, the ID of the AWS account that launched the instance, and the Region in which the instance is running, etc. You can use the instance identity document to validate the attributes of the instance. The instance identity document is generated when the instance is launched, stopped and started, or restarted. The instance identity document is exposed (in plaintext JSON format) through the Instance Metadata Service (IMDS). The instance identity document can be retrieved from a running instance at any time. The following is example output by the IMDS: {"devpayProductCodes": null, "marketplaceProductCodes": ["1abc2defghijklm3nopqrs4tu"], "availabilityZone": "us-west-2b", "privateIp": "10.158.112.84", "version": "2017-09-30", "instanceId": "i-1234567890abcdef0", "billingProducts": null, "instanceType": "t2.micro", "accountId": "123456789012", "imageId": "ami-5fb8c835", "pendingTime": "2016-11-19T16:32:11Z", "architecture": "x86_64", "kernelId": null, "ramdiskId": null, "region": "us-west-2"} Parallel Synthetic Requests FIG.18shows one implementation of issuing multiple synthetic requests during an application session.
InFIG.18, during the application session144, multiple incoming requests1810A-N can be held and unheld by the network security system104in parallel or in sequence in response to the network security system104issuing multiple corresponding synthetic requests1812A-N in parallel or in sequence, and/or the network security system104receiving multiple corresponding synthetic responses1814A-N in parallel or in sequence. Synthetic Requests for Future Incoming Requests FIG.19shows one implementation of issuing a synthetic request to synthetically harvest/generate/garner metadata for policy enforcement on yet-to-be received future incoming requests. InFIG.19, a first incoming request1952is intercepted by the network security system104. The network security system104determines1954that the first incoming request1952fails to supply the metadata required for policy enforcement. Despite this, the network security system104does not hold the first incoming request1952and sends1958it to the cloud application. To make the metadata available for future incoming requests, the network security system104generates a synthetic request1968and injects1964it into the application session144to transmit the synthetic request1968to the cloud application. In response, the network security system104receives a synthetic response1976that supplies the required metadata1978. From there onwards, when the network security system104receives subsequent incoming requests1982A-N, the network security system104uses the synthetically harvested/generated/garnered metadata1978to perform policy enforcement1984on the subsequent incoming requests1982A-N. Cloud Security Posture Management Cloud Security Posture Management (CSPM) is a market segment for IT security tools that are designed to identify misconfiguration issues and compliance risks in the cloud. Gartner, the IT research and advisory firm that coined the term, describes CSPM as a new category of security products that can help automate security and provide compliance assurance in the cloud. CSPM tools work by examining and comparing a cloud environment against a defined set of best practices and known security risks. Some CSPM tools will alert the cloud customer when there is a need to remediate a security risk, while other more sophisticated CSPM tools will use robotic process automation (RPA) to remediate issues automatically. CSPM is typically used by organizations that have adopted a cloud-first strategy and want to extend their security best practices to hybrid cloud and multi-cloud environments. CSPM tools are designed to automate cloud security management for, e.g., Infrastructure as a Service (IaaS) cloud services. CSPM can also be used across diverse infrastructure to minimize configuration mistakes and reduce compliance risks in Software as a Service (SaaS) and Platform as a Service (PaaS) cloud environments. Cloud Security Posture Management tools are designed to detect and remediate issues caused by cloud misconfigurations. A specific CSPM tool may only be able to use defined best practices according to a specific cloud environment or service, however, so it is important to know what tools can be used in each specific environment. For example, some tools may be limited to being able to detect misconfigurations in an Amazon AWS or Microsoft Azure environment. Some CSPM tools can automatically remediate issues by combining real-time continuous monitoring with automation features that can detect and correct issues, such as improper account permissions.
Continuous compliance can also be configured according to a number of standards, including HIPAA. Other CSPM tools can be used in tandem with a Cloud Access Security Broker (CASB), which safeguards the flow of data between on-premises IT infrastructure and a cloud provider's infrastructure. Misconfigurations are most often caused by customer mismanagement of multiple connected resources, as cloud-based services leave many moving pieces to keep track of and manage. Misconfigurations can easily be made, especially with API-driven approaches to integration, opening an organization to the possibility of a data breach, as it only takes a few misconfigurations in the cloud to leave an organization vulnerable to attack. A specific CSPM can provide comprehensive protection of data, users, and configurations in real time by enforcing consistent policies across cloud applications, e.g., CASB and SSPM (SaaS Security Posture Management). Example use cases of CSPM further include detecting misconfigurations, preventing configuration drift, maintaining compliance and governance, and so on. CSPM is typically designed to discover risky configurations and overly permissive user access by verifying against predefined best practice rules and industry standards, to continuously monitor cloud applications for a robust security posture, and to prevent configuration drift. CSPM can maintain compliance and governance by simplifying audits and quickly proving governance with pre-built and customizable compliance frameworks. CSPM can be further integrated with Advanced Analytics to discover managed and rogue applications to enforce correct cloud configurations, and can seamlessly send alerts via Cloud Ticket Orchestrator and build custom workflows to analyze alerts via a Representational State Transfer (REST) API. FIG.28shows one implementation of using synthetic requests to retrieve/generate/harvest security posture information of a resource hosted on a cloud application for security policy enforcement on an incoming request. InFIG.28, an incoming request2804is targeted toward a resource, e.g., a storage entity like Amazon S3 or a compute entity of an EC2 instance hosted on a cloud application. The network security system104detects2854a resource identifier of the resource from the incoming request and determines whether the incoming request2804fails to supply the metadata required for security posture policy enforcement on the request. The network security system104then generates a synthetic request2810with the resource identifier2812and injects the synthetic request into the application session144to transmit the synthetic request to the cloud application. The synthetic request2810is configured to retrieve the missing security posture information from the cloud application by inducing an application server of the cloud application to generate a response that includes the missing information. The network security system104then receives a synthetic response2820to the synthetic request2810from the cloud application. The synthetic response2820supplies the missing security posture information2822to the network security system104. The network security system104then uses the information for security posture policy enforcement on the incoming request. FIG.29shows one implementation of using synthetic requests to synthetically retrieve/generate/harvest security posture information of a resource hosted on a cloud application for policy enforcement on yet-to-be received future incoming requests.
InFIG.29, a first incoming request2904is targeted toward a resource, e.g., a storage entity like Amazon S3 or a compute entity of an EC2 instance hosted on a cloud application. The network security system104detects2954a resource identifier of the resource from the incoming request and determines whether the incoming request2904fails to supply the metadata required for security posture policy enforcement on the request. Despite this, the network security system104does not hold the first incoming request2952and sends2958it to the cloud application. In another implementation, the network security system104blocks the incoming request if the determination indicates the incoming request2952fails to supply the metadata required for policy enforcement. To make the metadata available for future incoming requests, the network security system104generates a synthetic request2968with the resource identifier2970and injects2964it into the application session144to transmit the synthetic request2968to the cloud application. In response, the network security system104receives a synthetic response2976that supplies the required security posture information2978. The network security system locally stores the supplied information in a metadata/information store264for policy enforcement on future incoming requests that share an application session with the incoming request, thereby obviating generation of further synthetic requests during the application session. From there onwards, when the network security system104receives subsequent incoming requests2982A-N, the network security system104uses the previously synthetically harvested/generated/garnered information2978to perform policy enforcement2984on the subsequent incoming requests2982A-N. Computer System FIG.20shows an example computer system2000that can be used to implement the technology disclosed. Computer system2000includes at least one central processing unit (CPU)2072that communicates with a number of peripheral devices via bus subsystem2055. These peripheral devices can include a storage subsystem2010including, for example, memory devices and a file storage subsystem2036, user interface input devices2038, user interface output devices2076, and a network interface subsystem2074. The input and output devices allow user interaction with computer system2000. Network interface subsystem2074provides an interface to outside networks, including an interface to corresponding interface devices in other computer systems. In one implementation, the network security system104is communicably linked to the storage subsystem2010and the user interface input devices2038. User interface input devices2038can include a keyboard; pointing devices such as a mouse, trackball, touchpad, or graphics tablet; a scanner; a touch screen incorporated into the display; audio input devices such as voice recognition systems and microphones; and other types of input devices. In general, use of the term "input device" is intended to include all possible types of devices and ways to input information into computer system2000. User interface output devices2076can include a display subsystem, a printer, a fax machine, or non-visual displays such as audio output devices. The display subsystem can include an LED display, a cathode ray tube (CRT), a flat-panel device such as a liquid crystal display (LCD), a projection device, or some other mechanism for creating a visible image. The display subsystem can also provide a non-visual display such as audio output devices.
In general, use of the term "output device" is intended to include all possible types of devices and ways to output information from computer system2000to the user or to another machine or computer system. Storage subsystem2010stores programming and data constructs that provide the functionality of some or all of the modules and methods described herein. These software modules are generally executed by processors2078. Processors2078can be graphics processing units (GPUs), field-programmable gate arrays (FPGAs), application-specific integrated circuits (ASICs), and/or coarse-grained reconfigurable architectures (CGRAs). Processors2078can be hosted by a deep learning cloud platform such as Google Cloud Platform™, Xilinx™, and Cirrascale™. Examples of processors2078include Google's Tensor Processing Unit (TPU)™, rackmount solutions like GX4 Rackmount Series™, GX20 Rackmount Series™, NVIDIA DGX-1™, Microsoft's Stratix V FPGA™, Graphcore's Intelligent Processor Unit (IPU)™, Qualcomm's Zeroth Platform™ with Snapdragon Processors™, NVIDIA's Volta™, NVIDIA's DRIVE PX™, NVIDIA's JETSON TX1/TX2 MODULE™, Intel's Nirvana™, Movidius VPU™, Fujitsu DPI™, ARM's DynamicIQ™, IBM TrueNorth™, Lambda GPU Server with Tesla V100s™, and others. Memory subsystem2022used in the storage subsystem2010can include a number of memories including a main random access memory (RAM)2032for storage of instructions and data during program execution and a read only memory (ROM)2034in which fixed instructions are stored. A file storage subsystem2036can provide persistent storage for program and data files, and can include a hard disk drive, a floppy disk drive along with associated removable media, a CD-ROM drive, an optical drive, or removable media cartridges. The modules implementing the functionality of certain implementations can be stored by file storage subsystem2036in the storage subsystem2010, or in other machines accessible by the processor. Bus subsystem2055provides a mechanism for letting the various components and subsystems of computer system2000communicate with each other as intended. Although bus subsystem2055is shown schematically as a single bus, alternative implementations of the bus subsystem can use multiple busses. Computer system2000itself can be of varying types including a personal computer, a portable computer, a workstation, a computer terminal, a network computer, a television, a mainframe, a server farm, a widely-distributed set of loosely networked computers, or any other data processing system or user device. Due to the ever-changing nature of computers and networks, the description of computer system2000depicted inFIG.20is intended only as a specific example for purposes of illustrating the preferred implementations of the present invention. Many other configurations of computer system2000are possible having more or fewer components than the computer system depicted inFIG.20. Particular Implementations 1) Synthetic Request Injection to Retrieve Group Membership Metadata for Cloud Policy Enforcement The technology disclosed configures network security systems with the ability to trigger synthetic requests during application sessions of cloud applications. The technology disclosed can be practiced as a system, method, or article of manufacture. One or more features of an implementation can be combined with the base implementation. Implementations that are not mutually exclusive are taught to be combinable. One or more features of an implementation can be combined with other implementations.
This disclosure periodically reminds the user of these options. Omission from some implementations of recitations that repeat these options should not be taken as limiting the combinations taught in the preceding sections—these recitations are hereby incorporated forward by reference into each of the following implementations. The implementations described in this section can be combined as features. In the interest of conciseness, the combinations of features are not individually enumerated and are not repeated with each base set of features. The reader will understand how features identified in the implementations described in this section can readily be combined with sets of base features identified as implementations in other sections of this application. These implementations are not meant to be mutually exclusive, exhaustive, or restrictive; and the technology disclosed is not limited to these implementations but rather encompasses all possible combinations, modifications, and variations within the scope of the claimed technology and its equivalents. The technology disclosed, in particular the clauses disclosed in this section, can be practiced as a system, method, or article of manufacture. One or more features of an implementation can be combined with the base implementation. Implementations that are not mutually exclusive are taught to be combinable. One or more features of an implementation can be combined with other implementations. This disclosure periodically reminds the user of these options. Omission from some implementations of recitations that repeat these options should not be taken as limiting the combinations taught in the preceding sections—these recitations are hereby incorporated forward by reference into each of the following implementations. One or more implementations and clauses of the technology disclosed or elements thereof can be implemented in the form of a computer product, including a non-transitory computer readable storage medium with computer usable program code for performing the method steps indicated. Furthermore, one or more implementations and clauses of the technology disclosed or elements thereof can be implemented in the form of an apparatus including a memory and at least one processor that is coupled to the memory and operative to perform exemplary method steps. Yet further, in another aspect, one or more implementations and clauses of the technology disclosed or elements thereof can be implemented in the form of means for carrying out one or more of the method steps described herein; the means can include (i) hardware module(s), (ii) software module(s) executing on one or more hardware processors, or (iii) a combination of hardware and software modules; any of (i)-(iii) implement the specific techniques set forth herein, and the software modules are stored in a computer readable storage medium (or multiple such media). The clauses described in this section can be combined as features. In the interest of conciseness, the combinations of features are not individually enumerated and are not repeated with each base set of features. The reader will understand how features identified in the clauses described in this section can readily be combined with sets of base features identified as implementations in other sections of this application.
These clauses are not meant to be mutually exclusive, exhaustive, or restrictive; and the technology disclosed is not limited to these clauses but rather encompasses all possible combinations, modifications, and variations within the scope of the claimed technology and its equivalents. Other implementations of the clauses described in this section can include a non-transitory computer readable storage medium storing instructions executable by a processor to perform any of the clauses described in this section. Yet another implementation of the clauses described in this section can include a system including memory and one or more processors operable to execute instructions, stored in the memory, to perform any of the clauses described in this section. In one implementation, the technology disclosed describes a system. The system comprises a network security system interposed between clients and cloud applications. The network security system is configured to receive an incoming request from a client during an application session. The incoming request is directed towards a user group of a cloud application, i.e., a target group of users assembled by the cloud application and targeted by the incoming request, and includes a resource. The network security system analyzes the incoming request and detects that the request attempts to execute an action that causes content of the resource to be accessible by the target group of users. The network security system is further configured to determine whether the resource content is sensitive or not. If the network security system determines that the resource content is sensitive, the network security system is further configured to detect an identifier of the user group which identifies the user group in the cloud application. Holding the incoming request to hold off the execution of the action on the resource, the network security system is further configured to generate a synthetic request with the user group identifier and inject the synthetic request into the application session to transmit the synthetic request to the cloud application. The synthetic request is configured to retrieve metadata about the user group using the user group identifier. The network security system is further configured to receive a response to the synthetic request from the cloud application. The network security system is further configured to determine whether the action is executable on the resource based on the evaluation of the metadata supplied by the response against the security policy, and enforce the policy on the incoming request. In one implementation of the system, the metadata about the user group includes group membership metadata that is, individually or collectively, indicative as to whether the user group has at least one unauthorized user against the security policy. In one implementation of the system, the network security system is further configured to fulfill the incoming request if the network security system determines the resource content is not sensitive. In one implementation of the system, the network security system is further configured to block the incoming request if the network security system determines the group membership metadata indicates that the user group has at least one unauthorized user. In one implementation of the system, the network security system is further configured to fulfill the incoming request if the group membership metadata indicates that all the members in the user group are authorized users against the security policy.
In some implementations of the system, the resource is a file, a folder, a node, or an object. In some implementations of the system, the user group is, for example, a user group on a network directory, a user group sharing a folder, the members of a channel, or the members of a team, organized by the cloud application for collaboration. In some implementations of the system, the cloud application is a collaboration app such as Slack™, Microsoft Teams™, or Webex™. In one implementation of the system, the network security system is further configured to determine the sensitivity of the resource by generating a synthetic request with the resource to retrieve the sensitivity metadata of the resource, and receiving a response to the synthetic request. The response supplies the sensitivity metadata of the resource. In one implementation of the system, the network security system is further configured to determine the sensitivity of the resource by detecting an identifier of the resource, generating a synthetic request with the resource identifier to retrieve the sensitivity metadata of the resource, and receiving a response to the synthetic request. The response supplies the sensitivity metadata of the resource. In one implementation of the system, the network security system is further configured to determine the sensitivity of the resource by conducting a content sensitivity scan, e.g., using data loss prevention (DLP) analysis (e.g., text analysis) over the resource, and generating sensitivity metadata that specifies whether the resource is sensitive or not. In one implementation of the system, the generated sensitivity metadata is stored as object metadata of the resource or stored in a cloud-based metadata store which the network security system has access to. In one implementation of the system, the network security system is further configured to determine the sensitivity of the resource by inferring it from previously generated sensitivity metadata stored in a cloud-based metadata store which the network security system has access to. In one implementation of the system, the synthetic request is further configured to store the group membership metadata of the user group to a cloud-based metadata store which the network security system has access to. In one implementation of the system, the synthetic request is further configured to retrieve the group membership metadata of the user group from a cloud-based metadata store which the network security system has access to. In one implementation of the system, the network security system is further configured to extract an authentication token from the incoming request, and to configure the synthetic request with the authentication token to access the cloud application. In another implementation, the technology disclosed describes a computer-implemented method. The computer-implemented method includes a network security system receiving an incoming request from a client during an application session. The network security system is interposed between clients and cloud applications. The incoming request is directed towards a user group of a cloud application, i.e., a target group of users organized by the cloud application and targeted by the incoming request, and includes a resource. The computer-implemented method further includes the network security system analyzing the incoming request and detecting that the incoming request attempts to execute an action that would make content of the resource accessible to the target group of users. 
The computer-implemented method further includes the network security system determining whether the resource content is sensitive or not. The computer-implemented method further includes the network security system detecting an identifier which identifies the user group in the cloud application. The computer-implemented method further includes the network security system holding the incoming request, generating a synthetic request with the user group identifier and injecting the synthetic request into the application session to transmit the synthetic request to the cloud application. The synthetic request is configured to retrieve metadata about the user group using the user group identifier. The computer-implemented method further includes the network security system receiving a response to the synthetic request from the cloud application, wherein the response supplies the metadata about the user group. The computer-implemented method further includes the network security system evaluating the metadata about the user group to enforce the security policy on the incoming request. In one implementation of the computer-implemented method, the metadata about the user group includes group membership metadata that is, individually or collectively, indicative as to whether the user group has at least one unauthorized user against the security policy. In one implementation, the computer-implemented method further includes the network security system fulfilling the incoming request if the network security system determines the resource content is not sensitive. In one implementation, the computer-implemented method further includes the network security system blocking the incoming request, if the network security system determines the group membership metadata indicates that the user group has at least one unauthorized user. In one implementation, the computer-implemented method further includes the network security system fulfill the incoming request if the group membership metadata indicates that all the members in the user group are authorized users against the security policy. In some implementations of the computer-implemented method, the resource is a file, a folder, a node, or an object. In some implementation of the computer-implemented method, the user group is like a user group on a network directory, a user group shared a folder, members of a channel, or members of a team, organized by the cloud application for collaboration. In some implementation of the computer-implemented method, the cloud application is a collaboration app like Slack™, Microsoft Teams™, and Webex™. In one implementation, the computer-implemented method further includes the network security system determining the sensitivity of the resource by generating a synthetic request with the resource to retrieve the sensitivity metadata of the resource and receiving a response to the synthetic request. The response supplies the sensitivity metadata of the resource. In one implementation, the computer-implemented method further includes the network security system determining the sensitivity of the resource by detect an identifier of the resource, generating a synthetic request with the resource identifier to retrieve the sensitivity metadata of the resource, and receive a response to the synthetic request. The response supplies the sensitivity metadata of the resource. 
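As a non-limiting sketch of the token handling and synthetic request transmission described in this section, the fragment below reuses the authentication token carried by the held incoming request on a synthetic request that asks the cloud application for group membership metadata. The endpoint path, header layout, and response shape are illustrative assumptions; a real collaboration application's API will differ.

# Minimal sketch of extracting the client's authentication token from the
# held incoming request and reusing it on the injected synthetic request.
# The endpoint path is a hypothetical placeholder, not a real API.
import requests

def fetch_group_metadata(incoming_headers, api_base, group_id):
    # Reuse the caller's bearer token so the synthetic request is served in
    # the same authorization context as the application session.
    auth_header = incoming_headers.get("Authorization")
    if auth_header is None:
        raise ValueError("incoming request carries no authentication token")

    synthetic = requests.get(
        f"{api_base}/groups/{group_id}/members",   # hypothetical metadata endpoint
        headers={"Authorization": auth_header},
        timeout=5,
    )
    synthetic.raise_for_status()
    return synthetic.json()                        # group membership metadata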
In one implementation, the computer-implemented method further includes the network security system determining the sensitivity of the resource by conducting content sensitivity scan, e.g., using data loss prevention (DLP) analysis (e.g., text analysis) over the resource, and generate sensitivity metadata that specify whether the resource is sensitive or not. In one implementation of the computer-implemented method, the generated sensitivity metadata is stored as object metadata of the resource or stored in a cloud-based metadata store which the network security system has access to. In one implementation, the computer-implemented method further includes the network security system determining the sensitivity of the resource by inferring from a previously generated sensitivity metadata stored in a cloud-based metadata store which the network security system has access to. In one implementation of the computer-implemented method, the synthetic request is further configured to store the group membership metadata of the user group to a cloud-based metadata store which the network security system has access to. In one implementation of the computer-implemented method, the synthetic request is configured to retrieve the group membership metadata of the user group from a cloud-based metadata store which the network security system has access to. In one implementation of the computer-implemented method further includes the network security system extracting an authentication token from the incoming request, and to configure the synthetic request with the authentication token to access the cloud application. Other implementations of the computer-implemented method disclosed herein can include a system including memory and one or more processors operable to execute instructions, stored in the memory, to perform the computer-implemented method described above. Yet other implementations of the computer-implemented method disclosed herein can include a non-transitory computer readable storage medium impressed with computer program instructions to enforce policies, the instructions, when executed on a processor, implement a method described above. In yet another implementation, a non-transitory computer readable storage medium impressed with computer program instructions to enforce policies is described. The instructions, when executed on a processor, implement a method comprising a network security system receiving during an application session an incoming request from a client. The incoming request is directed to access a user group on a cloud application, and includes a resource. The network security system is interposed between clients and cloud applications. The method further includes the network security system analyzing the incoming request and detect the incoming request attempting to execute an action causing content of the resource accessible by the target group of users. The method further includes the network security system determining whether the resource content is sensitive or not. The method further includes the network security system detecting an identifier which identifies the user group in the cloud application. The method further includes the network security system holding the incoming request, generating a synthetic request with the user group identifier and injecting the synthetic request into the application session to transmit the synthetic request to the cloud application. 
The synthetic request is configured to retrieve metadata about the user group using the user group identifier. The method further includes the network security system receiving a response to the synthetic request from the cloud application. The response supplies the metadata about the user group. The method further includes the network security system evaluating the metadata about the user group to enforce the security policy on the incoming request. In one implementation, the non-transitory computer readable storage medium further includes the network security system fulfilling the incoming request if the network security system determines the resource content is not sensitive. In one implementation, the non-transitory computer readable storage medium further includes the network security system blocking the incoming request, if the network security system determines the group membership metadata indicates that the user group has at least one unauthorized user. In one implementation, the non-transitory computer readable storage medium further includes the network security system fulfill the incoming request if the group membership metadata indicates that all the members in the user group are authorized users against the security policy. In some implementations of the non-transitory computer readable storage medium, the resource is a file, a folder, a node, or an object. In some implementation of the computer-implemented method, the user group is like a user group on a network directory, a user group shared a folder, members of a channel, or members of a team, organized by the cloud application for collaboration. In some implementation of the computer-implemented method, the cloud application is a collaboration app like Slack™, Microsoft Teams™, and Webex™. In one implementation, the non-transitory computer readable storage medium further includes the network security system determining the sensitivity of the resource by generating a synthetic request with the resource to retrieve the sensitivity metadata of the resource, and receiving a response to the synthetic request. The response supplies the sensitivity metadata of the resource. In one implementation, the non-transitory computer readable storage medium further includes the network security system determining the sensitivity of the resource by detect an identifier of the resource, generating a synthetic request with the resource identifier to retrieve the sensitivity metadata of the resource, and receive a response to the synthetic request. The response supplies the sensitivity metadata of the resource. In one implementation, the non-transitory computer readable storage medium further includes the network security system determining the sensitivity of the resource by conducting content sensitivity scan, e.g., using data loss prevention (DLP) analysis (e.g., text analysis) over the resource, and generate sensitivity metadata that specify whether the resource is sensitive or not. In one implementation of the non-transitory computer readable storage medium, the generated sensitivity metadata is stored as object metadata of the resource or stored in a cloud-based metadata store which the network security system has access to. In one implementation, the non-transitory computer readable storage medium further includes the network security system determining the sensitivity of the resource by inferring from a previously generated sensitivity metadata stored in a cloud-based metadata store which the network security system has access to. 
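The sensitivity determinations recited above can be layered. The following non-limiting sketch first consults previously generated sensitivity metadata in a cloud-based metadata store and, only on a miss, performs a simple DLP-style content scan and stores the result for later requests. The store interface, the content loader, and the example patterns are assumptions for illustration only.

# Hedged sketch of a layered resource-sensitivity check: cached metadata
# first, DLP-style scan as a fallback, with the result persisted so later
# requests can infer sensitivity without rescanning.
import re

SENSITIVE_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),          # SSN-like strings (example only)
    re.compile(r"(?i)confidential|internal only"),
]

def is_sensitive(resource_id, content_loader, metadata_store):
    cached = metadata_store.get(resource_id)        # previously generated metadata
    if cached is not None:
        return cached["sensitive"]

    content = content_loader(resource_id)           # e.g., fetched via a synthetic request
    sensitive = any(p.search(content) for p in SENSITIVE_PATTERNS)

    # Store the generated sensitivity metadata for reuse by later requests.
    metadata_store.put(resource_id, {"sensitive": sensitive})
    return sensitive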
In one implementation of the non-transitory computer readable storage medium, the synthetic request is further configured to store the group membership metadata of the user group to a cloud-based metadata store which the network security system has access to. In one implementation of the non-transitory computer readable storage medium, the synthetic request is configured to retrieve the group membership metadata of the user group from a cloud-based metadata store which the network security system has access to. In one implementation, the method implemented by the non-transitory computer readable storage medium further includes the network security system extracting an authentication token from the incoming request and configuring the synthetic request with the authentication token to access the cloud application. In one implementation, the technology disclosed describes a system. The system comprises a network security system interposed between clients and cloud applications. The network security system is configured to receive from a client an incoming request to access a cloud application in an application session, i.e., a target cloud application targeted by the incoming request. 2) Synthetic Request Injection to Retrieve Bucket-Level Ownership Metadata for Cloud Policy Enforcement In one implementation, the technology disclosed describes a system of enforcing a data loss prevention policy. The system comprises a network security system interposed between clients and cloud applications. The network security system is configured to receive, during an application session, an incoming request from a client. The incoming request includes a resource identifier of a resource which resides at a transmitting cloud destination (i.e., a first cloud resource location, e.g., an Amazon S3 bucket, GCP bucket, or Azure blob) in a cloud application. The network security system analyzes the incoming request and identifies a receiving cloud destination (i.e., a second cloud resource location, e.g., an Amazon S3 bucket, GCP bucket, or Azure blob). The network security system is further configured to detect that the request attempts to execute an action causing content of the resource to be propagated from the transmitting cloud destination to the receiving cloud destination. The network security system determines whether the resource contains sensitive content or not (e.g., a sensitive object or link thereto). If the resource contains sensitive content, the network security system holds the incoming request, generates a synthetic request with the receiving cloud destination, injects the synthetic request into the application session, and transmits the synthetic request to the cloud application. The synthetic request is configured to retrieve bucket-level ownership metadata of the receiving cloud destination from the cloud application. The network security system is further configured to receive a response to the synthetic request from the cloud application. The response supplies the bucket-level ownership metadata, which is, individually or collectively, indicative of the ownership of the receiving cloud destination. The network security system is further configured to evaluate the retrieved bucket-level ownership metadata and enforce a security policy on the incoming request. In one implementation of the system, the action of the incoming request includes, e.g., moving, copying, backing up, cloning, or versioning the content of the resource. 
In one implementation, the network security system is further configured to fulfill the incoming request, if the bucket-level ownership metadata indicates that the receiving cloud destination is to be determined a controlled location against the security policy. In one implementation, the network security system is further configured to block the incoming request, if the bucket-level ownership metadata indicates that the receiving cloud destination is not a controlled location against the security policy. In one implementation, the network security system is further configured to block a part of incoming request where the part of incoming request involves a sensitive object, if the bucket-level ownership metadata indicates that the receiving cloud destination is not a controlled location. In one implementation of the network security system, the receiving cloud destination is a controlled location when the resource subjected to the action is owned by a consumer account which also owns the receiving cloud destination. In one implementation of the network security system, the receiving cloud destination is a controlled location when an organization, which the client making the incoming request belongs to, also owns the receiving cloud destination. In one implementation of the network security system, the receiving cloud destination is a controlled location when the receiving cloud destination is generally under the ambit of the enforcement of the security policy. In some implementations of the network security system, the data of the resource includes, e.g., a file, a folder, a node, a resource, or a cloud resource location itself (e.g., an Amazon S3 bucket, GCP bucket, or Azure blob). In one implementation, the network security system is further configured to determine the sensitivity of the resource by generating a synthetic request with the resource identifier to retrieve the sensitivity metadata of the resource, and receiving a response to the synthetic request. The response supplies the sensitivity metadata of the resource. In one implementation, the network security system is further configured to determine the sensitivity of the resource by conducting content sensitivity scan, e.g., using data loss prevention (DLP) analysis (e.g., text analysis, string or sub-string inspection) over the resource, and generate sensitivity metadata that specify whether the resource is sensitive or not. In one implementation of the network security system, the generated sensitivity metadata is stored as the resource metadata of the resource (e.g., object metadata, bucket metadata, or blob metadata). In one implementation of the network security system, the generated sensitivity metadata is stored in a cloud-based metadata store which the network security system has access to. In one implementation, the network security system is further configured to determine the sensitivity of the resource by inferring from a previously generated sensitivity metadata stored in a cloud-based metadata store which the network security system has access to. In one implementation of the network security system, the synthetic request is further configured to retrieve the bucket-level ownership metadata from a cloud-based metadata store which the network security system has access to. In one implementation of the network security system, the security policy is attached to the resource (e.g., a bucket and an object), called resource-based policy. 
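By way of illustration, the fragment below sketches one possible reading of the "controlled location" evaluation described above: fulfill when the destination is owned by the same consumer account or the requester's organization, or is otherwise under the ambit of the security policy, and block (in whole or in part) otherwise. The field names of the bucket-level ownership metadata are assumptions; real object-store ownership metadata is shaped differently.

# Illustrative sketch of the "controlled location" decision. Field names on
# the ownership metadata are hypothetical placeholders.
def is_controlled_location(ownership_md, source_owner_account, requester_org, policy):
    dest_account = ownership_md.get("owner_account_id")
    dest_org = ownership_md.get("owner_organization_id")

    if dest_account and dest_account == source_owner_account:
        return True                       # same consumer account owns both locations
    if dest_org and dest_org == requester_org:
        return True                       # requester's organization owns the destination
    return policy.covers_destination(dest_account, dest_org)  # otherwise, under policy ambit?

def enforce_on_copy(request, ownership_md, source_owner_account, requester_org, policy):
    if not request.resource_is_sensitive:
        return "fulfill"
    if is_controlled_location(ownership_md, source_owner_account, requester_org, policy):
        return "fulfill"
    # Block the whole request, or only the parts that involve sensitive objects.
    return "block-sensitive-parts" if request.is_multipart else "block"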
In another implementation, the technology disclosed describes a computer-implemented method. The computer-implemented method includes a network security system receiving an incoming request from a client during an application session. The network security system is interposed between clients and cloud applications. The incoming request includes a resource identifier of a resource which resides at a transmitting cloud destination (i.e., a first cloud resource location, e.g., an amazon S3 bucket, GCP bucket or Azure blob) in a cloud application. The computer-implemented method further includes the network security system analyzing the incoming request and identifying a receiving cloud destination (i.e., a second cloud resource location, e.g., an Amazon S3 bucket, GCP bucket or Azure blob). The computer-implemented method further includes the network security system detecting the request attempting to execute an action causing content of the resource propagated from the transmitting cloud destination to the receiving cloud destination. The computer-implemented method further includes the network security system determining whether the resource contains sensitive content or not (e.g., a sensitive object or link thereto). If the resource contains sensitive content, the computer-implemented method further includes the network security system holding the incoming request, generating a synthetic request with the receiving cloud destination, injecting the synthetic request into the application session and transmitting the synthetic request to the cloud application. The synthetic request is configured to retrieve bucket-level ownership metadata of the receiving cloud destination from the cloud application. The computer-implemented method further includes the network security system receiving a response to the synthetic request from the cloud application. The response supplies the bucket-level ownership metadata, which is, individually or collectively, indicative the ownership of the receiving cloud destination. The computer-implemented method further includes the network security system evaluating the retrieved bucket-level ownership metadata and enforcing a security policy on the incoming request. In one implementation of the computer-implemented method, the action of the incoming request includes, e.g., moving, copying, backing up, cloning, and versioning the content of the resource. In one implementation, the computer-implemented method further includes the network security system fulfilling the incoming request, if the bucket-level ownership metadata indicates that the receiving cloud destination is to be determined a controlled location against the security policy. In one implementation, the computer-implemented method further includes the network security system blocking the incoming request, if the bucket-level ownership metadata indicates that the receiving cloud destination is not a controlled location against the security policy. In one implementation, the computer-implemented method further includes the network security system blocking one or more parts of incoming request where the parts of incoming request involves at least one sensitive object, if the bucket-level ownership metadata indicates that the receiving cloud destination is not a controlled location. 
In one implementation of the computer-implemented method, the receiving cloud destination is a controlled location when the resource subjected to the action is owned by a consumer account which also owns the receiving cloud destination. In one implementation of the computer-implemented method, the receiving cloud destination is a controlled location when an organization, which the client making the incoming request belongs to, also owns the receiving cloud destination. In one implementation of the computer-implemented method, the receiving cloud destination is a controlled location when the receiving cloud destination is generally under the ambit of the enforcement of the security policy. In some implementations of the computer-implemented method, the data of the resource includes, e.g., a file, a folder, a node, a resource, or a cloud resource location itself (e.g., an Amazon S3 bucket, GCP bucket, or Azure blob). In one implementation, the computer-implemented method further includes the network security system determining the sensitivity of the resource by generating a synthetic request with the resource identifier to retrieve the sensitivity metadata of the resource, and receiving a response to the synthetic request. The response supplies the sensitivity metadata of the resource. In one implementation, the computer-implemented method further includes the network security system determining the sensitivity of the resource by conducting content sensitivity scan, e.g., using data loss prevention (DLP) analysis (e.g., text analysis, string or sub-string inspection) over the resource, and generate sensitivity metadata that specify whether the resource is sensitive or not. In one implementation of the computer-implemented method, the generated sensitivity metadata is stored as the resource metadata of the resource (e.g., object metadata, bucket metadata, or blob metadata). In one implementation of the computer-implemented method, the generated sensitivity metadata is stored in a cloud-based metadata store which the network security system has access to. In one implementation, the computer-implemented method further includes the network security system determining the sensitivity of the resource by inferring from a previously generated sensitivity metadata stored in a cloud-based metadata store which the network security system has access to. In one implementation of the computer-implemented method, the synthetic request is further configured to retrieve the bucket-level ownership metadata from a cloud-based metadata store which the network security system has access to. In one implementation of the computer-implemented method, the security policy is attached to the resource (e.g., a bucket and an object), called resource-based policy. Other implementations of the computer-implemented method disclosed herein can include a system including memory and one or more processors operable to execute instructions, stored in the memory, to perform the computer-implemented method described above. Yet other implementations of the computer-implemented method disclosed herein can include a non-transitory computer readable storage medium impressed with computer program instructions to enforce policies, the instructions, when executed on a processor, implement a method described above. In yet another implementation, a non-transitory computer readable storage medium impressed with computer program instructions to enforce policies is described. 
The instructions, when executed on a processor, implement a method comprising a network security system receiving an incoming request from a client during an application session. The network security system is interposed between clients and cloud applications. The incoming request includes a resource identifier of a resource which resides at a transmitting cloud destination (i.e., a first cloud resource location, e.g., an amazon S3 bucket, GCP bucket or Azure blob) in a cloud application. The method further includes the network security system analyzing the incoming request and identifying a receiving cloud destination (i.e., a second cloud resource location, e.g., an Amazon S3 bucket, GCP bucket or Azure blob). The method further includes the network security system detecting the request attempting to execute an action causing content of the resource propagated from the transmitting cloud destination to the receiving cloud destination. The method further includes the network security system determining whether the resource contains sensitive content or not (e.g., a sensitive object or link thereto). If the resource contains sensitive content, the method further includes the network security system holding the incoming request, generating a synthetic request with the receiving cloud destination, injecting the synthetic request into the application session and transmitting the synthetic request to the cloud application, wherein the synthetic request is configured to retrieve bucket-level ownership metadata of the receiving cloud destination from the cloud application. The method further includes the network security system receiving a response to the synthetic request from the cloud application. The response supplies the bucket-level ownership metadata, which is, individually or collectively, indicative the ownership of the receiving cloud destination. The method further includes the network security system evaluating the retrieved bucket-level ownership metadata and enforcing a security policy on the incoming request. In one implementation of the non-transitory computer readable storage medium, the action of the incoming request includes, e.g., moving, copying, backing up, cloning, and versioning the content of the resource. In one implementation, the non-transitory computer readable storage medium further includes the network security system fulfilling the incoming request, if the bucket-level ownership metadata indicates that the receiving cloud destination is to be determined a controlled location against the security policy. In one implementation, the non-transitory computer readable storage medium further includes the network security system blocking the incoming request, if the bucket-level ownership metadata indicates that the receiving cloud destination is not a controlled location against the security policy. In one implementation, the non-transitory computer readable storage medium further includes the network security system blocking one or more parts of incoming request where the parts of incoming request involves at least one sensitive object, if the bucket-level ownership metadata indicates that the receiving cloud destination is not a controlled location. In one implementation of the non-transitory computer readable storage medium, the receiving cloud destination is a controlled location when the resource subjected to the action is owned by a consumer account which also owns the receiving cloud destination. 
In one implementation of the non-transitory computer readable storage medium, the receiving cloud destination is a controlled location when an organization, which the client making the incoming request belongs to, also owns the receiving cloud destination. In one implementation of the non-transitory computer readable storage medium, the receiving cloud destination is a controlled location when the receiving cloud destination is generally under the ambit of the enforcement of the security policy. In some implementations of the non-transitory computer readable storage medium, the data of the resource includes, e.g., a file, a folder, a node, a resource, or a cloud resource location itself (e.g., an Amazon S3 bucket, GCP bucket, or Azure blob). In one implementation, the non-transitory computer readable storage medium further includes the network security system determining the sensitivity of the resource by generating a synthetic request with the resource identifier to retrieve the sensitivity metadata of the resource, and receiving a response to the synthetic request. The response supplies the sensitivity metadata of the resource. In one implementation, the non-transitory computer readable storage medium further includes the network security system determining the sensitivity of the resource by conducting content sensitivity scan, e.g., using data loss prevention (DLP) analysis (e.g., text analysis, string or sub-string inspection) over the resource, and generate sensitivity metadata that specify whether the resource is sensitive or not. In one implementation of the non-transitory computer readable storage medium, the generated sensitivity metadata is stored as the resource metadata of the resource (e.g., object metadata, bucket metadata, or blob metadata). In one implementation of the non-transitory computer readable storage medium, the generated sensitivity metadata is stored in a cloud-based metadata store which the network security system has access to. In one implementation, the non-transitory computer readable storage medium further includes the network security system determining the sensitivity of the resource by inferring from a previously generated sensitivity metadata stored in a cloud-based metadata store which the network security system has access to. In one implementation of the non-transitory computer readable storage medium, the synthetic request is further configured to retrieve the bucket-level ownership metadata from a cloud-based metadata store which the network security system has access to. In one implementation of the non-transitory computer readable storage medium, the security policy is attached to the resource (e.g., a bucket and an object), called resource-based policy. 3) Synthetic Request Injection to Retrieve Bucket-Level Security Posture Metadata for Cloud Policy Enforcement In one implementation, the technology disclosed describes a system of enforcing data loss prevention policy. The system comprises a network security system, interposed between clients and cloud applications. The network security system is configured to receive, during an application session, an incoming request from a client. The incoming request includes a resource identifier of a resource which resides at a transmitting cloud destination (i.e., a first cloud resource location, e.g., an Amazon S3 bucket, GCP bucket or Azure blob) in a cloud application. 
The network security system analyzes the incoming request and identifies a receiving cloud destination (i.e., a second cloud resource location, e.g., an Amazon S3 bucket, GCP bucket, or Azure blob). The network security system is further configured to detect that the request attempts to execute an action causing content of the resource to be propagated from the transmitting cloud destination to the receiving cloud destination. The network security system determines whether the resource contains sensitive content or not (e.g., a sensitive object or link thereto). If the resource contains sensitive content, the network security system holds the incoming request, generates a synthetic request with the receiving cloud destination, injects the synthetic request into the application session, and transmits the synthetic request to the cloud application. The synthetic request is configured to retrieve bucket-level security posture metadata of the receiving cloud destination from the cloud application. The network security system is further configured to receive a response to the synthetic request from the cloud application. The response supplies the bucket-level security posture metadata, which is, individually or collectively, indicative of the security posture of the receiving cloud destination. The network security system is further configured to evaluate the retrieved bucket-level security posture metadata and enforce a security policy on the incoming request. In one implementation of the system, the action of the incoming request includes, e.g., moving, copying, backing up, cloning, or versioning the content of the resource. In one implementation, the network security system is further configured to fulfill the incoming request if the bucket-level security posture metadata indicates that the receiving cloud destination is determined to be a secured location against the security policy. In one implementation, the network security system is further configured to block the incoming request if the bucket-level security posture metadata indicates that the receiving cloud destination is not a secured location against the security policy. In one implementation, the network security system is further configured to block a part of the incoming request where that part of the incoming request involves a sensitive object, if the bucket-level security posture metadata indicates that the receiving cloud destination is not a secured location. In one implementation of the network security system, the receiving cloud destination is a secured location where the receiving cloud destination is configured to encrypt data at rest hosted therein. In one implementation of the network security system, the receiving cloud destination is a secured location where the receiving cloud destination is configured to encrypt new objects added thereto. In one implementation of the network security system, the receiving cloud destination is a secured location where the receiving cloud destination is configured to not be accessible by any unauthorized users. In some implementations of the network security system, the data of the resource includes, e.g., a file, a folder, a node, a resource, or a cloud resource location itself (e.g., an Amazon S3 bucket, GCP bucket, or Azure blob). In one implementation, the network security system is further configured to determine the sensitivity of the resource by generating a synthetic request with the resource identifier to retrieve the sensitivity metadata of the resource, and receiving a response to the synthetic request. 
The response supplies the sensitivity metadata of the resource. In one implementation, the network security system is further configured to determine the sensitivity of the resource by conducting content sensitivity scan, e.g., using data loss prevention (DLP) analysis (e.g., text analysis, string or sub-string inspection) over the resource, and generate sensitivity metadata that specify whether the resource is sensitive or not. In one implementation of the network security system, the generated sensitivity metadata is stored as the resource metadata of the resource (e.g., object metadata, bucket metadata, or blob metadata). In one implementation of the network security system, the generated sensitivity metadata is stored in a cloud-based metadata store which the network security system has access to. In one implementation, the network security system is further configured to determine the sensitivity of the resource by inferring from a previously generated sensitivity metadata stored in a cloud-based metadata store which the network security system has access to. In one implementation of the network security system, the synthetic request is further configured to retrieve the bucket-level security posture metadata from a cloud-based metadata store which the network security system has access to. In one implementation of the network security system, the security policy is attached to the resource (e.g., a bucket and an object), called resource-based policy. In another implementation, the technology disclosed describes a computer-implemented method. The computer-implemented method includes a network security system receiving an incoming request from a client during an application session. The network security system is interposed between clients and cloud applications. The incoming request includes a resource identifier of a resource which resides at a transmitting cloud destination (i.e., a first cloud resource location, e.g., an amazon S3 bucket, GCP bucket or Azure blob) in a cloud application. The computer-implemented method further includes the network security system analyzing the incoming request and identifying a receiving cloud destination (i.e., a second cloud resource location, e.g., an Amazon S3 bucket, GCP bucket or Azure blob). The computer-implemented method further includes the network security system detecting the request attempting to execute an action causing content of the resource propagated from the transmitting cloud destination to the receiving cloud destination. The computer-implemented method further includes the network security system determining whether the resource contains sensitive content or not (e.g., a sensitive object or link thereto). If the resource contains sensitive content, the computer-implemented method further includes the network security system holding the incoming request, generating a synthetic request with the receiving cloud destination, injecting the synthetic request into the application session and transmitting the synthetic request to the cloud application. The synthetic request is configured to retrieve bucket-level security posture metadata of the receiving cloud destination from the cloud application. The computer-implemented method further includes the network security system receiving a response to the synthetic request from the cloud application. The response supplies the bucket-level security posture metadata, which is, individually or collectively, indicative the ownership of the receiving cloud destination. 
The computer-implemented method further includes the network security system evaluating the retrieved bucket-level security posture metadata and enforcing a security policy on the incoming request. In one implementation of the computer-implemented method, the action of the incoming request includes, e.g., moving, copying, backing up, cloning, and versioning the content of the resource. In one implementation, the computer-implemented method further includes the network security system fulfilling the incoming request, if the bucket-level security posture metadata indicates that the receiving cloud destination is to be determined a secured location against the security policy. In one implementation, the computer-implemented method further includes the network security system blocking the incoming request, if the bucket-level security posture metadata indicates that the receiving cloud destination is not a secured location against the security policy. In one implementation, the computer-implemented method further includes the network security system blocking one or more parts of incoming request where the parts of incoming request involves at least one sensitive object, if the bucket-level security posture metadata indicates that the receiving cloud destination is not a secured location. In one implementation of the computer-implemented method, the receiving cloud destination is a secured location where the receiving cloud destination is configured to encrypt data at rest hosted therein. In one implementation of the computer-implemented method, the receiving cloud destination is a secured location where the receiving cloud destination is configured to encrypt new objects added thereto. In one implementation of the computer-implemented method, the receiving cloud destination is a secured location where the receiving cloud destination is configured not accessible by any unauthorized users. In some implementations of the computer-implemented method, the data of the resource includes, e.g., a file, a folder, a node, a resource, or a cloud resource location itself (e.g., an Amazon S3 bucket, GCP bucket, or Azure blob). In one implementation, the computer-implemented method further includes the network security system determining the sensitivity of the resource by generating a synthetic request with the resource identifier to retrieve the sensitivity metadata of the resource, and receiving a response to the synthetic request. The response supplies the sensitivity metadata of the resource. In one implementation, the computer-implemented method further includes the network security system determining the sensitivity of the resource by conducting content sensitivity scan, e.g., using data loss prevention (DLP) analysis (e.g., text analysis, string or sub-string inspection) over the resource, and generate sensitivity metadata that specify whether the resource is sensitive or not. In one implementation of the computer-implemented method, the generated sensitivity metadata is stored as the resource metadata of the resource (e.g., object metadata, bucket metadata, or blob metadata). In one implementation of the computer-implemented method, the generated sensitivity metadata is stored in a cloud-based metadata store which the network security system has access to. 
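A non-limiting sketch of the "secured location" evaluation recited above follows; it checks encryption of data at rest, default encryption of new objects, and the absence of unauthorized access before allowing sensitive content to propagate. The metadata field names are assumptions introduced for illustration.

# Hedged sketch of evaluating bucket-level security posture metadata against
# the "secured location" criteria discussed above. Field names are
# hypothetical placeholders, not a specific cloud provider's schema.
def is_secured_location(posture_md):
    encrypts_at_rest = posture_md.get("encryption_at_rest", False)
    encrypts_new_objects = posture_md.get("default_object_encryption", False)
    publicly_accessible = posture_md.get("public_access", True)
    return encrypts_at_rest and encrypts_new_objects and not publicly_accessible

def enforce_posture_policy(resource_is_sensitive, posture_md):
    if not resource_is_sensitive:
        return "fulfill"
    return "fulfill" if is_secured_location(posture_md) else "block"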
In one implementation, the computer-implemented method further includes the network security system determining the sensitivity of the resource by inferring from a previously generated sensitivity metadata stored in a cloud-based metadata store which the network security system has access to. In one implementation of the computer-implemented method, the synthetic request is further configured to retrieve the bucket-level security posture metadata from a cloud-based metadata store which the network security system has access to. In one implementation of the computer-implemented method, the security policy is attached to the resource (e.g., a bucket and an object), called resource-based policy. Other implementations of the computer-implemented method disclosed herein can include a system including memory and one or more processors operable to execute instructions, stored in the memory, to perform the computer-implemented method described above. Yet other implementations of the computer-implemented method disclosed herein can include a non-transitory computer readable storage medium impressed with computer program instructions to enforce policies, the instructions, when executed on a processor, implement a method described above. In yet another implementation, a non-transitory computer readable storage medium impressed with computer program instructions to enforce policies is described. The instructions, when executed on a processor, implement a method comprising a network security system receiving an incoming request from a client during an application session. The network security system is interposed between clients and cloud applications. The incoming request includes a resource identifier of a resource which resides at a transmitting cloud destination (i.e., a first cloud resource location, e.g., an amazon S3 bucket, GCP bucket or Azure blob) in a cloud application. The method further includes the network security system analyzing the incoming request and identifying a receiving cloud destination (i.e., a second cloud resource location, e.g., an Amazon S3 bucket, GCP bucket or Azure blob). The method further includes the network security system detecting the request attempting to execute an action causing content of the resource propagated from the transmitting cloud destination to the receiving cloud destination. The method further includes the network security system determining whether the resource contains sensitive content or not (e.g., a sensitive object or link thereto). If the resource contains sensitive content, the method further includes the network security system holding the incoming request, generating a synthetic request with the receiving cloud destination, injecting the synthetic request into the application session and transmitting the synthetic request to the cloud application, wherein the synthetic request is configured to retrieve bucket-level security posture metadata of the receiving cloud destination from the cloud application. The method further includes the network security system receiving a response to the synthetic request from the cloud application. The response supplies the bucket-level security posture metadata, which is, individually or collectively, indicative the ownership of the receiving cloud destination. The method further includes the network security system evaluating the retrieved bucket-level security posture metadata and enforcing a security policy on the incoming request. 
In one implementation of the non-transitory computer readable storage medium, the action of the incoming request includes, e.g., moving, copying, backing up, cloning, and versioning the content of the resource. In one implementation, the non-transitory computer readable storage medium further includes the network security system fulfilling the incoming request, if the bucket-level security posture metadata indicates that the receiving cloud destination is to be determined a secured location against the security policy. In one implementation, the non-transitory computer readable storage medium further includes the network security system blocking the incoming request, if the bucket-level security posture metadata indicates that the receiving cloud destination is not a secured location against the security policy. In one implementation, the non-transitory computer readable storage medium further includes the network security system blocking one or more parts of incoming request where the parts of incoming request involves at least one sensitive object, if the bucket-level security posture metadata indicates that the receiving cloud destination is not a secured location. In one implementation of the non-transitory computer readable storage medium, the receiving cloud destination is a secured location where the receiving cloud destination is configured to encrypt data at rest hosted therein. In one implementation of the non-transitory computer readable storage medium, the receiving cloud destination is a secured location where the receiving cloud destination is configured to encrypt new objects added thereto. In one implementation of the non-transitory computer readable storage medium, the receiving cloud destination is a secured location where the receiving cloud destination is configured not accessible by any unauthorized users. In some implementations of the non-transitory computer readable storage medium, the data of the resource includes, e.g., a file, a folder, a node, a resource, or a cloud resource location itself (e.g., an Amazon S3 bucket, GCP bucket, or Azure blob). In one implementation, the non-transitory computer readable storage medium further includes the network security system determining the sensitivity of the resource by generating a synthetic request with the resource identifier to retrieve the sensitivity metadata of the resource and receiving a response to the synthetic request. The response supplies the sensitivity metadata of the resource. In one implementation, the non-transitory computer readable storage medium further includes the network security system determining the sensitivity of the resource by conducting content sensitivity scan, e.g., using data loss prevention (DLP) analysis (e.g., text analysis, string or sub-string inspection) over the resource, and generate sensitivity metadata that specify whether the resource is sensitive or not. In one implementation of the non-transitory computer readable storage medium, the generated sensitivity metadata is stored as the resource metadata of the resource (e.g., object metadata, bucket metadata, or blob metadata). In one implementation of the non-transitory computer readable storage medium, the generated sensitivity metadata is stored in a cloud-based metadata store which the network security system has access to. 
In one implementation, the non-transitory computer readable storage medium further includes the network security system determining the sensitivity of the resource by inferring from a previously generated sensitivity metadata stored in a cloud-based metadata store which the network security system has access to. In one implementation of the non-transitory computer readable storage medium, the synthetic request is further configured to retrieve the bucket-level security posture metadata from a cloud-based metadata store which the network security system has access to. In one implementation of the non-transitory computer readable storage medium, the security policy is attached to the resource (e.g., a bucket and an object), called resource-based policy. CLAUSES The technology disclosed, in particularly, the clauses disclosed in this section, can be practiced as a system, method, or article of manufacture. One or more features of an implementation can be combined with the base implementation. Implementations that are not mutually exclusive are taught to be combinable. One or more features of an implementation can be combined with other implementations. This disclosure periodically reminds the user of these options. Omission from some implementations of recitations that repeat these options should not be taken as limiting the combinations taught in the preceding sections—these recitations are hereby incorporated forward by reference into each of the following implementations. One or more implementations and clauses of the technology disclosed or elements thereof can be implemented in the form of a computer product, including a non-transitory computer readable storage medium with computer usable program code for performing the method steps indicated. Furthermore, one or more implementations and clauses of the technology disclosed or elements thereof can be implemented in the form of an apparatus including a memory and at least one processor that is coupled to the memory and operative to perform exemplary method steps. Yet further, in another aspect, one or more implementations and clauses of the technology disclosed or elements thereof can be implemented in the form of means for carrying out one or more of the method steps described herein; the means can include (i) hardware module(s), (ii) software module(s) executing on one or more hardware processors, or (iii) a combination of hardware and software modules; any of (i)-(iii) implement the specific techniques set forth herein, and the software modules are stored in a computer readable storage medium (or multiple such media). The clauses described in this section can be combined as features. In the interest of conciseness, the combinations of features are not individually enumerated and are not repeated with each base set of features. The reader will understand how features identified in the clauses described in this section can readily be combined with sets of base features identified as implementations in other sections of this application. These clauses are not meant to be mutually exclusive, exhaustive, or restrictive; and the technology disclosed is not limited to these clauses but rather encompasses all possible combinations, modifications, and variations within the scope of the claimed technology and its equivalents. 
Other implementations of the clauses described in this section can include a non-transitory computer readable storage medium storing instructions executable by a processor to perform any of the clauses described in this section. Yet another implementation of the clauses described in this section can include a system including memory and one or more processors operable to execute instructions, stored in the memory, to perform any of the clauses described in this section.
Certain implementations will now be described more fully below with reference to the accompanying drawings, in which various implementations and/or aspects are shown. However, various aspects may be implemented in many different forms and should not be construed as limited to the implementations set forth herein; rather, these implementations are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art. Like numbers in the figures refer to like elements throughout. Hence, if a feature is used across several drawings, the number used to identify the feature in the drawing where the feature first appeared will be used in later drawings. DETAILED DESCRIPTION Example embodiments described herein provide certain systems, methods, and devices for migration of mainframe workloads into a computing resource service provider and ensuring the security of such workloads on the computing resource service provider. Example embodiments described herein provide certain systems, methods, and devices for managing security policies on behalf of an operating system using an external policy management service to control access to OS-level resources. Example embodiments described herein provide certain systems, methods, and devices for managing security policies on behalf of a database system using an external policy management service to control access to database-level resources. Example embodiments described herein provide certain systems, methods, and devices for analyzing security policies of a mainframe workload implemented in the context of a computing resource service provider and mathematically proving whether the mainframe workload complies with various regulatory constraints, certification requirements, etc. Techniques described herein may be used to efficiently describe, enforce, manage, and monitor the security of computing services in a way that allows for end-to-end management of workloads using a policy management service. The policy management service supports a flexible security definition that can be used to provide for finer-grained security, as well as enforcement that is not otherwise natively supported by operating systems, according to various embodiments. According to various embodiments, an external policy management service is used to control access to operating system (OS)-level resources. Various Linux and Unix-based operating systems have shortcomings in their supported security models. For example, in Linux/Unix, a command such as chmod may be used to set the permissions on a resource managed by the OS, such as a file. However, chmod is limited to specifying three different sets of permissions on a file resource: for a user, a group, and others. This may result in a difficult-to-implement security model where granting customized access permissions to different users is cumbersome. In contrast, various embodiments described herein are used to provide customized security policies for any number of users, groups, roles, or other types of principals via an external policy management service. For example, the policy management service may be used to provide multiple user-specific security policies and/or multiple group-specific security policies on a file or other OS-managed resource. An operating system is provisioned with a kernel-mode authorization interceptor, according to various embodiments. 
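By way of non-limiting illustration, the fragment below contrasts the three fixed permission sets of chmod with per-principal policies of the kind described above, each applying an effect for a principal, resource, action, and optional condition. The dictionary layout and evaluation order (explicit deny wins, default deny otherwise) are assumptions rather than a specific service's policy grammar.

# Illustrative per-principal policies on a single file, beyond what
# user/group/others permission bits can express. The policy shape and the
# condition keys are hypothetical placeholders.
POLICIES = [
    {"effect": "allow", "principal": "alice", "resource": "/data/ledger.csv",
     "action": "read"},
    {"effect": "allow", "principal": "bob", "resource": "/data/ledger.csv",
     "action": "read", "condition": {"time_of_day": "business_hours"}},
    {"effect": "deny", "principal": "*", "resource": "/data/ledger.csv",
     "action": "write"},
]

def evaluate(principal, resource, action, context):
    # Explicit deny wins; otherwise any matching allow grants access.
    decision = "deny"  # default deny when nothing matches
    for p in POLICIES:
        if p["resource"] != resource or p["action"] != action:
            continue
        if p["principal"] not in ("*", principal):
            continue
        cond = p.get("condition", {})
        if any(context.get(k) != v for k, v in cond.items()):
            continue
        if p["effect"] == "deny":
            return "deny"
        decision = "allow"
    return decision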
When an application attempts to utilize various OS-level resources, such as accessing a file, the application's file access attempt may be routed to the kernel via a system call. The system call may be evaluated in kernel space by an authorization interceptor that determines whether to authorize access to OS-level resources using security policies that are maintained in a policy database managed by an external entity, such as policy management services described in greater detail below. In various embodiments, the security policy format of the policy management service is used to define security policies that apply an effect for a given principal (e.g., user), resource, action, and (optionally) condition. The system call may be parsed to extract a request context, and this request context may be evaluated against a set of applicable security policies to determine whether access to the requested OS-level resource should be granted. In some embodiments, a policy evaluation request is sent from the kernel-mode authorization interceptor to a policy management service to perform the policy evaluation. However, in other cases, the security policies managed by the policy management service may be cached in the kernel to improve system performance and responsiveness. Accordingly, in such embodiments, policy evaluation may be performed directly in kernel space. Techniques described herein may be used to efficiently describe, enforce, manage, and monitor the security of database services (relational database management systems) in a way that allows end-to-end management of security. Security policies for a database system are managed externally by a policy management service that can be used to provide a singular view of the entire security of a workload that utilizes multiple resources and/or types of resources of a service provider. In various embodiments, the techniques for managing database resources using an external service can be used to provide for finer-grained security (time-wise and feature-wise) as well as enforcement of database management system (DBMS) features that are usually not individually secured as such. According to at least one embodiment, DBMS security permissions are defined in a policy database managed by a policy management service. Existing APIs may be used to define the database-related security policies, but these policies are applied to a new type of service provider resource with new types of actions that can be performed. For example, supported actions on database tables may include DELETE, INSERT, REFERENCES, SELECT, and UPDATE. These definitions may use regular expressions as needed on the resources to simplify the definition (e.g., “Deny Table schemaX.*” may indicate that access to any part of the database schema “schemaX” is denied). Security enforcement in the DBMS is achieved through the use of security hooks that reprogram and extend the original security features of the DBMS. For example, all security code calls are wrapped, according to at least one embodiment, and external security policies from a policy database are used to determine whether access to various database resources should be granted. The above descriptions are for purposes of illustration and are not meant to be limiting. Numerous other examples, configurations, processes, etc., may exist, some of which are described in greater detail below. Example embodiments will now be described with reference to the accompanying figures.
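Before turning to the figures, the regular-expression-based resource definitions mentioned above can be made concrete with a short sketch showing how a single Deny statement whose resource is the pattern "schemaX.*" covers every table in that schema; the policy dictionary format and the evaluation helper are hypothetical and shown only for illustration.

import re

# Hypothetical database-related policy statements; "resource" holds a regular
# expression that is matched against fully qualified table names.
policies = [
    {"effect": "Deny",  "action": "SELECT", "resource": "schemaX.*"},
    {"effect": "Allow", "action": "SELECT", "resource": r"schemaY\.orders"},
]

def matching_policies(action, table):
    # Return every statement whose action and resource pattern cover the request.
    return [p for p in policies
            if p["action"] == action and re.fullmatch(p["resource"], table)]

# A single Deny statement covers any part of the schema "schemaX".
assert matching_policies("SELECT", "schemaX.customers")[0]["effect"] == "Deny"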
FIG.1illustrates a computing environment100in which a mainframe workload is migrated to a cloud-based service provider, according to at least one embodiment. In at least one embodiment, mainframe environment102refers to a combination of computer hardware and software that is typically owned and operated by an organization to run mainframe workloads. These workloads may be executed on-premises, or in a hybrid cloud environment in which some aspects of a mainframe workload are offloaded (e.g., to a cloud service provider) but important and/or sensitive portions of the workload continue to be executed on local machines. Mainframe environment102may be used to run workloads that are mission-critical and process highly valuable and confidential data to its owners or customers thereof. Mainframe environment may comprise authentication, authorization and traceability/auditability components that may be important for sensitive applications, which may be used in the context of highly regulated industries such as financial services industry or related fields. Security mechanisms and their observability may be implemented to satisfy regulatory and/or compliance needs. Examples of mainframes may include zOS-based mainframe systems. Security on mainframes is ensured in most cases by one of IBM's RACF (Resource Access Control Facility), TopSecret, and ACF2. Each of these is used to ensure “technical” security (access control to files, databases, files, computing resources, etc.) and also some functional security (rights to execute some transactions, to access some specific system services, etc.). As described in greater detail below, various techniques described herein are implemented in various embodiments to migrate mainframe workloads from a mainframe environment102into a computing resource service provider106environment. Computing resource service provider may be a service that can be accessed via communications network104and provides various computing resource related capabilities. A computing resource service provider may host or otherwise provide capabilities related to various computing resources, such as database resources, compute resources, data storage resources, and the like. Customers of a computing resource service provider can, accordingly, utilize the various services provided by the computing resources service provider106without onerous investment into computer servers and hardware. Rather, these resources are provided as services to customers of the computing resources service provider, which manages such resources on behalf of the customers. Mainframe workloads may be migrated from an on-premises mainframe environment102to a highly available computing resource service provider106. In various embodiments, mainframe workloads are migrated to computing resource service provider106and the computing resource service provider106utilizes compute service108to provision and run virtual machine instances that are loaded with operating systems that are customized with kernel-mode components that use a policy management service114to implement the OS-level security model. A database service110likewise uses policy management service114to implement an authorization system that grants and/or revokes access to database-level resources, such as tables and procedures. Logging and auditing service112may likewise be implemented for traceability/auditability and to maintain an activity log of the migrated workloads. 
Policy analyzer116is, in some embodiments, implemented as a component or feature of policy management service114. Policy analyzers may utilize various mathematical models to mathematically prove various security aspects of migrated workloads. A Satisfiability Modulo Theories (SMT)-based solver may be used to prove that certain security assurances are met by the computing resource service provider106. For example, policy analyzer116can mathematically prove whether a reference policy (e.g., representing a regulatory constraint or a certification requirement) is satisfied by the migrated workload environment. This may be accomplished by using policy analyzer to prove that the security policies implemented in policy management service114are either equivalent to or less permissive than the reference policy representing a constraint or requirement. Networking service118may be a service offered by computing resource service provider that implements virtual private cloud (VPC) capabilities. Networking service118may provide web-based frontend capabilities. In some embodiments, networking service118is used to provision a logically isolated section of computing resource service provider106wherein customers are able to access various resources within a VPC using an IPsec based virtual private network (VPN). Security policies that facilitate the access and use of networking service118may be stored in a policy database and managed by policy management service114. It should be noted that components described in connection withFIG.1and example implementations utilizing such components are described in greater detail below. FIG.2illustrates a computing environment200in which service-based security enforcement of a compute instance is implemented, in accordance with one or more example embodiments of the present disclosure. In at least one embodiment, compute instance202refers to a virtual machine instance that is executing in the context of a computing resource service provider. Compute instance202may be used to launch various software applications, provide compute-related functionality, and so forth. A compute instance202may be a virtual machine instance that is provisioned and launched in the context of a compute service of a computing resource service provider. A compute instance202may be provisioned and launched on behalf of a client. A client of computing resource service provider may launch a compute instance to run a mainframe workload or other software-based functionality. In various embodiments, compute-service is provisioned with an operating system. An operating system may be used to run various software programs and applications, such as program208depicted inFIG.2. Program208may be launched as a process in user space204or user mode. User space204may refer to executable code that runs outside of the operating system's kernel. Executable code running in user space204may include various programs, libraries, services, etc. that interact with the kernel. Processes running in user space204may make application programming interface (API) calls that transition from user mode to kernel mode to access CPU, memory, hardware devices, and so forth. Kernel space206may refer to the core of the operating system. The kernel may be used to control hardware resources (e.g. I/O, memory, Cryptography) via device drivers, arbitrates conflicts between processes concerning such resources, and optimizes the utilization of common resources e.g. CPU & cache usage, file systems, and network sockets. 
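Returning to the SMT-based analysis performed by policy analyzer 116 described above, the following toy sketch uses the z3-solver Python package to check that a deployed policy is no more permissive than a reference policy by asking whether any request exists that the deployed policy allows but the reference policy denies. The boolean encoding of requests and the two example policies are simplified assumptions for illustration only; the actual encoding used by the policy analyzer is not specified in this description.

from z3 import Bool, Solver, And, Not, Implies, unsat

action_write = Bool("action_write")   # the request asks for write access
is_admin = Bool("is_admin")           # the caller holds an administrator role
has_mfa = Bool("has_mfa")             # the caller authenticated with MFA

# Reference policy (e.g., a regulatory constraint): writes require an administrator.
reference = Implies(action_write, is_admin)

# Deployed policy: writes require an administrator who also used MFA (stricter).
deployed = Implies(action_write, And(is_admin, has_mfa))

# "deployed" is equivalent to or less permissive than "reference" exactly when no
# request is allowed by "deployed" while being denied by "reference".
solver = Solver()
solver.add(And(deployed, Not(reference)))
if solver.check() == unsat:
    print("deployed policy is equivalent to or less permissive than the reference")
else:
    print("counterexample request:", solver.model())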
Executable code running in kernel mode may run in a supervisor mode that is trusted to interact with and control hardware components, such as CPU, memory, and other hardware devices. Program208may refer to a software-based program that runs in user mode. A user-mode application may be an executable program that is launched by a user (e.g., administrator or other user) of the operating system and may be used to implement various functionality, such as running mainframe workloads. Program208may access various computing resources. For example, program208may access or attempt to access files in the operating system. Files may be stored locally (e.g., on the same logical and/or physical disk as the operating system) or may be stored remotely (e.g., on a network drive). In various embodiments, access to computing resources is granted or denied based on a security model. An operating system may be able to specify various operating-system level permissions for resources under the control of the operating system. For example, an operating system may support setting permissions that restrict or define a user's ability to access a file. A user may be allowed to have various combinations of read, write, and execute privileges. An operating system may allow users to be part of a group. In various embodiments, techniques described herein may be used to enforce external security policies. As described herein, an external security policy may refer to a policy that is not managed natively by an operating system to control access to resources of the operating system—rather, an external policy may be managed by a separate computing entity, such as policy management service214depicted inFIG.2, in accordance with at least one embodiment. A system call210(e.g., Linux syscall) may refer to a command, operation, interface etc. that a user mode computer program requests a service from the kernel of the operating system on which it is executed. A program may invoke system call210to access a file stored on a hard disk drive, launch a child process (e.g., using a CPU), utilize other hardware, and so forth. A system call210may be used to transition execution of code from user space204to kernel space206. System call210may transfer control from user space to kernel space using an architecture-specific feature. For example, software interrupts or traps may be used to transfer control to an operating system kernel. Authorization interceptor212may refer to a kernel-mode component that an operating system is provisioned with when the operating system is launched. The operating system running on compute instance202may be different from a stock version in at least this respect. In various embodiments, authorization interceptor212runs in kernel space and cannot be deactivated by user mode operations. In some embodiments, an elevated privilege is required to access or otherwise configure authorization interceptor212once it has been provisioned. One such example may be where a token or other permission grant is provided by policy management service214that allows for the authorization interceptor212to be accessed on compute instance202. One such example of an authorization interceptor212that runs in kernel space206is extended Berkeley Packet Filter (eBPF). In various embodiments, authorization interceptor212is a kernel mode software component (e.g., executable or library) that runs opaquely, such that user mode processes such as program208are not able to determine its existence. 
Authorization interceptor212may be used in place of—or in addition to—an operating system's native security features to provide authorization functionality in the context of the operating system. In at least one embodiment, authorization interceptor212bypasses an operating system's native security features and instead uses policy management service214to perform authorization checks to determine whether and/or how to process system calls. In various embodiments, authorization of system calls involves determining a request context and providing the request context to policy management service214. Policy management service214may be a service of a computing resource service provider that is used to bind policies to principals, and may be used to determine whether access to computing resources should be granted. In at least some embodiments, policy management service214is extended to manage access to not only computing resources that are controlled and/or managed by the computing resource service provider, but also computing resources that are controlled and/or managed by an operating system running on compute instance202. Policy management service214described in connection withFIG.2may be used to store security policies that control the access to various resources controlled by operating systems, database management systems, and so forth. In at least one embodiment, authorization interceptor212is used to determine whether and/or how the kernel processes system call210. In at least one embodiment, authorization interceptor212detects system call210, determines a request context, and submits the request context to policy management service214for evaluation. Policy management service214may evaluate the request context, determine whether the request should be authorized, and then provide an indication of whether to grant or deny access. Authorization interceptor212may receive a response from policy management service214indicating whether system call210should be fulfilled or denied, and system call210may be processed according to such an indication. A request context may be determined by authorization interceptor212. A request context may include information regarding a system call or other access request that may be used by policy management service214to evaluate whether a grant of access to the requested resources is permitted. For example, a request context may include various information, such as a user or principal that is making the system call, one or more OS-level resources which would be accessed in fulfillment of the system call, the nature of the access (e.g., whether read, write, and/or execute privileges would be needed), and so forth. In some embodiments, a system call may involve access to multiple OS-level resources. For example, if a system call is made to delete a directory and files/sub-directories, then permissions to write each file and sub-directory (and files contained therein) may be required to successfully delete the directory. In various embodiments, determining a request context comprises performing a mapping between OS-level privilege concepts and computing resource service provider-level privilege and concepts. For example, a file within the context of an operating system may be described by a full or relative filepath—e.g., “/home/ExampleUser/data/ExFile.dat” in Linux or “C:\users\ExampleUser\data\Exfile.dat” in Windows. 
Within the context of a computing resource service provider, these OS-level paths may be decorated with additional information specifying a workload or execution context that further disambiguates the resource from other resources of other users. Likewise, an OS user may be associated with a service provider-level user. For example, an operating system user name may be mapped to service provider resource name that corresponds to the operating system user. In some embodiments, a service provider user name may be decorated with a use context (e.g., workload associated with compute instance202) that uniquely resolves to a specific OS-level user. In some embodiments, the user name may be described in the following format:rn:web-service:identity-manager::account-id:user/user-name The user name may be nested or otherwise encode a use context indicating that the service provider-level user name is associated with an OS-level user that is used to perform workloads on a compute instance. In some embodiment, the OS-level use is encoded as part of the user name, indicating a resource or type of resource (e.g., compute instance) that the user name is used for:rn:web-service:identity-manager::account-id:user/compute-instance/OS-user-name In some embodiments, the principal includes or otherwise encodes information regarding the workload that the operating system is being used for, such as a mainframe workload. A request context may comprise principal information, resource information, action information, etc. In some embodiments, a request context includes additional information such as request time stamp, location information, and so forth, which may be used as part of an authorization and/or authentication process. Principal information may refer to a caller entity, which may be a user, role, group, etc. that is making a system call. Resource information may refer to one or more OS-level resources for which access is being requested via the system call. Action information may refer to a type of access to the resource that is being requested by the system call, such as access to read, write, or execute a file. A bitmask may be used to specify different combinations of actions that may be requested. A bit value of 1 may denote a grant permission whereas a bit value of 0 may denote a denial of permission. Bitmask positions may refer to different actions (e.g., read, write, execute). Actions may be described as operating system or compute instance level actions, for example, a wild card “OS:*” action may indicate that all OS-level permissions should be allowed. Authorization interceptor212may transmit a request context to policy management service214. As noted before, compute instance202—which authorization interceptor executes on—as well as policy management service214may be running within the context of a computing resource service provider that hosts and/or otherwise provides various computing-related services. Computing resource service provider may be used to run mainframe application workloads in the context of an off-premises computing resource service provider. A policy management service214may provide access to, and administration of, policies applicable to requests for access to computing resources (e.g., web service application programming interface requests). For example, the policy management service may receive information sufficient for selecting policies applicable to pending requests. 
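One way to picture the request context, the naming convention, and the action bitmask described above is the short sketch below. The resource-name layout for files, the action names, and the helper functions are assumptions made only for illustration; the description above specifies the user-name format but not the exact format of other identifiers.

from datetime import datetime, timezone

READ, WRITE, EXECUTE = 0b100, 0b010, 0b001   # bitmask positions for requested actions

def provider_user_name(account_id, os_user):
    # Follows the nested user-name convention sketched above (hypothetical values).
    return f"rn:web-service:identity-manager::{account_id}:user/compute-instance/{os_user}"

def build_request_context(account_id, workload, os_user, path, requested_bits):
    action_names = [name for bit, name in
                    ((READ, "os:Read"), (WRITE, "os:Write"), (EXECUTE, "os:Execute"))
                    if requested_bits & bit]
    return {
        "principal": provider_user_name(account_id, os_user),
        # Hypothetical file-resource name decorated with the workload context.
        "resource": f"rn:web-service:compute::{account_id}:{workload}:file{path}",
        "action": action_names,
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }

context = build_request_context("1234", "mainframe-workload-a", "mfuser",
                                "/home/ExampleUser/data/ExFile.dat", READ | WRITE)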
In some embodiments, the information may be copies of the requests, or may be information generated based at least in part on the requests. For example, a service frontend may receive a request for access to resources and may generate a query to the policy management service based at least in part on information specified by the request. In some embodiments, authorization interceptor comprises or otherwise operates in concert with an authorization module that is used to compare the request to the one or more permissions in the policy to determine whether service may satisfy the request (i.e., whether fulfillment of the request is authorized). For example, the authorization module may compare an API call associated with the request against permitted API calls specified by the policy to determine if the request is allowed. If the authorization module is not able to match the request to a permission specified by the policy, the authorization module may execute one or more default actions such as denying the request, and causing the denied request to be logged in the logging and auditing service218. If the authorization module matches the request to one or more permissions specified by the policy, the authorization module may resolve this by selecting the least restrictive response (as defined by the policy) and by providing an indication of whether the fulfillment of the request is authorized (i.e., complies with applicable policy) based on that selected response. In some embodiments, the authorization module is a separate service provided by the computing resource service provider and the authorization interceptor212of compute instance202may communicate with the authorization module over a network. Accordingly, if authorization interceptor212or another appropriate kernel-mode component determines that the system call210should be fulfilled, then service218may be invoked. In this context, service218may refer to kernel-mode software that provides a service or implements functionality related to the compute instance and/or operating system. For example, file system related services may be implemented by service218whereby an OS-level user may access files with various privileges, such as read-only privileges, read-write privileges, execute privileges, and so forth. Logging and auditing service216may refer to a service of a computing resource service provider that performs logging and auditing services. While not depicted inFIG.2, other embodiments may be implemented with logging and auditing performed by separate services, or implemented with one but not the other. In various embodiments, each permissions request that is sent by authorization interceptor212and each response to such request may be recorded or otherwise logged by logging and auditing service216. Logging and auditing service216may be a single point in which permission requests across multiple computing resources and/or types of computing resources within a computing resource service provider are collected and aggregated. For example, a mainframe workload may involve file access requests on an operating system (e.g., running on a compute instance of a computing resource service provider) and database query requests to access records stored in a database (e.g., database service offered by computing resource service provider), and so forth. Each of these requests may be recorded and aggregated together by logging and auditing service216. 
A technical benefit of aggregating activity logs across all resources used in a workload (e.g., mainframe workload) is that comprehensive auditing may be performed—for example, tracing data lineages and following the flow of data from one subsystem to another subsystem. This may be used to ensure the confidentiality and/or security of data is preserved throughout an entire workload, for performing forensic analysis of where data is available or has been made available, and so forth. FIG.2depicts an authorization interceptor212implemented in the context of a compute service in which system calls of importance are intercepted and an external policy management service is used to provide fine-grained access management. Compute services described herein may include virtual machine instances, containers, serverless compute resources, and so forth. Generally speaking, the compute instance may be any suitable type of compute resource in which a kernel of an operating system can be re-programmed or modified to intercept system calls. FIG.3illustrates a computing environment300in which access to an OS-level resource (e.g., a file) is authorized using a policy management service, in accordance with one or more example embodiments of the present disclosure. In at least one embodiment, operating system302refers to software running on a hardware machine, virtual machine, etc. Operating system302may run on virtualized hardware or physical hardware. For example, operating system302may be provisioned and launched on a virtual machine instance of a compute service that is hosted or otherwise operated by a computing resource service provider. Operating system302may be system software that manages computer hardware (e.g., physical or virtualized), software resources, and provides common services for computer programs. Operating system may include a user space304and kernel space306. User space304may refer to code that runs outside of an operating system's kernel. Programs and applications, such as mainframe workloads, may be executed in user space304. Typically, applications running in user space304have their own virtual memory space and, unless explicitly allowed, cannot access memory of other processes. The segregation of processes may be the basis of memory protection, as it prevents processes from viewing the memory contents and state of other processes, unless explicitly allowed (e.g., through shared memory or inter-process communications). Programs running in user space304, such as mainframe applications, may make use of operating system resources. For example, a program can read from a file stored on a disk drive that is managed by the file system of operating system302. OS-level resource request308may refer to a request to access a file, which may include data and/or executable code, for example. OS-level resource request308may be a user-mode function or application programming interface (API). In various embodiments, a kernel-mode transition is used to process a user-mode requests. A system call310may refer to a command or function that is programmatically invoked by a program to request a service from the kernel of an operating system. OS resource312may refer to a resource managed by an operating system, such as a file, hardware resources, and so forth. System call310may encode or otherwise indicate an operation to perform on OS resource312. For example, system call310may indicate that an application is requesting to open a file for read-only access, read-write access, and so forth. 
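As one way to picture how a system call such as a file open can be translated into the type of access being requested, the following sketch maps open() flags onto hypothetical OS-level action names; the flag-to-action mapping and the action names are assumptions for illustration.

import os

# Hypothetical mapping from the flags of an open() system call to the OS-level
# actions that would have to be authorized before the call is fulfilled.
def requested_actions(open_flags):
    actions = set()
    if open_flags & os.O_RDWR:
        actions.update({"os:Read", "os:Write"})
    elif open_flags & os.O_WRONLY:
        actions.add("os:Write")
    else:                       # O_RDONLY is zero, so the default is read access
        actions.add("os:Read")
    if open_flags & (os.O_CREAT | os.O_TRUNC | os.O_APPEND):
        actions.add("os:Write")
    return actions

assert requested_actions(os.O_RDONLY) == {"os:Read"}
assert requested_actions(os.O_RDWR | os.O_CREAT) == {"os:Read", "os:Write"}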
System call310may include information relating to a user account that submits OS-level resource request308. Authorization interceptor314may be used to determine whether and/or how to process system call310. Authorization interceptor314may be used in place of OS-level permissions316in various embodiments. OS-level permissions316may refer to permissions or policies relating to resources managed by operating system302. OS-level permissions316may provide different levels of access to OS resource312. In various embodiments, operating system302allows for different levels of permissions to be set based on user, group, and other. For example, in Linux, files and directories are OS-level resources that are owned by a user (e.g., the user that created a file), and may be assigned to a group, which defines the resource's group class, and another class that encompasses users that are not the owner and also not a member of the group. According to at least one embodiment, when system calls are made, authorization interceptor314is used to perform authorization checks within kernel space306to determine whether the system call310should be successfully fulfilled. In some embodiments, all system calls require authorization interceptor314to successfully verify that the system call is authorized. Authorization to access OS-level resources may be performed using an external policy management service320. For example, operating system302may be running on a virtual machine instance of a compute service of a computing resource service provider that also provides identity and/or access management capabilities via policy management service320. Policy management service320may be in accordance with those described elsewhere, such as those discussed in connection withFIG.16. In various embodiments, authorization interceptor314uses system call310to determine a request context. Applicable security policies managed by policy managements service320and stored in policy database322may be used to evaluate whether the user making the system call has sufficient rights. Request context may be used to determine applicable policies, which may be evaluated in the context of the system call310to determine whether or not access to OS resource312should be granted. Authorization interceptor314may have an internal policy cache318that replicates the OS-specific security policies that are managed by policy management service320. To obtain policies applicable to the request, the authorization interceptor314may transmit a query to a policy database322managed by a policy management service320. The query to the policy database322may be a request comprising information sufficient to determine a specific operating system context, such as an identifier associated with a mainframe workload that the operating system is being used to run. Authorization interceptor314may receive the security policies pertinent to the specific execution context and store them in policy cache318. For example, a policy managed by policy management service320may indicate that the user making the OS-level resource request308has read-only privileges to OS resource312. Accordingly, in this example, if OS-level resource request308is made by the user to read the resource, the request will be granted; however, if the user requests read-write access, the request will be denied. 
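A minimal sketch of the policy cache 318 behavior described above is given below, assuming a hypothetical policy-service client whose query interface is keyed by a workload identifier; the real interface is not specified in this description.

class PolicyCache:
    """Caches the OS-specific security policies replicated from the policy service."""

    def __init__(self, policy_service, workload_id):
        self._service = policy_service      # hypothetical client for the policy database
        self._workload_id = workload_id     # identifies the specific execution context
        self._policies = None

    def policies_for(self, resource):
        if self._policies is None:
            # First use: pull the policies pertinent to this execution context.
            self._policies = self._service.query(workload=self._workload_id)
        return [p for p in self._policies if p["resource"] == resource]

    def invalidate(self):
        # Called periodically or when the policy service signals an update.
        self._policies = None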
Operating system 302 may be provisioned such that authorization interceptor 314 is used to perform access and security related checks that determine whether and/or how access to resources managed by operating system 302 is granted. In various embodiments, a customer of a computing resource service provider may specify whether compute instances should use native security authorization systems or use policy management service 320 to perform authorization processes. For example, if a client of a computing resource service provider indicates (e.g., as a setting in a management dashboard) that security of operating system 302 should be centrally managed, then compute instances may be launched with a customized version of the operating system (e.g., operating system 302 depicted in FIG. 3) that is pre-loaded with authorization interceptor 314 in the kernel. In some embodiments, operating system 302 is configured with OS-level permissions that deny access to all resources. In various embodiments, users of operating system 302—even including administrators—are not sufficiently privileged to perform operations that change the behavior of authorization interceptor 314. For example, in the customized operating system 302 pre-loaded with authorization interceptor 314, it would not be possible to remove authorization interceptor 314 or change the behavior of the kernel to perform authorization checks using the local operating system's security policies rather than by using policy management service 320. In various embodiments, operating system 302 is configured in this manner to prevent policy management service 320 from being bypassed, which allows provable security tools to be used. These tools may be used to mathematically prove various assertions that pertain to resources—such as OS-level resources. For example, an SMT-based solver is used, in at least one embodiment, to review security policies managed by policy management service 320 to mathematically prove that the security policies in policy database 322 are equivalent to a reference policy. This type of mathematically sound policy analysis may be used to prove, in a mathematical sense, that the security policies implemented in policy management service 320 are either equivalent to or less permissive than a reference policy, which can be used to comply with data security requirements, for example, as required for auditing, certification, or other needs. FIG. 4 illustrates a computing environment 400 in which security policies for an operating system are externally managed by a policy management service, in accordance with one or more example embodiments of the present disclosure. In various embodiments, operating system 402 comprises various components, such as a kernel that is used by applications to interface with hardware (e.g., physical hardware or virtualizations thereof). Operating systems typically implement a security model and manage access to resources managed by the operating system. For example, an operating system may be used to manage permissions for a file stored on a hard disk drive. For example, a file such as file 404 depicted in FIG. 4 may have OS-level file permissions 406 that indicate how various entities within operating system 402 can access the file. For example, a Linux-based environment may allow privileged users to use a command (e.g., chmod) to set read, write, and execute permissions on a file for a user, group, and others. This type of security model may have various shortcomings.
For example, in Linux, chmod is a command that a privileged user can use to specify permissions for a file. The file may be configured with different permissions for a file owner (e.g., a specific user), a group, and others. As an example, an owner of a file may have full access rights to read, write, and execute the file, a specific group such as an audit group may be given read-only permission to view the contents of the file but not modify it. Additionally, access to other users may be denied, unless otherwise specified (e.g., a deny-by-default stance). As discussed above, an operating system such as Linux may have various shortcomings with respect to its security model. One such shortcoming may be that Linux allows only one user and one group to own a file. Linux/Unix commands such as chmod are limited to specifying three different sets of permissions on a file resource—for a user, a group, and others. This may result in a difficult-to-implement security model where generating customized permissions for different users to access is cumbersome. In contrast, various embodiments described herein can be implemented so that operating system402is able to provide customized security policies for any number of users, groups, roles, or other types of principals via an external policy management service412. For example, policy management service410may be used to provide multiple user-specific security policies and/or multiple group-specific security policies on a file or other OS-managed resource. A role may be an identity within a computing environment with permission policies that may be used to determine a set of capabilities that the role is capable of performing within the context of the computing environment. The permissions may include rights to access data resources (e.g., rights to create, read, update, and delete a file) as well as access to services (e.g., rights to make an API request and have the request fulfilled by a service) as well as other rights (e.g., administrative rights in a computer system to grant or revoke permissions to users). However, a role is not necessarily uniquely associated with an underlying user or identity and may be assumable by anyone that has a permission to assume the role. In some cases, access keys are dynamically created when a role is assumed. Roles may be used as a mechanism to facilitate access to resources with temporary security credentials for the role session. This means that the time-oriented granularity of security is improved: a user will receive credentials on some mainframe resource only for the duration of the role session rather than forever. This is an improvement on standard mainframe security where privileges are usually granted for an indefinite period of time. In at least one embodiment, a security model of an operating system is implemented using an external policy management service, as depicted inFIG.4. In various embodiments, authorization to access file404is managed using an external service, such as policy management service410depicted inFIG.4. For example, if an application running within the context of operating system402attempts to access file404, a file access request may be processed using authorization interceptor408. Authorization interceptor408may be a software component that is always running in the kernel of operating system402and intercepts system calls to the kernel of operating system402. 
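The time-limited role sessions discussed above, in which credentials are valid only for the duration of the session rather than indefinitely, can be pictured with the following sketch; the function names, token format, and default duration are illustrative assumptions.

import secrets
from datetime import datetime, timedelta, timezone

# Hypothetical sketch of a time-limited role session: credentials are issued for
# the duration of the session only, rather than for an indefinite period.
def assume_role(role_name, duration_minutes=60):
    return {
        "role": role_name,
        "access_token": secrets.token_urlsafe(32),
        "expires_at": datetime.now(timezone.utc) + timedelta(minutes=duration_minutes),
    }

def credentials_valid(session):
    return datetime.now(timezone.utc) < session["expires_at"]

session = assume_role("role/mainframe-operator", duration_minutes=30)
assert credentials_valid(session)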
When system calls or other user-to-kernel calls are made, those calls may be routed to authorization interceptor408and then authorization interceptor is used to determine whether access to various resources managed by operating system402should be granted. Authorization interceptor408may be used to process system calls from applications in user space. When a user-mode application makes various API calls, a kernel mode transition may be facilitated via system calls. Once in kernel mode, system calls may be routed to authorization interceptor408, and then authorization interceptor408uses policy management service410to validate whether access to various resources needed to fulfill the system call are permitted, according to at least one embodiment. Authorization interceptor408may submit an API request to policy management service410within the context of a computing resource service provider that retrieves one or more applicable security policies from policy database412, which may be used to evaluate whether a grant of access should be permitted to operating system402resources as part of fulfilling the system call. In various embodiments, policy database412is used to store security policies that pertain to a resource. Policy database412may be used to manage security policies for resources directly managed by a computing resource service provider (e.g., compute services, data storage services, database services, etc. offered by computing resource service provider to customers). In at least one embodiment, policy database412stores security policies for OS-level resources, such as files of operating system402running on a virtual machine instance hosted by computing resource service provider, as depicted inFIG.4 Security policies of policy database412may be used to define permissions for how resources can be accessed. For example, multiple security policies414A-D may be applicable to file404, which is an OS-level resource. A security policy may comprise a principal, resource, action, and effect. In some embodiments, a security policy further comprises a condition that indicates when the security policy is applicable—for example, certain security policies may be applicable only at certain times (e.g., weekdays, during standard business hours, and so forth). Multiple security policies may be defined to control access to an OS-level resource. For example, a first security policy414A may apply to a first OS user that grants all privileges to that user. For example, these privileges may include OS-level read, write, and execute privileges. A second security policy414B may apply to a second OS user and grant that user a different set of privileges, such as read and write access to the resource. Likewise, security policies may be defined for groups or other types of principals, such as roles. Security policy414C may be applicable to an audit group that grants users of that group read-only access to file404. Additionally, security policy414D may grant another group a different set of OS-level privileges to a different group. In this way, security policies can be customized to provide multiple users and/or multiple groups with different levels of access to OS-level resources and may be used to extend the security capabilities of operating system402. 
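A hypothetical rendering of the four security policies 414A-D described above, expressed as principal, resource, action, and effect statements, might look like the following; the dictionary format and principal names are illustrative assumptions rather than the service's actual policy language, and the time-based condition shown on policy 414C is added only to illustrate the optional condition element discussed above.

# All four statements attach to the same OS-level file resource.
file_resource = "os:file:/home/ExampleUser/data/ExFile.dat"

security_policies = [
    {"sid": "414A", "principal": "user/owner-user",  "resource": file_resource,
     "action": ["os:Read", "os:Write", "os:Execute"], "effect": "Allow"},
    {"sid": "414B", "principal": "user/second-user", "resource": file_resource,
     "action": ["os:Read", "os:Write"],               "effect": "Allow"},
    {"sid": "414C", "principal": "group/audit",      "resource": file_resource,
     "action": ["os:Read"],                           "effect": "Allow",
     "condition": {"weekday": True, "hours": "09:00-17:00"}},   # optional condition
    {"sid": "414D", "principal": "group/operations", "resource": file_resource,
     "action": ["os:Read", "os:Execute"],             "effect": "Allow"},
]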
In various embodiments, OS-level privileges are gathered as groups and granted as a whole through the group instead of individually, which may be done to increase efficiency of security management and ensure consistent application of security policies across a set of users. A privilege group can be designed and ordered to further improve security, wherein a second privilege can only be used after a first privilege is used—this means if access to the second privilege is improperly obtained (e.g., by a hacker or malicious user), that access to sensitive resources is still denied. FIG.5shows an illustrative example of a process500for managing security policies on behalf of an operating system, in accordance with one or more example embodiments of the present disclosure. In at least one embodiment, some or all of the process500(or any other processes described herein, or variations and/or combinations thereof) is performed under the control of one or more computer systems that store computer-executable instructions and may be implemented as code (e.g., computer-executable instructions, one or more computer programs, or one or more applications) executing collectively on one or more processors, by hardware, software, or combinations thereof. The code, in at least one embodiment, is stored on a computer-readable storage medium in the form of a computer program comprising a plurality of computer-readable instructions executable by one or more processors. The computer-readable storage medium, in at least one embodiment, is a non-transitory computer-readable medium. In at least one embodiment, at least some of the computer-readable instructions usable to perform the process500are not stored solely using transitory signals (e.g., a propagating transient electric or electromagnetic transmission). A non-transitory computer-readable medium does not necessarily include non-transitory data storage circuitry (e.g., buffers, caches, and queues) within transceivers of transitory signals. Process500may be implemented in the context of various systems and methods described elsewhere in this disclosure, such as those discussed in connection withFIGS.2-4and17. In at least one embodiment, process500or a portion thereof is implemented by a computing resource service provider. In at least one embodiment, process500comprises a step to detect502an OS-level request to access a computing resource of an operating system environment. The OS-level request may be detected in the kernel space of an operating system, for example, by an authorization interceptor module that runs as a critical component of the operating system. In various embodiments, an authorization interceptor receives and processes system calls from applications in user space to determine whether to grant access to various resources managed by the operating system. In at least one embodiment, process500comprises a step to determine504a request context. Determining a request context may involve determining a principal, a resource, and an action. A principal may refer to the entity that is requesting a specific action to be performed. The principal may be an OS-level user, a role that a user may assume by receiving a time-sensitive token from a computing resource service provider that has certain rights associated with it, and so forth. A resource may refer to an OS-level resource that is managed by the operating system, such as files and directories. An action may refer to a type of action that is being requested. 
This may involve read, write, execute, etc. rights, and different users may have different permissions based on their needs. For example, an audit team within a business organization may need read access rights in order to determine compliance with various internal and external rules and regulations. In various embodiments, OS-level and/or application-level indicators are determined based on the context in which a system call is made. For example, when a file request is made, an authorization interceptor may determine the user that is making the request, a filepath for the request, and a type of action that is being performed (e.g., open file for read-write access). This information may be used to determine a corresponding service provider-level user name, resource name, and the like. Service-provider resource identifiers may include information identifying a specific web service, account identifier, etc. Resource identifiers for OS-level resources may be decorated with information indicating the type of resource (e.g., OS-level user or resource) and may include information that can be resolved to a specific workload that a customer is running. In this way, and others, a naming convention may be established to uniquely identify OS-level resources, users, actions, etc. in the context of a service provider that may manage many such workloads for a single customer or for multiple customers. In a multi-tenant compute service, an authorization interceptor detects the tenant who is making the request and then applies the security policy for that specific tenant, even if the various tenants correspond to independent customers. In at least one embodiment, process 500 comprises a step to determine 506 one or more applicable security policies managed by a policy management service. A policy management service external to the operating system may manage a policy database that defines the security policies applicable to various resources, including OS-level resources. These OS-level resources may have resource names that are decorated or otherwise specify what type of resource they are. In various embodiments, a policy management service manages the security policies applicable to the operating system and the operating system's kernel stores a local and protected cache of the security policies that are externally managed. These security policies may be used to determine one or more security policies applicable to the resource for which access is being requested. In some embodiments, security definitions are obtained by the system performing process 500 from the policy management service and cached locally (memory and/or storage) to minimize traffic with the policy management service. This may be used in systems where the OS-level security policies do not frequently change. In some embodiments, a local cache of the OS-level security policies is maintained and updated (e.g., rebuilt) periodically, or in response to notifications from the policy management service when updates have been made to the OS-level security policies. In some embodiments, an authorization interceptor running in kernel space of an operating system sends a web service API request to a policy management service requesting evaluation of the request context to determine whether the access should be granted. In various embodiments, a policy management service receives a request context and performs a policy evaluation by retrieving applicable security policies from a policy database.
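The policy evaluation just described, together with the deny-by-default stance elaborated in the following paragraph, can be sketched as a small function that filters the applicable statements for a request context and grants access only when every requested action is explicitly allowed and none is explicitly denied; the policy and request-context formats are hypothetical and carried over from the earlier sketches.

def evaluate(request_context, policies):
    """Return "Allow" or "Deny" for a request context (deny-by-default)."""
    applicable = [p for p in policies
                  if p["principal"] == request_context["principal"]
                  and p["resource"] == request_context["resource"]]
    if not applicable:
        return "Deny"                       # no applicable statements: deny by default
    requested = set(request_context["action"])
    allowed, denied = set(), set()
    for p in applicable:
        covered = requested if "os:*" in p["action"] else requested & set(p["action"])
        (denied if p["effect"] == "Deny" else allowed).update(covered)
    return "Allow" if requested and requested <= (allowed - denied) else "Deny"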
In various embodiments, process 500 comprises a step to perform 508 a policy evaluation using the request context and the one or more applicable security policies that were determined. If no security policies are found, then a deny-by-default stance may be taken, whereby access to resources is denied by default, unless there is an explicit grant of permission. However, if applicable security policies are found, they may be used to determine whether access to OS-level resources should be granted. For example, security policies may specify one or more actions that can be performed, such as reading, writing, and executing files. The policy management service may perform an evaluation of any applicable security policies against the request context and then provide an evaluation result indicating whether access should be denied or granted. Accordingly, a system performing process 500 may use security policies managed by a policy management service to determine whether to grant access to the computing resource of the operating system. In some embodiments, an authorization service of the operating system obtains applicable security policies from the policy management service and performs the policy evaluation locally within the kernel. In at least one embodiment, process 500 comprises a step to determine 510 whether to grant or deny access to an OS-level resource based on an evaluation determined using security policies managed externally from the operating system. The indication may be used to fulfill the request 512 using resources accessible to the kernel if the policy evaluation is in the affirmative, or to deny 514 the request if the policy evaluation is in the negative. FIG. 6 illustrates a computing environment 600 in which service-based security enforcement of a database management system is implemented, in accordance with one or more example embodiments of the present disclosure. In various embodiments, database management system 602 is implemented as a database service that is offered by a computing resource service provider to customers. A customer may interact with database management system 602 via web service API calls that are routed through a service frontend. In various embodiments, a database system comprises components such as a database management system 602. A database management system (DBMS) may refer to software that interacts with the underlying database itself. For example, database management system 602 may comprise a query processor 604 for receiving requests from users to interact with the underlying database in certain well-defined ways. Database management system 602 is implemented as a relational database management system (RDBMS) such as PostgreSQL, according to various embodiments contemplated in the scope of this disclosure. In a multi-tenant RDBMS service, the authorization interceptor determines the tenant who is making the database request and then either applies the security policy for the specific tenant (e.g., when policy evaluation is performed in the RDBMS) or encodes the tenant information in the request context sent to a policy management service, even if the various tenants correspond to independent customers. A database management system may comprise a query processor 604. Query processor 604 may be a component of a database management system that is used to interpret requests (e.g., queries) received from end users via an application program into instructions. It also executes the user request that is received from a data manipulation language (DML) compiler.
SQL is an example of a data manipulation language that is designed for managing data in relational database management systems. Query processor 604 is used to convert requests from an application into function calls that can be executed by the database system, according to at least one embodiment. In various embodiments, query processor 604 uses a data dictionary to determine the architecture of the database and makes use of this information to determine how to convert the query into instructions and to craft a strategy for processing the query. Database manager 606 may be referred to as a run-time database manager, and may be used to process application requests. Queries may be received by database manager 606 and processed using various sub-components, such as an authorization control, command processor, transaction manager, buffer manager, and so forth. Database manager 606 may use DBMS security hooks 608 to authorize queries. If a query is allowed, based on the query context, then the underlying database 610 storing data records may be accessed to process the user's query. In various embodiments, database manager 606 comprises an authorization control module. The authorization control module may be used by the DBMS to determine the authorization of users and determine whether and/or how to process query requests. For example, an administrator or suitably privileged user may use a “GRANT” statement to allow access to various database resources, such as records and tables. For example, the following SQL statement may be used to grant ExUser the ability to select data from the ExTable:
GRANT SELECT ON ExTable TO ExUser
Database grants are used to provide users of a database with various privileges on tables and views, according to various embodiments. Conversely, database revokes may be used to revoke various permission grants that may have been previously issued. Database systems may implement a deny-by-default stance whereby access to database resources is denied by default, unless there is an explicit grant of permission. When a query is received by database management system 602, the query statement may be parsed by query processor 604. As part of query processing, a set of database resources may be identified as being needed to fulfill the request and a type of access that is requested. Resources which can have permissions attached to them may be referred to as securables or securable resources. In various embodiments, a database is a securable resource that includes one or more of the following actions that can be granted: BACKUP DATABASE, BACKUP LOG, CREATE DATABASE, CREATE DEFAULT, CREATE FUNCTION, CREATE PROCEDURE, CREATE RULE, CREATE TABLE, and CREATE VIEW. In various embodiments, a scalar function is a securable resource that includes one or more of the following actions that can be granted: EXECUTE and REFERENCES. In various embodiments, a table-valued function is a securable resource that includes one or more of the following actions that can be granted: DELETE, INSERT, REFERENCES, SELECT, and UPDATE. In various embodiments, a stored procedure is a securable resource that includes one or more of the following actions that can be granted: EXECUTE. In various embodiments, a table is a securable resource that includes one or more of the following actions that can be granted: DELETE, INSERT, REFERENCES, SELECT, and UPDATE. In various embodiments, a view is a securable resource that includes one or more of the following actions that can be granted: DELETE, INSERT, REFERENCES, SELECT, and UPDATE.
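One way to tabulate the securable types and grantable actions enumerated above is the mapping below, together with a trivial check that an action is grantable on a given securable type; the Python rendering is only an illustrative restatement of the lists in the preceding paragraph.

# Grantable actions per securable type, as enumerated above.
GRANTABLE_ACTIONS = {
    "database": {"BACKUP DATABASE", "BACKUP LOG", "CREATE DATABASE", "CREATE DEFAULT",
                 "CREATE FUNCTION", "CREATE PROCEDURE", "CREATE RULE", "CREATE TABLE",
                 "CREATE VIEW"},
    "scalar_function": {"EXECUTE", "REFERENCES"},
    "table_valued_function": {"DELETE", "INSERT", "REFERENCES", "SELECT", "UPDATE"},
    "stored_procedure": {"EXECUTE"},
    "table": {"DELETE", "INSERT", "REFERENCES", "SELECT", "UPDATE"},
    "view": {"DELETE", "INSERT", "REFERENCES", "SELECT", "UPDATE"},
}

def is_grantable(securable_type, action):
    return action in GRANTABLE_ACTIONS.get(securable_type, set())

assert is_grantable("table", "SELECT")
assert not is_grantable("stored_procedure", "SELECT")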
Policy management service 612 may be a service of a service provider that is external to database management system 602. Policy management service 612 may manage and facilitate access to security policies for database management systems in a policy database. In various embodiments, DBMS security hook 608 refers to a software library, module, plug-in, or other executable code that may be used to route authorization processes of database management system 602 to policy management service 612. In various embodiments, a database query context is determined by DBMS security hook 608 and transmitted to policy management service 612 for evaluation against one or more applicable security policies. Policy management service 612 may determine a set of applicable security policies from a policy database and evaluate any applicable security policies against the database query context to determine whether the query is authorized. For example, if a user attempts to run a stored procedure (sproc) but does not have the EXECUTE permission (or a permission such as ALL that includes the EXECUTE permission), then authorization to run the sproc should be denied. It should be noted that in various embodiments, policy management service 612 is used in place of a database management system's own authorization or security subsystems. There are various benefits to using an external policy management service 612 to perform authorization checks on behalf of resources managed by the database management system 602, including the ability to use provable security tools. In various embodiments, users of database management system 602—even including administrators—are not sufficiently privileged to perform operations that change the behavior of DBMS security hook 608. For example, in a customized database management system 602 pre-loaded with DBMS security hook 608, it would not be possible to remove DBMS security hook 608 or change the behavior of database manager 606 to perform authorization checks using the database management system's local security policies rather than by using policy management service 612. In various embodiments, database management system 602 is configured to prevent policy management service 612 from being bypassed, which allows provable security tools to be used to evaluate the conditions under which the underlying database resources can be used. These tools may be used to mathematically prove various assertions that pertain to resources—such as database-level resources. For example, an SMT-based solver is used, in at least one embodiment, to review security policies managed by policy management service 612 to mathematically prove that the security policies in the policy database are equivalent to a reference policy. This type of mathematically sound policy analysis may be used to prove, in a mathematical sense, that the security policies implemented in policy management service 612 are either equivalent to or less permissive than a reference policy, which can be used to comply with data security requirements, for example, as required for auditing, certification, or other needs. For example, a mainframe workload may be migrated to a computing resource service provider that uses various compute services (e.g., running applications on an operating system), database resources (e.g., executable routines/transactions, data records), and so forth to perform the mainframe workloads.
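A minimal sketch of the security-hook wrapping described above, assuming a hypothetical DBMS object that exposes a check_permission callable, a policy-service client with an evaluate method, and an audit log with a record method (none of these interfaces are specified in this description), might look as follows.

def install_security_hook(dbms, policy_service, audit_log):
    native_check = dbms.check_permission              # original DBMS security call

    def hooked_check(principal, securable, action):
        context = {"principal": principal, "resource": securable, "action": action}
        decision = policy_service.evaluate(context)   # external policy evaluation
        audit_log.record(context, decision)           # every decision is logged
        return decision == "Allow"

    dbms.check_permission = hooked_check              # wraps the native security path
    return native_check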
By defining and managing various internal resource permissions using an external policy management service, a policy analyzer may mathematically prove that a mainframe workload that is exported or migrated to run on a computing resource service provider provides the same—or better—security assurances as its on-premises mainframe counterpart. Logging and auditing service 614 may refer to a service of a computing resource service provider that performs logging and auditing services. While not depicted in FIG. 6, other embodiments may be implemented with logging and auditing performed by separate services, or implemented with one but not the other. In various embodiments, each permissions request that is sent by DBMS security hook 608 and each response to such a request may be recorded or otherwise logged by logging and auditing service 614. Logging and auditing service 614 may be a single point in which permission requests across multiple computing resources and/or types of computing resources within a computing resource service provider are collected and aggregated. For example, a mainframe workload may involve file access requests on an operating system (e.g., running on a compute instance of a computing resource service provider) and database query requests to access records stored in a database (e.g., a database service offered by the computing resource service provider), and so forth. Each of these requests may be recorded and aggregated together by logging and auditing service 614. A technical benefit of aggregating activity logs across all resources used in a workload (e.g., a mainframe workload) is that comprehensive auditing may be performed—for example, tracing data lineages and following the flow of data from one subsystem to another subsystem. This may be used to ensure the confidentiality and/or security of data is preserved throughout an entire workload, for performing forensic analysis of where data is available or has been made available, and so forth. Logging and auditing service 614 can be used to establish a set of baseline policies and enforce principles of least privilege. In at least one embodiment, a security policy creator can be used to facilitate such policy definition: on a reference machine with no initial security, the workloads to be authorized are run for a period of time long enough to see all features triggered. The corresponding security authorization logs that are collected are then analyzed to produce a sanitized version of the minimum set of security policies that are needed to grant such accesses to all principals active in the workload, on the basis of the principle of least privilege. In various embodiments, a policy evaluation is performed wherein a database query context is compared against a set of applicable security policies to determine whether the query should be permitted. In various embodiments, such as those in accordance with FIG. 6, policy management service 612 determines whether the query should be allowed and sends an indication of yes/no to indicate the result of the policy evaluation. In other embodiments (not depicted in FIG. 6), DBMS security hook 608 or another suitable component of database manager 606 performs the policy evaluation by obtaining, from the policy management service 612, a set of applicable security policies and evaluating the query against them entirely within database management system 602.
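One possible shape of such a policy evaluation is sketched below. The QueryContext class, the evaluate_policies helper, and the policy record fields are assumptions made for illustration and are not the interface of any particular policy management service; the sketch simply applies the deny-by-default stance, with an explicit deny taking precedence over any allow:

from dataclasses import dataclass

@dataclass
class QueryContext:
    principal: str      # database user, mapped to a service provider identity
    action: str         # e.g., "SELECT", "EXECUTE"
    resource: str       # securable resource, e.g., "ExTable"

def evaluate_policies(context, policies):
    # Each policy is assumed to be a dict with "principal", "action", "resource", "effect".
    applicable = [p for p in policies
                  if p["principal"] == context.principal
                  and p["action"] == context.action
                  and p["resource"] == context.resource]
    # An explicit deny overrides any allow.
    if any(p["effect"] == "DENY" for p in applicable):
        return False
    # Otherwise allow only when an explicit grant applies; deny by default.
    return any(p["effect"] == "ALLOW" for p in applicable)

policies = [{"principal": "ExUser", "action": "SELECT", "resource": "ExTable", "effect": "ALLOW"}]
print(evaluate_policies(QueryContext("ExUser", "SELECT", "ExTable"), policies))   # True
print(evaluate_policies(QueryContext("ExUser", "DELETE", "ExTable"), policies))   # False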
If the policy evaluation results in an affirmative evaluation, then the query may be fulfilled and executed according to any determined query strategies that may be used to optimize access to data of database 610. Otherwise, an indication that access is denied may be provided in response to the query. FIG. 7 illustrates a computing environment 700 in which security policies for a database system are externally managed by a policy management service, in accordance with one or more example embodiments of the present disclosure. In at least one embodiment, database management system 702 is implemented as a database service offered by a computing resource service provider. A computing resource service provider may provide various cloud-based services, such as cloud-based database services, compute services, workload services, and so forth. Identity and access management services may also be provided by a policy management service. Database management system 702 comprises a query processor 704 and a database manager 706, according to at least one embodiment. A database manager 706 may further comprise DBMS security hook 708, command processor 710, transaction manager 712, data manager 714, combinations thereof, and so forth. Query requests submitted by a client of DBMS 702 may be parsed by a query processor 704 and then sent to database manager 706 to determine how the request should be fulfilled. Database 716 may refer to the underlying database that is used to store structured data of the database system. In various embodiments, database manager 706 comprises a DBMS security hook 708 component that is used to authorize access to various database resources. These resources may be referred to as securables or securable resources. In various embodiments, a database client submits a query request (e.g., a SQL query statement) that is parsed by query processor 704 to identify one or more database resources that are to be accessed as part of fulfilling the query. DBMS security hook 708 may be used to determine a database query context from the user query. In at least one embodiment, query statements are parsed to identify resources and actions for which a grant of access may be required to fulfill the query. Additionally, a database user who made the database query request may be identified. DBMS security hook 708 may determine a database query context and use a policy cache 724 to perform policy evaluation. In various embodiments, the database-related security policies of policy management service 718 are replicated in policy cache 724 to provide a local view of the security policies. This local caching may be done for performance reasons. The policy cache 724 may be used to perform a policy evaluation and determine whether the requested database operation should be fulfilled. Policy cache 724 may comprise the security policies of policy database 720 that pertain to the specific database instance. Policy database 720 may be in accordance with those described elsewhere in this disclosure, including those described in connection with FIGS. 2-4. Policy database 720 may comprise records that correspond to security policies that pertain to the computing resources directly managed by a computing resource service provider, as well as those that are managed by resources within the service provider. For example, security policies for database resources of a database service hosted by a computing resource service provider may be stored in policy database 720 and used to perform policy evaluations on behalf of the database system.
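A minimal sketch of such a local policy cache is shown below. The PolicyCache class, the fetch_policies_for_instance callable, and the refresh interval are hypothetical and stand in for whatever replication mechanism a policy management service actually exposes:

import time

class PolicyCache:
    """Illustrative local cache of database-related security policies."""

    def __init__(self, fetch_policies_for_instance, database_instance_id, ttl_seconds=300):
        # fetch_policies_for_instance is a hypothetical callable that returns the
        # security policies applicable to one database instance.
        self._fetch = fetch_policies_for_instance
        self._instance_id = database_instance_id
        self._ttl = ttl_seconds
        self._policies = []
        self._fetched_at = 0.0

    def policies(self):
        # Refresh the local view of the policies when the cached copy is stale.
        if time.monotonic() - self._fetched_at > self._ttl:
            self._policies = self._fetch(self._instance_id)
            self._fetched_at = time.monotonic()
        return self._policies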
Policy management service 718 may manage security policies on behalf of database management system 702 to control access to internal resources of the database management system 702. For example, security policies 722A-C may pertain to internal resources of database management system 702. A first security policy 722A may correspond to a grant statement that allows an engineering group "Eng_Group" (e.g., users thereof) to perform SELECT and INSERT commands on an "ExTable" table of the database. A second security policy 722B may be generated and stored in policy database 720 as a corresponding grant that allows a group of users "Audit_Grp" to perform SELECT operations on various database resources (e.g., table-valued functions, tables, views, and any other securables that support SELECT functionality). Third security policy 722C may correspond to a specific user ExUser that is granted privileges to perform any command on a specific table ExTable. Although not depicted in FIG. 7, security policies for database resources may be decorated to include additional information that identifies the principal, resource, action, etc. as being specific to a database resource or even to a specific database instance that is being hosted by a computing resource service provider. In some embodiments, the principal and/or resource name may be decorated with information indicating that the principal and/or resource is an internal database resource. In some embodiments, the principal and/or resource name includes an identifier that uniquely identifies a specific database instance, thereby allowing a security policy to identify a particular user of a particular database instance, a particular resource of a particular database instance, and so forth. Actions may be specified as database action types, which may include actions that can be granted on a database, including but not limited to: BACKUP DATABASE, BACKUP LOG, CREATE DATABASE, CREATE DEFAULT, CREATE FUNCTION, CREATE PROCEDURE, CREATE RULE, CREATE TABLE, CREATE VIEW, EXECUTE, DELETE, INSERT, REFERENCES, SELECT, and UPDATE. In some cases, the resource type may dictate the types of operations that can be performed. For example, the EXECUTE command is not applicable to a table, but would be applicable to a stored procedure, according to at least one embodiment. In various embodiments, each GRANT statement of a database system is stored in policy database 720 as a corresponding security policy record. Policy management service 718 may be used to perform policy evaluation tasks for a database query request by determining a set of applicable security policies and then evaluating whether the database resources needed to fulfill the request have corresponding grants. If access to all resources needed to fulfill the request is granted, then the query may be fulfilled using command processor 710, transaction manager 712, data manager 714, database 716, or some combination thereof. FIG. 8 illustrates a computing environment 800 in which a permission grant for a database system is forwarded to an external policy management service using a database management system (DBMS) security hook, in accordance with one or more example embodiments of the present disclosure. In various embodiments, database management system 802 provides an application interface to users such as user 804 to interact with a database. User 804 may be able to submit database queries according to a data manipulation language, such as SQL.
In various embodiments, user 804 is able to manage permissions for the database system by performing GRANT and REVOKE commands. A grant may refer to a request to extend the privileges of a user, whereas a revoke may be used to remove permissions that a user has. A permission grant 806 as depicted in FIG. 8 may be processed by database management system 802 such that an external policy management service is used to manage the security policies used to control access to various internal resources of the database system. Permission grant 806 may be a database statement according to a data manipulation language such as SQL that is intended to modify user permissions. For example, a grant statement may be in the following syntax:
GRANT [permission] ON [securable resource] TO [principal]
Permissions may correspond to various actions that may be performed. Each type of resource may have a predefined list of permissions that are applicable to the resource type. For example, a database table may allow for delete, insert, references, select, and update actions, but does not allow execute actions on the database table. Likewise, a scalar function may allow for execute and references, but not select, insert, delete, and update. Permission grant 806 may be received by database management system 802 via a service frontend and the request may be sent to a query processor 808. In various embodiments, query processor 808 is used to extract securable resources 810 specified in permission grant 806. In some cases, the securable resources 810 are determined using another component of database management system 802 and provided to database manager 812. In various embodiments, database manager 812 comprises DBMS security hook 814 that overrides the database management system's default behavior when handling grant and revoke commands. Typically, a database management system manages its own grants and revokes internally. In accordance with FIG. 8 and in at least one embodiment, DBMS security hook 814 overrides the default database management system behavior as it relates to grants and revokes and submits a web service API command to the policy management service to update permissions 816. In various embodiments, the command determines the database context of the permission grant and maps the database user, database permission, and database securable to a corresponding web service user, web service action, and web service resource. In various embodiments, the corresponding web service identifiers are decorated with additional information that uniquely identifies the specific database management system to which the security policy pertains. Policy management service 818 may receive an update permission 816 request from database management system 802 and update policy database 820 to reflect the requested permission grant. For example, policy database 820 may be updated with a new record that corresponds to a security policy that can be used to grant access to securable resources 810 of database 822. FIG. 9 illustrates a computing environment 900 in which a permission grant for a database system is forwarded to an external policy management service using a database management system (DBMS) security hook, in accordance with one or more example embodiments of the present disclosure. In some cases, it is possible to modify a database management system to incorporate additional functionality, such as DBMS security hooks, as described in connection with FIG. 8.
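The mapping from a database permission grant to an update-permissions request of the kind described in connection with FIG. 8 might look roughly like the following sketch. The GRANT_PATTERN regular expression, the "db:<instance>/<name>" decoration, and the request fields are assumptions made for illustration, not the format used by any particular policy management service:

import re

# Hypothetical translation of a database GRANT statement into an update-permissions
# request for an external policy management service.  Only single-word permissions
# are handled in this simplified sketch.
GRANT_PATTERN = re.compile(
    r"^GRANT\s+(?P<action>\w+)\s+ON\s+(?P<securable>\w+)\s+TO\s+(?P<principal>\w+)$",
    re.IGNORECASE)

def grant_to_update_permission(grant_statement, database_instance_id):
    match = GRANT_PATTERN.match(grant_statement.strip())
    if match is None:
        raise ValueError("unsupported grant statement")
    # Decorate the database user and securable so the resulting policy record
    # identifies the specific database instance it pertains to.
    return {
        "principal": f"db:{database_instance_id}/{match.group('principal')}",
        "action": f"db:{match.group('action').upper()}",
        "resource": f"db:{database_instance_id}/{match.group('securable')}",
        "effect": "ALLOW",
    }

print(grant_to_update_permission("GRANT SELECT ON ExTable TO ExUser", "instance-1234"))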
Incorporating a DBMS security hook in this manner may be possible when the source code for such database systems is available, for example, when an open source database such as PostgreSQL is used as the underlying database system. In contrast, embodiments discussed in connection with FIG. 9 may involve the use of databases in which such modifications are not possible—for example, where the source code for the database system is proprietary or not otherwise easy to modify to incorporate a DBMS security hook. FIG. 9 illustrates various embodiments in which a database service 902 nonetheless is able to use an external policy management service to implement various security features. Database service 902 may refer to a service that is provided by a computing resource service provider. Database service 902 may host various types of database systems, such as open source databases (e.g., PostgreSQL) and proprietary databases. A customer of database service 902 may choose from several different types of database offerings and select the database system that is best suited for the customer's needs. For example, a customer that is migrating a mainframe workload from an on-premises database system that uses SQL server may choose to use an analogous SQL server offered by the computing resource service provider so as to reduce the complexity of such migrations. In FIG. 9, a first user 904A of database service 902 is depicted as submitting an API request to grant a permission. A first permission grant 906A submitted by first user 904A via database service 902 may be used to modify the privileges of database users. A grant is typically used to expand a user's access privileges, whereas a revoke is typically used to reduce a user's access rights. While a first permission grant 906A is depicted in FIG. 9, techniques described herein may be similarly applicable to revocations and any other types of commands that modify users' database privileges. A service frontend 908 of database service 902 may receive a permission grant from first user 904A. The permission grant may be encoded in a data manipulation language, for example, as a SQL statement. When service frontend 908 detects a request to modify permissions, that request may be forwarded to policy management service 910. Policy management service 910, in at least one embodiment, includes a software component that is used to map the database permission grant 906A to a first policy 912A that is stored in policy database 914. First policy 912A may encode a principal, an action, a resource, and an effect. In various embodiments, a second user 904B may directly interact with policy management service 910 to modify the permissions on database management system 916. For example, second user 904B can submit a second policy 912B to grant or revoke access to database resources managed by database management system 916, such as database tables and views. Policy management service 910 may receive second policy 912B and store second policy 912B in policy database 914. Policy management service 910 may, in addition to writing policies such as first policy 912A and second policy 912B to policy database 914, also control the security of database management system 916. In various embodiments, users are not able to circumvent policy management service 910 when causing changes to the permissions of database management system 916. Policy management service 910, in various embodiments, determines grants 920 that reflect the database-related policies that are in policy database 914.
For example, if first policy 912A indicates that User_A should have read access to Table_X and second policy 912B indicates that User_B should have full access to Table_Y, then grants 920 may comprise SQL statements to provide such grants:
GRANT SELECT ON Table_X TO User_A
GRANT * ON Table_Y TO User_B
In such embodiments, the database management system 916 receives permission grants 920 exclusively from policy management service 910, and policy management service 910 is able to ensure that the permission grants in database management system 916 reflect the security policies that it has stored in policy database 914. In this way, mathematically sound policy analysis can be performed using the database-related policies stored in policy database 914, as they accurately reflect the permission grants of database management system 916. In various embodiments, the policy management service supports custom and fine-grained security policies for database systems. For example, instead of granting full system administration privileges to a user, the policy management service can define authorization for a given user for a specific utility (for example, authorization to reorganize a table in place for better access but not to back it up on disk). In this way, a first set of administration privileges may be granted to a user while a second set of administration privileges is not. In some embodiments, a security policy grants a user access to a first portion of a database table but not to a second portion (e.g., certain entries can be marked as sensitive and not accessible without elevated privileges). FIG. 10 shows an illustrative example of a process 1000 for performing authorization checks of a database system using an external policy management system, in accordance with one or more example embodiments of the present disclosure. In at least one embodiment, some or all of the process 1000 (or any other processes described herein, or variations and/or combinations thereof) is performed under the control of one or more computer systems that store computer-executable instructions and may be implemented as code (e.g., computer-executable instructions, one or more computer programs, or one or more applications) executing collectively on one or more processors, by hardware, software, or combinations thereof. The code, in at least one embodiment, is stored on a computer-readable storage medium in the form of a computer program comprising a plurality of computer-readable instructions executable by one or more processors. The computer-readable storage medium, in at least one embodiment, is a non-transitory computer-readable medium. In at least one embodiment, at least some of the computer-readable instructions usable to perform the process 1000 are not stored solely using transitory signals (e.g., a propagating transient electric or electromagnetic transmission). A non-transitory computer-readable medium does not necessarily include non-transitory data storage circuitry (e.g., buffers, caches, and queues) within transceivers of transitory signals. Process 1000 may be implemented in the context of various systems and methods described elsewhere in this disclosure, such as those discussed in connection with FIGS. 6-9 and 17. In at least one embodiment, process 1000 or a portion thereof is implemented by a computing resource service provider. In at least one embodiment, process 1000 comprises a step to receive 1002 a database query request.
The request may be formatted as a data manipulation language (DML) statement or a data definition language (DDL) statement, such as a SQL query statement. DDL statements may be used to define data structures, and may be used to manage the database system by creating, altering, or dropping data structures in a database. DML statements may affect information stored in the database, and may be used to insert, update, and delete records in the database. In various embodiments, the statement is generated by a client application and transmitted to a database service for fulfillment. The query request may be received by a database management system that uses a query processor to determine a set of database resources that will or may be needed to fulfill the request. The query processor output may be provided to a database manager that is used to perform database runtime operations. The database manager may comprise an authorization component, which may be implemented as a DBMS security hook that uses a policy management service external to the database system to determine whether various database queries should be fulfilled. In at least one embodiment, process 1000 comprises a step to determine 1004 a principal, an action, and one or more database resources to be accessed to fulfill the request. The query may be parsed to extract a database query context, which is used for policy evaluation in various embodiments. For example, a database user making a query may be mapped to a service provider user identifier, a database table may be decorated to uniquely identify the specific table in the database from other database tables used in other contexts, and so forth. In at least one embodiment, process 1000 comprises a step to determine 1006 a request context. In various embodiments, a DBMS security hook maps the database query to the security policy format used within the context of a computing resource service provider by a policy management service of the service provider. In various embodiments, security policies managed by the policy management service are replicated in a local cache of the DBMS. In at least one embodiment, process 1000 comprises a step to determine 1008, based on applicable security policies managed by a policy management service, an indication of whether to grant access to the one or more database resources. A request context from the DBMS security hook of a database system is used to determine a set of applicable security policies, and policy evaluation may be performed to determine whether access should be granted by evaluating the request context against the set of applicable security policies. For example, the request context may indicate one or more database resources (e.g., securables) are to be accessed as part of the database query. If the applicable security policies indicate that access to the database resources should be allowed in the requested manner, then the request may be authorized. However, if there is a security policy explicitly denying access to the database resources, or if no applicable security policy provides an explicit grant of access, then the database query request may be considered unauthorized. The DBMS security hook may provide a response indicating whether access should be granted or denied. Once the indication has been determined, the system may use that indication to determine whether 1010 to grant or deny access to the resource based on the result of the policy evaluation.
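The determination at 1004 of which securables and action a statement implies can be sketched with a deliberately simplified parser. A real query processor would derive this information from its parse tree; the regular expressions and the statement-to-action mapping below are assumptions made only to illustrate the idea:

import re

# Deliberately simplified extraction of the securables and action implied by a
# DML statement.  A real query processor would use its parse tree instead.
def extract_context(statement):
    statement = statement.strip().rstrip(";")
    patterns = [
        (r"^SELECT\s+.+?\s+FROM\s+(?P<name>\w+)", "SELECT"),
        (r"^INSERT\s+INTO\s+(?P<name>\w+)", "INSERT"),
        (r"^UPDATE\s+(?P<name>\w+)", "UPDATE"),
        (r"^DELETE\s+FROM\s+(?P<name>\w+)", "DELETE"),
    ]
    for pattern, action in patterns:
        match = re.match(pattern, statement, re.IGNORECASE | re.DOTALL)
        if match:
            return {"action": action, "resources": [match.group("name")]}
    raise ValueError("unsupported statement")

print(extract_context("SELECT col_a FROM ExTable WHERE col_b = 1"))
# {'action': 'SELECT', 'resources': ['ExTable']}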
If the indication is in the affirmative, the DBMS may fulfill 1012 the request and use downstream components of a database manager to access the underlying data of the database that the requestor has been authorized to access. However, if the indication is to the contrary, then the DBMS may deny 1014 the request, indicating that the requestor has insufficient privileges to access the database in the requested manner. FIG. 11 illustrates a diagram 1100 that depicts architectural aspects of different security architectures. On the left hand side of FIG. 11, a disparate security architecture 1102 is depicted. The disparate security architecture 1102 may be used to implement mainframe-migrated workloads in a service provider environment, but has various shortcomings. For example, when mainframe security definitions (e.g., defined in a RACF database) are imported, the security definitions may be transposed into different systems. For example, a RACF database may specify how various database (e.g., DB2 database) resources can be used, and those security definitions may be imported into a database system 1104 as grant and revoke statements 1106 for database users. Security definitions that pertain to the ability to read, write, and execute files may be managed by operating system 1108 by setting various OS permissions 1110 on operating system users. Policy system 1112 may be used to manage the permissions to various computing resources that are utilized to run the mainframe workload, for example, access to data buckets and data objects that a service provider may provide as part of a data storage service. Policy system 1112 may define various policies 1114 at the service provider level that apply to service provider user accounts. The Lightweight Directory Access Protocol (LDAP) server 1116 depicted may be used to store additional policies 1118 that do not easily match the semantics that can be expressed using database system 1104, operating system 1108, policy system 1112, and other systems that implement their own security models. For example, the LDAP server may be used to define policies 1118 with higher-level semantics and allow for a specific grant of authorization to revert some operation on a customer account, a grant of authorization to trade or pay for a certain maximal amount, and so forth. This dispersion of disparate definitions "facilitates" the creation of security gaps by the system operators, as they cannot evaluate and analyze the global security from one single place with unified tooling. In that sense, it is a significant drawback when compared to core systems like mainframes where all of the security is managed in a fully centralized manner (e.g., using System Authorization Facility) relying on a single and unified security database (RACF) encompassing all of the security definitions. This centralization allows the development of tools that perform thorough end-to-end security validation. Finally, many logging and auditing tools cannot be applied across the disparate security architecture, as the authorization requests and determinations may be made internal to some of the disparate systems, resulting in a lack of uniformity and auditability. In contrast, the centrally-managed security architecture 1120 depicted on the right hand side of FIG. 11 may serve as an architecture that is implemented by various embodiments described herein. All security definitions related to cloud services such as compute service 1122 and database service 1124 are managed by policy management service 1126 and stored in policy database 1130.
The policy database 1130 may store mainframe-related security permissions. Mainframe-related resources may be specified as a new type of service resource, and new kinds of actions/authorizations may be available on these resources. For example, supported actions for database tables may include DELETE, INSERT, REFERENCES, SELECT, and UPDATE. Those permissions reside outside of the mainframe cloud service, and in a central policy database 1130 with all other security definitions related to other cloud services used by the applications. In various embodiments, those security definitions use standard regular expressions as needed on resources to minimize and simplify the definition work. Policy management service 1126 may comprise or otherwise have access to a policy analyzer 1128, which may be used to provably demonstrate that the security policies for a mainframe workload meet certain security assurances. Security enforcement is achieved through reprogramming and modification of database system and operating system components. In various embodiments, an operating system that is provisioned and launched by compute service 1122 is configured with a kernel-mode authorization interceptor that uses policy database 1130 to perform policy evaluation and determine whether various system calls to access OS-level resources should be permitted. Likewise, a database management system may include hooks to reprogram and extend its original security features to use policy database 1130 for security enforcement. In various embodiments, these authorization components are wrapped in the managed service so that it is not possible to circumvent them through replacement or workarounds. In various embodiments, mainframe-related security permissions are defined in a policy management service (e.g., via reuse of existing APIs). New kinds of resources may be defined to specify mainframe resources and new kinds of authorizations may be defined on those resources. Thus, those security policies and permissions will reside outside of a mainframe cloud service in a central policy database with all other security definitions related to other cloud services used by the application. Those definitions may use regular expressions to minimize and simplify the definition work. For example, "Read File X.Y.*" may allow a user to have read access to any data set whose name starts with X.Y. Mainframe definitions may be imported in bulk into the policy database using an automated procedure comprising a mainframe database (e.g., RACF database) export, download, parse/reformat, and import process. Security enforcement for the mainframe-migrated cloud service may be achieved via mainframe service security hooks that reprogram and extend the original security features. For example, all security code for calls may be wrapped in the managed service. As such, it would not be possible to circumvent them through replacement or workaround of this custom code. FIG. 12 illustrates a computing environment 1200 in which mainframe workloads are processed in a computing resource service provider, in accordance with one or more example embodiments of the present disclosure. Computing resource service provider 1202 may refer to a cloud-based service that provides various services, such as computing services, database services, policy management services, and so forth. In various embodiments, computing resource service provider 1202 is used to provide mainframe services, for example, as depicted in FIG. 12.
A mainframe workload may involve the usage of different types of resources, including, for example, an application that runs in an operating system environment, and a database to persist and access various data records of interest. Client 1204 may refer to a client that submits requests such as request 1206 to perform various functions within the context of a mainframe service. Client 1204 may refer to a customer of a computing resource service provider 1202 or a computer system controlled by such an entity. In various embodiments, client 1204 communicates with the computing resource service provider over a communications network to submit workload requests. Client 1204 may refer to an individual, organization, or computer systems operating under the control of such entities. In various embodiments, client 1204 migrates an on-premises mainframe environment into a computing resource service provider. Request 1206 may refer to a mainframe request or requests for other applications. In various embodiments, a client 1204 submits a workload request to a frontend of a service provider and then the request is forwarded to a mainframe for processing. In various embodiments, computing resource service provider 1202 operates a compute instance 1208, database management system 1210, and policy management service 1212 to facilitate the execution of mainframe workloads. In various embodiments, client 1204 submits request 1206 to a computing resource service provider 1202. The request 1206 for access to application 1218 may be received by a service frontend 1224, which, in some examples, comprises a web server configured to receive such requests and to process them according to one or more policies managed by policy management service 1212. Policy database 1222 may store security policies that can be evaluated to determine whether the request 1206 made by client 1204 should be authorized. Compute instance 1208 may refer to a virtual machine instance, container instance, serverless compute instance, etc. that is provisioned by a compute service of computing resource service provider 1202. Compute instance 1208, in at least one embodiment, uses policy management service 1212 to manage access to resources within the service provider or of the service provider. In an example embodiment, compute instance 1208 is a virtual machine instance running an operating system 1214 (e.g., a Linux or Unix-based operating system). The operating system may be provisioned with an authorization interceptor 1216 that runs in the operating system's kernel space and intercepts system calls to determine whether access to resources managed by the operating system should be granted. Authorization interceptor 1216 may comprise a local policy cache of security policies, managed by the policy management service, that specifies the rules for accessing various resources of the operating system. Application 1218 may refer to a mainframe application that may utilize various compute and storage resources. The application may refer to an application running on a Linux or Unix-based operating system that is configured to interface with or use a database management system 1210. For example, application 1218 may submit database requests to database management system 1210 to execute transactions, query data, and so forth. Database management system 1210 may be provisioned and hosted by a database service of computing resource service provider 1202.
Database management system 1210 may comprise various components not illustrated in FIG. 12, such as a query processor, run-time database manager, the underlying data store itself, and so forth. Database management system 1210 may refer to software that application 1218 uses to interact with a database according to a well-defined domain-specific language, such as SQL. The database management system (DBMS) may comprise a DBMS security hook 1220 that is used to interface with a policy management service 1212. DBMS security hook 1220, in some embodiments, obtains security policies applicable to database management system 1210 from policy management service 1212 and stores them locally in a policy cache. When database query requests are received (e.g., from application 1218), the requests are parsed to determine a database request context and compared against one or more applicable security policies to determine whether or not to fulfill the query, based on whether the requestor has sufficient privileges to access the database resources in the requested manner. A policy management service 1212 may provide access to, and administration of, policies applicable to requests for access to computing resources of a computing resource service provider as well as resources within a computing resource service provider. Operating system 1214 and database management system 1210 may delegate the management of security policies for their internal resources to policy management service 1212, which stores applicable OS-level and database-level security policies within policy database 1222. To enforce interception of all system calls, customer administrator users would lose direct access to mainframe administrator privileges on the mainframe application (e.g., runtime) itself. But those administrators would be able to re-obtain such a global privilege through an equivalent policy management service security profile, via proper authorizations on all system calls for their own identities, with the benefit (for compliance) that all their administrator commands would then be logged. FIG. 13 illustrates a computing environment 1300 in which security policies for mainframe workloads are managed by a central service, in accordance with one or more example embodiments of the present disclosure. As depicted in FIG. 13, operating system and database management systems utilize security policies stored in a policy database managed by a policy management service, according to at least one embodiment. Operating system 1302 may refer to any suitable operating system, such as a Linux or Unix-based operating system. Operating system 1302 may be provisioned with a kernel-mode component that interfaces with policy management service 1306. Security policies that are used to control access to OS-managed resources may be stored in policy database 1308. For example, security policy 1310A depicted in FIG. 13 may refer to a security policy that is applicable to operating system 1302, as depicted by the "OS" prefix in the principal, resource, and action fields. In various embodiments, these values are decorated with additional information that identifies a specific OS instance or a specific OS configuration for use in a multi-resource environment such as an environment for processing mainframe workloads. Database management system 1304 is provisioned with a security hook in various embodiments, wherein the security hook interfaces with policy management service 1306 and caches or otherwise accesses security policies managed by policy management service 1306.
Security policies applicable to database management system 1304, such as security policy 1310B, may be locally and securely cached by database management system 1304 to perform policy evaluations and determine whether to authorize access to various database resources, such as the "ExTable" specified in security policy 1310B. Policy database 1308 may be used to store various types of security policies, including those for OS-level resources, database-level resources, as well as resources managed by the computing resource service provider. Policy management service 1306 is used to implement various security features that may otherwise not be natively supported by operating system 1302 and/or database management system 1304. For example, a file of operating system 1302 may have several applicable security policies that define a first set of permissions for a first OS user and a second set of permissions for a second OS user. This type of multi-user and/or multi-group policy definition may be used in lieu of Linux's permissions, which only allow permissions to be set for a single user and a single group. Policy management service 1306 may be used to provide fine-grained control over database management system 1304. Whereas traditional database systems may provide administration privileges in an all-or-nothing manner, the policy management service can be used to define authorization for a given user for a specific utility (for example, authorization to reorganize a table in place for better access but not to back it up on disk). Policy database 1308 can be used to provide an additional functional level of security with business-related semantics—for example, a security policy may allow for a specific grant of authorization to revert some operation on a customer account, a grant of authorization to trade or pay for a certain maximal amount, and so forth. These types of semantics are not typically definable in the context of database-level grants and revokes. Such customized authorizations cannot be expressed through standard mainframe security products. FIG. 14 illustrates a computing environment 1400 in which a policy analyzer system is used to validate whether security policies for a mainframe workload implemented on a computing resource service provider are compliant with a reference policy, in accordance with one or more example embodiments of the present disclosure. In various embodiments, policy analyzer system 1402 refers to a system of a computing resource service provider that supports one or more APIs for evaluating security policies. The policy analyzer system may be implemented as a standalone service, or as an API offered by a service, such as the policy management service. Policy analyzer system 1402 is able to determine, with mathematical certainty, whether a first security policy is more permissive than a second security policy and whether two or more security policies are equivalent. In this context, permissiveness is used to describe access to resources. For example, if a first policy can be utilized to access a first computing resource (e.g., resource "A") and a second resource (e.g., resource "B") and a second policy grants access only to computing resource "B," then the first policy may be described as being more permissive than the second policy, because there exists a computing resource which the first policy grants access to which the second policy does not grant access to, and there does not exist a resource that the second policy grants access to which the first policy does not grant access to.
Two policies may be equivalent if they both can be utilized to access the same resources and deny (either implicitly or explicitly) access to the same resources. Generally speaking, if two policies are not equivalent, they may be said to lack equivalency. In some cases, if a first policy grants access to a first computing resource "A" and a second computing resource "B" and a second policy grants access to the second computing resource "B" and a third computing resource "C," the policies may be said to be incomparable. An API call supported by the policy analyzer system may accept two security policies and determine whether they are equivalent, whether one policy is more permissive than the other policy, whether the policies are incomparable, and so on. As a second example, an API call may accept two or more security policies and determine whether all of the security policies provided as part of the API request are equivalent. As a third example, an API call may accept a single security policy and compare the security policy against one or more best practices policies. The best practices policies may be a set of security policies that are determined to be a set of permissions that should not be allowed. For example, a first best practices policy may be that a particular data container should not be world-writeable (e.g., any principal, even a guest user or anonymous user, can write to the container). The API may verify that best practices policies are being followed by determining that the received policy is not more permissive than each of the best practices policies. Examples of best practices policies may include resources being world writeable, world readable, world accessible, and the like. In some embodiments, a collection of best practices policies may be determined based on the API call, the type of computing resource requested, and other context information. A policy analyzer system 1402 may include multiple components and/or modules such as a policy parser; a propositional logic translator; and a satisfiability engine. The policy parser may be a component or module that receives a security policy and obtains one or more permission statements from the policy. For example, if the client provides a first policy "A" and a second policy "B," the policy parser may obtain a first set of permission statements from policy "A" and a second set of permission statements from policy "B." The permission statements may each be associated with the granting or denying of access to computing resources. A propositional logic translator may convert permission statements into one or more constraints described using propositional logic. The constraints may be described in various formats and in accordance with various standards such as SMT-LIB standard formats, CVC language, and Center for Discrete Mathematics and Theoretical Computer Science (DIMACS) formats. The propositional logic expressions generated by the propositional logic translator may represent a set of constraints that must be satisfied for the corresponding permission statement to be in effect. A satisfiability engine may be used to compare the first propositional logic expression and the second propositional logic expression to determine whether one propositional logic expression is more permissive than the other. A satisfiability engine may be used to analyze the permissiveness of two or more propositional logic expressions.
The satisfiability engine may generate additional propositional logic constraints as part of determining whether the first propositional logic expression is more permissive than the second propositional logic expression. The constraints may be generated and evaluated in addition to constraints of the first propositional logic expression and the second propositional logic expression. The constraints may be generated based at least in part on what a client requests. For example, the satisfiability engine may generate constraints that are satisfied only under circumstances where a first policy grants access to a resource and a second policy denies access to the resource or is neutral regarding the resource, in response to a request from a caller to determine whether a first propositional logic expression is more permissive than a second propositional logic expression. The satisfiability engine may be used to verify whether the propositional logic constraints (e.g., those obtained from the first and second propositional logic expressions and those generated by the satisfiability engine) are satisfiable. In some embodiments, a command may be used to determine whether the set of constraints is satisfiable. A formula may be satisfiable if there is an interpretation that makes all the asserted formulas true. In other words, the model is satisfiable if each of the constraints is satisfied under some conditions. In some embodiments, the satisfiability engine may be implemented at least in part using an SMT solver such as Z3, as described in https://github.com/Z3Prover/z3. Policy analyzer system 1402 may be implemented using techniques described in U.S. Pat. No. 10,757,128 B2 entitled "SECURITY POLICY ANALYZER SYSTEM AND SATISFIABILITY ENGINE" to Cook et al., which is hereby incorporated by reference in its entirety. Auditor 1404 may refer to an entity or user that provides reference policies 1406. Reference policies 1406 may refer to best practices policies. In some embodiments, reference policies 1406 encode a regulatory constraint, certification requirement, etc. that the auditor 1404 is attempting to validate. The reference policies may be determined by auditor 1404, or on behalf of auditor 1404 by parsing rules, constraints, requirements, etc. defined by the auditor and determining a corresponding set of security policies that reflect them. In various embodiments, security policies for operating system 1408 and database management system 1410 are stored in a policy database 1412. These resources may be used to perform mainframe workloads, or other suitable applications/workloads. Policy database 1412 is, in various embodiments, the central policy store for all security definitions for a mainframe application. Accordingly, the entire set of security definitions that define how resources of a mainframe environment can be accessed are stored in policy database 1412 and can be retrieved from policy database 1412 as mainframe security policies 1414. Policy analyzer system 1402 may be used to provide a mathematically rigorous determination of whether mainframe security policies 1414 comply with reference policies 1406. As described above, the policy analyzer system determines propositional logic formulas for reference policies 1406 and mainframe security policies 1414 and then uses a satisfiability engine to determine an equivalence result 1416. Equivalence result 1416 may indicate that the two policies are equivalent.
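A toy version of such a satisfiability check can be written against the Z3 Python bindings mentioned above. The two policies below are encoded by hand as propositional formulas over a deliberately tiny domain of two resources; a real policy analyzer would generate these constraints from parsed permission statements rather than writing them out manually:

from z3 import Bool, Solver, And, Or, Not, sat

# The request is assumed to name exactly one of two resources, A or B.
request_is_a = Bool("request_is_resource_a")
request_is_b = Bool("request_is_resource_b")
exactly_one = Or(And(request_is_a, Not(request_is_b)),
                 And(request_is_b, Not(request_is_a)))

policy_one_allows = Or(request_is_a, request_is_b)   # grants access to A and B
policy_two_allows = request_is_b                     # grants access to B only

def allows_more(first, second):
    # Satisfiable only if some request is allowed by the first policy and not the second.
    solver = Solver()
    solver.add(exactly_one, first, Not(second))
    return solver.check() == sat

def compare(first, second):
    first_exceeds = allows_more(first, second)
    second_exceeds = allows_more(second, first)
    if not first_exceeds and not second_exceeds:
        return "equivalent"
    if first_exceeds and not second_exceeds:
        return "first policy is more permissive"
    if second_exceeds and not first_exceeds:
        return "second policy is more permissive"
    return "incomparable"

print(compare(policy_one_allows, policy_two_allows))  # first policy is more permissive

Under this encoding, the check reports that the first policy is more permissive than the second, mirroring the definition of permissiveness given above.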
Two policies may be said to be equivalent if the security permissions from the first policy and the second policy apply in the same manner to all actions, resources, and principals; in other words, for any given set of actions, resources, and principals, the first policy and the second policy will both either deny access (either explicitly based on a denial statement or implicitly based on the lack of a permission granting access) or both will grant access—it will not be the case that one policy grants access and the other denies access. In the case that one policy is determined to be more permissive than another policy, it will be the case that one policy grants access under a set of parameters where another policy denies access. In some embodiments, policy analyzer system 1402 may provide an indication that mainframe security policies 1414 are more permissive than reference policies 1406 and provide an example tuple of {principal, action, resource} where access to a resource would be granted according to mainframe security policies 1414 but should be denied according to reference policies 1406. This information may be used to remedy non-compliance in the mainframe security, for example, by using a policy management service to update policy database 1412 with updated security policies that prevent access to the resource under the example provided by policy analyzer system 1402. In some embodiments, policy database 1412 stores frontend security policies that can be evaluated as part of the mainframe application policies to determine the validity of the global security policy within the service provider as a whole. For example, since policies for both the front-end and the mainframe are stored in the same policy database 1412, policy analyzer system 1402 can perform an end-to-end analysis and mathematically prove whether an external user accessing the mainframe via front-ends will ever have the capability to perform a specific mainframe request. As another example, policy analyzer system 1402 can be used to identify whether anyone outside of a specific set of mainframe users (e.g., an ops group) reads from or writes to a specific configuration file. FIG. 15 shows an illustrative example of a process 1500 for using a policy analyzer system to verify whether a mainframe application implemented in a computing resource service provider complies with rules, constraints, requirements, etc., in accordance with one or more example embodiments of the present disclosure. In at least one embodiment, some or all of the process 1500 (or any other processes described herein, or variations and/or combinations thereof) is performed under the control of one or more computer systems that store computer-executable instructions and may be implemented as code (e.g., computer-executable instructions, one or more computer programs, or one or more applications) executing collectively on one or more processors, by hardware, software, or combinations thereof. The code, in at least one embodiment, is stored on a computer-readable storage medium in the form of a computer program comprising a plurality of computer-readable instructions executable by one or more processors. The computer-readable storage medium, in at least one embodiment, is a non-transitory computer-readable medium. In at least one embodiment, at least some of the computer-readable instructions usable to perform the process 1500 are not stored solely using transitory signals (e.g., a propagating transient electric or electromagnetic transmission).
A non-transitory computer-readable medium does not necessarily include non-transitory data storage circuitry (e.g., buffers, caches, and queues) within transceivers of transitory signals. Process 1500 may be implemented in the context of various systems and methods described elsewhere in this disclosure, such as those discussed in connection with FIGS. 12-14 and 17. In at least one embodiment, process 1500 or a portion thereof is implemented by a computing resource service provider. In at least one embodiment, process 1500 comprises a step to store 1502 security policies for a mainframe application in a policy database. The security policies may be stored in the policy database as part of a migration of an on-premises or hybrid mainframe environment into a computing resource service provider. As part of the migration, different types of resources in the computing resource service provider may be provisioned to facilitate the migration of the mainframe application. As described in greater detail in connection with FIGS. 2-5, the computing resource service provider may provision various compute instances—for example, virtual machine instances—with a custom operating system environment to run mainframe applications. In various embodiments, a customized operating system is configured with a kernel-level authorization component that uses a policy management service to manage the permissions for OS-level resources, such as data and executable files. The policy management service may store security policies for the OS resources in a policy database. Mainframe workloads running on computing resource service providers utilize database systems with externally managed permissions, according to at least one embodiment. In various embodiments, database systems provisioned to run mainframe workloads are configured with a security hook that uses a policy management service to manage the permissions for database-level resources, such as the ability to interact with tables and views, execute stored procedures, and so forth. Various other security definitions for a mainframe workload may also be stored in the policy database and, collectively, these are referred to as mainframe security policies, according to at least one embodiment. In various embodiments, these additional security definitions can be used to define additional, higher-level functional behavior, such as business-related semantics. For example, a security policy may allow for a specific grant of authorization to revert some operation on a customer account, a grant of authorization to trade or pay for a certain maximal amount, and so forth. These types of semantics are not typically definable in the context of database-level grants and revokes. Accordingly, a policy database managed by a policy management service may be used to define the entire set of mainframe-related security permissions. This means that, in at least some embodiments, all of the cloud-migrated mainframe application permissions are stored in the central policy database along with any other security definitions related to other services used by the application—for example, if the application also uses data analytics services to gather insight into customer data stored in the database system used for mainframe workloads, those permissions are also stored in the policy database alongside the security policies used by the database services, compute services, etc. that are utilized for the cloud-migrated mainframe application.
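What such a central policy database might hold for a single cloud-migrated mainframe application is sketched below. The record format and the "os:" and "db:" prefixes are illustrative assumptions; the point is only that operating-system-level and database-level definitions live in one store and can therefore be analyzed together:

# Illustrative contents of a central policy database holding security definitions
# for both operating-system resources and database resources of one application.
central_policy_database = [
    {"principal": "os:instance-01/batch_user",
     "action": "os:READ",
     "resource": "os:instance-01/file:/usr/app/config",
     "effect": "ALLOW"},
    {"principal": "db:instance-1234/ExUser",
     "action": "db:SELECT",
     "resource": "db:instance-1234/ExTable",
     "effect": "ALLOW"},
]

def policies_for(prefix):
    # Both kinds of definitions live in the same store, so OS-level and
    # database-level policies can be retrieved and analyzed end to end.
    return [p for p in central_policy_database if p["resource"].startswith(prefix)]

print(len(policies_for("os:")), len(policies_for("db:")))   # 1 1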
In some embodiments, the security definitions use regular expressions (regexes) on the resources to minimize and simplify the definition work. For example, "Deny ExUser File /usr/foo/*" may be used to deny ExUser access to any file or directory under /usr/foo. In at least one embodiment, process 1500 comprises a step to receive 1504 or otherwise obtain a request to analyze the mainframe application for compliance. Compliance, in this context, may refer to a rule, constraint, requirement, etc. and may be used to determine compliance with requirements set forth by regulators, auditors, compliance specialists, and so forth. In some embodiments, the request may be to perform an audit of the security of the cloud-based mainframe application to determine whether a certificate of compliance for an Evaluation Assurance Level or other measurements of assurance should be issued. For example, process 1500 may be used to analyze the cloud-based mainframe application for certification according to EAL4, EAL4+, EAL5, EAL5+, and even higher levels of certification. In at least one embodiment, process 1500 comprises a step to determine 1506 a reference policy or set of reference policies based on the request. This step may involve translating the requirements for certification standards into a policy-based language format that is amenable to translation into propositional logic. For example, a reference policy may encode various rules regarding how resources can or cannot be accessed. Compliance requirements may include reference policies to ensure that database contents are not world-readable, for example, as well as other more sophisticated rules to ensure that various data and/or code of the mainframe application can only be used in certain well-defined conditions. In at least one embodiment, process 1500 comprises a step to use 1508 a policy analyzer system to mathematically prove whether the mainframe security policies comply with the reference policy. In various embodiments, policies are translated into a propositional logic formula, which is then evaluated by a satisfiability engine to determine an equivalence result between the reference policies and the mainframe security policies. A Satisfiability Modulo Theories (SMT)-based solver may be used to prove that certain security assurances are met by the computing resource service provider hosting the mainframe application. For example, the policy analyzer system may provably demonstrate that the only way to access certain privileged information within the computing resource service provider is via a specific web service API endpoint and only while a specific administrative role has been assumed. The policy analyzer system may provide an equivalence result that indicates that the mainframe security policies are equivalent to or less permissive than the reference policies. This indication may be sufficient to demonstrate that the mainframe security policy implemented by the computing resource service provider is in compliance with the aforementioned rule, constraint, requirement, etc. However, in some cases, the policy analyzer system may provide an equivalence result indicating that the mainframe security policies are either entirely more permissive than the reference policies, or that the mainframe security policies are in some respects more permissive than the reference policies while also being less permissive in other respects.
In some embodiments, the policy analyzer system provides an example scenario in which the mainframe security policies and the reference policies differ—for example, a tuple comprising a principal, resource, and action that, if evaluated using the reference policies, indicates access should be denied, but if evaluated under the mainframe security policies, indicates that access should be granted. These example scenarios may be used to update the mainframe security policies to ensure that they are in compliance with the reference policies, for example, by adding additional security policies or modifying existing security policies to ensure that the example scenario evaluates to a denial of access under an updated set of applicable security policies. As described throughout this disclosure, a permission may be associated with a security policy. In some embodiments, a permission may specify a principal, a resource, an action, a condition, an effect, or various combinations thereof. In some embodiments, a permission may also specify a plurality of one or more of these elements such as, for example, a set or class of users, a collection of resources, several different actions, and/or multiple conditions. In some embodiments, the permission may specify one or more wildcard or otherwise modifiable characters that may be used to denote that the permission may be modified to make the permission applicable to different users and their associated resources. Wildcards may be represented in various formats—for example, an asterisk may represent any number of characters and a question mark may represent any single character. In some embodiments, the policy may be expressed in a language independent format such as JavaScript Object Notation (JSON). Examples discussed in this disclosure may be in JSON format or in a format similar to JSON and serve as illustrations of various embodiments which may be implemented. Of course, various other formats which may be utilized in the manner described in connection with JSON and JSON-like formats are also contemplated and within the scope of this disclosure. The principal may be a user, a group, an organization, a role, or a collection and/or combination of these or other such entities. A principal may be any entity that is capable of submitting API calls that cause an action associated with a resource to be performed and/or any entity to which permissions associated with a resource may be granted. As an example, a permission may have a principal element specified in the following manner:“Principal”: “rn:ws:iam::ducksfan8” In some embodiments, the principal is identified by a resource name that uniquely identifies the principal. A principal may include one or more name spaces that include additional information regarding the principal. For example, “rn” may refer to a resource name prefix and identifies the subsequent information as part of a resource name; “ws” may refer to a partition namespace that the resource is in; “iam” may refer to a service namespace that identifies a service of a computing resource service provider (e.g., the computing resource service provider may provide services related to identity and access management); namespaces may additionally be omitted (note that there are two colons in the example above between “iam” and “ducksfan8”)—in some formats and/or for some resources, a region namespace may be optional; and “ducksfan8” may refer to an identifier for the account, such as the account that owns the resource specified in the permission. 
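A minimal sketch of the naming and wildcard conventions just described follows; the helper names are hypothetical and the rules are simplified relative to any real implementation. It splits a colon-delimited name into namespaces (omitted namespaces come back as empty fields) and compiles a wildcard pattern in which an asterisk matches any number of characters and a question mark matches a single character.

import re

def parse_name(name):
    """Split a resource name such as "rn:ws:iam::ducksfan8" into namespaces.
    Omitted namespaces (e.g., the region above) appear as empty strings."""
    labels = ("prefix", "partition", "service", "region", "account", "resource")
    return dict(zip(labels, name.split(":", 5)))

def wildcard(pattern):
    """Compile a permission wildcard: '*' = any run of characters, '?' = one character."""
    escaped = re.escape(pattern).replace(r"\*", ".*").replace(r"\?", ".")
    return re.compile("^" + escaped + "$")

print(parse_name("rn:ws:iam::ducksfan8"))
print(bool(wildcard("rn:ws:iam::ducksfan?").match("rn:ws:iam::ducksfan8")))    # True
print(bool(wildcard("rn:ws:iam::ducksfan?").match("rn:ws:iam::someoneelse")))  # False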
The resource may refer to a computing resource of a computing resource service provider. Computing resources of a computing resource service provider may include: computing resources (e.g., virtual machine instances); storage resources (e.g., scalable storage, block storage, and managed file storage systems); database systems (e.g., managed relational database systems); migration services (e.g., applications, services, and hardware for streamlining the transfer of data from one physical data center to another); network and content delivery; developer tools; management tools; security, identity, and access management services; analytics services; artificial intelligence services; and more. Computing resources may be organized in a hierarchy, and may use structures such as folders, directories, buckets, etc. to organize sets of computing resources into groupings. In some cases, policies and/or permissions may be applied directly to a bucket and grant cross-account access to an environment. As an example, a permission may have a resource element specified in the following manner:“Resource”: “rn:ws:storage:::bucket/MM4_Heisman.png” In some embodiments, the resource is identified by a resource name that uniquely identifies the resource. In some cases, the resource may share the same naming convention as the principal or other elements of the permission. However, this need not be the case, as each separate element of a permission may use a naming convention, namespace, format, etc. that is independent of other elements. In the example resource given above, “rn” may refer to a resource name prefix and identifies the subsequent information as part of a resource name; “ws” may refer to a partition namespace that the resource is in; “storage” may refer to a service namespace that identifies a service of a computing resource service provider (e.g., the computing resource service provider may provide services related to object-based storage); as discussed elsewhere, namespaces may be omitted in some cases—for example, a region namespace and/or account namespace may be omitted; and the remaining portion (“bucket/MM4_Heisman.png”) may identify the resource itself, which may also include an indicator of the type of resource. In the example above, the resource may indicate an image in the Portable Network Graphics (PNG) format that is stored in a bucket. In various embodiments, resources may refer to resources managed by a computing resource service provider, but may also refer to other resources, such as resources that are managed by database systems or compute instances hosted by the computing resource service provider. The action may be the specific action or actions that will be allowed or denied by the permission. Different types of services (e.g., having different service namespaces) may support different actions. For example, an identity and account management service may support an action for changing passwords, and a storage service may support an action for deleting objects. An action may be performed in association with the resource and may, for example, be identified by a type of API call, a library call, a program, process, series of steps, a workflow, or some other such action. 
As an example, a permission may have an action element specified in the following manner:“Action”: “storage:GetObject” In this example, the action that is allowed or denied (determined based on the effect specified in the permission) corresponds to a storage service that supports an action (e.g., API call) for GetObject, which may be used in connection with obtaining an object and/or access to an object of a storage service. As discussed elsewhere, various namespaces may be used in connection with specifying an action. Wildcards may be used to specify multiple actions. For example, an action element described as “Action”: “storage:*” may refer to all APIs supported by a storage service. As a second example, an action element described as “Action”: “iam:*AccessKey*” may refer to actions supported by an identity and access management service in connection with access keys of a service—illustrative examples may include actions related to creating an access key (e.g., a “CreateAccessKey” action may exist), deleting an access key (e.g., “DeleteAccessKey”), listing access keys (e.g., “ListAccessKeys”), and updating an existing access key (e.g., “UpdateAccessKey”). In various embodiments, different types of actions are supported by different types of resources. For example, OS-managed resources, database-managed resources, and mainframe-managed resources may each have different actions that are capable of being performed. The set of possible actions for different resource types may be ascertained from security definitions that are exported from a mainframe security database, such as an RACF database. In some embodiments, the set of actions identified by a RACF database may correspond to the set of supported actions. The condition element may be one or more conditions that specify when a policy is in effect. In some embodiments, the condition element is optional and may be omitted in some permissions. Conditions may be described as Boolean expressions that may be used to determine whether the policy is in effect (i.e., if the expression evaluates to TRUE) or not in effect (i.e., if the expression evaluates to FALSE). Policies that are not in effect may be unenforced or ignored by an authorization module (such as those described elsewhere in this disclosure). In some embodiments, conditions in a permission may be evaluated against values provided as part of a web API request corresponding to one or more APIs specified in the action element. As an example, a permission may have a condition element specified in the following manner:“Condition”:{“DateLessThan”:{“ws:CurrentTime”:“2014-12-13” } } In this example condition, the “ws:CurrentTime” value of the request is compared against the literal value “2014-12-13” using the condition operator “DateLessThan”, which may be used to evaluate whether the condition is met. In this example, the condition may be true when the current time (e.g., the time the request is received by the service provider) is less than the supplied date of Dec. 13, 2014. It should be noted that the key value (in the example, the current time) may be compared not only against literal values, but policy variables as well. Various other types of condition operators may exist, which may be used for comparing string conditions, numeric conditions, Boolean conditions, binary conditions (e.g., testing values in binary format), IP address conditions (e.g., testing values against a specific IP address or range of IP addresses), and more. Conditions may, furthermore, include quantifiers. 
For example, a string condition may include an operator such as “StringEquals” that compares whether two strings are equal, and a similar operator may include a quantifier such that “StringEqualsIfExists” may be used to compare two strings when the key value exists in the context of an evaluation. Quantifiers may be used in conjunction with wildcards where multiple resources matching a wildcard expression may support different context keys. In some embodiments, such as those where conditions include quantifiers, first-order logic may be utilized rather than propositional logic. An effect may refer to whether the permission is used to grant or deny access to the computing resources specified in the permission in the resource element. An effect may be an ALLOW effect, which grants access to a resource, or a DENY effect, which denies access to a resource. In some embodiments, access to computing resources of a computing resource service provider is denied by default and a permission affirmatively including an ALLOW effect is required. As an example, a permission may have an effect element specified in the following manner:“Effect”: “ALLOW” Accordingly, a permission statement that grants a particular principal (e.g., “rn:ws:iam::ducksfan8”) access to call a storage service API (e.g., “storage:GetObject”) and obtain a particular image (e.g., “rn:ws:storage:::bucket/MM4_Heisman.png”) when a specific condition is true (e.g., the API request is made prior to Dec. 13, 2014) may be specified in the following manner:“Statement”: [{“Effect”: “ALLOW”,“Principal”: “rn:ws:iam::ducksfan8”,“Action”: “storage:GetObject”,“Resource”: “rn:ws:storage:::bucket/MM4_Heisman.png”,“Condition”: {“DateLessThan”: {“ws:CurrentTime”: “2014-12-13” }}}] It should be noted that the examples described above merely describe one of many ways in which permissions may be expressed. Of course, in other embodiments, variations on the principles described above may be applied in various ways. FIG.16is an illustrative example of an environment1600in which a distributed computer system may utilize the various techniques described herein. In an embodiment, a principal1602may use a computing device to communicate over a network1604with a computing resource service provider1606. Communications between the computing resource service provider1606and the principal1602may, for instance, be for the purpose of accessing a service1608operated by the computing resource service provider1606, which may be one of many services operated by the computing resource service provider1606. The service1608may comprise a service frontend1610and a service backend1614. The principal1602may, through an associated computing device, issue a request for access to a service1608(and/or a request for access to resources associated with the service1608) provided by a computing resource service provider1606. The request may be, for instance, a web service application programming interface request. The principal may be a user, or a group of users, or a role associated with a group of users, or a process representing one or more of these entities that may be running on one or more remote (relative to the computing resource service provider1606) computer systems, or may be some other such computer system entity, user, or process. Each user, group, role, or other such collection of principals may have a corresponding user definition, group definition, role definition, or other definition that defines the attributes and/or membership of that collection. 
For example, a group may be a group of principals that have the same geographical location. The definition of that group of principals may include the membership of the group, the location, and other data and/or metadata associated with that group. As used herein, a principal is an entity corresponding to an identity managed by the computing resource service provider, where the computing resource service provider manages permissions for the identity and where the entity may include one or more sub-entities, which themselves may have identities. The principal1602may communicate with the computing resource service provider1606via one or more connections (e.g., transmission control protocol (TCP) connections). The principal1602may use a computer system client device to connect to the computing resource service provider1606. The client device may include any device that is capable of connecting with a computer system via a network, such as example devices discussed below. The network1604may include, for example, the Internet or another network or combination of networks discussed below. The computing resource service provider1606, through the service1608, may provide access to one or more computing resources such as virtual machine (VM) instances, automatic scaling groups, file-based database storage systems, block storage services, redundant data storage services, data archive services, data warehousing services, user access management services, identity management services, content management services, and/or other such computer system services. Other example resources include, but are not limited to user resources, policy resources, network resources and/or storage resources. In some examples, the resources associated with the computer services may be physical devices, virtual devices, combinations of physical and/or virtual devices, or other such device embodiments. The request for access to the service1608may be received by a service frontend1610, which, in some examples, comprises a web server configured to receive such requests and to process them according to one or more policies associated with the service1608. The request for access to the service1608may be a digitally signed request and, as a result, may be provided with a digital signature. In some embodiments, the web server employs techniques described herein synchronously with processing the requests. The service frontend1610may then send the request and the digital signature for verification to an authentication service1616. The authentication service1616may be a stand-alone service or may be part of a service provider or other entity. The authentication service1616, in an embodiment, is a computer system configured to perform operations involved in authentication of principals. Upon successful authentication of a request, the authentication service1616may then obtain policies applicable to the request. A policy may be applicable to the request by way of being associated with the principal1602, a resource to be accessed as part of fulfillment of the request, a group in which the principal1602is a member, a role the principal1602has assumed, and/or otherwise. To obtain policies applicable to the request, the authentication service1616may transmit a query to a policy database1618managed by a policy management service1620. The query to the policy database1618may be a request comprising information sufficient to determine a set of policies applicable to the request. 
The query to the policy database may, for instance, contain a copy of the request and/or contain parameters based at least in part on information in the request, such as information identifying the principal, the resource, and/or an action (operation to be performed as part of fulfillment of the request). A policy management service1620may provide access to, and administration of, policies applicable to requests for access to computing resources (e.g., web service application programming interface requests). For example, the policy management service may receive information sufficient for selecting policies applicable to pending requests. In some embodiments, the information may be copies of the requests, or may be information generated based at least in part on the requests. For example, a service such as a service frontend1610may receive a request for access to resources and may generate a query to the policy management service based at least in part on information specified by the request. Having obtained any policies applicable to the request, the authentication service1616may provide an authentication response and, if applicable, the obtained policies back to the service frontend1610. The authentication response may indicate whether the request was successfully authenticated. The service frontend1610may then check whether the fulfillment of the request for access to the service1608would comply with the obtained policies using an authorization module1612. Note that, in some embodiments, a policy may be configured such that whether fulfillment of a request violates the policy depends on whether a violation of a uniqueness constraint has occurred. For instance, some data may be considered to be less sensitive than other data and requests for the less sensitive data may be fulfilled despite a detected violation of a uniqueness constraint, while access to the more sensitive data may require that a uniqueness constraint violation not have occurred in connection with a public key specified to be used in authentication of requests. Similar techniques may be employed for other types of computing resources, such as computing devices, storage locations, collections of data, identities, policies, and the like. An authorization module1612may be a process executing on the service frontend that is operable to compare the request to the one or more permissions in the policy to determine whether the service may satisfy the request (i.e., whether fulfillment of the request is authorized). For example, the authorization module may compare an API call associated with the request against permitted API calls specified by the policy to determine if the request is allowed. If the authorization module1612is not able to match the request to a permission specified by the policy, the authorization module1612may execute one or more default actions such as, for example, providing a message to the service frontend that causes the service frontend to deny the request, and causing the denied request to be logged in the policy management service1620. If the authorization module1612matches the request to one or more permissions specified by the policy, the authorization module1612may resolve this by selecting the least restrictive response (as defined by the policy) and by informing the service frontend whether the fulfillment of the request is authorized (i.e., complies with applicable policy) based on that selected response. 
The authorization module1612may also select the most restrictive response, or may select some other such response, and inform the service frontend whether the fulfillment of the request is authorized based on that selected response. Note that, whileFIG.16shows the authorization module1612as a component of the service frontend1610, in some embodiments the authorization module1612is a separate service provided by the computing resource service provider1606and the frontend service may communicate with the authorization module1612over a network. Finally, if the fulfillment of the request for access to the service1608complies with the applicable obtained policies, the service frontend1610may fulfill the request using the service backend1614. A service backend1614may be a component of the service configured to receive authorized requests from the service frontend1610and configured to fulfill such requests. The service frontend1610may, for instance, submit a request to the service backend to cause the service backend1614to perform one or more operations involved in fulfilling the request. In some examples, the service backend1614provides data back to the service frontend1610that the service frontend provides in response to the request from the principal1602. In some embodiments, a response to the principal1602may be provided from the service frontend1610indicating whether the request was allowed or denied and, if allowed, one or more results of the request. One or more operations of the methods, process flows, or use cases ofFIGS.1-17may have been described above as being performed by a user device, or more specifically, by one or more program module(s), applications, or the like executing on a device. It should be appreciated, however, that any of the operations of the methods, process flows, or use cases ofFIGS.1-17may be performed, at least in part, in a distributed manner by one or more other devices, or more specifically, by one or more program module(s), applications, or the like executing on such devices. In addition, it should be appreciated that processing performed in response to execution of computer-executable instructions provided as part of an application, program module, or the like may be interchangeably described herein as being performed by the application or the program module itself or by a device on which the application, program module, or the like is executing. While the operations of the methods, process flows, or use cases ofFIGS.1-17may be described in the context of the illustrative devices, it should be appreciated that such operations may be implemented in connection with numerous other device configurations. The operations described and depicted in the illustrative methods, process flows, and use cases ofFIGS.1-17may be carried out or performed in any suitable order, such as the depicted orders, as desired in various example embodiments of the disclosure. Additionally, in certain example embodiments, at least a portion of the operations may be carried out in parallel. Furthermore, in certain example embodiments, fewer, more, or different operations than those depicted inFIGS.1-17may be performed. Although specific embodiments of the disclosure have been described, one of ordinary skill in the art will recognize that numerous other modifications and alternative embodiments are within the scope of the disclosure. 
For example, any of the functionality and/or processing capabilities described with respect to a particular device or component may be performed by any other device or component. Further, while various illustrative implementations and architectures have been described in accordance with embodiments of the disclosure, one of ordinary skill in the art will appreciate that numerous other modifications to the illustrative implementations and architectures described herein are also within the scope of this disclosure. Certain aspects of the disclosure are described above with reference to block and flow diagrams of systems, methods, apparatuses, and/or computer program products according to example embodiments. It will be understood that one or more blocks of the block diagrams and flow diagrams, and combinations of blocks in the block diagrams and the flow diagrams, respectively, may be implemented by execution of computer-executable program instructions. Likewise, some blocks of the block diagrams and flow diagrams may not necessarily need to be performed in the order presented, or may not necessarily need to be performed at all, according to some embodiments. Further, additional components and/or operations beyond those depicted in blocks of the block and/or flow diagrams may be present in certain embodiments. Accordingly, blocks of the block diagrams and flow diagrams support combinations of means for performing the specified functions, combinations of elements or steps for performing the specified functions, and program instruction means for performing the specified functions. It will also be understood that each block of the block diagrams and flow diagrams, and combinations of blocks in the block diagrams and flow diagrams, may be implemented by special-purpose, hardware-based computer systems that perform the specified functions, elements or steps, or combinations of special-purpose hardware and computer instructions. The examples presented herein are not meant to be limiting. FIG.17illustrates a block diagram of an example of a machine1700(e.g., a machine implemented in whole or in part in the context of embodiments described in connection with other figures). In some embodiments, the machine1700may operate as a standalone device or may be connected (e.g., networked) to other machines. In a networked deployment, the machine1700may operate in the capacity of a server machine, a client machine, or both in server-client network environments. In an example, the machine1700may act as a peer machine in Wi-Fi direct, peer-to-peer (P2P) (or other distributed) network environments. The machine1700may be a wearable device or any machine capable of executing instructions (sequential or otherwise) that specify actions to be taken by that machine. Further, while only a single machine is illustrated, the term “machine” shall also be taken to include any collection of machines that individually or jointly execute a set (or multiple sets) of instructions to perform any one or more of the methodologies discussed herein, such as cloud computing, software as a service (SaaS), or other computer cluster configurations. Examples, as described herein, may include or may operate on logic or a number of components, modules, or mechanisms. Modules are tangible entities (e.g., hardware) capable of performing specified operations when operating. A module includes hardware. In an example, the hardware may be specifically configured to carry out a specific operation (e.g., hardwired). 
In another example, the hardware may include configurable execution units (e.g., transistors, circuits, etc.) and a computer readable medium containing instructions where the instructions configure the execution units to carry out a specific operation when in operation. The configuring may occur under the direction of the execution units or a loading mechanism. Accordingly, the execution units are communicatively coupled to the computer-readable medium when the device is operating. In this example, the execution units may be a member of more than one module. For example, under operation, the execution units may be configured by a first set of instructions to implement a first module at one point in time and reconfigured by a second set of instructions to implement a second module at a second point in time. The machine (e.g., computer system)1700may include any combination of the illustrated components. For example, the machine1700may include a hardware processor1702(e.g., a central processing unit (CPU), a graphics processing unit (GPU), a hardware processor core, or any combination thereof), a main memory1704and a static memory1706, some or all of which may communicate with each other via an interlink (e.g., bus)1708. The machine1700may further include a power management device1732, a graphics display device1710, an alphanumeric input device1712(e.g., a keyboard), and a user interface (UI) navigation device1714(e.g., a mouse). In an example, the graphics display device1710, alphanumeric input device1712, and UI navigation device1714may be a touch screen display. The machine1700may additionally include a storage device (e.g., drive unit)1716, a signal generation device1718, and a network interface device/transceiver1720coupled to antenna(s)1730. The machine1700may include an output controller1734, such as a serial (e.g., universal serial bus (USB), parallel, or other wired or wireless (e.g., infrared (IR), near field communication (NFC), etc.) connection to communicate with or control one or more peripheral devices (e.g., a printer, a card reader, other sensors, etc.)). The storage device1716may include a machine readable medium1722on which is stored one or more sets of data structures or instructions1724(e.g., software) embodying or utilized by any one or more of the techniques or functions described herein. The instructions1724may also reside, completely or at least partially, within the main memory1704, within the static memory1706, or within the hardware processor1702during execution thereof by the machine1700. In an example, one or any combination of the hardware processor1702, the main memory1704, the static memory1706, or the storage device1716may constitute machine-readable media. While the machine-readable medium1722is illustrated as a single medium, the term “machine-readable medium” may include a single medium or multiple media (e.g., a centralized or distributed database, and/or associated caches and servers) configured to store the one or more instructions1724. Various embodiments may be implemented fully or partially in software and/or firmware. This software and/or firmware may take the form of instructions contained in or on a non-transitory computer-readable storage medium. Those instructions may then be read and executed by one or more processors to enable performance of the operations described herein. 
The instructions may be in any suitable form, such as but not limited to source code, compiled code, interpreted code, executable code, static code, dynamic code, and the like. Such a computer-readable medium may include any tangible non-transitory medium for storing information in a form readable by one or more computers, such as but not limited to read only memory (ROM); random access memory (RAM); magnetic disk storage media; optical storage media; a flash memory, etc. The term “machine-readable medium” may include any medium that is capable of storing, encoding, or carrying instructions for execution by the machine1700and that cause the machine1700to perform any one or more of the techniques of the present disclosure, or that is capable of storing, encoding, or carrying data structures used by or associated with such instructions. Non-limiting machine-readable medium examples may include solid-state memories and optical and magnetic media. In an example, a massed machine-readable medium includes a machine-readable medium with a plurality of particles having resting mass. Specific examples of massed machine-readable media may include non-volatile memory, such as semiconductor memory devices (e.g., electrically programmable read-only memory (EPROM), or electrically erasable programmable read-only memory (EEPROM)) and flash memory devices; magnetic disks, such as internal hard disks and removable disks; magneto-optical disks; and CD-ROM and DVD-ROM disks. Mainframe-migrated application1736may refer to various software and/or hardware that is migrated from an on-premises environment into a cloud-based provider. The cloud-based provider may use various types of computing resources, such as compute services and database services, to implement the mainframe functionality in a cloud provider. In various embodiments, security policies for a mainframe-migrated application are centrally managed in a policy database. The instructions1724may further be transmitted or received over a communications network1726using a transmission medium via the network interface device/transceiver1720utilizing any one of a number of transfer protocols (e.g., frame relay, internet protocol (IP), transmission control protocol (TCP), user datagram protocol (UDP), hypertext transfer protocol (HTTP), etc.). Example communications networks may include a local area network (LAN), a wide area network (WAN), a packet data network (e.g., the Internet), mobile telephone networks (e.g., cellular networks), plain old telephone (POTS) networks, wireless data networks (e.g., Institute of Electrical and Electronics Engineers (IEEE) 802.11 family of standards known as Wi-Fi®, IEEE 802.16 family of standards known as WiMax®), IEEE 802.15.4 family of standards, and peer-to-peer (P2P) networks, among others. In an example, the network interface device/transceiver1720may include one or more physical jacks (e.g., Ethernet, coaxial, or phone jacks) or one or more antennas to connect to the communications network1726. In an example, the network interface device/transceiver1720may include a plurality of antennas to wirelessly communicate using at least one of single-input multiple-output (SIMO), multiple-input multiple-output (MIMO), or multiple-input single-output (MISO) techniques. 
The term “transmission medium” shall be taken to include any intangible medium that is capable of storing, encoding, or carrying instructions for execution by the machine1700and includes digital or analog communications signals or other intangible media to facilitate communication of such software. The operations and processes described and shown above may be carried out or performed in any suitable order as desired in various implementations. Additionally, in certain implementations, at least a portion of the operations may be carried out in parallel. Furthermore, in certain implementations, less than or more than the operations described may be performed. The word “exemplary” is used herein to mean “serving as an example, instance, or illustration.” Any embodiment described herein as “exemplary” is not necessarily to be construed as preferred or advantageous over other embodiments. The terms “computing device,” “user device,” “communication station,” “station,” “handheld device,” “mobile device,” “wireless device” and “user equipment” (UE) as used herein refers to a wireless communication device such as a cellular telephone, a smartphone, a tablet, a netbook, a wireless terminal, a laptop computer, a femtocell, a high data rate (HDR) subscriber station, an access point, a printer, a point of sale device, an access terminal, or other personal communication system (PCS) device. The device may be either mobile or stationary. As used within this document, the term “communicate” is intended to include transmitting, or receiving, or both transmitting and receiving. This may be particularly useful in claims when describing the organization of data that is being transmitted by one device and received by another, but only the functionality of one of those devices is required to infringe the claim. Similarly, the bidirectional exchange of data between two devices (both devices transmit and receive during the exchange) may be described as “communicating,” when only the functionality of one of those devices is being claimed. The term “communicating” as used herein with respect to a wireless communication signal includes transmitting the wireless communication signal and/or receiving the wireless communication signal. For example, a wireless communication unit, which is capable of communicating a wireless communication signal, may include a wireless transmitter to transmit the wireless communication signal to at least one other wireless communication unit, and/or a wireless communication receiver to receive the wireless communication signal from at least one other wireless communication unit. As used herein, unless otherwise specified, the use of the ordinal adjectives “first,” “second,” “third,” etc., to describe a common object, merely indicates that different instances of like objects are being referred to and are not intended to imply that the objects so described must be in a given sequence, either temporally, spatially, in ranking, or in any other manner. 
Some embodiments may be used in conjunction with various devices and systems, for example, a personal computer (PC), a desktop computer, a mobile computer, a laptop computer, a notebook computer, a tablet computer, a server computer, a handheld computer, a handheld device, a personal digital assistant (PDA) device, a handheld PDA device, an on-board device, an off-board device, a hybrid device, a vehicular device, a non-vehicular device, a mobile or portable device, a consumer device, a non-mobile or non-portable device, a wireless communication station, a wireless communication device, a wireless access point (AP), a wired or wireless router, a wired or wireless modem, a video device, an audio device, an audio-video (A/V) device, a wired or wireless network, a wireless area network, a wireless video area network (WVAN), a local area network (LAN), a wireless LAN (WLAN), a personal area network (PAN), a wireless PAN (WPAN), and the like. Some embodiments may be used in conjunction with one way and/or two-way radio communication systems, biomedical sensors, wearable devices or sensors, cellular radio-telephone communication systems, a mobile phone, a cellular telephone, a wireless telephone, a personal communication system (PCS) device, a PDA device which incorporates a wireless communication device, a mobile or portable global positioning system (GPS) device, a device which incorporates a GPS receiver or transceiver or chip, a device which incorporates an RFID element or chip, a multiple input multiple output (MIMO) transceiver or device, a single input multiple output (SIMO) transceiver or device, a multiple input single output (MISO) transceiver or device, a device having one or more internal antennas and/or external antennas, digital video broadcast (DVB) devices or systems, multi-standard radio devices or systems, a wired or wireless handheld device, e.g., a smartphone, a wireless application protocol (WAP) device, or the like. Some embodiments may be used in conjunction with one or more types of wireless communication signals and/or systems following one or more wireless communication protocols, for example, radio frequency (RF), infrared (IR), frequency-division multiplexing (FDM), orthogonal FDM (OFDM), time-division multiplexing (TDM), time-division multiple access (TDMA), extended TDMA (E-TDMA), general packet radio service (GPRS), extended GPRS, code-division multiple access (CDMA), wideband CDMA (WCDMA), CDMA 2000, single-carrier CDMA, multi-carrier CDMA, multi-carrier modulation (MDM), discrete multi-tone (DMT), Bluetooth®, global positioning system (GPS), Wi-Fi, Wi-Max, ZigBee, ultra-wideband (UWB), global system for mobile communications (GSM), 2G, 2.5G, 3G, 3.5G, 4G, fifth generation (5G) mobile networks, 3GPP, long term evolution (LTE), LTE advanced, enhanced data rates for GSM Evolution (EDGE), or the like. Other embodiments may be used in various other devices, systems, and/or networks. It is understood that the above descriptions are for purposes of illustration and are not meant to be limiting. Although specific embodiments of the disclosure have been described, one of ordinary skill in the art will recognize that numerous other modifications and alternative embodiments are within the scope of the disclosure. For example, any of the functionality and/or processing capabilities described with respect to a particular device or component may be performed by any other device or component. 
Further, while various illustrative implementations and architectures have been described in accordance with embodiments of the disclosure, one of ordinary skill in the art will appreciate that numerous other modifications to the illustrative implementations and architectures described herein are also within the scope of this disclosure. Program module(s), applications, or the like disclosed herein may include one or more software components including, for example, software objects, methods, data structures, or the like. Each such software component may include computer-executable instructions that, responsive to execution, cause at least a portion of the functionality described herein (e.g., one or more operations of the illustrative methods described herein) to be performed. A software component may be coded in any of a variety of programming languages. An illustrative programming language may be a lower-level programming language such as an assembly language associated with a particular hardware architecture and/or operating system platform. A software component comprising assembly language instructions may require conversion into executable machine code by an assembler prior to execution by the hardware architecture and/or platform. Another example programming language may be a higher-level programming language that may be portable across multiple architectures. A software component comprising higher-level programming language instructions may require conversion to an intermediate representation by an interpreter or a compiler prior to execution. Other examples of programming languages include, but are not limited to, a macro language, a shell or command language, a job control language, a script language, a database query or search language, or a report writing language. In one or more example embodiments, a software component comprising instructions in one of the foregoing examples of programming languages may be executed directly by an operating system or other software component without having to be first transformed into another form. A software component may be stored as a file or other data storage construct. Software components of a similar type or functionally related may be stored together such as, for example, in a particular directory, folder, or library. Software components may be static (e.g., pre-established or fixed) or dynamic (e.g., created or modified at the time of execution). Software components may invoke or be invoked by other software components through any of a wide variety of mechanisms. Invoked or invoking software components may comprise other custom-developed application software, operating system functionality (e.g., device drivers, data storage (e.g., file management) routines, other common routines and services, etc.), or third-party software components (e.g., middleware, encryption, or other security software, database management software, file transfer or other network communication software, mathematical or statistical software, image processing software, and format translation software). Software components associated with a particular solution or system may reside and be executed on a single platform or may be distributed across multiple platforms. The multiple platforms may be associated with more than one hardware vendor, underlying chip technology, or operating system. 
Furthermore, software components associated with a particular solution or system may be initially written in one or more programming languages, but may invoke software components written in another programming language. Computer-executable program instructions may be loaded onto a special-purpose computer or other particular machine, a processor, or other programmable data processing apparatus to produce a particular machine, such that execution of the instructions on the computer, processor, or other programmable data processing apparatus causes one or more functions or operations specified in any applicable flow diagrams to be performed. These computer program instructions may also be stored in a computer-readable storage medium (CRSM) that upon execution may direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable storage medium produce an article of manufacture including instruction means that implement one or more functions or operations specified in any flow diagrams. The computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational elements or steps to be performed on the computer or other programmable apparatus to produce a computer-implemented process. Additional types of CRSM that may be present in any of the devices described herein may include, but are not limited to, programmable random access memory (PRAM), SRAM, DRAM, RAM, ROM, electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital versatile disc (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which may be used to store the information and which may be accessed. Combinations of any of the above are also included within the scope of CRSM. Alternatively, computer-readable communication media (CRCM) may include computer-readable instructions, program module(s), or other data transmitted within a data signal, such as a carrier wave, or other transmission. However, as used herein, CRSM does not include CRCM. Although embodiments have been described in language specific to structural features and/or methodological acts, it is to be understood that the disclosure is not necessarily limited to the specific features or acts described. Rather, the specific features and acts are disclosed as illustrative forms of implementing the embodiments. Conditional language, such as, among others, “can,” “could,” “might,” or “may,” unless specifically stated otherwise, or otherwise understood within the context as used, is generally intended to convey that certain embodiments could include, while other embodiments do not include, certain features, elements, and/or steps. Thus, such conditional language is not generally intended to imply that features, elements, and/or steps are in any way required for one or more embodiments or that one or more embodiments necessarily include logic for deciding, with or without user input or prompting, whether these features, elements, and/or steps are included or are to be performed in any particular embodiment. | 168,829 |
11943262 | DETAILED DESCRIPTION Lawful interception for the MIKEY-IBAKE process relies on network knowledge of the timestamp Tα used to generate keying information. However, a subversive user could change the software running on a UE to avoid lawful interception by using a timestamp in generating key information that is different from the timestamp signaled in SIP, thus generating a different key component (K2βP), but transmitting a timestamp Tα that was not used to generate the key component K2βP. For example, as shown inFIG.3, suppose UE2's user is malicious and wants to prevent lawful interception in his network. He thus rebuilds the kernel software that runs on his device and modifies the SIP stack such that the timestamp Tα used for signaling on SIP is different from the timestamp Tβ used for generating the keying information. As a result, the network is unable to regenerate the necessary keying information K2βfor UE2, thus preventing lawful interception. In this example, the second network stores K2βP, and thus has the necessary information to determine that UE2has not used Tα in generating K2β. If device CSCF2in the second network detects the misuse at call set-up, the network could disallow the communication. However, to be effective, the network would be required to verify K2βP in at least some percentage of call set-ups, which is highly undesirable from an operator's point of view. Operators strongly prefer any such checking, if necessary, be done at the UE. Alternatively device CSCF2in the second network could verify K2βP as a part of the lawful interception process. However, any action such as disabling the phone or simply cutting communication, would break an existing requirement that lawful interception be undetectable by any entity except the requesting law enforcement agency and the intercepting network. An additional consequence of this requirement is that the second network cannot work with the first network for lawful interception. For instance, in the above example, the first network has all the information necessary for lawful interception, i.e., K2βP, Tα, and KM1. However, because LEA2may not necessarily wish to reveal that lawful interception is occurring, any final key exchange protocol must enable the second network to carry out lawful interception without the need for contacting any additional entity. Therefore, while the second network can detect malformed key information in the current MIKEY-IBAKE process, this process requires further modification in order to become a feasible solution meeting all current requirements. Further, it should be noted that if both UE1and UE2have the freedom to modify their kernels, they are also free to implement any key agreement scheme, potentially even one different from a standardized key agreement scheme, but with signaling that is compliant. With the arrival of open source operating systems, such as Android, the ability to modify a kernel is, unfortunately, an accepted fact today. In fact, that ability is often touted as being desirable. Since lawful interception becomes highly improbable in such a scenario, the problem addressed herein is that of protecting against one of two UEs maliciously modifying its kernel to prevent lawful interception. A related problem is SIP signaling of the International Mobile Station Equipment Identity (IMEI), which is an identifier of the mobile equipment (ME), i.e, of the UE not including the Subscriber Identity Module (SIM) card. 
The IMEI is used in some jurisdictions as the identifier under which lawful interception occurs. However, due to counterfeiting, more than one phone may share the same IMEI. While this is less of a problem in Western regions of the world, it is quite problematic in others. If multiple MEs share the same IMEI, then specifying the targeted ME becomes a more-involved process, making lawful interception more difficult. Additionally, if a UE modifies its kernel, there is a danger the UE may also signal a false IMEI, perhaps preventing lawful interception through IMEI targeting. Thus, a solution for the secure signaling of the IMEI is also needed. In conventional systems, the subversive user will often be successful since there is insufficient security protection of the timestamp Tα used by each KGU. What is needed is a method to guarantee that the timestamp used by the KGU is also signaled in SIP. Accordingly, there is provided a method for secure communication, comprising: (1) generating a signature using a private key, a nonce, and at least one of an identifier and a key component; and (2) transmitting the signature, the nonce, a security parameter, and the at least one of the identifier and the key component, wherein the security parameter associates a user identity with a public key, the public key being associated with the private key. The identifier is one of an International Mobile Station Equipment Identity (IMEI), a Globally Routable User Agent URI (GRUU), an International Mobile Subscriber Identity (IMSI), and a Temporary International Mobile Subscriber Identity (TIMSI). Further, the nonce is one of a timestamp, a random number, and a sequence number and the security parameter is a certificate. In another embodiment, there is provided a method for secure communication, the method comprising: (1) receiving a nonce, at least one of an identifier and a key component, a security parameter, and a signature that was generated using a private key, the nonce, and the at least one of the identifier and the key component; and (2) verifying the nonce and the at least one of the identifier and the key component using the received signature and the security parameter, wherein the security parameter associates a user identity with a public key, the public key being associated with the private key. When verification is successful in the verifying step, the method further includes (1) generating a session key using the at least one of the identifier and the key component; (2) generating a second signature using a second private key, a second nonce, and at least one of a second identifier and a second key component; and (3) transmitting the second signature, the second nonce, a second security parameter, and the at least one of the second identifier and the second key component, wherein the second security parameter associates a second user identity with a second public key, the second public key being associated with the second private key. In another embodiment, there is provided a method for secure communication, the method comprising: (1) generating a MAC tag using a MAC key, a nonce, and at least one of an identifier and a key component; and (2) transmitting the MAC tag, the nonce, and the at least one of the identifier and the key component. 
In another embodiment, there is provided a method for secure communication, the method comprising: (1) receiving a nonce, at least one of an identifier and a key component, and a MAC tag that was generated using the nonce, the at least one of the identifier and the key component, and a MAC key; (2) verifying the nonce and the at least one of the identifier and the key component using the received MAC tag. In particular, in one embodiment, the KGU of a UEjsigns the timestamp Tα and the key component KjP using a private key PRjobtained at the time of manufacture. The public key Pujassociated with the private key PRjis certified by a certificate Cj, which can also be provided to the KGU at the time of manufacture. Note that while the public key is described as being separate from the certificate, in general, the public key can form part of the certificate. FIG.4provides an illustration of a method of key component protection according to one embodiment. As shown inFIG.4, after signing Tα and KjP using the function Sj=Sig(KjP, Tα, PRj), each KGU passes not only the key component KjP and the timestamp Tα to the software, but also the signature Sj, the public key Puj, and the certificate Cjfor transmission on SIP. Note that since the KGUs are often implemented in hardware, the KGUs are expected to be significantly more robust to tampering by a malicious user. Further, by passing Sj, Puj, and Cjto SIP for signaling, both the receiving UE and the network can be assured of the timestamp Tα used in generating Ksess. While it is necessary that the key components and timestamps transmitted by UE1and UE2are verified during the key generation process, it is preferable that the verification entity be the KGU or some other entity of the UE. Additionally, the network CSCF devices can also perform this verification. However, it is likely that operators would prefer not to verify every key exchange, and instead would push such checking to the UE rather than perform this task within the network, other than for lawful interception warrants, in order to lighten the network load. When verification of the timestamp fails, the connection attempt can be terminated by the verification entity in which the failure occurs. If a UE refuses a connection due to failed verification, an alert can be signaled to the network, e.g., as a first step in blacklisting the transgressing UE. FIG.5illustrates the steps in the key component protection method according to one embodiment. In step501, UE1's KGU generates the key component K1and the signature S1. In step502, UE1transmits (K1P, Tα, [S1Pu1C1])SIP1in the SIP header to device CSCF1. In step503, device CSCF1stores a copy of (K1P, Tα, [S1Pu1C1])SIP1in addition to forwarding (K1P, Tα, [S1Pu1C1])SIP1to device CSCF2. In step504, device CSCF2stores a copy of (K1P, Tα, [S1Pu1C1])SIP1in case it is needed for lawful interception. Device CSCF2also forwards (K1P, Tα, [S1Pu1C1])SIP1to UE2. In step505a, UE2receives (K1P, Tα, [S1Pu1C1])SIP1and checks the signature S1. If the signature is verified, UE2computes the session key Ksess=K1K2P in step505b, then proceeds to step506. Otherwise, the connection is refused and the key agreement protocol terminated. In step506, UE2's KGU generates the key component K2and the signature S2. In step507, UE2transmits (K2P, Tα, [S2Pu2C2])SIP2in the SIP header to device CSCF2. In step508, device CSCF2stores a copy of (K2P, Tα, [S2Pu2C2])SIP2in addition to forwarding (K2P, Tα, [S2Pu2C2])SIP2to device CSCF1. 
In step509, device CSCF1stores a copy of (K2P, Tα, [S2, Pu2, C2])SIP2in case it is needed for lawful interception. Device CSCF1also forwards (K2P, Tα, [S2, Pu2, C2])SIP2to UE1. In step510a, UE1receives (K2P, Tα, [S2, Pu2, C2])SIP2and checks the signature S2. If the signature is verified, UE1computes the session key Ksess=K1K2P in step510band protected communication commences. Otherwise, the connection is refused and the key agreement protocol terminated. Note that this embodiment includes the signing of parameters used in key generation, and thus need not be limited to the example case of the MIKEY-IBAKE key agreement protocol discussed above. This embodiment can be extended to other key agreement protocols currently under consideration for IMS Media Security, such as MIKEY-TICKET and Session Description Protocol security descriptions (SDES). Similarly, the signed parameter need not be a timestamp and need not be the same in both UEs. For example, each UE could use its own specific nonce value in generating the keying information Kj, which it signs and which is signaled in some fashion to the target UE through the network. A signature on the nonce value will enable it to be verified, similarly to the timestamp discussed above. The nonce can be, e.g., a timestamp, a random number, or a sequence number. In another embodiment, to protect the integrity of the IMEI, a hardware portion of the UE signs a nonce and the IMEI. The nonce Nican be, e.g., randomly generated or be the timestamp Tα signaled in SIP. As shown inFIG.6, instead of signaling the IMEI alone, a protocol contains the elements IMEIi, Ni, and [Si, Pui, Ci], where the additional information Niand [Si, Pui, Ci] is carried in an extension field. Similar to the case in key generation, the integrity protection of IMEI1can be verified by any one of several entities, such as LEA1, LEA2, UE2, or any network entity (including either CSCF device). As discussed above, it is preferable that such checking be done by UEs, and connections refused in the case of verification failure. If a UE refuses a connection due to a failed verification, an alert can be signaled to the network, e.g., as a first step in blacklisting a likely counterfeit UE. Since the verification information (IMEIi, Ni, [Si, Pui, Ci]) is stored in the CSCF device, the network also has the means to re-validate any such alert as a further step in determining a counterfeit UE. In another embodiment, instead of using a signature mechanism, each KGU computes a Message Authentication Code (MAC) tag from a MAC key. As shown inFIG.7, the signature, the public key, and the certificate used in the embodiment ofFIG.4are replaced by the computed MAC tag. Note that since the use of a MAC tag amounts essentially to a symmetric key signature scheme, the interception device associated with a given UE network and the corresponding KGU of the UE must first agree on a MAC key (KMACi) with which to compute the MAC tag, as shown inFIG.7. Note that this embodiment has an advantage in complexity over the embodiment shown inFIG.4since generation of a MAC tag is cheaper than that of a digital signature. However, one disadvantage of this embodiment is that only the interception function in the UE's current network stores the MAC key KMACi, which is needed to verify the MAC tag of UEi. Thus, storage of the MAC tag may only be needed in the CSCF device directly serving the UE. Further, UE2can no longer verify the timestamp of UE1(or vice versa).
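The MAC-based variant just described can be sketched in a few lines, assuming the KGU and the interception function of the serving network have already agreed on a symmetric key KMACi out of band; HMAC-SHA-256 is used purely as an example MAC, and the field encodings are illustrative assumptions.

import hashlib
import hmac

def compute_mac_tag(mac_key: bytes, key_component: bytes, timestamp: bytes) -> bytes:
    # MACi = HMAC(KMACi, KiP || Tα); the tag replaces Si, Pui, and Ci in SIP signaling.
    return hmac.new(mac_key, key_component + timestamp, hashlib.sha256).digest()

def verify_mac_tag(mac_key: bytes, key_component: bytes, timestamp: bytes, tag: bytes) -> bool:
    expected = compute_mac_tag(mac_key, key_component, timestamp)
    return hmac.compare_digest(expected, tag)   # constant-time comparison

# Only a holder of KMACi (the UE's KGU and the interception function of its serving
# network) can produce or check the tag, which is why the peer UE cannot verify it.
mac_key = bytes(32)                              # placeholder KMACi agreed out of band
tag = compute_mac_tag(mac_key, b"K1P-bytes", b"2024-01-01T00:00:00Z")
assert verify_mac_tag(mac_key, b"K1P-bytes", b"2024-01-01T00:00:00Z", tag)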
Stated differently, the interception device of LEA1is the only entity outside of UE1that can verify MAC1as the MAC tag computed for [K1P, Tα]. The embodiment ofFIG.4achieves the goal of lawful interception by binding the Elliptic curve Diffie-Hellman (ECDH) key component KiP to the timestamp used in deriving Ki. In other alternative methods, this binding can be achieved in different ways. For example, in a first alternative method, the session key can be derived using a key derivation function (KDF) that takes as input the ECDH-generated key as well as the two timestamps (nonces). In a second alternative method, both timestamps are multiplied as scalars by the ECDH-generated key. For example, UE2calculates Ksess=Tα1Tα2K2K1P after checking that Tα1Tα2K2mod n≠1, where n is the group order, i.e., the order of P. A third alternative method is a slightly modified version of Elliptic Curve Menezes-Qu-Vanstone (ECMQV) that incorporates both timestamps, which are here called Tα1and Tα2, in the session key calculation. The timestamps are also treated as nonces. This approach is more bandwidth efficient since a signature is not signaled on SIP, and is more calculation efficient compared to the timestamp signature verification method. In this third alternative method, UE2has a long term key (d2, Pu2), where Pu2is in UE2's certificate C2. Here d2can be derived from KM2through a KDF, since LEA2is able to calculate it. Alternatively, d2can be another ephemeral key derived through the KDF along with k2. Then, the sequence of calculations in the KGU of UE2is:
(1) k2=f(KM2, Tα2); (same as K2as calculated in FIG. 2)
(2) G2=k2P; (as before, here ECMQV starts)
(3) s2=k2+Tα2×(G2)d2(mod n); (ECMQV with the addition of Tα2)
(4) UE2sends [G2, Tα2, C2] to UE1and UE2receives [G1, Tα1, C1] from UE1;
(5) Ksess=hs2(G1+[Tα1×(G1)]Pu1); (ECMQV with the addition of Tα1)
Note that while calculating s2, UE2checks that Tα2×(G2) mod n≠1, otherwise the process goes back to step 1. Further, while calculating Ksess, UE2checks that Tα1×(G1) mod n≠1, otherwise the process aborts. If UE2attempts to signal on SIP a Tα2′ that is different from Tα2, the session key will not be established correctly. This assures LEA2that Tα2is the one used in the calculation inside KGU2. An extra check can be performed by LEA2: (1) k2′=f(KM2, Tα2), and (2) check that G2′=k2′P is equal to G2. Also note that the second and third alternative methods described above both require some modification to the key protocol itself, and thus might entail greater changes to prior agreements within 3GPP. The embodiments described above have several advantages in that they (1) can secure integrity protection of keying information and UE-identifier information using a MAC tag or signature; (2) can be used by other UEs to refuse connections to, and/or report, malicious UEs; (3) can be used by the network as a means of blacklisting counterfeit or compromised UEs; and (4) if the target UE or KGU are verification entities, the embodiments place no significant load on the network, thus reducing network implementation concerns. Devices CSCF1and CSCF2, as well as the intercepting devices of LEA1and LEA2, can be implemented by one or more computers and/or one or more specialized circuits. A hardware description of such a computer is described with reference toFIG.8. Further, each UE includes at least one or more processors (e.g., CPUs), a memory, a display, and a communication interface. The processor is configured to execute software to perform the functionality of the UEs described above.
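Returning briefly to the first alternative binding method above, in which the session key is derived through a KDF over the ECDH output and both timestamps, a minimal sketch is shown below. HKDF-SHA-256 is used purely as an example KDF, and the labels and lengths are illustrative assumptions.

from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

def derive_session_key(ecdh_shared_secret: bytes, t_alpha1: bytes, t_alpha2: bytes) -> bytes:
    # Folding both timestamps into the KDF binds them to Ksess: signaling a different
    # timestamp on SIP would produce a mismatched session key at the peer.
    hkdf = HKDF(
        algorithm=hashes.SHA256(),
        length=32,                               # 256-bit session key
        salt=None,
        info=b"session-key" + t_alpha1 + t_alpha2,
    )
    return hkdf.derive(ecdh_shared_secret)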
The KGUs described above can be implemented as a specialized hardware circuit or as software executed on the one or more processors. As shown inFIG.8, the process data and instructions may be stored in memory302. These processes and instructions may also be stored on a storage medium disk304such as a hard drive (HDD) or portable storage medium or may be stored remotely. Further, the claimed advancements are not limited by the form of the computer-readable media on which the instructions of the inventive process are stored. For example, the instructions may be stored on CDs, DVDs, in FLASH memory, RAM, ROM, PROM, EPROM, EEPROM, hard disk or any other information processing device with which the computer communicates, such as a server. Further, the claimed embodiments may be provided as a utility application, background daemon, or component of an operating system, or combination thereof, executing in conjunction with CPU301and an operating system such as Microsoft Windows 7, UNIX, Solaris, LINUX, Apple MAC-OS and other systems known to those skilled in the art. CPU301may be a Xeon or Core processor from Intel of America or an Opteron processor from AMD of America, or may be other processor types that would be recognized by one of ordinary skill in the art. Alternatively, the CPU301may be implemented on an FPGA, ASIC, PLD or using discrete logic circuits, as one of ordinary skill in the art would recognize. Further, CPU301may be implemented as multiple processors cooperatively working in parallel to perform the instructions of the inventive processes described above. The computer inFIG.8also includes a network controller306, such as an Intel Ethernet PRO network interface card from Intel Corporation of America, for interfacing with network399. As can be appreciated, the network399can be a public network, such as the Internet, or a private network such as a LAN or WAN network, or any combination thereof and can also include PSTN or ISDN sub-networks. The network399can also be wired, such as an Ethernet network, or can be wireless such as a cellular network including EDGE, 3G and 4G wireless cellular systems. The wireless network can also be WiFi, Bluetooth, or any other wireless form of communication that is known. The network controller306may be used to establish a communication channel between the two parties, possibly through the network399. The computer further includes a display controller308, such as an NVIDIA GeForce GTX or Quadro graphics adaptor from NVIDIA Corporation of America for interfacing with display310, such as a Hewlett Packard HPL2445w LCD monitor. A general purpose I/O interface312interfaces with a keyboard and/or mouse314as well as a touch screen panel316on or separate from display310. The general purpose I/O interface also connects to a variety of peripherals318including printers and scanners, such as an OfficeJet or DeskJet from Hewlett Packard. A sound controller320is also provided in the computer, such as Sound Blaster X-Fi Titanium from Creative, to interface with speakers/microphone322thereby providing sounds and/or music. The speakers/microphone322can also be used to accept dictated words as commands for controlling the computer or for providing location and/or property information with respect to the target property. The general purpose storage controller324connects the storage medium disk304with communication bus326, which may be an ISA, EISA, VESA, PCI, or similar, for interconnecting all of the components of the computer.
A description of the general features and functionality of the display310, keyboard and/or mouse314, as well as the display controller308, storage controller324, network controller306, sound controller320, and general purpose I/O interface312is omitted herein for brevity as these features are known. In the above description, any processes, descriptions or blocks in flowcharts should be understood to represent modules, segments, or portions of code that include one or more executable instructions for implementing specific logical functions or steps in the process, and alternate implementations are included within the scope of the exemplary embodiments of the present advancements in which functions may be executed out of order from that shown or discussed, including substantially concurrently or in reverse order, depending upon the functionality involved, as would be understood by those skilled in the art. While certain embodiments have been described, these embodiments have been presented by way of example only, and are not intended to limit the scope of the inventions. Indeed, the novel methods, apparatuses and systems described herein may be embodied in a variety of other forms; furthermore, various omissions, substitutions and changes in the form of the methods, apparatuses and systems described herein may be made without departing from the spirit of the inventions. The accompanying claims and their equivalents are intended to cover such forms or modifications as would fall within the scope and spirit of the inventions. | 22,433 |
11943263 | DESCRIPTION OF THE EXAMPLES Reference will now be made in detail to the present examples, including examples illustrated in the accompanying drawings. Wherever possible, the same reference numbers will be used throughout the drawings to refer to the same or like parts. Systems and methods are described for providing recommendations for an improved user experience in online meetings. A recommendation engine can aggregate data from user devices to make recommendations before, during, and after online meetings. Before a meeting, the recommendation engine can recommend which of a user's devices to use for the meeting. During the meeting, the recommendation engine can identify current or anticipated issues and recommend changes the user can make to correct or prevent the issue. After meetings, the recommendation engine can aggregate data and identify an ongoing issue for one or multiple users. The recommendation engine can identify the cause of the issue and make recommendations to the user or an administrator accordingly. FIG.1is an illustration of an example system that can be used for performing the methods described herein.FIGS.2and3illustrate an example method and sequence diagram, respectively, for providing recommendations before a scheduled online meeting.FIGS.4and5illustrate an example method and sequence diagram, respectively, for providing recommendations during an online meeting.FIGS.6and7illustrate an example method and sequence diagram, respectively, for identifying an ongoing issue for one or multiple users, identifying the cause of the issue, and providing a recommendation for fixing the issue. FIG.1is an illustration of an example system for providing recommendations for an improved user experience in online meetings. The system can be part of any system for managing a group of user devices, such as a Unified Endpoint Management (“UEM”) system. Users can have one or more user devices110enrolled in the UEM system. The user devices110can be one or more processor-based devices, such as a personal computer, tablet, or cell phone. The user devices110can be user-owned or assigned to the user by the organization. The user devices110can have various hardware components needed to allow the user to participate in online meetings. For example, the user devices110can include a CPU112, memory114, storage116, a battery118, a microphone120, speakers122, and a camera126. The memory114can be a short-term storage component, such as RAM. The storage116can be used for longer-term storage, like a hard drive. The microphone120, speakers122, and camera126can be integrated or peripheral components of the user devices110. For example, the microphone120and camera126can be part of a webcam that plugs into a Universal Serial Bus (“USB”) or other connection port on a user device110. As used herein, the user devices110can be user devices that belong to a particular user in an organization. The attendee user devices160can be user devices that are also enrolled in the UEM system but belong to other users that attend an online meeting with the user of the user devices110. The attendee user devices160can therefore include the same internal and peripheral components of the user devices110. A management server130can be responsible for managing user devices that are enrolled in the UEM system. The management server130can be a single server or a group of servers, including multiple servers implemented virtually across multiple computing platforms.
The management server130can manage the user devices110,160by sending management instructions to a management application124installed on them. The management application124can be a stand-alone application, part of an enterprise application, or part of an operating system of the user devices110,160. The management application124can be downloaded and installed at the user device110prior to, or as part of, the enrollment process. For example, a user can download the management application124using a URL that points to a content delivery server with an installation file for the management application124. The URL can be provided by the enterprise, for example. Alternatively, the user can download the management application124from an app store, such as APPLE's APP STORE or GOOGLE PLAY. When the user launches the management application124for the first time, the management application124can prompt the user to enter authentication credentials, such as a username and password or a one-time password (“OTP”) provided to the user by the enterprise. The management application124can send the user-provided credentials to the management server130in an encrypted format. If the management server130can authenticate the credentials, then the management server130can begin provisioning the user device110for enterprise management. For example, the management server130can send a management profile to the management application124. The management profile can include compliance and security settings assigned to the user's profile and any user groups that the user is assigned to. The management server130can also send a security certificate associated with the user's profile that can be used at the user device110to access enterprise data and resources, including managed applications. A managed application can be an application that allows control of access and functionality by the UEM system. The management server130can provision managed applications assigned to the user or the management profile. For example, the management server130can provide the management application124with installation files or URLs for retrieving the installation files for managed applications. The management application124can configure the user device110using the management profile. For example, the management application124can install compliance and security settings from the management profile. As an example, the management application124can encrypt all or a portion of the user device's hard drive, apply virtual private network (“VPN”) settings for accessing UEM data and resources, and set device access controls (e.g., password or personal identification number (“PIN”) requirements). The management application124can also install and configure managed applications for the user device110. After enrollment is complete, the UEM system can actively manage the user device110by sending, via the management server130or another server in the UEM system, management commands, including any updated compliance and security settings, to the management application124. The management application124can ensure that the user device110is up to date with the compliance and security settings prior to allowing access to enterprise data and resources. For example, the management application124can analyze the device state of the user device110using rules included in the management profile. 
The device state can include various aspects of the user device110, such as the device110manufacturer, model, and ID, a current battery charge level, whether the device110is jailbroken, OS type and version, geographic location, a login status of the user, and an identification of the applications installed on the device110. The user device110can provide periodic updates of its state information to the management server130. These updates can provide a full list of the various device aspects in one example, but in another example each update can identify changes from the previous device status update. The management server130can send management commands to the management application124using any available communication protocol or channel. For example, the management server130can send management commands using an application programming interface (“API”), a notification system, a messaging system native to the user device110, or a command queue. In one example using a command queue, the management server130can store one or more commands in a queue that is available to the user device110over a network. The commands can encompass any management action, such as instructing the user device110to download an application, report a device state, or apply a new profile. The management system can alert the user device110to a change in the command queue, such as by sending a notification to the user device110instructing the device to access the command queue. The notification can be sent through the management application124in some examples, but can also be sent as an OS-level notification or a message that utilizing an OS messaging scheme, such as a Short Message Service (“SMS”) message. The management application124can be responsible for ensuring that the user devices110,160are up to date with compliance and security settings prior to accessing enterprise data and resources. The management application124can communicate with the management server130, allowing UEM management of the user devices110,160based on compliance and security settings at the management server130. The management application124can enforce compliance at the user devices110,160, such as by locking a device, notifying an admin, or wiping enterprise data when compliance standards are not met. Example compliance standards can include ensuring a device is not jailbroken, that particular encryption standards are used in enterprise data transmission, that the device does not have certain blacklisted applications installed or running, and that the device is located within a geofenced area when accessing certain enterprise resources. In one example, the user devices110,160can access enterprise or UEM resources through the management server130. The management server130can include a recommendation engine132. The recommendation engine132can be responsible for aggregating data from user devices110,160enrolled in the UEM system that can have an impact on the users' experience in online meetings and when performing other job functions. For example, such data can include connectivity data, location data, and performance data. The connectivity data can include data about the network connection of the user device110. This can include data indicating a network that the corresponding user device110is connected to and the measured strength, speed, and packet loss rate of the user device's connection to the network. 
The location data can include any data related to the location of the user device110, such as historical data about the stability of a network that the user device110is currently connected to, alternative networks that may be available, local weather conditions that can impact a user device's connection to the network, and historical network congestion at the user's location based on the scheduled start time of an event. Performance data can include data relating to the user device's110performance and capabilities. For example, performance data can include data about hardware specs, such as CPU speed, RAM speed and capacity, storage capacity, battery capacity, connectivity capabilities (e.g., subscriber identity module (“SIM”) card, WI-FI, BLUETOOTH, near field communications (“NFC”), etc.), waterproof rating, and dust rating. Performance data can also include data about integrated and peripheral components, such as the microphone120, speakers,122, webcam126, etc. The performance data can also include data about software on the user device110, such as the OS version, installed applications, and installed drivers. The data can be stored in a database170, such as a database server. The management application124can collect the data and send the data to the management server130or, alternatively, directly to the database170where it can be accessed by the recommendation engine132. Some data, such as hardware specs and location data, can be retrieved from external sources, such as third-party servers. Data collected by the management application124can be collected and sent to the management server130as one or more data files, such as a JavaScript Object Notation (“JSON”) file, Extensible Markup Language (“XML”), or other file type. The recommendation engine132can use the data from user devices110,160to provide recommendations to users. For example, the recommendation engine132can feed the connectivity data, location data, and performance data into one or more ML models152. The data used to create a recommendation can depend on the type of recommendation. As an example, the recommendation engine132can access user calendars140. When a user has an upcoming online meeting, the recommendation engine132can feed connectivity and location data into a first ML model152that outputs a recommended user device110to use for the meeting. The recommendation engine132can use a different ML model152to make recommendations during the meeting. For example, the recommendation engine132can feed relevant connectivity data, location data, and performance data into a second ML model152that can output recommended actions for the user to take when issues occur during the meeting. Examples of how the recommendation engine132can identify issues and corresponding recommendations is described in detail later herein. Although the ML model152is illustrated as being separate from the recommendation engine132, the ML model152can be a component of the recommendation engine132. For example, the ML algorithm152can be retained at the management server130as part of the recommendation engine132. Alternatively, the ML server150can include an agent of the recommendation engine132that inputs data into the ML model152and sends outputs from the ML model152to the recommendation engine132. The recommendation engine312can also be used to identify system-wide issues and recommend fixes. 
For example, the recommendation engine312can analyze the data from user devices110,160to identify issues experienced by a large number of devices, determine the cause of the issues, and attempt to determine a fix. For example, for a hardware-related issue the recommendation engine132can access an approved device list172retained on the database170and identify a replacement device to an administrator (“admin”). The approved device list172can be a list of computing devices that are pre-approved by an organization. Examples of how the recommendation engine132can identify issues and corresponding recommendations for system-wide issues is described in detail later herein. FIG.2is a flowchart of an example method for providing recommendations for an improved user experience for an upcoming online meeting. At stage210, the recommendation engine132can receive connectivity data and location data from user devices110associated with a user. The connectivity data can include data about the network connection of a user device110. For example, the connectivity data can indicate a network that the corresponding user device110is connected to and the measured strength and speed of the user device's connection to the network. The connectivity data can be collected by the management application124. For example, the management application124can run a speed test and gather signal strength data from the OS of its corresponding user device110. An upcoming event in the user's calendar140, such as an online meeting, can trigger the collection of connectivity data. For example, a predetermined amount of time before a scheduled meeting, such as 15 or 30 minutes, the recommendation engine132can send instructions to the management applications124on the user devices110to gather and send connectivity data. Alternatively, the management applications124can proactively gather and send the connectivity data the predetermined amount of time before the event. The recommendation engine132can also receive information about the user devices110themselves. This can include, for example, hardware specifications, installed applications, available peripheral devices (such as a peripheral microphone120, speaker122, and/or camera126), and hardware usage, such as the measured usage of the CPU112, memory114, and storage116. The management application124can also inform the recommendation engine132of the power level of the battery118and indicate whether the user device110is connected to a power source or not. Not all the user's devices110need to send connectivity data. For example, user devices110that are not near the user before the event are unlikely to be used by the user for the event. To save on computing and network resources, the management applications124can determine whether the corresponding user device110is near the user before collecting and sending connectivity data. The location of the user can be determined using a variety of methods. For example, a user device110that the user is most likely to be near the user, such as the user's smart phone, can be designated as the user's primary device. Any other user devices110that are near the primary device can collect and send connectivity data. Alternatively, the management server130can identify a user device110that the user is currently using and instruct the user's other nearby user devices110to provide connectivity data. Once the user's location is determined, different methods can be used in determining whether a user device110is near the user. 
For example, the user devices110can send location data, such as Global Positioning System (“GPS”) and/or Global System for Mobile Communications (“GSM”) data to the management server130. Based on the location data, each user device110within a predetermined distance from the user can be instructed to collect and send connectivity data. Alternatively, or in addition to location data, any user devices110connected to the same network as a user's primary device can collect and send connectivity data. In some cases, the last-received location information, such as the last network connection or GPS location, is used to estimate the location of the device, which may be currently powered off. At stage220, the recommendation engine132can retrieve location information using the location data. The location information can include information about the user device's location that may impact the performance of the user device110during the upcoming online event. For example, the recommendation engine132can retrieve historical data about the stability of a network that a user device110is currently connected to, identify alternative networks that may be available, retrieve information about local weather conditions that can impact a user device's connection to the network, retrieve network congestion data for the user's location based on the time of the scheduled event, and so on. In some examples, historical data about certain networks can be periodically retrieved and analyzed for purposes of user device recommendations. For example, if a user frequently connects to a certain network from enrolled user devices, then historical data about that network can be collected and aggregated. The recommendation engine132can calculate a score for the network based on factors like speeds, congestion, stability, security, and so on. When the user connects to the network before an online meeting, the recommendation engine132can retrieve the score for the network instead of processing all the historical data for the network every time the user connects to the network for an upcoming meeting. The recommendation engine132can also score other networks available at the same location so that the networks can be quickly compared. Scores for networks can be based on the time of day. For example, the recommendation engine132can score networks by the hour or half hour, or only during business hours. This can be useful, for example, when a network typically experiences high congestion or lower stability at a certain time of day. The recommendation engine132can use the score corresponding to the time of the scheduled meeting when making recommendations. At stage230, the recommendation engine132can determine a recommended user device110to use in the upcoming online event. In one example, the recommendation engine132can extract data points from the connectivity and location data, and then input the data points into the ML algorithm152. Examples of data points that can be used as inputs can include an identifier (“ID”) of a user device110, the network it is connected to, the measured network speed and signal strength of the user device110to the network, network information about alternative networks available, the scheduled start time of the event, hardware identifiers, and so on. The following is an example of how the recommendation engine132can recommend a user device110. In the example, one of the user's devices110can be designated the user's primary device. The primary device can be the user device110that the user most frequently uses.
Alternatively, the primary device can be the user device110that the user most often uses for the same type of event as the upcoming online event. For example, if the online event is an audio meeting, and the user most frequently uses his smart phone for audio meetings, then the user's smart phone can be designated the primary device for purposes of recommending a device to use. On the other hand, if the online event is a video meeting and the user most frequently uses a desktop computer with a webcam for such meetings, then the desktop computer can be designated the primary device. When the user has a designated primary device, the recommendation engine132can determine whether the user should use an alternate user device110based on various metrics. Because the primary device is likely the user's preferred device, the recommendation engine132can recommend changing devices only if the recommendation engine132determines that the user is likely to encounter issues using the primary device, even if the metrics indicate that an alternative user device110would be better. For example, the recommendation engine132can compare the connectivity and location data of the primary device to predetermined metric thresholds. If the primary device satisfies the metric thresholds, then the recommendation engine132can be configured to take no further action before the online event regarding recommendations. However, if a metric threshold is not satisfied, then the recommendation engine132can compare the connectivity and location data of the user's other user devices110to the metric thresholds. If one or more of the other user devices110does satisfy the thresholds, then one of those devices110can be recommended to the user. As an example, the user's primary device can be a laptop computer. The laptop computer can connect via WI-FI to a network that is known to have issues at the time the online event is supposed to take place. For example, the historical data can indicate that, at the scheduled time of the event, devices connected to the network report a high latency rate. In one example, the network can be part of a satellite network, and the weather information can indicate that a heavy storm is incoming that can disrupt the satellite signal during the meeting. The recommendation engine132can identify a 5G network that is available, but the laptop does not have 5G capabilities. The recommendation engine132can eliminate any of the user's devices110that do not have 5G capability. The remaining user devices110can be assessed based on other metrics. For example, the recommendation engine132can compare the hardware of the remaining user devices110to identify the user device110with the best hardware for the event type. As an example, for an online video meeting, the recommendation engine132can recommend the user device110with the best camera126and microphone120. For an audio only meeting, the recommendation engine132can recommend the user device110with the best microphone120and ignore the camera126. In the example above, if there is no designated primary device for the user, then the recommendation engine132can compare the metrics of each user device110near the user to identify the best user device110for the user to use. For example, for each user device110, the recommendation engine132can assign a score to each metric and calculate an overall user experience score. The user device110with the highest user experience score can be recommended to the user.
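One way the per-device scoring just described could be realized is sketched below. The metric names, weights, and 0-to-1 scoring scale are illustrative assumptions rather than values prescribed by the system, and as noted next, the metrics and scoring methods would typically be configurable by an admin.

from typing import Dict

# Higher is better for every normalized metric score (0.0 to 1.0); weights are assumed.
DEFAULT_WEIGHTS = {
    "network_speed": 0.30,
    "network_stability": 0.25,
    "microphone_quality": 0.20,
    "camera_quality": 0.15,
    "battery_headroom": 0.10,
}

def user_experience_score(metric_scores: Dict[str, float], weights: Dict[str, float]) -> float:
    # Weighted sum over whichever metrics were reported for the device.
    return sum(weights.get(name, 0.0) * score for name, score in metric_scores.items())

def recommend_device(devices: Dict[str, Dict[str, float]], event_type: str = "video") -> str:
    weights = dict(DEFAULT_WEIGHTS)
    if event_type == "audio":
        weights["camera_quality"] = 0.0          # ignore the camera for audio-only meetings
    scores = {dev_id: user_experience_score(m, weights) for dev_id, m in devices.items()}
    return max(scores, key=scores.get)           # device with the highest experience score

# Example: a laptop on a congested network versus a phone with a strong 5G connection.
devices = {
    "laptop-01": {"network_speed": 0.4, "network_stability": 0.5, "microphone_quality": 0.7,
                  "camera_quality": 0.8, "battery_headroom": 1.0},
    "phone-02": {"network_speed": 0.9, "network_stability": 0.8, "microphone_quality": 0.6,
                 "camera_quality": 0.7, "battery_headroom": 0.6},
}
print(recommend_device(devices, event_type="video"))   # prints phone-02 for these inputs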
The metrics and scoring methods can be set by an admin. The recommendation engine132can also consider battery level of each battery118. The recommendation engine132can determine, based on a user device's current usage and anticipated usage for the online event, whether the user device110has enough battery power left to last until the scheduled end time of the event. For example, if the user's primary device has 15% battery power remaining and is not connected to a power source, and the recommendation engine132determines that 25% of the battery's power would be required to keep the primary device powered through the event, then the recommendation engine132engine can recommend an alternative user device110that has more battery power and/or is connected to a power source. At stage240, the recommendation engine132can send a notification to the user identifying the recommended user device110. The notification can be any type of message that informs the user of the recommended user device110. For example, the notification can be a push notification, web notification, email, or Short Message Service (“SMS”) text. If the notification is a push notification or other notification sent directly to a user device, then the recommendation engine132can send the notification to those user devices110located near the user. FIG.3is a sequence diagram of an example method for providing recommendations for an improved user experience for an upcoming online meeting. At stage302, the user devices110can receive an event notification from the user's calendar140. For example, the user devices110can be subscribed to notifications for events in the user's calendar140. A predetermined amount of time before an event, the user devices110can receive a notification of the upcoming event. At stage304, the user devices110can send connectivity data to the management server130, where it can be accessed by the recommendation engine132. This can be handled by the management application124. For example, the management application124for each user device110can collect data about the network that the user device110is connected to, the network's signal strength and speeds, any alternative available networks detected, and information about any processes running on the user device110that are consuming bandwidth. The management application124can send the connectivity data to the management server130using any communication protocol for transferring data, such as a Hypertext Transfer Protocol (“HTTP”) or Application Programming Interface (“API”) call. At stage306, the recommendation engine132can extract data points from the connectivity data. The data points can correspond to metrics used for recommending a user device110for the upcoming event. The data points can be extracted using rules defined for data fields in a template. As an example, one data field can have rules for network latency information, and another data field can have rules for packet loss rate. The recommendation engine132can create a data file for each user device110with measured values inputted into their corresponding fields in the template. The data file can be any kind of transferrable file, such as a JSON file, XML, or other file type. Although this stage is described as being performed by the recommendation engine132, the management application124can be configured to extract the data points and create a data file using the connectivity data. The management application124can then send the data file to the management server130. 
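A sketch of how the management application might assemble such a data file is shown below. The field names and JSON layout are illustrative assumptions only, since no particular schema is prescribed.

import json

def build_connectivity_payload(device_id: str, measurements: dict) -> str:
    # Package the connectivity data points (stages 304-306) into a JSON data file
    # that can be sent to the management server; all field names are assumed.
    payload = {
        "device_id": device_id,
        "network": {
            "ssid": measurements.get("ssid"),
            "speed_mbps": measurements.get("speed_mbps"),
            "signal_strength_dbm": measurements.get("signal_strength_dbm"),
            "packet_loss_rate": measurements.get("packet_loss_rate"),
        },
        "alternative_networks": measurements.get("alternative_networks", []),
        "bandwidth_consuming_processes": measurements.get("busy_processes", []),
    }
    return json.dumps(payload)

# Example payload that the management application could send to the management server.
print(build_connectivity_payload("laptop-01", {
    "ssid": "office-wifi",
    "speed_mbps": 42.5,
    "signal_strength_dbm": -61,
    "packet_loss_rate": 0.02,
    "alternative_networks": ["guest-wifi", "5g-cellular"],
}))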
At stage308, user devices110can send location data to the recommendation engine132. For example, the management application124can collect GPS and/or GSM data and send the data to the management server130, where it can then be accessed by the recommendation engine132. At stage310, the recommendation engine132can retrieve information about the provided locations. The location information can include information about the user device's location that may impact the performance of the user device110during the upcoming online event. For example, the recommendation engine132can retrieve historical data about the stability of a network that a user device110is currently connected to, identify alternative networks that may be available, retrieve information about local weather conditions that can impact a user device's connection to the network, retrieve network congestion data for the user's location based on the time of the scheduled event, and so on. At stage312, the recommendation engine132can extract data points from the location information. Like the connectivity data described above, the data points can correspond to metrics used for recommending a user device110for the upcoming event. The data points can be extracted using rules defined for data fields in a template. As an example, one or more data fields can have rules for weather conditions, another data field can have a rule for inputting the start time of the event, another data field can have a rule for historical network congestion, and another data field can have rules for noise levels near the user. The recommendation engine132can create a data file for each user device110with measured values inputted into their corresponding fields in the template. Alternatively, the extracted data points for connectivity data and location data can be combined into the same data file. The recommendation engine132can also add information about the hardware of the user devices110to the data file. At stage314, the recommendation engine132can send the data points to the ML server150. For example, the recommendation engine132can cause the management server130to send a data file with the data points to the ML server150. The management server130can send the data file using any communication protocol for transferring data, such as HTTP or an API call. At stage316, the ML server150can input the data points into the ML algorithm152. In an example where the ML algorithm152executes on the management server130, the recommendation engine132can perform this stage. The ML algorithm152can be any teachable algorithm that uses data points related to a user's devices110and their environments to output recommendations for an upcoming online event. The ML server150can provide additional inputs that the ML algorithm152can use to make recommendations. For example, the ML server150can input hardware information about the user devices110and the type of event the recommendations are for. The event type can affect how the ML algorithm152processes the inputs. For example, if the event is a video call, then the ML algorithm152can more heavily weigh factors like network speed and stability, webcam and microphone quality, and anticipated background noise. On the other hand, if the event is an audio call, then the ML algorithm152can ignore webcam quality and lower any network speed thresholds due to audio requiring much less bandwidth than video. At stage318, the ML algorithm152can output recommendations.
The type of recommendations outputted can depend on various factors, such as the event type and what actions are available to the user. As an example, a user has an upcoming video call. The ML algorithm152can determine that, at the scheduled time of the video call, the network that the user's device110is connected to may likely cause problems for the user during the call. This can be based on any number of input factors, such as the network's current measured speed being too slow, historical data indicating high congestion at the scheduled start time that often causes instability and slow speeds, or a weather event that may cause the network to drop. If an alternative network is available that is faster and more reliable, then the ML algorithm152can recommend that the user connect to that network. If the user's primary device110, or the device110that the user is currently using, is unable to connect to the alternative network (e.g., the user is using a laptop that cannot connect to a cellular network), then the ML algorithm152can recommend that the user switch to a user device110that can. In one example, the ML algorithm152can recommend a user device110with better peripherals, such as a better microphone120, speakers122, or webcam126. For example, if the user is in a noisy location, the ML algorithm152can recommend a user device110with a microphone120that has good ambient noise filtering. Alternatively, if the recommended microphone120is an external device that can be used on multiple user devices110, then the ML algorithm152can recommend that the user connect the recommended microphone120to the user's preferred device110. At stage320, the ML server150can send the recommendations to the recommendation engine132. For example, the ML server150can make an API call to the management server130to send the recommendations. At stage322, the recommendation engine132can send the recommendations to the user device110. The recommendation can be sent as any kind of message that informs the user about the recommendation, such as a notification, email, or chat message. The recommendation engine132can send the recommendations to one or multiple user devices110. For example, the recommendation engine132can send the recommendations to only those devices110of the user that are located near the user. The user can then review the recommendations and choose what actions to take, if any, in preparation for the online event. FIG.4is a flowchart of an example method for providing recommendations for an improved user experience during an online meeting. At stage410, the recommendation engine132can receive performance data from the user device110while it is executing an online meeting. Performance data can include any data that may be relevant to the user's experience during an online event. For example, performance data can include data related to network speed, packet loss rate, background noise, video quality, and audio quality during an online event. The performance data can also include data about hardware on the user device110, such as the usage of the CPU112and memory114or the battery level. The performance data can be collected by the management application124and sent to the management server130where it can be accessed by the recommendation engine132. At stage420, the recommendation engine132can identify, based on the performance data, a user experience issue relating to the online meeting.
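In the simplest case, stages 420 and 430 could be realized as rule-based checks over the reported metrics before any ML model is consulted. The thresholds and field names in the sketch below are illustrative assumptions only; the paragraphs that follow describe the kinds of issues and causes such checks, or the ML model, would surface.

def diagnose(metrics: dict) -> tuple:
    # Return a (user experience issue, likely cause) pair from reported metrics.
    # All threshold values and metric names are assumptions for illustration.
    packet_loss = metrics.get("packet_loss_rate", 0.0)
    latency_ms = metrics.get("latency_ms", 0)
    cpu_usage = metrics.get("cpu_usage", 0.0)
    memory_usage = metrics.get("memory_usage", 0.0)
    noise_db = metrics.get("background_noise_db", 0)

    if packet_loss > 0.05 or latency_ms > 300:
        return ("poor audio/video quality", "network connectivity")
    if cpu_usage > 0.9 or memory_usage > 0.9:
        return ("poor audio/video quality", "device resource exhaustion")
    if noise_db > 70:
        return ("excessive background noise", "user location")
    return ("none", "none")

issue, cause = diagnose({"packet_loss_rate": 0.08, "cpu_usage": 0.35})
print(issue, "->", cause)     # prints: poor audio/video quality -> network connectivity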
For example, the recommendation engine132can identify poor incoming or outgoing audio quality, poor incoming or outgoing video quality, excessive background noise, or other similar issues. Some issues can be identified using performance data from attendee user devices160of other users participating in the online event. For example, attendee user devices160can report excessive background noise or poor audio or video quality being received from the user device110. Although this stage is described as being performed by the recommendation engine132, the management application124can be responsible for identifying user experience issues. For example, the management application124can collect and analyze performance data during an online event. If an issue is detected, the management application124can notify the recommendation engine132and send performance data relevant to the issue. In one example, the management application124can include an agent of the recommendation engine132that analyzes the performance data to identify issues. At stage430, the recommendation engine132can determine a cause of the user experience issue. The recommendation engine132can do this using the performance data. For example, if the measured network connectivity is within allowable levels, but the CPU112or memory114usage rates are high, then the recommendation engine132can determine that the CPU112or memory114is the cause, and vice versa. If excessive background noise is detected, the recommendation engine132can determine that the user's location is the cause. The recommendation engine132can also compare the performance data to historical data about the user to determine the cause. For example, the recommendation engine132can identify trends indicating that the user frequently has issues when using a certain user device110, a certain microphone112, a certain webcam126, or is connected to a certain network. The recommendation engine132can also consider software running on the user device110during the online event. For example, if the memory114or CPU122usage on the user device110is high during the meeting, the recommendation engine132can look at what applications and services running that may be causing the high usage. As an example, if the user frequently has a large number of tabs open in a web browser, this can require significant memory usage. At stage440, the recommendation engine132can determine a recommendation for fixing the user experience issue. The recommendation can be based on the determined cause. For example, if the network that the user device110is connecting to is having congestion or latency issues, and if an alternative network without those issues is available, then switching to the alternative network can be recommended. If the user device110is unable to connect to the alternative network (e.g., the user device110does not have the proper hardware to connect to the alternative network), then the recommendation engine132can recommend that the user switch to another user device110and connect to the alternative network. If the issue is caused by high CPU112or memory114usage rates, then the recommendation engine132can recommend another of the user's devices110to use that has lower usage rates. The recommendation engine132can determine multiple recommendations in some instances. For example, if the recommendation engine132detects loud background noise, the recommendation engine132can recommend that the user move to a quieter location. 
However, if another of the user device's110has a microphone120that better filters background noise, or if the user has access to a peripheral microphone120with a better noise filter, then the recommendation engine132can recommend switching to that device110or connecting the peripheral microphone120. If the user frequently experiences an issue using a particular user device110, the recommendation engine132can cross-reference the issue with a list of available software or hardware that may provide a fix. For example, if the user frequently has loud background noise during online meetings while using a particular user device110, the recommendation engine132can cross-reference the issue with the approved device list172to identify a device that may fix the issue. For example, the recommendation engine132can recommend a peripheral microphone or a pair of headphones that may block out background noise better than the user device110. The approved device list172can also include approved software that can be recommended. For example, the recommendation engine132can recommend that the user download an approved software application that can filter background noise from existing microphones120. The recommendation engine132can also identify potential issues and provide recommendations before the issue arises. As an example, the recommendation engine132can collect performance data from other managed user devices connected the same network as the user device110during the online event. If the recommendation engine132identifies other user devices having network connectivity or latency problems, then the recommendation engine132can proactively recommend that the user switch to an alternative network that is available. In another example, if the recommendation engine132determines that the battery118may run out before the scheduled end of the online event, then the recommendation engine132can recommend that the user close any unneeded applications to preserve battery life or switch to another user device110. In an example, the recommendation engine132can determine a recommendation using the ML model152. For example, the recommendation engine132can extract data points from the performance data and send the data points to the ML server150to use as inputs for the ML model152. The ML model152can be the same as or different from model described inFIGS.2and3above. The ML model152can use the data points to identify a cause of the issue and output a recommendation. The ML server150can then send the recommendation to the management server130. The examples use the terms algorithm and model interchangeably and are not meant to be limited to one or the other. At stage450, the recommendation engine132can send the recommendation to the user device110. For example, the recommendation engine132can send a notification to the user device110that the user is using for the online event. The notification can be any type of message that informs the user of the recommendation. For example, the notification can be a push notification, web notification, chat message, or SMS text. FIG.5is a sequence diagram of an example method for providing recommendations for an improved user experience during an upcoming online meeting. At stage502, the user device110can collect performance data. For example, the management application124on the user device110can collect data related to network speed, packet loss rate, background noise, video quality, audio quality, and the like during an online event. 
The management application124can also collect data about hardware and software on the user device110, such as the usage of the CPU112and memory114, the measured battery level, installed applications, and applications and services currently running. At stage504, the user device110can send the performance data to the recommendation engine132. For example, the management application124can send the performance data by making an API call to the management server130. The recommendation engine132can then access the performance data at the management server130. At stage506, the attendee user devices160can collect performance data. For example, the recommendation engine132can collect performance data from all user devices110,160attending the online event. This data can be compared during the meeting to determine whether an issue that arises may be specific to one particular device or may be a system issue. The management application124on the attendee user devices160can collect the performance data and, at stage508, send the performance data to the management server150. At stage510, the recommendation engine132can identify a user experience issue based on the performance data. Although this stage is described as being performed by the recommendation engine132, alternatively the management application124, or an agent of the recommendation engine132running on the user device110, can identify the issue. The management application124can then send relevant data to the management server130. A user experience issue can be any occurrence that may have a negative impact on the user's experience during an online event. Some examples of user experiences issues can include poor incoming or outgoing audio quality, poor incoming or outgoing video quality, excessive background noise, and other similar issues. Some issues can be identified using performance data from attendee user devices160of other users participating in the online event. For example, attendee user devices160can report excessive background noise or poor audio or video quality being received from the user device110. At stage512, the recommendation engine132can extract data points related to the issue. For example, the recommendation engine132can extract data points relating to the measured network speed, packet loss rate, audio quality, and video quality. The recommendation engine132can also extract data about the user device110, such as installed applications, running applications and services, hardware specs, and so on. In addition, the recommendation engine132can extract data points relating to environmental factors, such as the user's location, the time of day, weather conditions, and performance data from other enrolled devices that are near the user device110. At stage514, the recommendation engine132can send the data points to the ML server150. At stage516, the ML server150can input the data points into the ML algorithm152. If the ML algorithm152is retained on the management server130, then the recommendation engine132can perform this stage. The ML algorithm152can be any teachable algorithm that uses data points related to a user device110attending an online environment to output recommendations for resolving issues experienced during the event. At stage518, the ML algorithm152can output recommendations. The type of recommendations outputted can depend on various factors, such as the event type and what actions are available to the user. 
As an example, during a video call, if a user experiences poor video quality, then the ML algorithm152can use the data points to identify the likely cause and output a corresponding recommendation. For example, if the likely cause is network connectivity problems, such as high packet loss rate or high latency, then the ML algorithm152can output recommendations to address the network connectivity issue. If another network is available, the ML algorithm152can recommend switching to that network, or, if the user device110currently being used cannot connect to the network, the ML algorithm152can recommend switching to another device110that can. If another network is not available at the user's current location, but is available at a nearby location, then the ML algorithm152can recommend relocating closer to the nearby network. Similarly, if a router that the user device110is connected to is experiencing slow speeds due to high network traffic, then the ML algorithm152can recommend connecting to another router on the network, such as by moving to a different location within an office building.

For issues caused by high CPU112or memory114usage, the ML algorithm152can look at applications or services that the user can close. For example, if an application or service that uses high CPU112levels is running in the background of the user device110, and if that application is not relevant to the online event, then the ML algorithm152can recommend closing the application. If a large number of open tabs in a web browser is causing high memory114usage, then the ML algorithm152can recommend closing unused tabs. The ML algorithm152can also recommend switching to another user device110that would not experience the same CPU112and memory114issues.

For issues with excessive background noise, the ML algorithm152can identify recommendations for reducing the noise. For example, if the user device110has noise canceling software installed that is not being used, the ML algorithm152can recommend using that software. If another of the user's devices110has a microphone120known to filter background noise better, then the ML algorithm152can recommend switching to that user device110. The ML algorithm152can also recommend moving to a quieter area.

Recommendations can be based on more than just the current online event. For example, the recommendation engine132can provide historical data points from when the user experienced the same or a similar issue. The ML algorithm152can use the historical data points to better identify the cause of the issue and provide a better recommendation. For example, if a user frequently experiences issues that have the same root cause, then the ML algorithm152can recommend a solution that can address the root cause. This can include cross-referencing the issue with the approved device list172. For example, if a user frequently has audio issues, then the ML algorithm152can recommend that the user purchase or be provided with a new microphone or a headset on the approved device list172. The ML algorithm152can also make software recommendations, such as downloading a software application that filters background noise or a web browser extension that suspends unused browser tabs.

At stage520, the ML server150can send the recommendations to the recommendation engine132. For example, the ML server150can make an API call to the management server130to send the recommendations. At stage522, the recommendation engine132can send the recommendations to the user device110.
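The cause-to-recommendation mapping described above could be sketched, as a non-limiting illustration, with simple rules; the cause labels, context fields, and tab threshold below are hypothetical stand-ins for whatever a trained model or rule set would actually produce.

```python
def recommend_for_cause(cause, context):
    """Map a likely cause to candidate recommendations (illustrative rules only)."""
    recs = []
    if cause == "network":
        if context.get("alternative_network_available"):
            recs.append("Switch to the available alternative network.")
        else:
            recs.append("Move closer to a location with better coverage.")
    elif cause == "cpu_memory":
        for app in context.get("heavy_background_apps", []):
            recs.append(f"Close background application '{app}'.")
        if context.get("open_browser_tabs", 0) > 20:
            recs.append("Close unused browser tabs.")
    elif cause == "background_noise":
        if context.get("noise_canceling_software_installed"):
            recs.append("Enable the installed noise-canceling software.")
        recs.append("Switch to a device or microphone with better noise filtering.")
    return recs

# Example usage with illustrative context data.
print(recommend_for_cause("cpu_memory",
                          {"heavy_background_apps": ["video_encoder"],
                           "open_browser_tabs": 32}))
```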
The recommendation can be sent as any kind of message that informs the user about the recommendation, such as a notification, email, or chat message. The recommendation engine132can send the recommendations to one or multiple user devices110. For example, the recommendation engine132can send the recommendations to just the user device110that the user is using for the online event. The user can then review the recommendations and choose what actions to take, if any.

FIG.6is a flowchart of an example method for providing organizational hardware and software recommendations based on performance data from user devices. At stage610, the recommendation engine132can aggregate performance data from attendee user devices160, which can include any enrolled user device, including user devices110. As used herein, the term “performance data” is used to describe any data that may be relevant to the user's experience during an online event, such as audio or video calls. However, this use of the term “performance data” is merely exemplary. For example, performance data can include any data related to the performance of enrolled user devices. The performance data can also include data relating to the performance of other device types that may be used for work purposes, such as peripheral microphones, cameras, speakers, and the like.

The aggregated performance data can include data relating to the user device's110performance and capabilities. For example, the recommendation engine132can collect data about each user device's hardware specs, such as CPU speed, RAM speed and capacity, storage capacity, battery capacity, connectivity capabilities (e.g., SIM card, WI-FI, BLUETOOTH, NFC, etc.), waterproof rating, and dust rating. The recommendation engine132can also collect data about integrated and peripheral components, such as a microphone120, speakers122, webcam126, etc. In addition to hardware, the recommendation engine132can also collect data about software on each attendee user device160, such as the OS version, installed applications, and installed drivers. The hardware and software information can be collected internally, such as from the management application124or an admin, or externally, such as from the manufacturer's specification documentation. The recommendation engine132can also collect information from independent third-party rating agencies about the hardware and software components. This rating information can be cross-referenced with performance data to help identify the root causes of issues.

The performance data can be aggregated by the management application124. For example, the management application124can collect various types of data on its respective attendee user device160. The data collected can be current and/or historical. The management application124can extract and aggregate data classified as performance data based on classification rules. These classification rules can be established at the management server130and distributed to the attendee user devices160. The management application124can send the aggregated performance data to the management server130. When a peripheral device is connected to an attendee user device160, the management application124can collect information about the peripheral device, such as the manufacturer, model, hardware specs, and data about how the peripheral device performs.

At stage620, the recommendation engine132can identify a common issue experienced by attendee user devices160.
The examples herein describe issues relevant to online events, such as audio or video calls, including poor video or audio quality, network issues, microphone issues, webcam issues, and the like. However, these are merely examples and not meant to be limiting in any way. For example, issues experienced by attendee user devices160can include any issue that can negatively impact a user's ability to use the device in its intended way or to perform assigned responsibilities with the device. As some examples, such issues can include deteriorating battery life, poor durability, faulty hardware, and the like.

In an example, common issues can be identified by categorizing issues identified in performance data. For example, issues can be categorized as incoming or outgoing video quality, incoming or outgoing audio quality, background noise, and so on. Some issues for an attendee user device160can be identified using data from other attendee user devices160. For example, an attendee user device160may not be able to detect when its outgoing audio or video to other attendee user devices160in a meeting is having issues. To address this, the other attendee user devices160can report the poor audio or video quality to the recommendation engine132. The recommendation engine132can then categorize the issue.

In an example, the recommendation engine132can identify a common issue based on the frequency with which the issue occurs. For example, the recommendation engine132can track the rate at which an issue occurs over time. If the occurrence rate rises above a threshold rate, then the recommendation engine132can be configured to attempt to identify the cause and provide recommendations. The threshold rate can be a rate set by an admin. The frequency of the issue can be determined based on an individual attendee user device160or a group of devices160. For example, the recommendation engine132can determine that a particular attendee user device160is repeatedly having the same issue and attempt to find a recommendation for that device160. Alternatively, or in addition, the recommendation engine132can identify a group of attendee user devices160that are frequently experiencing the same issue.

At stage630, the recommendation engine132can determine a root cause of the issue. The recommendation engine132can first determine whether the issue is related to the attendee user device160or is a network issue. For example, the root cause of some issues, such as poor audio and video quality, can be found at either the attendee user device160or on a network. The recommendation engine132can first analyze the measured network connectivity data in the performance data to determine whether the cause is rooted in the network. As an example, the recommendation engine132can determine whether packet loss rate, speeds, network load, and other factors are likely causing the issue. If attendee user devices160are experiencing the issue while connected to an internal network of an organization, then the recommendation engine132can analyze data from the routers that the devices160are connecting to and other network devices. If the root cause is determined to be an internal network issue, then the recommendation engine132can notify an admin. If the root cause is not a network issue, then the recommendation engine132can attempt to identify a root cause at the user device level. For example, the recommendation engine132can compare data about the attendee user devices160experiencing the issue and identify similarities.
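One non-limiting way to express the threshold-based common-issue detection described above is sketched below; the 25% threshold and the event tuple format are illustrative assumptions, and the real threshold would be set by an admin.

```python
def find_common_issues(issue_events, device_count, threshold_rate=0.25):
    """Flag issue categories reported by more than a threshold share of devices.

    issue_events: iterable of (device_id, issue_category) tuples collected
    from performance data; threshold_rate is an illustrative default.
    """
    devices_per_issue = {}
    for device_id, category in issue_events:
        devices_per_issue.setdefault(category, set()).add(device_id)
    return [category for category, devices in devices_per_issue.items()
            if len(devices) / device_count > threshold_rate]

events = [("d1", "background_noise"), ("d2", "background_noise"), ("d3", "poor_video")]
print(find_common_issues(events, device_count=4))  # ['background_noise']
```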
For example, the recommendation engine132can determine whether the affected devices160are from a similar manufacturer or of a similar model. The recommendation engine132can also compare hardware specs of the affected devices160, such as the specs of their CPUs112, memory114, storage116, battery118, microphone120, speakers122, and webcam126. The recommendation engine132can also check software similarities, such as the OS version, installed applications, applications and services running when the issue occurs, and so on. For any aspect shared by a large percentage of attendee user devices160experiencing the issue, the recommendation engine132can attempt to determine whether the shared aspect is the root cause. The percentage of attendee user devices160required to trigger such an analysis can be a threshold percentage set by an admin. The recommendation engine132can analyze the performance data of the affected devices160related to the shared aspect. As an example, if the issue is a video quality issue and most of the affected attendee user devices160have the same webcam126, then the recommendation engine132can analyze the performance data related to the webcam126. The recommendation engine132can include data from external sources as well. For example, the recommendation engine132can retrieve data from the manufacturer and trusted third-party review sites to identify any known issues with the webcam126. Even if the performance data does not provide conclusive evidence that a particular aspect is causing an issue, the fact that a large percentage of the affected devices160share an aspect can be sufficient to provide a recommendation. For example, if the issue is an audio quality issue and most of the affected devices160use the same microphone120, even if the performance data does not clearly identify a problem with the microphone120, the recommendation engine132can still recommend a solution based on the microphone120.

At stage640, the recommendation engine132can provide a recommendation for the common issue to an administrator, an end user, or both, as described in more detail below with respect to this stage of the method. In one example, the recommendation engine132can identify a recommended hardware option using the approved device list172. A hardware option can include any computing device or component of a computing device, such as a user device, webcam, microphone, router, hard drive, and so on. As an example, if a microphone120is causing audio issues, then the recommendation engine132can recommend a peripheral microphone from the approved device list172or a new computer with a better microphone. If an issue is caused by degrading battery life in a laptop, then the recommendation engine132can recommend a replacement battery or a computer with a better battery life. For software issues, the recommendation engine132can recommend installing applications that can resolve an issue or uninstalling an application that may be causing an issue. The recommendation engine132can also determine when a driver is causing an issue and recommend rolling back to a previous driver. For example, if a new driver for a battery is causing a laptop or phone battery to drain more quickly or permanently shorten the battery life, then the recommendation engine132can recommend rolling back the driver update to a previous version. In an example, the recommendation engine132can identify issues and recommendations using one or more ML models152.
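The shared-aspect analysis above can be illustrated with a small counting sketch; the attribute names and the 60% trigger threshold are hypothetical examples of the admin-set percentage discussed in the text.

```python
def shared_aspects(affected_devices, threshold_pct=60.0):
    """Find aspects (e.g., webcam model, OS version) shared by most affected devices.

    affected_devices: list of dicts of device attributes; attribute names and
    the threshold are illustrative assumptions.
    """
    counts = {}
    for device in affected_devices:
        for aspect, value in device.items():
            counts[(aspect, value)] = counts.get((aspect, value), 0) + 1
    total = len(affected_devices)
    return [(aspect, value) for (aspect, value), n in counts.items()
            if 100.0 * n / total >= threshold_pct]

devices = [{"webcam": "CamX", "os": "11.2"},
           {"webcam": "CamX", "os": "11.3"},
           {"webcam": "CamX", "os": "11.2"}]
# Any aspect shared widely enough becomes a root-cause candidate for deeper analysis.
print(shared_aspects(devices))
```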
For example, the recommendation engine132can identify a hardware component associated with a large percentage of user devices affected by the issue. The recommendation engine132can extract data points from the performance data and other data sources related to the hardware component, and then send the data to the ML server150. The ML model152can create a profile of the hardware component in relation to the issue. The profile can include any data that may indicate whether the hardware component is the cause. For example, the ML model152can compare the performance of the hardware component to expected performance metrics and output a profile that indicates where the hardware component is underperforming or where the hardware component's capabilities are insufficient for the required tasks.

An admin can provide feedback on the recommendations, and that feedback can be used to retrain the ML model152. For example, if the ML algorithm152incorrectly identifies the cause of the issue, then the admin can provide data used to identify the correct cause. The ML model152can be retrained with the feedback so that it can better identify the cause of similar issues in the future.

The recommendation engine132can also use ML models152to recommend devices for users based on user behavior and device usage. For example, a first ML model can be trained to learn user behavior trends based on various factors, such as a user's role, the basic device requirements for a user's work, the general location where a user works and travel patterns, and other similar data. A second ML model can be trained to learn how a device is used by a user or users with similar roles. The second ML model can be trained using data related to the average lifespan of a device before replacement, software capabilities (maximum OS upgrade, support for certain apps, and so on), and general use cases of a device (e.g., the types of user to whom a particular device is assigned). A third ML model can be trained to map a device to a user. The third model can be trained using outputs from the first and second ML models. As an example, when a new user enrolls, information about the user and the user's role can be inputted into the first model, and the first model can output an aggregate user profile. The aggregate user profile can be used as an input to the third ML model to recommend a user device for the user. In another example, when an existing user is to be assigned a new device based on job requirements, or if the user upgrades to a new device, data about role requirements and usage history on prior devices can be inputted into the second ML model. The second ML model can output an aggregate user profile, which can then be inputted into the third ML model to recommend a device for the user. In another example, if a new device is obtained by an organization, the hardware specs of the device can be inputted into the first ML model to generate a device profile. The device profile can be inputted into the third ML model to identify users or user groups to whom the new device is recommended.

The recommendation engine132can send recommendations to the affected users or an admin, depending on the example. For example, recommendations that do not require admin approval can be sent directly to the affected users, while recommendations that require admin approval can be sent to the admin.
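The three-stage chaining of models described above could be sketched, purely as a non-limiting illustration, with stand-in models; the feature names, the averaging "predict" behavior, the distance-based matching, and the catalog entries are all assumptions introduced only to show how the outputs of the first two stages feed the third.

```python
class ProfileModel:
    """Stand-in for a trained model; predict() returns a simple profile vector."""
    def __init__(self, feature_names):
        self.feature_names = feature_names

    def predict(self, record):
        return [float(record.get(name, 0.0)) for name in self.feature_names]

# Hypothetical first and second stages: user-behavior and device-usage profiles.
user_model = ProfileModel(["travel_days_per_month", "video_calls_per_week"])
usage_model = ProfileModel(["avg_device_lifespan_years", "max_os_version"])

def recommend_device(user_record, usage_record, device_catalog):
    """Third stage: map the combined profiles to the closest catalog device."""
    profile = user_model.predict(user_record) + usage_model.predict(usage_record)
    def distance(entry):
        return sum((a - b) ** 2 for a, b in zip(profile, entry["profile"]))
    return min(device_catalog, key=distance)["name"]

catalog = [{"name": "Lightweight laptop", "profile": [12, 20, 3, 14]},
           {"name": "Desktop workstation", "profile": [0, 5, 5, 14]}]
print(recommend_device({"travel_days_per_month": 10, "video_calls_per_week": 18},
                       {"avg_device_lifespan_years": 3, "max_os_version": 14},
                       catalog))
```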
As an example, if the recommendation engine132recommends installing pre-approved software or that the user purchase a pre-approved peripheral device, such as a microphone, webcam, or headphones, then the recommendation engine132can send the recommendation directly to the user. For example, the recommendation engine132can send a push notification, web notification, chat message, email, or SMS text. On the other hand, if the recommendation engine132recommends a new smart phone or computer, then the recommendation engine132can notify the admin. The admin can then review the recommendation to determine whether to issue new devices to the affected users.

In some examples, some recommendations can trigger automated purchasing from a third-party system. As an example, the approved device list172can identify a vendor for certain devices and include information for purchasing the device. The recommendation engine132can be configured to purchase a recommended device in certain circumstances, such as when a set of rules or conditions are satisfied. As an example, if more than a threshold percentage of users are regularly experiencing an issue with a particular webcam, and if the recommendation engine132recommends replacing the webcam with a new webcam from the approved device list172, then the recommendation engine132can trigger a purchase of the new webcam for the affected users. Alternatively, the recommendation engine132can notify another server in the UEM system that purchases the devices. The recommendation engine132can notify an admin after making the purchase so that the admin can manually review the order and make any changes, if needed. The rules and conditions for automatic purchasing can be set by an admin.

FIG.7is a sequence diagram of an example method for providing organizational hardware and software recommendations based on performance data from user devices during online meetings. At stage702, the user devices110can collect performance data. For example, the management application124on the attendee user devices160can collect data related to network speed, packet loss rate, background noise, incoming and outgoing video quality, incoming and outgoing audio quality, and so on. The management application124can also collect data about hardware and software on the attendee user devices160, such as the usage of the CPU112, memory114, storage116, the measured battery level, installed applications, and applications and services currently running.

At stage704, the user devices110can send the performance data to the recommendation engine132. For example, the management application124can send the performance data by making an API call to the management server130. The recommendation engine132can then access the performance data at the management server130.

At stage706, the recommendation engine132can extract data points from the performance data. For example, the recommendation engine132can extract data points relating to the measured network speed, packet loss rate, audio quality, and video quality. The recommendation engine132can also extract data about the attendee user devices160, such as installed applications, running applications and services, hardware specs, and so on. In addition, the recommendation engine132can extract data points relating to environmental factors, such as the user's location, the time of day, weather conditions, and performance data from other enrolled devices that are near the user device110.
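The rule-gated automated purchasing described above is sketched below as a non-limiting illustration; the 30% threshold, the approved-list membership check, and the order-handoff function are hypothetical stand-ins for admin-configured rules and the actual purchasing integration.

```python
def should_auto_purchase(affected_pct, recommended_item, approved_list,
                         threshold_pct=30.0):
    """Trigger an automated order only when admin-set conditions are met.

    The threshold and the approved-list lookup are illustrative; real rules
    would be configured by an administrator.
    """
    return affected_pct >= threshold_pct and recommended_item in approved_list

def place_order(item, user_ids, notify_admin):
    """Placeholder for handing an order to a purchasing system, then notifying an admin."""
    order = {"item": item, "quantity": len(user_ids)}
    notify_admin(f"Automated order placed for review: {order}")
    return order

if should_auto_purchase(42.0, "Approved webcam", {"Approved webcam", "Approved headset"}):
    place_order("Approved webcam", ["u1", "u2", "u3"], notify_admin=print)
```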
At stage708, the recommendation engine132can send the data points to the ML server150. At stage710, the ML server150can input the data points into the ML algorithm152. The ML algorithm152, using the data points, can learn issues that are occurring for users. For example, the ML algorithm152can learn what issues are occurring, how often the issues occur, what attendee user devices160are experiencing the issues, and what the affected devices160have in common. Upon discovering an issue, the ML algorithm152can attempt to learn a root cause. For example, the ML algorithm152can use network connectivity data inputs to determine whether the root cause is network related. For example, the ML algorithm152can determine that a wireless router is not powerful enough to handle heavy traffic that regularly occurs at a certain time of day, determine that a wireless router does not have sufficient range to provide a reliable network connection to its designated area, or identify other similar issues. If the issue is not network related, then the ML algorithm152can attempt to identify a root cause at the user device level. This can include analyzing data related to the learned similarities of affected devices160. For example, if the issue is related to video quality, the ML algorithm152can identify shared aspects or components of the affected attendee user devices160that can affect video quality. For example, the ML algorithm152can determine that most of the affected devices160are of the same make and/or model, use the same camera, have the same network card, have the same OS version, use the same camera driver, have the same CPU or memory specs, and so on. The ML algorithm152can then analyze performance and other data related to those shared aspects to determine whether any of them may be the root cause. For example, the ML algorithm152can use data points from manufacturer documentation and third-party review sites to identify known issues with a hardware or software component. In another example, the ML algorithm152can determine that a certain OS version or driver causes problems with certain microphones or webcams.

Different ML algorithms can be used to learn issue trends and determine the cause of an issue. For example, a first ML algorithm can learn the issue trends occurring and output an issue profile. The issue profile can include information that identifies what the issue is, how frequently the issue occurs, and information about the attendee user devices160experiencing issues. The issue profile can be used as input for a second ML algorithm that determines the cause. Information from third parties can be used as inputs in combination with the issue profile in determining the cause. The second ML algorithm can output a profile of the cause.

At stage712, the ML model152can compare the ML output to the approved device list172. In one example, the database170can include a table that maps issues to hardware and software in the approved device list172. Based on the issue, the ML model152can identify possible hardware or software to recommend. For example, if a microphone120is causing audio issues, then the recommendation engine132can recommend a peripheral microphone from the approved device list172or a new computer with a better microphone. If an issue is caused by degrading battery life in a laptop, then the recommendation engine132can recommend a replacement battery or a computer with a better battery life.
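As a non-limiting illustration of the first network-versus-device decision described above, the following heuristic sketch uses hypothetical loss and latency thresholds; a trained model would learn these boundaries from the connectivity data rather than hard-code them.

```python
def is_network_root_cause(samples, loss_threshold=5.0, latency_threshold_ms=150.0):
    """Heuristic check of whether connectivity data explains the issue.

    samples: list of dicts with packet_loss_pct and latency_ms readings taken
    while the issue occurred; thresholds are illustrative defaults.
    """
    if not samples:
        return False
    avg_loss = sum(s["packet_loss_pct"] for s in samples) / len(samples)
    avg_latency = sum(s["latency_ms"] for s in samples) / len(samples)
    return avg_loss >= loss_threshold or avg_latency >= latency_threshold_ms

readings = [{"packet_loss_pct": 7.2, "latency_ms": 120},
            {"packet_loss_pct": 6.8, "latency_ms": 180}]
# True: investigate routers and network devices before device-level causes.
print(is_network_root_cause(readings))
```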
For software issues, the recommendation engine132can recommend installing applications that can resolve an issue or uninstalling an application that may be causing an issue. At stage714, the ML model152can output a recommendation. The recommendation can be a single option or include multiple options. For example, for background audio issues, the recommendations can include identifying a pair of headphones with a noise cancelling feature, a better microphone, and noise cancelling software. This gives users options to resolve the issue according to their personal needs. The ML model that outputs the recommendation can be different from any ML models that learn issue tendencies and identify issue causes. For example, continuing the previous example of two ML models (one for learning issue tendencies and one for identifying causes), a third ML model can use output from the second ML model as input in determining a recommendation. The third ML model can also incorporate data points from external sources, such as manufacturer documentation and third-party review sites.

At stage716, the ML server150can send the recommendations to the recommendation engine132. For example, the ML server150can make an API call to the management server130to send the recommendations. At stage718, the recommendation engine132can send the recommendations to the affected attendee user devices160. The recommendation can be sent as any kind of message that informs the user about the recommendation, such as a notification, email, or chat message. For some recommendations, the recommendation engine132can notify an admin. This can be done for recommendations that require admin approval. For example, recommendations for replacing smartphones or computers can be sent to admins for review. The admins can then carry out the device replacement if needed.

Other examples of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the examples disclosed herein. Though some of the described methods have been presented as a series of steps, it should be appreciated that one or more steps can occur simultaneously, in an overlapping fashion, or in a different order. The order of steps presented is only illustrative of the possibilities and those steps can be executed or performed in any suitable fashion. Moreover, the various features of the examples described here are not mutually exclusive. Rather, any feature of any example described here can be incorporated into any other suitable example. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
11943264 | The figures depict various embodiments of the present invention for purposes of illustration only. One skilled in the art will readily recognize from the following discussion that alternative embodiments of the structures and methods illustrated herein may be employed without departing from the principles of the invention described herein. DETAILED DESCRIPTION FIG.1shows an embodiment of an environment for content item synchronization including communication of interaction information.FIG.1includes devices100A,100B,100C (referred to generally as device100), content management system110, and network120. Three devices are shown only for purpose of illustration; in practice any number of devices may be present in the environment. Similarly, other modules or components described and illustrated throughout may include single or multiple instances as appropriate to the needs of the implementer and without loss of generality. Device100may be any suitable computing device for locally storing and viewing content items and synchronizing the content items with content management system110. Examples of devices include desktop and laptop computers, hand-held mobile devices, tablet computers, and other computing devices. The operation of device100in various embodiments is further described below. Each device100communicates with content management system110through network120. Network120is any suitable network and may include local networks, corporate networks, global networks, and any combination of these. In typical configurations, devices100communicate via a wired or wireless communication network to a local network service provider, and communicate with content management system110through the Internet. In certain configurations, devices100A,100B, and100C communicate directly with one another without network120as indicated inFIG.1by dashed lines. For example, devices100may communicate via a wired or wireless connection, such as wirelessly via a Bluetooth connection or a wired connection via a Universal Serial Bus (USB). Content management system110provides content sharing and synchronization services for users of devices100. These services allow users to share content with users of other devices100. In addition to content sharing, content management system110updates shared content responsive to changes and enables synchronized changes to content items across multiple devices100. A user may synchronize content across multiple devices100owned by the user and associated with the user's account, and the user may share content that is synchronized with devices associated with other users' accounts. Content stored by content management system110can include any type of data, such as digital data, documents, media (e.g., images, photos, videos, audio, streaming content), data files and databases, source and object code, recordings, and any other type of data or file, collectively referred to here as “content items.” Content items stored by content management system110may also be used to organize other content items, such as folders, tables, collections, albums, playlists, or in other database structures (e.g., object oriented, key/value etc.). In practice, various devices100may be synchronizing different groups of content items, based on user associations, permissions, content sharing permissions, and so forth. The operation of content management system110in various embodiments is further described below with respect toFIG.4. 
FIG.2shows various modules and components of device100in accordance with one embodiment. Device100includes display220for providing information to the user, and in certain client devices100includes a touchscreen. Device100also includes network interface225for communicating with content management system110via network120. Device100also includes a user input module260, which receives user inputs from various user input devices, such as a keyboard, a mouse, a trackpad, or other device. Other conventional components of a client device100that are not material are not shown, for example one or more computer processors, local fixed memory (RAM and ROM), as well as optionally removable memory (e.g., SD-card), power sources, and audio-video outputs. Software modules include operating system245and one or more native applications255. Native applications255vary based on the client device, and may include various applications for creating, viewing, consuming, and modifying content stored on content management system110, such as word processors, spreadsheets, database management systems, code editors, image and video editors, e-book readers, audio and video players, and the like. Operating system245on each device provides a local file management system and executes the various software modules such as content management system client application200and native application255. A contact directory240stores information about the user's contacts, such as name, picture, telephone numbers, company, email addresses, physical address, website URLs, and the like. Further operation of native applications255, operating system245, and content management system client application200are described below. In certain embodiments, device100includes additional components such as camera230and location module235. Camera230may be used to capture images or video for upload to the online content management system110. Location module235determines the location of device100, using for example a global positioning satellite signal, cellular tower triangulation, or other methods. Location module235may be used by client application200to obtain location data and add the location data to metadata about a content item, such as an image captured by camera230. Client device100accesses content management system110in a variety of ways. Client application200can be a dedicated application or module that provides access to the services of content management system110, providing both user access to shared files through a user interface, as well as programmatic access for other applications. Client device100may also access content management system110through web browser250. As an alternative, client application200may integrate access to content management system110with the local file management system provided by operating system245. When access to content management system110is integrated in the local file management system, a file organization scheme maintained at content management system110is represented as a local file structure by operating system245in conjunction with client application200. Client application200may take various forms, such as a stand-alone application, an application plug-in, or a browser extension. Client application200includes user interface module202, interaction management module204, content access module206, local content data store208, monitored presence data store210, and collaboration module207. 
In addition to handling other device tasks, operating system245displays information from applications executing on device100to a user via display220, which may include one or more user interface elements. Such user interface elements may vary based on the particular device and configuration. User interface elements include windows on a desktop interface as well as interface elements on a mobile device. Examples of operating systems that employ user interface elements such as windows are Microsoft Windows 10 by Microsoft Corporation of Redmond, Washington, and OS X by Apple Inc. of Cupertino, California. In addition, operating system245manages control of multiple native applications255, which may be executing simultaneously. The user interface elements may be layered, such that one layer overlaps another layer. In some operating systems and configurations, only a single user interface element is displayed at a given time. One user interface element is typically the active user interface element, meaning that it is the user interface element to which the operating system245routes user inputs, such as keyboard entry, cursor movement, touch sensors, touch gestures, and so forth. As understood by those of skill in the art, a window or other user interface element that is active at a particular time is often said to have focus. Users may select another user interface element to change the focus from one user interface element to another, and in some instances operating system245may change the focus without user input. Typically, the user interface elements, e.g., windows, associated with native applications255are managed by operating system245, which maintains an association between process identifiers of executing native applications255and user interface element identifiers of the user interface elements. For example, a particular application may be associated with process id “2587”, which may be managing multiple user interface elements, with user interface element identifiers 4, 8, and 10. Each user interface element identifier may be separately associated with a particular content item opened by that native application255, and multiple user interface element identifiers and process identifiers may be associated with the same content item. Operating system245also handles and recognizes various events. Such events include a request from native applications255to close or open a content item, a request from native applications255to close a window or other user interface element, and requests to change a user interface element focus, among many others. As described below, these events may be used by interaction management module204to recognize a change in presence related to a content item. Client application200identifies interactions that take place with respect to a content item, such as when a user opens, closes, edits, or saves the content item on the device. These interactions are identified by client application200to generate interaction information describing the interaction with the content item. Interaction information includes interactions with client application200and interactions with native application255. Interaction information determined from actions performed within native applications255is termed presence information. An application, such as client application200that determines interaction information and presence information is termed a presence application. 
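The association between process identifiers and user interface element identifiers described above can be illustrated with a small bookkeeping sketch; it reuses the process id 2587 and element identifiers 4, 8, and 10 from the example, while the registry structure and handler names are hypothetical.

```python
# Hypothetical bookkeeping mirroring the operating system's association between
# process identifiers and user interface element identifiers.
window_registry = {
    2587: {"ui_elements": {4: "report.docx", 8: "budget.xlsx", 10: "notes.txt"}},
}

focused = {"process_id": 2587, "ui_element_id": 4}

def content_item_with_focus():
    """Resolve the content item behind the currently focused UI element, if any."""
    proc = window_registry.get(focused["process_id"], {})
    return proc.get("ui_elements", {}).get(focused["ui_element_id"])

def on_focus_change(process_id, ui_element_id):
    """Event handler a presence application might register for focus changes."""
    focused.update(process_id=process_id, ui_element_id=ui_element_id)
    item = content_item_with_focus()
    if item is not None:
        print(f"Presence update: user is now viewing {item}")

on_focus_change(2587, 8)  # Presence update: user is now viewing budget.xlsx
```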
Additional types of interaction information (in addition to presence information) include notes, messages, and notification requests related to the content item, which may be received by client application200. Messages may include chat messages to other devices, messages indicating a user's intent to interact with (e.g., to edit) a content item, and messages indicating a user's intent to begin a collaboration session. Notification requests may include a request to be notified when another user's interaction information changes. Interaction information also includes metadata modifications, such as versioning notes, event timestamps, or requests for further information stored at content management system110about the content item, such as a request to view versioning information or prior content item versions. Further examples of interaction information are described below. Client application200may receive chat or intent information from a user. In various embodiments, device100identifies a user's presence in a content item (i.e., that the user has the content item open or is editing the content item using the native application255) through interaction with operating system245as described further below. Interaction information is transmitted to other devices100that are synchronized with respect to the content item.

Device100receives content items from content management system110and permits users to view, modify, and interact with the content items using various native applications255stored on the device100. For example, device100may include a photo editing application that manipulates image content items, a word processing application that permits modification of text content items, or a computer-aided design (CAD) application that permits modification of drawing content items. As described further below, interaction information is determined by device100via user interactions with applications and the interaction information is sent to other devices100. In addition, when device100receives interaction information relating to other devices100, the device100displays that interaction information. In one embodiment, an application detecting interaction information relating to content items is distinct from the applications viewing or manipulating the content items. For example, the client application detecting interaction information is distinct from a photo editing application manipulating or displaying the image content items. In various embodiments, the application detecting interaction information is also responsible for synchronizing the content items with content management system110. Since the application detecting presence information may be distinct from the applications about which presence is detected, presence may be monitored for many applications and content items at once and without requiring integration of the presence monitoring into each type of content item viewer. That is, no special presence monitoring add-on or application modification is required, for example, for each of a photo editing application, a word processing application, and a playlist editing application.

FIGS.3A and3Bshow an example of a user interface element focus change on desktop300shown on display220of device100. InFIG.3A, windows310A,310B, and310C are displayed on desktop300and viewable by the user. In this embodiment, desktop300is a general container or frame maintained by operating system245that encloses user interface elements on display220.
InFIGS.3A and3B, the user interface elements are windows310in a desktop computing environment. In other configurations, such as a mobile device, or other display with limited area, only a single user interface element might be displayed at a time. As shown byFIG.3A, window310A is the active window, shown as the front window, partially obscuring windows310B and310C. InFIG.3B, focus changed to window310B, which is now the front window and the active window. The focus may change due to user interaction with window310B, or due to a process requesting that its window become the active window. In certain operating systems and configurations, a user interface element has focus (e.g., is receiving user input) without being the front user interface element.

Referring again toFIG.2, to open a content item, native application255requests the content item from operating system245and receives a handle to the content item from operating system245for the content item. In some cases, native application255does not maintain the handle, and may load the content item data into memory and subsequently close the content item handle even if native application255continues to use data from the content item or if the user enters edits to the content item. Accordingly, open content item handles are often not a reliable way to determine whether an application is interacting with a particular content item. As such, in certain embodiments, further behaviors exhibited by the native applications255are used to determine whether an application is editing a content item.

Native applications255also perform various behaviors when a user modifies a content item, and prior to the user saving the content item. These behaviors vary based on the application and operating system245. For example, some native applications255create a temporary content item with a filename that differs from the open content item, for example leading the temporary content item's filename with a tilde or other recognizable mark. In other examples, the native application255changes the title of a user interface element associated with the content item, which may or may not be directly viewable by a user. In still further examples, native application255sets a flag indicating the content item has been modified. Native application255may also provide information regarding content item modification in response to a request from another application or the operating system. For example, the Accessibility API in the OS X operating system as described above provides information regarding content items associated with a user interface element. Since an open content item handle may not reliably determine whether a content item is being edited by a native application255, these behaviors are used by presence management module204to determine presence relating to editing or modifying a content item as described further below.

Native applications255may typically be executed on device100independently from one another, and may permit communication between the applications and other applications or processes executing on device100. Native applications255typically provide information to processes using application programming interfaces (APIs), which permit applications to request information from the executing process.
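The editing-detection behaviors described above can be combined into a single heuristic, sketched below as a non-limiting illustration; the tilde-prefixed temporary-file convention, the title marker, and the modified flag are examples only, and real applications vary.

```python
import os

def appears_edited(content_path, window_title, modified_flag):
    """Combine several application behaviors into a single editing heuristic."""
    directory, name = os.path.split(content_path)
    # Some applications write a sibling temporary file whose name leads with a tilde.
    temp_sibling = os.path.join(directory, "~" + name)
    has_temp_file = os.path.exists(temp_sibling)
    # Some applications mark the window title when there are unsaved changes.
    title_marked = window_title.strip().endswith("*") or "Edited" in window_title
    return modified_flag or has_temp_file or title_marked

# Example usage with an illustrative path and title.
print(appears_edited("/sync/report.docx", "report.docx - Edited", modified_flag=False))
```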
For example, native applications255may present an API permitting a request for user interface elements controlled by the application, or to indicate the title of a user interface element, or to request a path in a file system associated with a content item opened by the native application255. Similarly, operating system245may provide similar APIs to requesting processes, such as requesting information about a process that controls a particular user interface element.

Client application200manages access to content management system110. Client application200includes user interface module202that generates an interface to the content accessed by client application200, as variously illustrated herein, and is one means for performing this function. The generated interface is provided to the user by display220. Client application200may store content accessed from a content storage at content management system110in local content data store208. While represented here as within client application200, local content data store208may be stored with other data for client device100in non-volatile storage. When local content data store208is stored this way, the content is available to the user and other applications or modules, such as native application255, when client application200is not in communication with content management system110. Content access module206manages updates to local content data store208and uses synchronization logic to communicate with content management system110to synchronize content modified by client device100with content maintained on content management system110. One example of such synchronization is provided in U.S. Pat. No. 9,053,165, filed Sep. 27, 2013, which is hereby incorporated by reference in its entirety. Client application200may take various forms, such as a stand-alone application, an application plug-in, or a browser extension. Content management system110may provide additional data for synchronizing content items, such as information designating that a content item has been deleted, or that the device100may be viewing or editing an outdated version of a content item.

Interaction management module204obtains and manages interaction information relating to a user's synchronized content items. As described above, the interaction management module204is typically a distinct module from the native applications255being monitored by interaction management module204for presence information and executes as a separate process. Interaction management module204detects interaction events occurring on device100for synchronized content items. Interaction management module204may detect interaction events by monitoring presence events, or by monitoring received user inputs such as comments and messages. Interaction events indicate that a user has interacted with a content item. Interactions include viewing the content item, editing the content item, saving the content item, commenting on the content item, sending a message related to the content item, and collaborating in the content item. Interaction management module204sends notifications about interaction events and other interaction information to content management system110. In one embodiment, interaction management module204instructs user interface module202to prompt a user for interaction information.
For example, interaction management module204may detect a save of a content item on a device100and instruct user interface module202to prompt the user of the device to comment on changes to the content item associated with the save action. Interaction management module204may store this information in monitored presence data store210or send it to content management system110. In one embodiment, interaction management module204receives and maintains prompt conditions specifying when users should be prompted for change comments. For example, prompt conditions may specify that a user should only be prompted for change comments if a change is significant (e.g., at least a minimum proportion of the data has been changed), if a change is the first change for a content item, or if a change is the first change by a particular user. Prompt conditions may be specified by users or other implementers of content item synchronization. Interaction management module204determines whether prompt conditions have been met by analyzing changes to content items. Change comments are discussed in more detail below with respect toFIGS.9and10A-B.

Interaction management module204also receives interaction information, including user notification queues, relating to other users' interactions with content items from content management system110for display to the user. In one embodiment, the interaction management module204displays interaction information from user notification queues by attaching an interaction indicator to a user interface element associated with a synchronized content item. In various embodiments, the interaction indicator and associated user interface elements display real-time interaction information, such as presence information, and interaction information relating to past activities. This allows users to view the content item and associated interaction information simultaneously, which provides a more holistic view of a content item, users associated with the content item, and changes made to the content item. In one embodiment, the interaction management module204provides received notification queue content and other interaction information for display in chronological order so that users may view a sequence of interactions with the content item. Displayed interaction information may include metadata such as timestamps, user identifiers, user photos, and other data. In another embodiment, the interaction module displays interaction information as it is received in a notification channel style. When a new piece of interaction information is received via the user interface, from another device100, or from the content management system110, it is added to the channel, and users may be notified by the interaction indicator, another user interface element, or by some other method. Displaying interaction information, including notifications, is discussed in more detail below with respect toFIGS.6A-6D.

In one embodiment, the interaction management module204detects when a user has been provided with a notification about interaction information or has viewed interaction information displayed in the user interface. The interaction management module204may send this information to the content management system110and/or other devices100associated with the user and the content management system110so that the interaction information that has been viewed by a user is tracked across multiple devices100.
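The prompt-condition evaluation described above is sketched below as a non-limiting illustration; the 10% "significant change" proportion and the counter inputs are hypothetical stand-ins for conditions that would be configured by users or implementers.

```python
def should_prompt_for_comment(change_size, total_size, prior_changes_by_anyone,
                              prior_changes_by_user, min_proportion=0.10):
    """Prompt when a change is significant, is the first change to the item,
    or is the first change by this particular user (thresholds are illustrative)."""
    significant = total_size > 0 and (change_size / total_size) >= min_proportion
    first_change = prior_changes_by_anyone == 0
    first_by_user = prior_changes_by_user == 0
    return significant or first_change or first_by_user

# Example usage: a small change, but the first one by this user, so prompt anyway.
print(should_prompt_for_comment(change_size=250, total_size=10000,
                                prior_changes_by_anyone=4, prior_changes_by_user=0))
```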
Using this information, the interaction management module204or the content management system110can determine whether a user has viewed interaction information, or been provided with a notification about the interaction information and avoid duplicating notifications so that users are not notified about the same interactions on several devices. To determine many types of interaction information, interaction management module204receives interaction information through user interface elements, as further described below. To determine presence information related to a synchronized content item, interaction management module204monitors user interface elements associated with native applications255. Interaction management module204can monitor all user interface elements, or alternatively monitor just certain user interface elements after the user interface element is associated with a content item. Monitored presence data store210includes information maintained by interaction management module204to indicate that particular user interface elements are monitored to determine actions relating to a synchronized content item. While shown here as a part of client application200, in various implementations content access module206and interaction management module204are separated into distinct modules for performing their respective functions. Similarly, various modules and data stores are described separately throughout this disclosure for convenience and in various implementations may be combined or further separated into separate components as desired. FIG.4shows components of content management system110ofFIG.1, according to one embodiment. When using content management system110, to facilitate the various content management services a user can create an account with content management system110. In one embodiment, the user's account information is maintained in user account database418. User account database418can store profile information for registered users. In some cases, the only personal information in the user profile is a username and/or email address. However, content management system110can also be configured to accept additional user information, such as password recovery information, demographics information, payment information, and other details of interest to the implementer. Each user is associated with an identifier, such as a userID or a user name. User account database418can also include account management information, such as account type, e.g., free or paid; usage information for each user, e.g., file edit history; maximum storage space authorized; storage space used; content storage locations; security settings; personal configuration settings; content sharing data; etc. Account management module404can be configured to update and/or obtain user account details in user account database418. Account management module404can be configured to interact with any number of other modules in content management system110. An account can be associated with multiple devices100, and content items can be stored in association with an account. The stored content can also include folders of various types with different behaviors, or other content item grouping methods. For example, an account can include a public folder that is accessible to any user. The public folder can be assigned a web-accessible address. A link to the web-accessible address can be used to access the contents of the public folder. 
In another example, an account can include a photo folder that is intended for photo content items and that provides specific attributes and actions tailored for photos; an audio folder that provides the ability to play back audio file content items and perform other audio related actions; or other special purpose folders. An account can also include shared folders or group folders that are linked with and available to multiple user accounts. The permissions for multiple users may be different for a shared folder. In one embodiment, the account is a namespace that may be associated with several users, each of whom may be associated with permissions to interact with the namespace. In one embodiment, the content is stored in content storage420. Content storage420can be a storage device, multiple storage devices, or a server. Alternatively, content storage420can be a cloud storage provider or network storage accessible via one or more communications networks. In one configuration, content management system110stores the content items in the same organizational structure as they appear on the device. However, content management system110can store the content items in its own order, arrangement, or hierarchy. Content storage420can also store metadata describing content items, content item types, and the relationship of content items to various accounts, folders, or groups. The metadata for a content item can be stored as part of the content item or can be stored separately. In one configuration, each content item stored in content storage420can be assigned a system-wide unique identifier. Content storage420can decrease the amount of storage space required by identifying duplicate content items or duplicate segments of content items. In one embodiment, for example, a content item may be shared among different users by including identifiers of the users within ownership metadata of the content item (e.g., an ownership list), while storing only a single copy of the content item and using pointers or other mechanisms to link duplicates with the single copy. Similarly, content storage420stores content items using a version control mechanism that tracks changes to content items, different versions of content items (such as a diverging version tree), and a change history. The change history includes a set of changes that, when applied to the original content item version, produces the changed content item version. In one embodiment, content management system110automatically synchronizes content items from one or more devices using synchronization module412. The synchronization is platform-agnostic. That is, the content items are synchronized across multiple devices100of varying type, capabilities, operating systems, etc. For example, client application200synchronizes, via synchronization module412at content management system110, content in the file system of device100with the content items in an associated user account on system110. Client application200synchronizes any changes to content items in a designated folder and its sub-folders with the synchronization module412. Such changes include new, deleted, modified, copied, or moved files or folders. Synchronization module412also provides any changes to content associated with device100to client application200. This synchronizes the local content at device100with the content items at content management system110. 
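The deduplication and ownership-list mechanism described above can be illustrated with a small sketch; the hash-based pointer scheme and the dictionary stores are assumptions introduced only to show one way a single stored copy can be linked to multiple entries.

```python
import hashlib

content_blocks = {}   # content hash -> bytes, one stored copy per unique content
content_entries = {}  # item_id -> {"pointer": hash, "owners": [user ids]}

def store_content_item(item_id, data, owner_id):
    """Deduplicate by content hash; share one stored copy via pointers and an ownership list."""
    digest = hashlib.sha256(data).hexdigest()
    content_blocks.setdefault(digest, data)
    entry = content_entries.setdefault(item_id, {"pointer": digest, "owners": []})
    entry["pointer"] = digest
    if owner_id not in entry["owners"]:
        entry["owners"].append(owner_id)
    return digest

store_content_item("doc-1", b"quarterly report", "alice")
store_content_item("doc-2", b"quarterly report", "bob")
print(len(content_blocks))  # 1: duplicate content is stored once
```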
Conflict management module414determines whether there are any discrepancies between versions of a content item located at different devices100. For example, when a content item is modified at one device and a second device, differing versions of the content item may exist at each device. Synchronization module412determines such versioning conflicts, for example by identifying the modification time of the content item modifications. Conflict management module414resolves the conflict between versions by any suitable means, such as by merging the versions, or by notifying the device of the later-submitted version. A user can also view or manipulate content via a web interface generated by user interface module402. For example, the user can navigate in web browser250to a web address provided by content management system110. Changes or updates to content in content storage420made through the web interface, such as uploading a new version of a file, are synchronized back to other devices100associated with the user's account. Multiple devices100may be associated with a single account and files in the account are synchronized between each of the multiple devices100. Content management system110includes communications interface400for interfacing with various devices100, and with other content and/or service providers via an Application Programming Interface (API), which is one means for performing this function. Certain software applications access content storage420via an API on behalf of a user. For example, a software package, such as an app on a smartphone or tablet computing device, can programmatically make calls directly to content management system110, when a user provides credentials, to read, write, create, delete, share, or otherwise manipulate content. Similarly, the API can allow users to access all or part of content storage420through a web site. Content management system110can also include authenticator module406, which verifies user credentials, security tokens, API calls, specific devices, etc., to determine whether access to requested content items is authorized, and is one means for performing this function. Authenticator module406can generate one-time use authentication tokens for a user account. Authenticator module406assigns an expiration period or date to each authentication token. In addition to sending the authentication tokens to requesting devices, authenticator module406can store generated authentication tokens in authentication token database422. Upon receiving a request to validate an authentication token, authenticator module406checks authentication token database422for a matching authentication token assigned to the user. Once the authenticator module406identifies a matching authentication token, authenticator module406determines if the matching authentication token is still valid. For example, authenticator module406verifies that the authentication token has not expired or was not marked as used or invalid. After validating an authentication token, authenticator module406may invalidate the matching authentication token, such as a single-use token. For example, authenticator module406can mark the matching authentication token as used or invalid, or delete the matching authentication token from authentication token database422. Content management system110includes a sharing module410for sharing content publicly or privately. 
Sharing content publicly can include making the content item accessible from any computing device in network communication with content management system110. Sharing content privately can include linking a content item in content storage420with two or more user accounts so that each user account has access to the content item. The content can also be shared across varying types of user accounts. In some embodiments, content management system110includes a content management module408for maintaining a content directory that identifies the location of each content item in content storage420, and allows client applications to request access to content items in the storage420, and which is one means for performing this function. A content entry in the content directory can also include a content pointer that identifies the location of the content item in content storage420. For example, the content entry can include a content pointer designating the storage address of the content item in memory. In some embodiments, the content entry includes multiple content pointers that point to multiple locations, each of which contains a portion of the content item. In addition to a content path and content pointer, a content entry in some configurations also includes a user account identifier that identifies the user account that has access to the content item. In some embodiments, multiple user account identifiers can be associated with a single content entry indicating that the content item has shared access by the multiple user accounts. To share a content item privately, sharing module410adds a user account identifier to the content entry associated with the content item, thus granting the added user account access to the content item. Sharing module410can also be configured to remove user account identifiers from a content entry to restrict a user account's access to the content item. To share content publicly, sharing module410generates a custom network address, such as a URL, which allows any web browser to access the content in content management system110without any authentication. The sharing module410includes content identification data in the generated URL, which can later be used by content management system110to properly identify and return the requested content item. For example, sharing module410can be configured to include the user account identifier and the content path in the generated URL. The content identification data included in the URL can be transmitted to content management system110by a device to access the content item. In addition to generating the URL, sharing module410can also be configured to record that a URL to the content item has been created. In some embodiments, the content entry associated with a content item can include a URL flag indicating whether a URL to the content item has been created. Interaction synchronization module416receives presence information from a device, stores it as part of a presence record in interaction data store424and determines a user presence with respect to a content item. Each user may be associated with a user presence describing presence records associated with that user with respect to a content item, which may be without reference to any particular user device, process, or user interface element. While presence information may describe presence with respect to a particular user interface element or process, this presence associated with a user is termed a user presence. 
Example user presence includes collaborating, editing, viewing, open, and not present. In this example, a “collaborating” user presence indicates the content item is associated with a user interface element that is presented for viewing and modification on two or more devices, an “editing” user presence indicates the content item is associated with a user interface element that has modified the content item, a “viewing” user presence indicates the content item is associated with an active user interface element on a device100, while an “open” user presence indicates a user interface element is associated with the content item and has opened the content item, but has not yet closed the content item. Various embodiments may use more or fewer user presences. For example, one embodiment includes only “editing,” “viewing,” and “not present,” in which case user interface elements that have opened the content item but are not the active user interface element may be treated as viewing or not present, according to the configuration of the system. Obtaining and tracking presence information is also further described in U.S. patent application Ser. No. 14/635,192, incorporated by reference herein. Interaction synchronization module416manages synchronization of interaction information across devices100. Devices100provide interaction information to interaction synchronization module416. Interaction synchronization module416stores interaction information in interaction data store424. Interaction synchronization module416sends interaction information about synchronized content items to synchronized devices100for display to users. Interaction synchronization module416may further send instructions to notify users of new or unviewed interaction information. In one embodiment, devices100send viewing information to interaction synchronization module416indicating whether and when users have viewed interaction information. Viewing information is stored in interaction data store424. In another embodiment, viewing information indicates whether and when users have interacted with interaction information. Interaction synchronization module416may use this information to avoid duplicate notifications on multiple devices100associated with the same user. For example, if a user is notified of new interaction information on a first device100and views the interaction information, this event will be stored such that the user will not be notified about the same interaction information on a second device100. In one embodiment, interaction information stored in interaction data store424is accessible by client application200so that users may view and interact with stored interaction information related to a content item. Stored interaction information may include metadata such as interaction event timestamps and version information. Version information associates interaction events with different versions of a content item. In one embodiment, stored interaction information is provided to users of devices100as a content item history log, in which interaction information and metadata are displayed chronologically. In this way, users may easily view interaction information in one place and better understand the context of changes, edits, views and comments to a content item. For example, a user may see that a content item was edited at 3:00 PM, and the editing user provided the comment “changed the conclusion paragraph” at 3:01 PM. This gives users a comprehensive view of the entire editing process in one place.
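By way of a non-limiting illustration, the user presence states described above can be thought of as an ordered set of states, with the presence records of individual user interface elements collapsed into a single user presence. The following sketch makes that idea concrete; the state ordering, names, and helper function are hypothetical assumptions for illustration only.

```python
# Non-limiting sketch: deriving a single user presence for a content item
# from the presence records of individual user interface elements.
# The precedence ordering below is an assumption chosen for illustration.
from enum import IntEnum


class Presence(IntEnum):
    NOT_PRESENT = 0
    OPEN = 1           # content item opened, but its UI element is not active
    VIEWING = 2        # UI element associated with the content item is active
    EDITING = 3        # UI element has modified the content item
    COLLABORATING = 4  # presented for viewing and modification on 2+ devices


def user_presence(presence_records):
    """Collapse per-user-interface-element records into one user presence."""
    if not presence_records:
        return Presence.NOT_PRESENT
    return max(presence_records)


# Example: one window merely has the item open while another is editing it,
# so the user presence reported for the content item is "editing".
assert user_presence([Presence.OPEN, Presence.EDITING]) == Presence.EDITING
```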
In one embodiment, content management system110includes collaboration module426. Collaboration module426can be configured to facilitate collaboration between devices100. For instance, collaboration module426may initiate a device handshake by sharing a device's address with another device so that collaboration may occur. Further, collaboration module426may be configured to perform any of the tasks that are performed by collaboration module207of a device100or by any other module of client application200. Notification queue module428creates and manages user notification queues430for shared content items. User notification queues430are stored at content management system110and sent to devices100. A user notification queue430is a group of one or more interactions with a shared content item that may be presented to a user to indicate recent interactions with the shared content item by sharing users. In one embodiment, each sharing user associated with a content item has a user notification queue430corresponding to that content item. Different users' user notification queues for a particular content item may differ. In one embodiment, notification queue module428receives a notification of an interaction event, and determines interactions that are candidates to be added to user notification queues430. The notification queue module428modifies user notification queues430corresponding to a shared content item. Modifying user notification queues430may include adding candidate interactions to the notification queue and removing interactions already present in the notification queue. When an interaction event corresponding to a shared content item is received by the content management system110, the notification queue module428determines whether to add interactions to and/or remove interactions from the sharing users' user notification queues430. Types of interactions added to a user notification queue430may include content item views, content item edits, content item collaborations, content item comments, and content item messages. In one embodiment, interactions have an associated interaction priority. An interaction priority specifies a relative priority of an interaction type to other interaction types. For example, a content item edit may have a higher priority than a content item view. Interaction priorities may be specified by an implementer of the content management system110or by a user of the content management system110. The notification queue module428determines the interaction types and interaction priorities for candidate interactions and interactions in user notification queues430. In various embodiments, the notification queue module428selects higher priority interactions to add to user notification queues430and lower priority interactions to remove from user notification queues430. For example, the notification queue module428may compare the priority of a candidate interaction by a user A to the priority of interactions by user A already present in a notification queue430. If the candidate interaction is a lower priority interaction than an interaction in the user notification queue430, the candidate interaction is not added to the queue. If the candidate interaction is a higher priority interaction than an interaction in the user notification queue430, the candidate interaction is added to the queue, and the interaction already in the queue may be removed from the queue.
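By way of a non-limiting illustration, the priority comparison described in the preceding paragraph might be implemented as sketched below. The interaction kinds, the numeric priorities, and the treatment of equal priorities are hypothetical assumptions, not requirements of the disclosure.

```python
# Non-limiting sketch of the priority-based queue update: a candidate
# interaction by user A is added to another sharing user's queue only if the
# queue does not already hold an equal- or higher-priority interaction by
# user A, and user A's lower-priority interactions are then removed.
from dataclasses import dataclass

PRIORITY = {"view": 1, "comment": 2, "edit": 3}  # example: edits outrank views


@dataclass
class Interaction:
    actor: str   # sharing user who performed the interaction
    kind: str    # "view", "comment", or "edit" in this sketch


def update_queue(queue, candidate):
    """Return the updated notification queue for one sharing user."""
    cand_priority = PRIORITY[candidate.kind]
    same_actor = [i for i in queue if i.actor == candidate.actor]
    if any(PRIORITY[i.kind] >= cand_priority for i in same_actor):
        return queue  # an equal- or higher-priority interaction is already queued
    kept = [i for i in queue if i.actor != candidate.actor]
    return kept + [candidate]


# Example: an edit by user A replaces user A's earlier view in the queue.
queue = update_queue([Interaction("A", "view")], Interaction("A", "edit"))
assert [i.kind for i in queue] == ["edit"]
```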
This allows users to be presented other users' higher priority interactions with a content item, which provides important information for the users without also providing less important information that may confuse the user or waste space in a user interface element. Notification queue module428may send user notification queues430to devices100. In one embodiment, notification queue module428sends a user notification queue430responsive to receiving a notification that a user has accessed a content item. The access notification may come directly from device100or from interaction synchronization module416. The access notification may be generated responsive to detecting a presence event consistent with access of the content item, such as opening a content item for viewing or editing. In one embodiment, notification queue module428clears a user notification queue430responsive to receiving a notification that the associated user viewed the notification queue. This way, the user will not be presented with notifications that the user has already viewed. Content management system110may be implemented using a single computer, or a network of computers, including cloud-based computer implementations. For the purposes of this disclosure, a computer is device having one or more processors, memory, storage devices, and networking resources. The computers are preferably server class computers including one or more high-performance CPUs and 1 G or more of main memory, as well as 500 Gb to 2 Tb of computer readable, persistent storage, and running an operating system such as LINUX or variants thereof. The operations of content management system110as described herein can be controlled through either hardware or through computer programs installed in computer storage and executed by the processors of such server to perform the functions described herein. These systems include other hardware elements necessary for the operations described here, including network interfaces and protocols, input devices for data entry, and output devices for display, printing, or other presentations of data, but which are not described herein. Similarly, conventional elements, such as firewalls, load balancers, failover servers, network management tools and so forth are not shown so as not to obscure the features of the system. Finally, the functions and operations of content management system110are sufficiently complex as to require implementation on a computer system, and cannot be performed in the human mind simply by mental steps. In one configuration, components described below with reference to content management system110are incorporated into devices100that share and synchronize content items without management by content management system110. These devices100may synchronize content and share interaction information over network120or via a direct connection as described above. In this configuration, devices100may incorporate functionality of synchronization module412, conflict management module414, interaction synchronization module416, and other modules and data stores for incorporating functionality described below as provided by content management system110. Accordingly, devices100in this configuration operate in a peer-to-peer configuration and may do so without content management system110or network120. FIG.5shows an example process for determining presence information associated with a content item according to one embodiment. This process is typically performed by interaction management module204. 
Where the user interface elements are monitored only after being associated with a content item, interaction management module204uses events indicating that a content item is being opened by an application or user interface element to determine whether to monitor a user interface element. This is one example of an event that may associate a content item with a user interface element to initiate monitoring of the user interface element, termed a monitoring event. In other embodiments, a selection of user interface elements to monitor is determined in another way, or all user interface elements are monitored, in which case the interaction management module204may not use monitoring events. In another embodiment, the monitoring event includes a process saving a content item. If enabled by operating system245, the interaction management module204may register with operating system245to receive monitoring events for specific applications. In these embodiments, operating system245notifies interaction management module204when a request to open or save a content item is received by operating system245. In this embodiment, interaction management module204receives500a monitoring event that indicates a window or other user interface element is interacting with a content item, which may be a synchronized content item (i.e., the process is interacting with the content item in a particular user interface element). The monitoring event designates at least a user interface element that triggered the monitoring event. In other embodiments, interaction management module204monitors events associated with user interface elements from time-to-time (e.g., five minute intervals) and queries whether the user interface elements are associated with any open content items. According to operating system245and native application255configuration, this query may be directed to operating system245or native application255. When a user interface element is associated with a newly opened content item, that newly opened content item is treated as a monitoring event to determine whether the newly opened content item is a content item synchronized with content management system110and that presence information should be determined for the newly opened content item. When the monitoring event is received, interaction management module204determines510which process is responsible for the user interface element associated with the monitoring event. Interaction management module204typically determines the process by requesting the process ID associated with the user interface element from operating system245. In some embodiments, the interaction management module204identifies a process by requesting an identification of the process from the user interface element itself. To confirm that the process and user interface element are correctly associated with one another and that the user interface element is still active, interaction management module204may also request from the process the identity of the currently active user interface element. The interaction management module204confirms that the currently active user interface element received from the process matches the user interface element associated with the monitoring event. Using the process identifier, interaction management module204requests520any open content item from the process to obtain an associated directory path for the content item. 
The interaction management module204may designate the user interface element associated with the monitoring event with the request for the open content item's path. The interaction management module204requests the open item from the process or operating system using an interface available to the process or operating system. As one example, in the OS X operating system, the accessibility API may be used to access information relating to a content item and content item path for a user interface element, as known in the art. Using the content item path provided by the process, the interaction management module204determines whether the opened content item path corresponds to any synchronized content items. If so, interaction management module204determines that the content item accessed by the process is a content item synchronized to content management system110and associates that process and user interface element with the content item. In other embodiments, other methods may be used to determine whether a content item accessed by the process is a synchronized content item. If the content item is synchronized530to content management system110, interaction management module204stores information relating to the content item, process, and user interface element, to monitor540the user interface element for events. When the content item associated with the monitoring event is not synchronized, the process may end or may continue by displaying a synchronization interface to a user. Monitoring information is stored in monitored presence data store210. To monitor and subsequently receive presence events related to the user interface element, interaction management module204registers to receive events associated with the user interface element. The registration process by the interaction management module204varies according to the configuration of device100. Typically, the interaction management module204registers a request to receive presence events from operating system245or from the applicable process or user interface element. While the monitoring events determine whether a user interface element or process is associated with a synchronized content item, presence events are events that may indicate a change in state of a user's presence relating to the user interface element or process associated with a content item. Example presence events include a change in focus of a user interface element, closing a user interface element, closing a content item, opening a content item, and so forth based on the types of presence recognized by the interaction management module204. In various configurations, the presence events used by interaction management module204depend on the events operating system245and native application255make available for receipt by interaction management module204. The presence events are used to determine presence information associated with the content item to which the presence event relates. For example, a presence event indicating that a user interface element that is associated with a content item has the focus will indicate that the user is viewing the content item, and hence the presence information for that content item indicates that state. Likewise, a presence event indicating that a user interface element unrelated to a content item has gained focus indicates that the content item associated with a previously focused user interface element has lost focus, and thus indicates that the user is no longer viewing the content item.
Thus, presence information provides a level of semantic interpretation of the underlying presence event itself. In addition to receiving presence events that the interaction management module204registered for, presence events may also be initiated by interaction management module204to confirm that presence information has not changed for a monitored user interface element. These presence events may be initiated if a threshold amount of time passed since the last presence event for a particular user interface element or process, or at particular intervals, e.g., every five minutes. In addition to registering for presence events, interaction management module204may receive interaction events in other ways. In one embodiment, users may expressly indicate interaction information through a user interface element. The user interface element can be configured to allow the user to indicate, for example, that a user intends to revise a content item, to indicate that intent to other users who are editing or viewing the content item, for example by selection of a menu item or icon that represents the particular intent. The user interface element can also be configured to allow a user to indicate other intentions of the user, such as a user's intention to no longer view a content item, or to expressly indicate that a user is not or will not be present for a content item. Other users may use such “not present” intention to know that the content item is free for editing. User input interaction events may also include messages or chat features to be disseminated to other users associated with the content item, for example, to transmit a message to other users currently viewing the content item on other devices. When a presence event is received550, interaction management module204determines560whether any presence information has changed since the last presence event related to a monitored user interface element. For user-initiated interaction information, the interaction information may be the information provided by the user, for example the user's selection of a user interface element indicating that the user intends to modify a content item, or a user's chat message. For presence events, the interaction management module204queries the monitored process to determine the status of the monitored user interface element. In particular, the interaction management module204queries the process to determine if the monitored user interface element is the active user interface element. When the monitored user interface element is the active user interface element, the content item is being viewed by the user. In some embodiments, in addition to detecting user presence with respect to a content item, interaction management module204also determines whether the content item is being or has been modified by the user. This further aspect enables presence information to be reported more granularly, for example with an indication that a user has a presence with respect to the content item as an editor rather than as a viewer. As the particular actions performed by applications when a content item is being modified may vary as described above, detecting one of these actions by interaction management module204indicates that the process has edited the content item. 
For example, according to the type of actions expected by the process when the content item is edited, interaction management module204may query the process to determine if the process indicates the content item has been flagged as modified, if the title information of the user interface element has changed, if a temporary file has been saved or cached, or if any other data suggests the content item has been modified. Interaction management module204may also query the operating system to determine if a content item has been saved that matches a temporary content item format, for example a content item with a filename similar to the content item, but with a tilde or other specialized variation of the filename. Such modifications indicate that the presence information associated with the content item should reflect that the user is editing the content item. After determining560the presence information, any new presence information for a user interface element may be stored in monitored presence data store210. This presence information in one embodiment is stored on a user interface element-by-user interface element basis, such that multiple user interface elements by one process may be associated with the same content item, and have presence information individually managed. In one embodiment, presence information may change based on the current presence status. For example, when the presence information for a content item reflects that the content item is being edited, in one embodiment the presence for the content item in a user interface element is not changed when a user changes focus to another user interface element. Instead, the edited status is maintained with respect to that user interface element until a presence event indicates the user interface element is closed. In another embodiment, since editing has the potential to introduce modifications to the content item, the presence information for an edited document is not changed until the interaction management module204receives a notification that modifications to the content item are either committed or the modifications are discarded. A content item with presence information indicating it is being viewed may have that status change when the user interface element loses focus, or within a threshold period of time of losing focus. This may be the case even if the user interface element associated with the content item is still open. In one embodiment, “viewed” presence information indicates whether a content item is associated with an active user interface element. In one embodiment, “viewed” presence information is retained until the user interface element is not active (or has lost focus) for longer than a threshold amount of time. In one embodiment, the content item is considered “viewed” while the content item is open by an application. When there is a change to the interaction information, interaction management module204sends570the presence information to content management system110. In one embodiment, the sent presence information includes an identifier of the content item, the process id, the user interface element id, and the presence status. The presence information may further include metadata, such as versioning notes and presence event timestamps. In one embodiment, the content management system110maintains received interaction information for the synchronized content item, for example in a data store of the content management system110.
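By way of a non-limiting illustration, the presence information sent to the content management system in step 570 might be serialized as in the sketch below. The JSON encoding, field names, and example values are assumptions for illustration only; the disclosure does not prescribe a particular message format.

```python
# Non-limiting sketch of a presence update carrying the content item
# identifier, process id, user interface element id, presence status, and
# optional metadata such as a versioning note and an event timestamp.
import json
import time


def build_presence_update(content_item_id, process_id, ui_element_id,
                          presence_status, version_note=None):
    update = {
        "content_item_id": content_item_id,
        "process_id": process_id,
        "ui_element_id": ui_element_id,
        "presence_status": presence_status,  # e.g., "viewing" or "editing"
        "event_timestamp": time.time(),
    }
    if version_note is not None:
        update["metadata"] = {"version_note": version_note}
    return json.dumps(update)


# Example: a window of process 4242 is editing the item.
payload = build_presence_update("content-item-6", 4242, "window-1", "editing",
                                version_note="changed the conclusion paragraph")
```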
The content management system110may provide received interaction information to other devices100that are synchronized with respect to the content item for display to users. FIGS.6A-6Dshow example user interfaces displaying interaction information, including user notification queue content. These user interfaces may be generated, for example, by user interface module202, and is one means for doing so. InFIG.6A, the example window605of the user interface displays a synchronized content item, here “content item6.”. The example user interface displays interaction information received from content management system110. To display interaction information, interaction management module204provides interaction indicator(s)600along a boundary or border of the window associated with the content item. Interaction indicator600is displayed along with the window associated with the content item, and in one embodiment interaction management module204tracks the location of the window and displays interaction indicator600adjacent to or near the window, for example alongside a border or boundary of the window. The interaction indicator600may be located on any convenient area of display220. In one embodiment the interaction indicator is displayed proximal the associated user interface element of the content item so as to visually indicate to the user the relationship between the interaction indicator and the specific content item. In addition, the display of the interaction indicator along a boundary or border of the window increases the likelihood that the user will notice interaction indicator600. In one embodiment, the interaction indicators600are displayed on or alongside a vertical edge of the window containing the content item (e.g., right edge as shownFIG.6A). Alternatively, interaction indicator600may be shown in a separate area of the display, such as a taskbar, or tray icon or may be a separate user interface element that does not interact with the user interface element of the content item. Though shown here as a single interaction indicator600, any number of interaction indicators600may be shown related to the content item. InFIG.6B, the interaction indicator600includes a badge element610. Badge element610may include a number of other visual elements to provide more information about interaction information. For example, the badge element610may have a number representing a number of unviewed interaction events, as illustrated inFIG.6B. In other embodiments, the badge element610may be a visual element such as an icon to indicate unviewed interaction events to a user. Turning toFIG.6C, supplemental interaction indicator620may appear when a user selects or hovers over interaction indicator600to provide further information or interfaces for the user. In the example shown inFIG.6C, supplemental interaction indicator620describes a recent interaction with the content item, specifically that Andrew edited content item6. Supplemental interaction indicator620may also appear without action by the user, for example, when a presence changes, to indicate a new user is viewing or editing the document. FIG.6Dshows an example user interface with an interaction element630through which a user may view and enter interaction information. This interface includes interaction indicators600, in addition to further user interface elements. The interaction element630may be presented in lieu of the example ofFIGS.6A-6C, or may be presented as a supplemental element providing additional data regarding the content item. 
The interaction element630includes content item information section646, which displays the content item name, as well as the time of a last interaction event, such as a save action. In the example ofFIG.6D, the content item information section646indicates that Content Item6was last saved 3 minutes ago. The interface also includes sharing element642that allows users to share the content item with other users, either via synchronization or other methods known in the art. The example interface ofFIG.6Dincludes interaction viewing section634, which displays interaction information and associated information to users. Associated information may include times that interactions occurred and user information associated with interactions. In the example ofFIG.6D, interaction viewing section634contains messages634A-B and presence information650A-B. For each item of displayed interaction information, interaction viewing section634contains interaction times644A-D. In one embodiment, as shown inFIG.6D, interaction times644A-D are expressed as a time that the interaction occurred. In another embodiment, interaction times may be expressed as a relative time, for example, how much time has elapsed since the interaction occurred. The interaction viewing section634contains user images636A-B for messages and other interaction information associated with users. User images636A-B may be received from the content management system110. The interaction viewing section634contains user identifiers648A-B, which may be user IDs, names, or other identifiers. The interaction viewing section634may include other icons or graphics for interaction information. For example, icons638A-B may correspond to displayed presence information. Icon638A is an eye to represent viewing, and icon638B is a pencil to represent editing or saving a new version. In one embodiment, users may interact with (e.g., click, hover over, etc.) various elements within the interaction viewing section634to view additional information. For example, selecting or hovering over name elements648A-B or user images636A-B may allow the user to view additional user information related to that user. This interface also provides a chat interface for users to communicate with other users associated with the content item. The chat interface permits users to enter and receive messages to other users. A text input element632allows users to enter messages to other users, and interaction viewing element634allows users to view messages. The chat interface may permit users to specifically discuss information relating to that content item, such as when a user expects to finish editing the item. These messages are received by interaction management module204as interaction information and sent to other clients synchronized to the content item. This permits users to chat directly about a content item, even if the native application provides no chat functionality. FIG.7shows an example process for updating notification queues for sharing users according to one embodiment. Content management system110receives702a notification of an interaction event for a shared content item. The interaction event indicates a new interaction with the shared content item by a sharing user, which we refer to here for purposes of explanation as “User A.” In one embodiment, content management system110determines704whether users are collaborating in the content item—that is, whether more than one user currently has the document open for viewing or editing. 
If users are collaborating in the content item, the process ends, and the notification is not added to the users' notification queues. If users are not currently collaborating in the content item, the content management system110proceeds with updating the notification queues for the content item for each sharing user. Content management system110determines706the interaction type and the priority of the new interaction. Content management system110uses the interaction type and the priority of the new interaction to determine whether to add the interaction to each user's notification queue for the content item. For each sharing user, content management system110determines710whether the notification queue already includes an interaction by the sharing user A with a higher priority than the priority of the new interaction. If the notification queue does already include a higher priority interaction by the sharing user A, the notification queue is not updated, and the process proceeds from step708with the next user. If the notification queue does not already include a higher priority interaction by the sharing user A, content management system110adds712the new interaction to the notification queue. In one embodiment, content management system110removes714lower priority interactions by the sharing user A from the notification queue. FIG.8shows an example process for sending a notification queue to a device accessing a content item according to one embodiment. Content management system110receives802a notification that a content item is accessed by sharing user B. Content management system110determines804whether the sharing user B is collaborating in the content item. If the sharing user B is collaborating in the content item, content management system110clears806the sharing user B′s notification queue for the content item without delivering the queue. Alternatively, if the sharing user B is not collaborating in the content item, content management system110sends the sharing user B′s notification queue for the content item to a device100of the sharing user B. FIG.9shows an example process for prompting a user of a device to notify other sharing users of a change to a content item by providing a comment according to one embodiment. Interaction management module204of device100detects902a save operation in a native application255. The save interaction may be detected, for example, by monitoring interaction events on device100. In one embodiment, interaction management module204determines904whether there is a change to the content item, for example by detecting an editing interaction event. If there is no change to the content item, the process ends. If there is a change to the content item, user interface module202prompts906the user to notify other sharing users of the content item by providing comment about the changes made to the content item. In one embodiment, the user is prompted by a user interface element displayed in the user interface of device100, as described below with respect toFIGS.10A and10B. In one embodiment, the user is prompted according to specified prompt conditions, as discussed above with respect toFIG.2. FIG.10Ashows an example user interface for prompting a user to notify other sharing users by providing a comment on a content item change. 
The example window1005of native application255displays a synchronized content item, here “content item10.” User interface module202displays an interaction indicator1000along a boundary or border of the example window1005or in another suitable location in the user interface of device100. Prompt element1020is displayed responsive to detecting the save operation, and prompts the user to notify other sharing users by providing a comment on the changes to the content item. In one embodiment, the interaction indicator1000includes a badge1010that indicates a number of outstanding user prompts. The user may interact with the prompt element1020, for example by selecting the yes element or clicking the badge1010, to be presented with a user interface element for providing comment. FIG.10Bshows an example user interface for providing a comment about a changed content item. Comment element1030includes a text input element1040into which a user may enter comment text, and a send element1050that allows a user to instruct interaction management module204to send the comment to content management system110. Returning toFIG.9, interaction management module204receives908a comment from the user, for example, responsive to the user entering the comment in the text input element1040and selecting the send element1050. Interaction management module204sends910the comment to content management system110for storage and distribution to devices100of sharing users. As described above with respect toFIG.4, content management system110stores the comment along with other interaction information as part of a historical log of the content item so that users may view a comprehensive view of the editing process of the content item in one place. Interaction management module204may provide content item metadata, such as version information, along with the comment so that the comment may be associated with a version of the content item in the historical log. Content management system110provides comments to client applications200of sharing users so that the sharing users can be notified of the change to the content item and the comments provided by the editing user. Notifications may be provided to users as described above with respect toFIGS.2and6A-6D. In various embodiments, content management system110also provides other historical log contents along with comments for presentation to sharing users. The foregoing description of the embodiments of the invention has been presented for the purpose of illustration; it is not intended to be exhaustive or to limit the invention to the precise forms disclosed. Persons skilled in the relevant art can appreciate that many modifications and variations are possible in light of the above disclosure. Some portions of this description describe the embodiments of the invention in terms of algorithms and symbolic representations of operations on information. These algorithmic descriptions and representations are commonly used by those skilled in the data processing arts to convey the substance of their work effectively to others skilled in the art. These operations, while described functionally, computationally, or logically, are understood to be implemented by computer programs or equivalent electrical circuits, microcode, or the like. Furthermore, it has also proven convenient at times, to refer to these arrangements of operations as modules, without loss of generality. 
The described operations and their associated modules may be embodied in software, firmware, hardware, or any combinations thereof. Any of the steps, operations, or processes described herein may be performed or implemented with one or more hardware or software modules, alone or in combination with other devices. In one embodiment, a software module is implemented with a computer program product comprising a computer-readable medium containing computer program code, which can be executed by a computer processor for performing any or all of the steps, operations, or processes described. Embodiments of the invention may also relate to an apparatus for performing the operations herein. This apparatus may be specially constructed for the required purposes, and/or it may comprise a general-purpose computing device selectively activated or reconfigured by a computer program stored in the computer. Such a computer program may be stored in a non-transitory, tangible computer readable storage medium, or any type of media suitable for storing electronic instructions, which may be coupled to a computer system bus. Furthermore, any computing systems referred to in the specification may include a single processor or may be architectures employing multiple processor designs for increased computing capability. Embodiments of the invention may also relate to a product that is produced by a computing process described herein. Such a product may comprise information resulting from a computing process, where the information is stored on a non-transitory, tangible computer readable storage medium and may include any embodiment of a computer program product or other data combination described herein. Finally, the language used in the specification has been principally selected for readability and instructional purposes, and it may not have been selected to delineate or circumscribe the inventive subject matter. It is therefore intended that the scope of the invention be limited not by this detailed description, but rather by any claims that issue on an application based hereon. Accordingly, the disclosure of the embodiments of the invention is intended to be illustrative, but not limiting, of the scope of the invention, which is set forth in the following claims. | 78,067 |
11943265 | DETAILED DESCRIPTION In the following description, reference is made to drawings which show by way of illustration various embodiments. Also, various embodiments will be described below by referring to several examples. It is to be understood that the embodiments may include changes in design and structure without departing from the scope of the claimed subject matter. The current disclosure solves at least some of the drawbacks disclosed in the background by a system, method, and computer-readable medium for generating specific secure deep links, a system, method, and computer-readable medium for generating spatial deep links for virtual space, and a system, method, and computer-readable medium enabling distributed deep link security. FIG.1shows a system100for generating secure deep links, according to an embodiment. System100comprises at least one server computer102of a server computer system comprising at least one processor104and memory106comprising instructions executed by said at least one processor104. The memory106further stores a deep link generator108and a videoconferencing platform110comprising at least one videoconferencing space112. The deep link generator108is configured to receive a deep link generation request and receive a videoconferencing meeting slot list, wherein each videoconferencing meeting slot114comprises at least a location within the videoconferencing space112. The deep link generation request may be sent by an administrator of the videoconferencing platform110through, for example, an administrator client device122, or may be automatically created by the videoconferencing platform110as people confirm interest in joining a corresponding videoconferencing session. The deep link generator108is further configured to generate a deep link that corresponds to (e.g., is unique for) each videoconferencing meeting slot114, each deep link including information that represents the location of the videoconferencing meeting slot114within the videoconferencing space112. In some embodiments of the current disclosure, a deep link comprises a Uniform Resource Locator (URL) that links to a specific part of a videoconferencing space of the videoconferencing platform, such as a videoconferencing meeting slot. The URL may be presented on a computing device in the form of a hyperlink. The URL includes all the information required to point to a particular location of the videoconferencing space. When a videoconferencing session participant clicks on, taps, or otherwise activates a deep link generated by the deep link generator, the participant's client device generates and sends a message (e.g., an HTTP request or a message that uses a different protocol) to a server computer that includes the URL. In this way, the deep link takes the participant to a desired location within the videoconferencing space, such as to a specific meeting slot or a to a specific communication instance. In some embodiments, the videoconferencing meeting slot114to which a participant will be assigned is initially determined by an administrator. In an illustrative scenario, an administrator makes this determination via a graphical user interface, manually clicking on or otherwise selecting a specific place of a layout of a virtual meeting space (e.g., seating positions around a virtual conference table), and place each participant accordingly. 
Once the locations are selected, the deep link generator108generates respective deep links that direct participants to the corresponding meeting slots, where each deep link is unique to a meeting slot. Alternatively, the videoconferencing meeting slot114to which a participant will be assigned is determined automatically (e.g., in a pseudo-random manner) by the system100when each user registers, so that when a user clicks on the deep link the user is placed in a position determined automatically by the system100. In some embodiments, a communication instance of a videoconferencing session comprises an instance of the videoconferencing space (e.g., of a 2D or 3D virtual environment representing such a video conferencing space) plus one or more corresponding communication channels that enable communications within the videoconferencing space. A public communication instance may be an instance of the videoconferencing space enabling a plurality of users to view each other and to communicate with each other simultaneously. A private communication instance may be an instance of the videoconferencing space enabling communication only to a reduced or predetermined number of users upon invitation. In some embodiments, a user graphical representation graphically represents a user or meeting participant and comprises, e.g., a user 3D virtual cutout constructed from a user-uploaded or third-party-source photo with a removed background, or a user real-time 3D virtual cutout with a removed background generated based on the real-time 2D, stereo, depth, or 3D live video stream data feed obtained from the camera, thus comprising the real-time video stream of the user, or a video without removed background, or a video with removed background and displayed utilizing a polygonal structure. Such polygonal structures can be a quad structure or more complex 3D structures used as a virtual frame to support the video. Such user graphical representations may be inserted into three dimensional coordinates within a virtual environment of a 3D videoconferencing space and are therein graphically combined. In the current disclosure, the term “user 3D virtual cutout” refers to a virtual replica of a user constructed from a user-uploaded or third-party-source 2D photo. The user 3D virtual cutout is created via a 3D virtual reconstruction process through machine vision techniques using the user-uploaded or third-party-source 2D photo as input data, generating a 3D mesh or 3D point cloud of the user with removed background. In some embodiments, the data used as input data comprised in the live data feed and/or user-uploaded or third-party-source 2D photo comprises 2D or 3D image data, 3D geometries, video data, media data, audio data, textual data, haptic data, time data, 3D entities, 3D dynamic objects, metadata, priority data, security data, positional data, lighting data, depth data, and infrared data, amongst others. In the current disclosure, the term “user real-time 3D virtual cutout” refers to a virtual replica of a user based on the real-time 2D or 3D live video stream data feed obtained from the camera and after having the user background removed. The user real-time 3D virtual cutout is created via a 3D virtual reconstruction process through machine vision techniques using the user live data feed as input data by generating a 3D mesh or 3D point cloud of the user with removed background.
In the current disclosure, the term “video with removed background” refers to a video streamed to a client device, wherein a background removal process has been performed on the video so that only the user may be visible and then displayed utilizing a polygonal structure on the receiving client device. In the current disclosure, the term “video without removed background” refers to a video streamed to a client device, wherein the video is faithfully representing the camera capture, so that the user and his or her background are visible and then displayed utilizing a polygonal structure on the receiving client device. The deep link generator108is further configured to send deep links to each participant client device118-120. A participant client device118-120may click on or otherwise activate the received link to confirm participation on a corresponding videoconferencing session, which is then sent to the deep link generator108. The deep link generator108receives the authorization to participate in the session from, e.g., the user click, and accordingly initiates the video conferencing session by instructing the videoconferencing platform110. The deep link generator108, in communication with the videoconferencing platform110, then assigns the participant to the corresponding videoconferencing meeting slot114. The deep link generator108is a computer-generated deep link creator program stored in memory106that is configured for generating deep links encoding a plurality of data. For example, the deep link may encode a specific videoconferencing meeting slot from a videoconferencing meeting slot list, so that when the participant clicks on the deep link, the deep link takes the participant to the allocated video conferencing meeting slot114. The term “videoconferencing space” refers to a virtual space where a videoconferencing session takes place. The videoconferencing space may be a 2D or a 3D videoconferencing space. In an embodiment where the videoconferencing space is a 2D videoconferencing environment, each videoconferencing meeting slot represents a tile thereof. The meeting slot tiles may be tiles in a matrix, where each tile represents a participant assigned to a specific area within the videoconferencing space and comprises a live-recording or picture of the participant. In an embodiment where the videoconferencing space is a 3D virtual environment, each videoconferencing meeting slot represents a precise position including 3D coordinates within that 3D virtual environment. The 3D virtual environment is a computer-managed virtual environment supporting real-time communications between participants. As a 3D videoconferencing environment, the virtual environment may comprise other graphical elements not necessarily required for enabling communications, but which may enhance the user experience within the virtual environment. For example, the 3D videoconferencing environment may include a plurality of virtual, graphical elements representing walls, structures and objects within the virtual environment. In some situations, the 3D videoconferencing environment simulates a physical, real-world space. The 3D virtual environment may follow rules related to gravity, topography, physics and kinematics, which may or may not be based on real-world elements, and which may be implemented by suitable computer-implemented mathematical models. In some embodiments, suitable models comprise one or more of a 3D model, dynamic model, geometric model, or a machine learning model, or a combination thereof. 
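By way of a non-limiting illustration, a deep link that encodes the location of a videoconferencing meeting slot within the videoconferencing space might be generated as sketched below. The URL scheme, host name, query parameters, and token generation are hypothetical assumptions rather than required features of the disclosure.

```python
# Non-limiting sketch: each generated URL carries the information needed to
# point to a particular meeting slot, here a slot identifier plus 3D
# coordinates, and a random token that makes the link unique to that slot.
from urllib.parse import urlencode
import secrets


def generate_deep_link(space_id, slot_id, coordinates):
    x, y, z = coordinates
    params = {
        "space": space_id,
        "slot": slot_id,
        "x": x, "y": y, "z": z,
        "token": secrets.token_urlsafe(16),
    }
    return "https://videoconf.example.com/join?" + urlencode(params)


# Example: a unique link pointing to slot A at a seat around a virtual table.
link = generate_deep_link("space-112", "slot-A", (1.5, 0.0, -2.0))
```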
In an embodiment, the deep link generator108is configured to encode in a deep link an expiration factor, wherein the expiration factor is session-based, time-based, or click-based (activation-based), or a combination thereof. A session-based expiration factor enables the deep link to be used only for a specific session, deleting or deactivating the deep link after the session is over. A time-based deep link enables a participant using such link to access the videoconferencing session within a predetermined time, after which the deep link may expire. A click-based or activation-based deep link enables using the deep link only a predetermined number of times (e.g., one time, two times, or some other number of times) before deleting or deactivating the deep link. Combinations thereof may also be possible. Elements ofFIG.1, including the at least one server computer102and the various client devices118-122, may connect through a network124, such as a wireless network. In some embodiments, the network124may include millimeter-wave (mmW) or combinations of mmW and sub 6 GHz communication systems, such as 5th generation wireless systems communication (5G). In other embodiments, the system may connect through wireless local area networking (Wi-Fi). In other embodiments, the system may communicatively connect through 4th generation wireless systems communication (4G), may be supported by 4G communication systems, or may include other wired or wireless communication systems. FIG.2shows a videoconferencing platform202comprising attributes204and a videoconferencing space206, according to an embodiment. The videoconferencing space206comprises a plurality of meeting slots208, e.g., meeting slots A and B, each comprising one or more entitlements210. The entitlements210refer to permissions that participants212may have when occupying a specific meeting slot208that enable a plurality of options within the videoconferencing session. In some embodiments, the entitlements210are provided to the participant of the corresponding videoconferencing meeting slot208. In one embodiment, the entitlements210are provided to the participant212before positioning the participant212in the corresponding meeting slot208. In another embodiment, the entitlements210are provided to the participant212at the moment that the participant is positioned on the meeting slot208, or afterwards. In certain embodiments, the videoconferencing platform202sends the participant terms and conditions that need to be reviewed and approved by the participants for use of the specific entitlements in order to receive said entitlements, before providing the entitlements210to the participant212. Providing the entitlements to each of the meeting slots208and providing a deep link that directs a participant directly to the corresponding meeting slot208comprising the entitlements210, enables increased session security. In situations where the deep link is unique to the meeting slot208and is renewed after each session, this decreases the chances of the link being “leaked” or otherwise obtained by an unauthorized user. In a hypothetical case of a deep link being leaked, the deep link may be valid only for a specific meeting slot208, so only one participant may enter the videoconferencing session in one particular meeting slot208, simplifying the process of tracking such a leak. 
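A minimal sketch of the expiration factor just described, assuming a simple object representation in which any combination of session-based, time-based, and activation-based constraints may be present (field names are assumptions made for the example):

```typescript
// Assumed representation of an expiration factor attached to a deep link.
interface ExpirationFactor {
  sessionId?: string;      // session-based: only valid for this session
  expiresAt?: number;      // time-based: epoch milliseconds
  maxActivations?: number; // click/activation-based
}

function isLinkValid(
  factor: ExpirationFactor,
  context: { sessionId: string; now: number; activationsSoFar: number },
): boolean {
  if (factor.sessionId !== undefined && factor.sessionId !== context.sessionId) return false;
  if (factor.expiresAt !== undefined && context.now > factor.expiresAt) return false;
  if (factor.maxActivations !== undefined && context.activationsSoFar >= factor.maxActivations) return false;
  return true; // all configured constraints (possibly combined) are satisfied
}

// Example: a link limited to one activation within 24 hours for session-42.
const factor: ExpirationFactor = {
  sessionId: "session-42",
  expiresAt: Date.now() + 24 * 60 * 60 * 1000,
  maxActivations: 1,
};
console.log(isLinkValid(factor, { sessionId: "session-42", now: Date.now(), activationsSoFar: 0 }));
```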
This contrasts with most traditional methods, where a single link is generated for the videoconferencing session, increasing the probabilities of the session link being leaked because of the link being the same for multiple participants. Furthermore, providing a deep link corresponding to each meeting slot208increases the quality of the user experience by increasing transparency of the positioning within the videoconferencing space206, as any potential social friction about any of the participants having a wrong meeting slot would be reduced if not eliminated. In some embodiments, a deep link generator214connects to the videoconferencing platform202and is configured to receive a participant list, wherein each participant has one or more associated attributes204. Each attribute204may represent a specific property, or characteristic, of the participant, such as characteristics related to the user profile, including user identification data, spending ranking, buying preferences, role during the session (e.g., speaker, host, listener, minutes taker, etc.), and the like. The participant list may be input, for example, by an administrator of the videoconferencing platform through, for example, an administrator client device, or may be automatically created by the videoconferencing platform202as people confirm interest in joining a corresponding videoconferencing session. In yet further embodiments, each entitlement210is further adjusted based on the at least one attribute204linked to the corresponding participant212. For example, if the entitlements210that are assigned to a specific meeting slot comprise muting one participant at a time, and the participant in question comprises an attribute204of being a main presenter, then the entitlement204of the meeting slot may be adjusted to include enabling muting all participants at the same time. The deep link generator214may be further configured to create a meeting slot protocol by allocating a videoconferencing meeting slot208to each participant based on the one or more attributes204. In an embodiment, a meeting slot protocol includes the list of participants and the order of seating in the videoconferencing space206along with the corresponding attributes204of each meeting slot208. The deep link generator214(which may specify the meeting slot protocol in the corresponding deep link) creates a deep link that corresponds to each meeting slot214based on said meeting slot protocol. For example, a VIP participant (e.g., a speaker, a president, or anyone with a special status for a specific videoconferencing session) may be a assigned a priority meeting slot208, e.g., meeting slot A, that enables more entitlements than other meeting slots208, e.g., meeting slots B and C, wherein meeting slot A enables administrator entitlements such as muting other participants' microphones or enables having a higher resolution image or bigger meeting slot tile than the other participants. The deep link generator214may be further configured to send deep links to each participant client device216-218; receive a notification of a click on the deep link or other activation of the deep link from a participant confirming participation on a corresponding videoconferencing session; trigger the videoconferencing session; and assign the participant to the corresponding videoconferencing meeting slot based on the meeting slot protocol. 
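To illustrate the meeting slot protocol and the attribute-based entitlement adjustment described above, the following TypeScript sketch (attribute values, entitlement strings, and the ranking rule are assumptions chosen for the example, not the disclosed method) allocates higher-priority slots to speakers and hosts and widens a presenter's mute entitlement:

```typescript
interface Participant { id: string; role: "speaker" | "host" | "listener"; } // assumed attribute
interface Slot { slotId: string; priority: number; entitlements: Set<string>; }

// Builds a meeting slot protocol: higher-priority slots go to speakers and hosts first.
function buildSlotProtocol(participants: Participant[], slots: Slot[]): Map<string, Slot> {
  const rank = (p: Participant) => (p.role === "speaker" ? 0 : p.role === "host" ? 1 : 2);
  const ordered = [...participants].sort((a, b) => rank(a) - rank(b));
  const bySlotPriority = [...slots].sort((a, b) => b.priority - a.priority);
  const protocol = new Map<string, Slot>();
  ordered.forEach((p, i) => {
    const slot = bySlotPriority[i];
    if (!slot) return;
    // Adjust entitlements based on the participant's attributes, e.g. a main
    // presenter whose slot can mute one participant gets "mute all" instead.
    if (p.role === "speaker" && slot.entitlements.has("mute-one")) {
      slot.entitlements.add("mute-all");
    }
    protocol.set(p.id, slot);
  });
  return protocol;
}
```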
In one embodiment, the deep link generator214is further configured to: receive a participant list, wherein each participant comprises one or more associated attributes; select a participant based on the associated attributes; generate a link that has information from the selected participant encoded for authentication purposes; publish the link to the list of participants; receive, from a participant, a click on or other activation of the link; authenticate the participant; in a case where the identity of the participant is valid, generate and send a deep link to the participant; and in a case where the identity of the participant is not valid, deny entry to the invalidated participant to the videoconferencing session. In this embodiment, the participants from the participant list may be users of the videoconferencing platform, and the link that is generated by the deep link generator214may be a link that is designated for one specific participant based on the participant attributes but may nevertheless be visible to all other users. The deep link generator214, upon authenticating the user to which the link is destined, automatically generates a deep link that places the participant in the corresponding meeting slot. The authentication may take any suitable form, e.g., biometric scanning, including face scanning, fingerprint scanning, voice recognition, and the like; password; PIN; or combinations thereof. In some embodiments, each participant is placed in a virtual waiting room. The virtual waiting room is a virtual space where participants may be placed while the administrator of the videoconferencing session allows the participants into the session, such as after verifying their identities. Waiting rooms may increase videoconferencing security by preventing intruders to join a videoconferencing session and potentially hijack the meeting or disrupt the experience. In some embodiments, the waiting room may be a virtual environment where waiting participants may interact before joining the actual session for which they may have registered. For example, in the case of a 3D videoconferencing space, the waiting room may be a 3D room with virtual chairs, where each user may be assigned to a corresponding chair. In the case of a 2D videoconferencing space, the virtual environment may comprise a plurality of tiles, each tile assigned to a corresponding participant that is placed in the waiting room. In some embodiments, the location of the participant within the waiting room is selected based on the entitlement that is adjusted according to at least one attribute linked to the corresponding participant. FIG.3depicts a schematic representation of a sample hybrid system architecture300that may be employed in a system for generating secure deep links, according to an embodiment. The hybrid system architecture300is, in some embodiments, a hybrid model of communication for interacting with other peer clients (e.g., other attendees), comprising a client-server side304and a P2P side306, each delimited inFIG.3by a dotted area. Using such a hybrid model of communication may enable rapid P2P communications between users reducing latency problems while providing web services, data and resources to each session, enabling a plurality of interactions between users and with content in the videoconferencing space. 
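A hedged sketch of the authentication-gated issuance described in this embodiment, in which a published link only yields a slot-specific deep link after the destined participant is validated (the credential check, names, and URL format are assumptions made for the example):

```typescript
type AuthResult = { valid: boolean; participantId?: string };

// Assumed authenticator; in practice this could be a password, PIN, or biometric check.
async function authenticate(credential: string): Promise<AuthResult> {
  return credential === "expected-secret"
    ? { valid: true, participantId: "participant-7" }
    : { valid: false };
}

// The published link is visible to all users, but only the participant it is
// destined for can redeem it into a slot-specific deep link.
async function redeemPublishedLink(
  credential: string,
  slotFor: (participantId: string) => string,
): Promise<string | null> {
  const auth = await authenticate(credential);
  if (!auth.valid || !auth.participantId) return null; // deny entry to the session
  const slotId = slotFor(auth.participantId);
  return `https://conference.example.com/join?slot=${encodeURIComponent(slotId)}`;
}
```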
In various embodiments, the level and ratio of usage of the client-server side304with respect to the P2P side306depend on the amount of data to be processed, the latency permitted to sustain a smooth user experience, the desired quality of service (QOS), the services required, and the like. In one embodiment, the P2P side306is used for video and data processing, streaming and rendering. This mode of employing the hybrid system architecture300may be suitable, for example, when a low latency and low amounts of data need to be processed, and when in the presence of “heavy” clients, meaning that client devices308comprise sufficient computing power to perform such operations. In another embodiment, a combination of the client-server side304and P2P side306is employed, such as the P2P side306being used for video streaming and rendering while the client-server side304is used for data processing. This mode of employing the hybrid system architecture300may be suitable, for example, when there is a high amount of data to be processed or when other micro-services may be required. In yet further embodiments, the client-server side304may be used for video streaming along with data processing while the P2P side306is used for video rendering. This mode of employing the hybrid system architecture300may be suitable, for example, when there is an even higher amount of data to be processed and/or when only a thin client is available. In yet further embodiments, the client-server side304may be used for video streaming, rendering and data processing. This mode of employing the hybrid system architecture300may be suitable when a very thin client is available. The hybrid system architecture300may be configured for enabling alternating between the different modalities of usage of both the client-server side304and the P2P side306within the same session, as required. In some embodiments, the at least one cloud server from the client-server side304may be an intermediary server, meaning that the server is used to facilitate and or optimize the exchange of data between client devices308. In such embodiments, the at least one cloud server may manage, analyze, process and optimize incoming image and multimedia streams and manage, assess, optimize the forwarding of the outbound streams as a router topology (for example but not limited to SFU (Selective Forwarding Units), SAMS (Spatially Analyzed Media Server), multimedia routers, and the like), or may use an image and media processing server topology (for example but not limited for decoding, combining, improving, mixing, enhancing, augmenting, computing, manipulating, encoding) or a forwarding server topology (for example but not limited to MCU, cloud media mixers, cloud 3D renderer, media server), or other server topologies. 
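Purely as an illustration of how the split between the client-server side and the P2P side might be chosen (the mode names and thresholds are assumptions for the example, not a prescribed policy), consider the following TypeScript sketch:

```typescript
type Mode = "p2p-all" | "p2p-media-server-data" | "server-stream-client-render" | "server-all";

// Picks how work is split between the P2P side and the client-server side,
// based on client capability ("heavy" vs. "thin") and the data volume to process.
function selectHybridMode(
  client: "heavy" | "thin" | "very-thin",
  dataVolume: "low" | "high" | "very-high",
): Mode {
  if (client === "very-thin") return "server-all";
  if (client === "thin" || dataVolume === "very-high") return "server-stream-client-render";
  if (dataVolume === "high") return "p2p-media-server-data";
  return "p2p-all"; // heavy clients, low data: stream, process and render peer-to-peer
}

console.log(selectHybridMode("heavy", "low")); // "p2p-all"
console.log(selectHybridMode("thin", "high")); // "server-stream-client-render"
```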
In such embodiments, where the intermediary server is a SAMS, such media server manages, analyze and processes incoming data of each sending client device308(including but not limited to meta-data, priority data, data classes, spatial structure data, three dimensional positional, orientation or locomotion information, image, media, scalable video codec based video), and in such analysis optimizes the forwarding of the outbound data streams to each receiving client device308by modifying, upscaling or downscaling the media for temporal (e.g., varying frame rate), spatial (e.g., different image size), quality (e.g., different compression or encoding based qualities) and color (e.g., color resolution and range) based on the specific receiving client device user's spatial, three dimensional orientation, distance and priority relationship to such incoming data achieving optimal bandwidths and computing resource utilizations for receiving one or more user client devices308. In some embodiments, the media, video and data processing comprise one or more further encoding, transcoding, decoding spatial or 3D analysis and improvements comprising image filtering, computer vision processing, image sharpening, background improvements, background removal, foreground blurring, eye covering, pixilation of faces, voice-distortion, image uprezzing, image cleansing, bone structure analysis, face or head counting, object recognition, marker or QR, code-tracking, eye tracking, feature analysis, 3D mesh or volume generation, feature tracking, facial recognition, SLAM tracking and facial expression recognition or other modular plugins in form of micro-services running on such media router or servers. The client-server side304employs secure communication protocols310to enable a secure end-to-end communication between the client device308and web/application servers312over a network. Sample suitable secure communication protocols310may comprise, for example, Datagram Transport Layer Security (DTLS) which is a secure user datagram protocol (UDP) in itself, Secure Realtime Transport Protocol (SRTP), Hypertext Transfer Protocol Secure (https://) and WebSocket Secure (wss://), which are compatible with each other and may provide full duplex authenticated application access, protection of privacy and integrity of exchanged data in transit. Suitable web/application servers312may comprise, for example, Jetty web application servers, which are Java HTTP web servers and Java Servlet containers, enabling machine to machine communications and a proper deployment of web application services. The web/application servers312may be accessed through the client devices308via a corresponding downloadable/web application328through a graphical user interface330. Although the web/application servers312are depicted as a single element inFIG.3, those skilled in the art may appreciate that the web servers and application servers may be separate elements. For example, the web servers may be configured to receive client requests through the secure communication protocols310and route the requests to the application servers. The web/application servers312may thus receive the client requests using the secure communication protocols310and process the requests, which may comprise requesting one or more micro-services314(e.g., Java-based micro-services) and/or looking data up from a database316using a corresponding database management system318. 
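The per-receiver adaptation performed by such a media server can be illustrated with a short sketch; the specific resolutions, frame rates, and thresholds below are assumptions for the example rather than values taken from the disclosure:

```typescript
interface ReceiverRelation { distance: number; priority: number } // receiver relative to the sender in the 3D space

// Chooses per-receiver forwarding parameters so nearby or high-priority
// receivers get full quality while distant, low-priority ones get downscaled streams.
function forwardingParams(rel: ReceiverRelation): { width: number; fps: number } {
  if (rel.priority > 0.8 || rel.distance < 2) return { width: 1280, fps: 30 };
  if (rel.distance < 10) return { width: 640, fps: 24 };
  return { width: 320, fps: 12 }; // far away: reduced spatial and temporal quality
}
```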
The application/web servers312may provide session management and numerous other services such as 3D content and application logic as well as state persistence of sessions (e.g., for persistently storing shared documents, synchronizing interactions and changes in the virtual environment, or persisting the visual state and modifications of a virtual environment). A suitable database management system318may be, for example, an object-relational mapping (ORM) database management system, which may be appropriate for database management using open-source and commercial (e.g., proprietary) services given ORM's capability for converting data between incompatible type systems using object-oriented programming languages. In further embodiments, a distributed spatial data bus320may further be utilized as a distributed message and resource distribution platform between micro-services and client devices by using a publish-subscribe model. The P2P side306may use a suitable P2P communication protocol322enabling real-time communication between peer client devices308in the virtual environment through suitable application programming interfaces (APIs), enabling real-time interactions and synchronizations thereof, allowing for a multi-user collaborative environment. For example, through the P2P side306, contributions of one or more users may be directly transmitted to other users, which may observe, in real-time, the changes performed. An example of a suitable P2P communication protocol324may be a Web Real-Time Communication (WebRTC) communication protocol, which is collection of standards, protocols, and JavaScript APIs, which, in combination, enable P2P audio, video, and data sharing between peer client devices308. Client devices308in the P2P side306may perform real-time 3D rendering of the live session employing one or more rendering engines324. An example of a suitable rendering engine324may be 3D engines based on WebGL, which is a JavaScript API for rendering 2D and 3D graphics within any compatible web browser without the use of plug-ins, allowing accelerated usage of physics and image processing and effects by one or more processors of the client device308(e.g., one or more graphic processing units (GPUs)). Furthermore, client devices308in the P2P side306may perform image and video-processing and machine-learning computer vision techniques through one or more suitable computer vision libraries326. In one embodiment, the image and video-processing performed by the client devices308in the P2P side306comprises the background removal process used in the creation of the user graphical representation previous to the insertion of the user graphical representation into a virtual environment, which may be performed either in real-time or almost real-time on received media streams or in non-real-time on, for example, a photo. An example of a suitable computer vision library326may be OpenCV, which is a library of programming functions configured mainly for real-time computer vision tasks. FIGS.4A-4Bshow 2D and 3D videoconferencing spaces, respectively, according to an embodiment. With reference toFIG.4A, a 2D videoconferencing space402comprises a plurality of tiles404, such as tiles A-F, wherein each tile404represents a place within the 2D videoconferencing space402that has been assigned to a particular user. Thus, a deep link of the current disclosure may, upon being clicked by a participant, bring the participant directly to the correspondingly assigned tile. 
Each tile404further is assigned a plurality of entitlements406, in such a way that a participant attending a videoconferencing session hosted in the 2D videoconferencing space402can make use of such entitlements. In some embodiments, some of the entitlements comprise providing a larger tile and/or higher resolution to a specific meeting slot and thus to the corresponding participant assigned to such a meeting slot. In the example embodiment ofFIG.4A, tile A is larger than tiles B-F, and therefore, a participant such as a VIP or a main speaker may be assigned to such a tile. Tile F is the second largest tile, and may be assigned to, for example, an assistant, co-speaker or co-host. The remaining tiles B-E may be assigned to listeners of the videoconferencing session not having special tile-size entitlements. As another example of an entitlement, participants may be allocated a tile in a location close to the speaker, which may be useful in situations where many (e.g., hundreds or thousands) of participants may be part of a videoconferencing session. With reference toFIG.4B, a 3D videoconferencing space408comprises a plurality of participant user graphical representations410sitting on a conferencing table412, each participant user graphical representations410sitting on a 3D seat414that represents a corresponding meeting slot. Thus, a deep link of the current disclosure may, upon being clicked by a participant, bring the participant directly to the correspondingly assigned seat. Each 3D seat414is further assigned a plurality of entitlements406, in such a way that a participant attending a videoconferencing session hosted in the 3D videoconferencing space408can make use of such entitlements406. As another example of an entitlement in a 3D video conferencing space408, participants may be allocated a 3D seat414in a location close to the speaker. In an example application, a coworking space may have a plurality of 3D videoconferencing spaces408, each representing a meeting slot. A plurality of participants may request participation in the videoconferencing session to an administrator (e.g., by confirming an invitation or requesting access through the videoconferencing platform). The administrator may send a deep link generation request to the deep link generator, which creates a deep link comprising the meeting slot position assigned to each participant. The deep link generator sends a deep link to each participant, which, after validation from the participant by clicking on the deep link, initiates the videoconferencing session by instructing the videoconferencing platform accordingly. The deep link generator, in communication with the videoconferencing platform, then assigns the participant to the corresponding videoconferencing meeting slot within the coworking space based on the encoded meeting slot information. Participants within the coworking space may thus be “spawned” by inserting their corresponding graphical representations or avatars within the videoconferencing session in a specific 3D coordinate where the meeting slot may be located. The meeting slots may comprise one or more entitlements that may be provided to each participant. The deep link generator may be further configured to receive a participant list, wherein each participant comprises one or more attributes. Each entitlement may further be adjusted based on the at least one attribute linked to the corresponding participant. 
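As a small illustrative sketch of the tile-size entitlement described above (the role names and grid dimensions are assumptions for the example):

```typescript
// Assumed roles and a simple tile-size entitlement mapping.
type Role = "main-speaker" | "co-host" | "listener";

function tileSizeFor(role: Role): { cols: number; rows: number } {
  switch (role) {
    case "main-speaker": return { cols: 2, rows: 2 }; // largest tile (like tile A)
    case "co-host":      return { cols: 2, rows: 1 }; // second largest (like tile F)
    default:             return { cols: 1, rows: 1 }; // regular listener tiles
  }
}
```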
Participants may thus have special entitlements based on the meeting slot that has been assigned, alternatively, on a combination of the meeting slot entitlement and their own attributes. Similar examples may apply to applications on virtual events that may be hosted on a videoconferencing platform of the current disclosure, including but not being limited to political speeches, concerts, theater plays, comedy shows, conferences, learning (e.g., virtual schools), karaokes, and the like. Participants may be enabled to engage in a plurality of interactions with each other, as described inFIG.5below. FIG.5shows examples of interactions500that users may engage on depending on their adjusted entitlements, according to an embodiment. Such interactions500may include, for example, chatting502, screen sharing504, host options506, remote sensing508, recording510, voting512, document sharing514, emoticon sending516, agenda sharing and editing518, or other interactions520. The other interactions520may comprise, for example virtually hugging, hand-raising, hand-shaking, walking, content adding, meeting-summary preparation, object moving, projecting, laser-pointing, game-playing, purchasing and other social interactions facilitating exchange, competition, cooperation, resolution of conflict between users. The various interactions are described in more detail below. When the videoconferencing space is a 2D videoconferencing space, the entitlements may further comprise an increased resolution and/or tile size allocated to the particular meeting slot208, for example. Chatting502may open up a chat window enabling sending and receiving textual comments and on-the-fly resources. Screen sharing504may enable to share in real-time the screen of a user to any other participants. Host options506are configured to provide further options to a conversation host, such as muting one or more users, inviting or removing one or more users, ending the conversation, and the like. Remote sensing508enables viewing the current status of a user, such as being away, busy, available, offline, in a conference call, or in a meeting. The user status may be updated manually through the graphical user interface or automatically through machine vision algorithms based on data obtained from the camera feed. Recording510enables recording audio and/or video from the conversation. Voting512enables to provide a vote for one or more proposals posted by any other participant. Through voting512, a voting session can be initiated at any time by the host or other participant with such a permission. The subject and choices may be displayed for each participant. Depending on the configuration of the voting interaction, at the end of a timeout period or at the end of everyone's response the results may be shown to all the attendees. Document sharing514enables to share documents in any suitable format with other participants. These documents may also be persisted permanently by storing them in persistent memory of the one or more cloud server computers and may be associated with the virtual environment where the virtual communication takes place. Emoticon516sending enables sending emoticons to other participants. Agenda sharing and editing518enables sharing and editing an agenda that may have been prepared by any of the participants. In some embodiments, a checklist of agenda items may be configured by the host ahead of the meeting. The agenda may be brought to the foreground at any time by the host or other participants with such a permission. 
Through the agenda-editing option, items can be checked off as a consensus is reached or may be put off. The other interactions520provide a non-exhaustive list of possible interactions that may be provided in the virtual environment depending on the virtual environment vertical. Hand-raising enables raising the hand during a virtual communication or meeting so that the host or other participants with such an entitlement may enable the user to speak. Walking enables moving around the virtual environment through the user real-time 3D virtual cutout. Content adding enables users to add interactive applications or static or interactive 3D assets, animations or 2D textures to the virtual environment. Meeting-summary preparation enables an automatic preparation of outcomes of a virtual meeting and distributing such outcomes to participants at the end of the session. Object moving enables moving objects around within the virtual environment. Projecting enables projecting content to a screen or wall available in the virtual environment from an attendee's screen. Laser-pointing enables pointing a laser in order to highlight desired content on a presentation. Game-playing enables playing one or more games or other types of applications that may be shared during a live session. Purchasing enables making in-session purchases of content. Other interactions not herein mentioned may also be configured depending on the specific use of the virtual environment platform. FIG.6shows a method600for generating secure deep links. Method600may be implemented by at least one computer of a computer system comprising at least one processor and memory comprising instructions configured for performing the steps of method600. Method600begins in step602by receiving, by a deep link generator stored in memory, a deep link generation request. The deep link generation request may be sent by an administrator of the videoconferencing platform through, for example, an administrator client device, or may be automatically created by the videoconferencing platform as people confirm interest in joining a corresponding videoconferencing session. In step604, the method600continues by receiving a videoconferencing meeting slot list, wherein a videoconferencing meeting slot in the list comprises at least a location within a videoconferencing space of a videoconferencing platform. In step606, the method600ends by generating a deep link that corresponds to (e.g., is unique for) the videoconferencing meeting slot, the deep link includes information representing at least the location of the videoconferencing meeting slot within the videoconferencing space. The deep link is configured to, upon activation by a meeting participant, direct the meeting participant to the location within the videoconferencing space. The information representing the location may be, e.g., coordinates for a particular meeting slot, a numeric or alphanumeric code that can be used to look up coordinates for a particular meeting slot, or some other representative information. In some embodiments, the method600further comprises receiving a participant list, wherein each participant in the participant list comprises one or more attributes linked to the corresponding participant. 
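To illustrate the two ways of representing the slot location mentioned in step 606 (coordinates carried in the link, or a code resolved server-side), the following TypeScript sketch may be considered; the parameter names and the lookup table are assumptions for the example:

```typescript
// The slot location can be carried directly as coordinates or as a short code
// that is resolved to coordinates on the server side.
const slotLocations: Record<string, { x: number; y: number; z: number }> = {
  A1: { x: 0, y: 0, z: 1 },
  B2: { x: 2, y: 0, z: 1 },
};

function resolveSlotLocation(link: URL): { x: number; y: number; z: number } | null {
  const coords = link.searchParams.get("coords");
  if (coords) return JSON.parse(coords);            // coordinates carried in the link
  const code = link.searchParams.get("slotCode");
  return code ? slotLocations[code] ?? null : null; // code looked up server-side
}

console.log(resolveSlotLocation(new URL("https://conference.example.com/join?slotCode=A1")));
```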
Each attribute may represent a specific property, or characteristic, of the participant, such as characteristics related to the user profile, including user identification data, spending ranking, buying preferences, role during the session (e.g., speaker, host, listener, minutes taker, etc.), and the like. The participant list may be input, for example, by an administrator of the videoconferencing platform through, for example, an administrator client device, or may be automatically created by the videoconferencing platform as people confirm interest in joining a corresponding videoconferencing session. In some embodiments, the method600further comprises creating a meeting slot protocol by allocating a videoconferencing meeting slot to each participant based on the one or more attributes. The meeting slot protocol includes, in some embodiments, the list of participants and the order of seating in the videoconferencing space along with the corresponding attributes of each meeting slot. The deep link generator may encode the meeting slot protocol in the corresponding deep link and create a deep link that corresponds to each meeting slot based on said meeting slot protocol. In some embodiments, the method600further comprises sending deep links to each participant client device; receiving a notification of a click on or other activation of the deep link from a participant confirming participation on a corresponding videoconferencing session, confirming participation in said videoconferencing session; triggering the video conferencing session; and assigning the participant to the corresponding videoconferencing meeting slot based on the meeting slot protocol. FIG.7shows a system700for generating spatial deep links for virtual spaces, according to an embodiment. The system700comprises at least one server computer702of a server computer system comprising at least one processor704and memory706storing instructions executed by said at least one processor704to implement a deep link generator708configured to receive a deep link generation request that is triggered when a participant710of a videoconferencing session invites an invitee712to join the videoconferencing session. The deep link generator708is further configured to retrieve videoconferencing session context data714and a session communication instance716corresponding to the specific session to which the participant takes part; and to generate a deep link comprising an encoded representation of the videoconferencing session context data714. The instructions further implement a videoconferencing platform718connected to the deep link generator708comprising at least one videoconferencing space720hosting the videoconferencing session, wherein the at least one videoconferencing space720is a 3D virtual environment. The context data714and session communication instances716may be part of the videoconferencing platform718The 3D virtual environment may comprise characteristics as described with reference toFIG.1. The context data714videoconferencing platform718 In one embodiment, the context data714comprises the 3D coordinates of a user graphical representation722of the participant710within the 3D virtual environment and the desired 3D coordinates of the invitee712. In yet further embodiments, the desired 3D coordinates of the invitee712are restricted to a predefined radius around the participant710inviting the invitee712. 
In yet further embodiments, the context data714comprises user attributes including user profile data including user identification data, spending ranking, and buying preferences. In some embodiments, the videoconferencing platform718is configured to insert the user graphical representation722of corresponding participants710, generated from a live data feed captured by at least one camera724, into a 3D coordinate position of the videoconferencing space720and to combine the user graphical representation722therewith. The 3D virtual environment may comprise characteristics as described with reference toFIG.1. In yet further embodiments, the deep link generator708is configured to send the deep link to an invitee client device726; receive a message or other notification of a click on or other activation of the deep link by the invitee712via the invitee client device726, accepting the invitation to the videoconferencing session; and retrieve and position the user graphical representation728of the invitee712into the precise 3D coordinates within the 3D virtual environment, granting the invitee712access to the videoconferencing session. For example, a participant710may invite an invitee712to participate in a videoconferencing session within a videoconferencing space720. The deep link generator708receives the deep link generation request and encodes videoconferencing session context data714and a session communication instance716corresponding to the specific session to which the participant710takes part, information which is then included in the deep link that is generated by the deep link generator708. The context data714comprises the 3D coordinates of the user graphical representation722of the participant710within the 3D virtual environment and the desired 3D coordinates of the invitee712. The participant710may be present within the videoconferencing space718through a user graphical representation722generated from live feed data captured by at least one camera724and which is inserted by the videoconferencing platform718into a 3D coordinate of the videoconferencing space720and combines the user graphical representation722therewith. In one example, the participant710invites the invitee712in such a manner that the deep link includes a 3D coordinate set that positions the invitee712, through a corresponding invitee user graphical representation726, in a position that is close to that of the participant user graphical representation722, because the desired 3D coordinates of the invitee712may be restricted to a predefined radius around the participant710inviting the invitee712. Thus, the participant710may originally visit an area of the 3D virtual environment of the videoconferencing space718and, upon finding an area of interest of the 3D virtual environment, may decide to invite a friend to come and join him or her to enjoy that area of interest, for which the participant710sends a deep link that brings the invitee712directly in the vicinity of the participant710, e.g., in front of the participant. The participant710and invitee712may use client devices726comprising, for example, computers, headsets, mobile phones, glasses, transparent screens, tablets and generally input devices with cameras built-in or which may connect to cameras and receive data feed from said cameras. The client devices726may connect to each other and to the server702through a network730. 
In an embodiment, the deep link generator708is configured to encode each deep link an expiration factor, wherein the expiration factor is session-based, time-based, or click-based, or a combination thereof. FIGS.8A-8Bshow a videoconferencing platform802comprising a videoconferencing space804with a public virtual environment806and a plurality of proprietary virtual environments808, according to an embodiment. The public virtual environment806refers to a public communication instance that a plurality of users can use to join a public videoconferencing session in a 3D virtual environment where each user can see each other along with the public virtual 3D areas that are accessible to all participants. In one embodiment, as shown inFIG.8A, the public virtual environment806comprises a plurality of proprietary virtual environments808, such as proprietary virtual environments A-C, which may be publicly or privately accessible by participants. In one embodiment, responsive to a request from the participant, a host of the proprietary virtual environment808may generate, via the deep link generator, a deep link to an invitee configured to position the invitee in a desired 3D coordinate in the private session. InFIG.8B, an isometric view of the public virtual environment and plurality of proprietary virtual environments are displayed. In an example with reference toFIG.8B, a mall810may host a public videoconferencing session in a public 3D virtual environment representing the mall810. The communication instance is a publicly shared communication instance that all participants may use. The mall810may comprise a plurality of stores, such as a shoe store812and a clothes store814, each store comprising its own communication instance, which may be public or private. The public communication instance of a proprietary store refers to a communication instance that participants of a public communication instance (e.g., the mall810) may access by switching from the public communication instance. The private communication instance of a proprietary store refers to a communication instance that participants of a proprietary store may access only by invitation from, e.g., a host of the proprietary store. In one embodiment, a host816of a third-party proprietary virtual environment retrieves the buyer profile data of a participant entering the third-party proprietary virtual environment via a corresponding user graphical representation and sends a private invitation to the corresponding participant that opens up a private session between the host and the invited participant in a private communication instance. In the example ofFIG.8B, participants A and B have joined the mall public communication instance and can view the different areas of the mall, including the proprietary stores, and can also view and communicate to each other. In an example of a public communication instance of a proprietary virtual environment, the shoe store812proprietary virtual environment may enable mall visitors to walk into the store, triggering a switch in the communication instance from the public mall communication instance to the shoe public communication instance. Within the shoe store812, a plurality of other users (e.g., users C and D) may also visit the 3D virtual environment of the shoe store812, all of which may be visible to each other. 
In an example of a private communication instance of a proprietary virtual environment, a store clerk816, upon any of users C or D meeting one or more criteria, may decide to open an ad hoc communication channel creating a private communication instance between the store clerk816and any one of users C and D. In some embodiments, the criteria are based on user attributes included in the context data of the participant, such as user identification data, spending ranking, and buying preferences. For example, the store clerk816may find the spending ranking of user C suitable for a specific offer, and thus may open an ad hoc communication channel with user C to present the offer. User C may be required to confirm interest in such a communication, such as by clicking and approving a deep link sent by the store clerk in order to bring the participant to the private communication session. In a further example of a clothes store814, the corresponding store clerk816may, based on visitors of the clothes store (e.g., users E and F) meeting certain criteria, invite both users to a private communication instance if the users are shopping together, which can be inferred from the user attributes. In one embodiment, responsive to a request from one of the users, the store clerk816generates, via the deep link generator, a deep link to an invitee configured to position the invitee in a desired 3D coordinate in the private session within a proprietary store. For example, the store clerk816of the clothes store, upon being requested by user E, may generate a deep link invitation that is sent to user E, who may forward the invitation to a friend or acquaintance to join him or her on the private session in order to view one or more products of interest. FIG.9shows session context data900, according to an embodiment. The session context data900may comprise user attributes902including coordinates904comprising user 3D coordinates906and desired coordinates of invitee908. The session context data900may further comprise user profile data910including ID data912, spending ranking914and buying preferences916. The user 3D coordinates906refer to the actual latitude, longitude, and elevation of a user graphical representation of a participant within a 3D virtual environment of a videoconferencing space. The desired 3D coordinates of invitee908refer to the 3D coordinates where a participant may desire the invitee to arrive to when accessing the videoconferencing session. For example, the participant may prompt the invitee to appear in front of or next to the participant within the videoconferencing session. The ID data912is a specific user code that may be used to identify a corresponding participant, and which may point to a plurality of user personal data including at least spending ranking914and buying preferences916. The spending ranking refers to a ranking provided to the user based on how much he or her spends on products that may be purchased through videoconferencing platforms of the current disclosure, while the buying preferences refers to916product categories and characteristics that may reflect the buying preferences of the user when buying through videoconferencing platforms of the current disclosure. FIG.10shows a method1000for generating spatial deep links for virtual spaces, which may be implemented by a computer comprising at least one processor and memory comprising instructions configured to implement a plurality of steps. 
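For illustration, the session context data of this kind, and a criterion of the sort a store clerk might apply before opening an ad hoc private communication instance, could be represented as follows; the field names and the specific criterion are assumptions made for the example:

```typescript
// Type loosely mirroring the session context data described with reference to FIG. 9.
interface SessionContextData {
  coordinates: {
    user3d: { x: number; y: number; z: number };
    desiredInvitee?: { x: number; y: number; z: number };
  };
  profile: { id: string; spendingRanking: number; buyingPreferences: string[] };
}

// Example criterion: a visitor with a sufficient spending ranking and a matching
// buying preference qualifies for a private offer session.
function qualifiesForPrivateOffer(ctx: SessionContextData): boolean {
  return ctx.profile.spendingRanking >= 4 && ctx.profile.buyingPreferences.includes("shoes");
}
```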
Method1000begins in step1002by receiving (e.g., by a deep link generator stored in memory) a deep link generation request that is triggered when a participant of a videoconferencing session hosted in a videoconferencing platform invites an invitee to join the videoconferencing session. The method continues in step1004by retrieving videoconferencing session context data and a session communication instance. In step1006, the method ends by generating a deep link comprising the videoconferencing session context data (e.g., in encoded form). The videoconferencing platform connects to the deep link generator and comprises at least one videoconferencing space hosting the videoconferencing session, wherein the at least one videoconferencing space is a 3D virtual environment. FIG.11shows a secure distributed deep link system, according to an embodiment. The distributed deep link system1100comprises at least one server computer1102of a server computer system comprising at least one processor1104and memory1106storing instructions executed by said at least one processor1104, which, when implemented by the at least one processor1104, implement a videoconferencing platform1008and a deep link manager1110. The deep link manager1110in turn implements a deep link generator1112configured to receive a deep link generation request, which may be sent, for example, by a client device1114of an inviter1116. Alternatively, the inviter1116may also send the deep link generation request by sending the request to an administrator or host client device (not shown) that may gather all requests to join a videoconferencing session in order to set up the videoconferencing session. The inviter1116sending the deep link generation request may be used in embodiments where the inviter1116has already joined a videoconferencing session and is willing to invite an invitee1118to participate in such a session. For example, the inviter may have joined a videoconferencing session in a 3D virtual environment of a public or private communication instance, may find content within the virtual environment to be of potential interest to an acquaintance (e.g., a virtual store, information of a conference, an exhibition, job fair, etc.), and may thereafter send the deep link generation request with the purpose of generating a deep link that may be used to invite the invitee1118to join the same videoconferencing session. Alternatively, the administrator can prepare the deep link generation request by gathering videoconferencing participation requests from a plurality of users and then sending the deep links accordingly employing the distributed deep link system1100. The deep link generator1108then triggers the generation of a deep link that corresponds to each videoconferencing meeting slot or three-dimensional space of a virtual environment within a videoconferencing session of the videoconferencing platform108. The deep link generator1112splits the deep link data1120into at least two data fragments, wherein a first fragment comprises a data majority1122and wherein a second data fragment comprises a minority1124of the data thereof. In some embodiments, the deep link data comprises the characters that form a deep link URL. In some embodiment, the majority of the deep link data represents between about 90% and about 99% of the deep link data1120, and the minority of data represents between about 10% and about 1% of the deep link data1120. 
In an illustrative scenario, a deep link URL having 1000 characters may be split into two fragments comprising a first portion of 900 characters and a second portion of 100 characters; three fragments comprising a first portion of 900 characters, a second portion of 50 characters, and a third portion of 50 characters; or some other number of fragments or distributions of characters among fragments. As a further alternative, a deep link may be split into fragments where no single fragment contains a majority of the data, such as two fragments that each include a 50% portion of the deep link data, or three fragments having 33%, 33%, and 34% portions of the deep link data, respectively. The deep link manager1110proceeds by distributing the at least two data fragments of the deep link to at least two different storage locations, wherein the data majority1122is stored in a first storage location (e.g., the memory1106of the server) and the data minority1124is stored in at least one second storage location1126. The deep link generator1112then proceeds by generating a link encoding a deep link assembling process1128that is sent to the inviter client device1114. Deep link data1120virtualization enabling the fragmentation of the deep link data1120may be performed by the deep link manager1110. Virtualization mechanisms enables storing different portions of the deep link data1120in virtual machines (VMs) without necessarily controlling where these VMs are physically assigned. The VMs can be assigned, for example, to one or more physical servers, which can be part of a larger network of servers comprised in cloud servers, cloudlets, or edge servers. Fragmentation takes data in memory that is broken up into many pieces that are not close together. Data in a file can be managed in units called blocks. Initially, the file blocks may be stored contiguously in a memory located in the private user storage area. However, when fragmenting the data, some of the data blocks can be separated and dispersed into different storage locations, such as into one or more data collector servers. As the storage in the current disclosure is virtualized, the data fragments may be stored in the virtual storage, meaning that the physical storage devices where the data fragments are stored is not of relevance to the system when fetching and assembling the data. The deep link manager1110is further configured to retrieve, upon validation from the inviter1116, the second data fragment including data minority1124of the deep link from the second storage location1126and the first data fragment including data majority1122of the deep link from the at least one first storage location. Such a validation may take place in the form of the inviter1116clicking on a link configured to, upon activation, initiate a deep link assembling process1128that assembles the deep link from the data fragments. Such a link may be referred to as an assembling link. The assembling link may include information to facilitate some form of authentication (e.g., biometric scanning, including face scanning, fingerprint scanning, voice recognition, and the like; password; PIN; or combinations thereof). The deep link manager1110is further configured to assemble the minority and majority portions1124and1122of the deep link data1120, and to grant access to the invitee1118to the videoconferencing session. The assembled deep link may then be sent to the invitee1118who, upon clicking on the link, may access the videoconferencing session. 
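The character-based splitting described in this scenario can be sketched as follows; the ratio-based interface and the stand-in link are assumptions chosen for the example:

```typescript
// Splits a deep link URL string into fragments according to the given ratios
// (e.g. [0.9, 0.1] for a 900/100 character split of a 1000-character link).
function splitDeepLink(deepLink: string, ratios: number[]): string[] {
  const fragments: string[] = [];
  let offset = 0;
  ratios.forEach((ratio, i) => {
    const size = i === ratios.length - 1
      ? deepLink.length - offset                     // last fragment takes the remainder
      : Math.floor(deepLink.length * ratio);
    fragments.push(deepLink.slice(offset, offset + size));
    offset += size;
  });
  return fragments;
}

const link = "https://conference.example.com/join?slot=A&token=abc123"; // stand-in deep link
const [majority, minority] = splitDeepLink(link, [0.9, 0.1]);
console.log(majority.length, minority.length);
```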
In some embodiments, the deep link manager1110may send the deep link directly to the invitee1118, or indirectly through the inviter1116. The invitee may then click on the link and access the videoconferencing session through a corresponding invitee client device1130. In some embodiments, having the deep link data1120stored in two or more different locations and having a deep link assembling process triggered upon validation from the inviter1116increases deep link security, as users are not able to validate the deep link assembling process without having the right information. In some embodiments, after the invitee1118clicks on the deep link, the videoconferencing platform1108inserts a user graphical representation of the invitee1118, generated from a live data feed captured by at least one camera, into a 3D coordinate position of a 3D virtual environment of the videoconferencing platform1108, and combines the user graphical representation therewith. In yet further embodiments, the videoconferencing session is a public videoconferencing session hosted in a public 3D virtual environment in a public communication instance. For example, once an invitee1118has clicked on an assembled deep link, the invitee may have his or her user graphical representation inserted into a 3D coordinate of a virtual mall ofFIG.8B, such as in a location close to the inviter1116. The deep link may also be created to invite an invitee1118to a videoconferencing space in a private videoconferencing session accessed through a private communication instance. In some embodiments, the at least one second storage location1126comprises one or more private user servers or client device local memories. The one or more private user servers may be located in data centers destined for the private usage of users for purposes of storing data fragments and hosting the user application. In other embodiments, the one or more private user storage areas may be configured within a user device, such as mobile devices, personal computers, game consoles, media centers, head-mounted displays, and see-through devices (e.g., smart contact lenses). In other embodiments, the at least one second storage location1126is configured within a distributed ledger network. In some embodiments, the at least one first storage location is the memory1106of the at least one server computer. The distributed ledger is a trusted database that can function as a record of value storage and exchange. The distributed ledger provides a decentralized network of transactions comprising information that is shared across different locations and people, eliminating the need of a central authority. Storage in a distributed ledger may include the use of encryption in order to keep the deep link data fragments securely stored in the different storage areas. In some embodiments, the deep link data fragments are encrypted by a symmetric or asymmetric key encryption mechanism. In the case of asymmetric key encryption, the data fragments are encrypted asymmetrically by a public key sent to the inviter client device1114by the deep link manager1110through a network1132and are decrypted by the deep link manager1110via a private key of the deep link manager1110stored in memory1106of the server1102. In other embodiments, data fragments are encrypted symmetrically by a private key of the inviter client device1114and are decrypted via the same private key by the deep link manager1110. 
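A minimal sketch of the asymmetric variant, assuming the Node.js crypto module and an RSA key pair standing in for the deep link manager's keys (the fragment contents and key parameters are assumptions for the example):

```typescript
import { generateKeyPairSync, publicEncrypt, privateDecrypt } from "node:crypto";

// The inviter client encrypts a deep link data fragment with the deep link
// manager's public key; only the deep link manager can decrypt it with its private key.
const { publicKey, privateKey } = generateKeyPairSync("rsa", { modulusLength: 2048 });

const fragment = Buffer.from("?slot=A&token=abc123");   // minority fragment of a deep link
const encrypted = publicEncrypt(publicKey, fragment);    // done at the inviter client device
const decrypted = privateDecrypt(privateKey, encrypted); // done by the deep link manager

console.log(decrypted.toString() === fragment.toString()); // true
```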
In some embodiments, the deep link generator1112is further configured to encode in the deep link an expiration factor, wherein the expiration factor is one of a session-based, or time-based, or click-based expiration factor, or a combination thereof. In some embodiments, the videoconferencing platform is further configured to receive a videoconferencing meeting slot list, wherein each videoconferencing meeting slot comprises at least a location within the videoconferencing space; receive a participant list, wherein each participant comprises one or more attributes linked to the corresponding participant; create a meeting slot protocol by allocating a videoconferencing meeting slot to each participant based on the one or more attributes; provide one or more entitlements to each videoconferencing meeting slot; and provide the one or more entitlements to the participant of the corresponding videoconferencing meeting slot. FIG.12shows a secure distributed deep link method1200, according to an embodiment. Method1200may start in step1202by receiving (e.g., by a deep link generator stored in memory of at least one server computer) a deep link generation request. The deep link generator may be part of a deep link manager stored in memory. The memory may further store a videoconferencing platform that can be accessed by users through a network via corresponding client devices. In step1204, the method1200continues by generating a deep link that corresponds to (e.g., is unique for) each videoconferencing meeting slot of a videoconferencing session of the videoconferencing platform, each deep link including (e.g., in encoded form) at least the location of the videoconferencing meeting slot within a videoconferencing space. In step1206, the method1200proceeds by fragmenting the deep link into at least two data fragments, wherein at least one data fragment comprises a majority of the data of the deep link and wherein at least another data fragment comprises a minority of the data thereof. In some embodiments, the majority of the deep link data represents between about 99% and about 99.99% of the deep link data, and the minority of data represents between about 1% and about 0.01% of the deep link data. In step1208, the method proceeds by distributing the at least two data fragments of the deep link to at least two different storage locations, wherein the majority of the data is stored in memory of at least a first storage location and wherein the minority of the data is stored in memory of at least a second storage location. In step1210, the method1200proceeds by generating an assembling link that, when activated, initiates a deep link assembling process; and in step1212, the method1200ends by sending the assembling link to an inviter client device. In some embodiments, the at least one second storage location comprises one or more user servers or client device local memories. In other embodiments, the at least one second storage location is configured within a distributed ledger network. In some embodiments, the at least one first storage location is the memory of the at least one server computer. In some embodiments, the method1200further comprises encoding in the deep link an expiration factor, wherein the expiration factor is one of a session-based, or time-based, or click-based expiration factor, or a combination thereof. FIG.13shows a deep link assembling method1300, according to an embodiment. Steps from method1300may take place after method1200ofFIG.12. 
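The fragmentation, distribution, and assembling-link steps of this method can be sketched as follows; the in-memory maps stand in for the two storage locations, and the URL format and 90/10 split are assumptions chosen for the example:

```typescript
// Stand-ins for the two storage locations of steps 1206-1208.
const serverStorage = new Map<string, string>();  // first location, e.g. server memory
const privateStorage = new Map<string, string>(); // second location, e.g. private user server or ledger

// Fragments the deep link, stores the fragments separately, and returns an
// assembling link (steps 1210-1212) that only carries an identifier, never the deep link itself.
function distributeFragmentsAndIssueAssemblingLink(deepLink: string, linkId: string): string {
  const cut = Math.floor(deepLink.length * 0.9);
  serverStorage.set(linkId, deepLink.slice(0, cut)); // majority fragment
  privateStorage.set(linkId, deepLink.slice(cut));   // minority fragment
  return `https://conference.example.com/assemble?id=${encodeURIComponent(linkId)}`;
}

console.log(
  distributeFragmentsAndIssueAssemblingLink("https://conference.example.com/join?slot=A&token=xyz", "link-1"),
);
```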
The method1300may start in step1302by retrieving, upon validation from the participant, at least one minority data fragment (second data fragment) of the deep link from the second storage location and the majority portion of the deep link (first data fragment) from the at least one first storage location. Such a validation may take place in the form of the inviter clicking on the assembling link, which initiates the deep link assembling process, plus some form of authentication (e.g., biometric scanning, password, PIN, or combinations thereof). The method1300continues in step1304by assembling the minority and majority portions of the deep link. In step1306, the method1300ends by sending the assembled deep link to grant an invitee access to the videoconferencing session. In some embodiments, the method1300further comprises inserting a user graphical representation of the invitee, generated from a live data feed captured by at least one camera, into a 3D coordinate of the 3D virtual environment and combining the user graphical representation therewith. In further embodiments, the videoconferencing session is a public videoconferencing session hosted in a public 3D virtual environment in a public communication instance, or is a private videoconferencing session accessed through a private communication instance. While certain embodiments have been described and shown in the accompanying drawings, it is to be understood that such embodiments are merely illustrative of and not restrictive on the broad invention, and that the invention is not limited to the specific constructions and arrangements shown and described, since various other modifications may occur to those of ordinary skill in the art. The description is thus to be regarded as illustrative instead of limiting. | 69,901
11943267 | DETAILED DESCRIPTION A conferencing software, which may be standalone software or part of a software platform such as a unified communications as a service (UCaaS) platform, may allow conference participants to participate in audio-visual conferences. A conference participant may join a conference using a user device. The user device from which the conference participant joins the conference is said to be connected to the conference. Conference participants may generally be visually represented within individual or group tiles rendered within a user interface of a conference. A "tile" as used herein can mean a portion of a user interface presented (e.g., displayed or caused to be displayed) by the conferencing software that is associated with a conference participant. The tiles associated with different conference participants may have the same or different sizes. A conventional conferencing software may enable a conference participant who is joined to a conference to share content to or in the conference. To illustrate, at the user device via which a conference participant is joined to a conference, the conference participant can share content (e.g., a document, a program, a file, an image, or screen content) accessible to, or available at, the user device. However, in some situations, a conference participant may want to share content that is available at another device (referred to herein as a "secondary device") that is different from the device (referred to herein as a "primary device") that the conference participant used to join the conference. To illustrate, and without limitation, the primary device may be a desktop or a laptop computer and the secondary device may be a handheld device (e.g., a mobile phone). The conference participant may wish to share images available at the handheld device to the conference. In a first scenario, to share content that is at a secondary device, the conference participant transfers or makes available the content to the primary device. This can be time-consuming for at least the conference participant (if done prior to the conference) and for all conference participants (if done during the conference). In a second scenario, the conference participant can additionally join the conference using the secondary device for the purpose of sharing the content. However, the fact that one conference participant is joined to the conference more than once can be confusing to the other conference participants, especially when multiple conference participants join via more than one device. This scenario increases the number of connections that the conferencing software has to maintain and manage. Additionally, the conferencing software would have to transmit audio and/or video streams of the conference participants (i.e., obtained from respective devices of the conference participants) to the secondary devices. As such, the use of secondary devices to connect to conferences may result in increased resource utilization at the conferencing server, thereby degrading the performance of the conferencing software, and may cause some operations to fail due to resource exhaustion. 
The possibility for degraded performance and increased usage of the conferencing software may also include substantially increased investment in processing, memory, and storage resources for the conferencing software and may also result in increased energy expenditures (needed to operate those increased processing, memory, and storage resources, or for the network transmission of audio and/or video streams) and associated emissions that may result from the generation of that energy. Implementations of this disclosure address problems such as these by enabling a conference participant to connect two devices (a primary device and a secondary device) to a conference while being shown to other conference participants as joined to the conference only once. Additionally, the conference participant can share content from the secondary device to the conference. The content is shown to the other conference participants in a companion user interface space associated with the conference participant. To describe some implementations in greater detail, reference is first made to examples of hardware and software structures used to implement a system for joining a conference using a secondary device and/or sharing content to a conference from a secondary device.FIG.1is a block diagram of an example of an electronic computing and communications system100, which can be or include a distributed computing system (e.g., a client-server computing system), a cloud computing system, a clustered computing system, or the like. The system100includes one or more customers, such as customers102A through102B, which may each be a public entity, private entity, or another corporate entity or individual that purchases or otherwise uses software services, such as of a UCaaS platform provider. Each customer can include one or more clients. For example, as shown and without limitation, the customer102A can include clients104A through104B, and the customer102B can include clients104C through104D. A customer can include a customer network or domain. For example, and without limitation, the clients104A through104B can be associated or communicate with a customer network or domain for the customer102A and the clients104C through104D can be associated or communicate with a customer network or domain for the customer102B. A client, such as one of the clients104A through104D, may be or otherwise refer to one or both of a client device or a client application. Where a client is or refers to a client device, the client can comprise a computing system, which can include one or more computing devices, such as a mobile phone, a tablet computer, a laptop computer, a notebook computer, a desktop computer, or another suitable computing device or combination of computing devices. Where a client instead is or refers to a client application, the client can be an instance of software running on a customer device (e.g., a client device or another device). In some implementations, a client can be implemented as a single physical unit or as a combination of physical units. In some implementations, a single physical unit can include multiple clients. The system100can include a number of customers and/or clients or can have a configuration of customers or clients different from that generally illustrated inFIG.1. For example, and without limitation, the system100can include hundreds or thousands of customers, and at least some of the customers can include or be associated with a number of clients. 
The system100includes a datacenter106, which may include one or more servers. The datacenter106can represent a geographic location, which can include a facility, where the one or more servers are located. The system100can include a number of datacenters and servers or can include a configuration of datacenters and servers different from that generally illustrated inFIG.1. For example, and without limitation, the system100can include tens of datacenters, and at least some of the datacenters can include hundreds or another suitable number of servers. In some implementations, the datacenter106can be associated or communicate with one or more datacenter networks or domains, which can include domains other than the customer domains for the customers102A through102B. The datacenter106includes servers used for implementing software services of a UCaaS platform. The datacenter106as generally illustrated includes an application server108, a database server110, and a telephony server112. The servers108through112can each be a computing system, which can include one or more computing devices, such as a desktop computer, a server computer, or another computer capable of operating as a server, or a combination thereof. A suitable number of each of the servers108through112can be implemented at the datacenter106. The UCaaS platform uses a multi-tenant architecture in which installations or instantiations of the servers108through112is shared amongst the customers102A through102B. In some implementations, one or more of the servers108through112can be a non-hardware server implemented on a physical device, such as a hardware server. In some implementations, a combination of two or more of the application server108, the database server110, and the telephony server112can be implemented as a single hardware server or as a single non-hardware server implemented on a single hardware server. In some implementations, the datacenter106can include servers other than or in addition to the servers108through112, for example, a media server, a proxy server, or a web server. The application server108runs web-based software services deliverable to a client, such as one of the clients104A through104D. As described above, the software services may be of a UCaaS platform. For example, the application server108can implement all or a portion of a UCaaS platform, including conferencing software, messaging software, and/or other intra-party or inter-party communications software. The application server108may, for example, be or include a unitary Java Virtual Machine (JVM). In some implementations, the application server108can include an application node, which can be a process executed on the application server108. For example, and without limitation, the application node can be executed in order to deliver software services to a client, such as one of the clients104A through104D, as part of a software application. The application node can be implemented using processing threads, virtual machine instantiations, or other computing features of the application server108. In some such implementations, the application server108can include a suitable number of application nodes, depending upon a system load or other characteristics associated with the application server108. For example, and without limitation, the application server108can include two or more nodes forming a node cluster. In some such implementations, the application nodes implemented on a single application server108can run on different hardware servers. 
The database server110stores, manages, or otherwise provides data for delivering software services of the application server108to a client, such as one of the clients104A through104D. In particular, the database server110may implement one or more databases, tables, or other information sources suitable for use with a software application implemented using the application server108. The database server110may include a data storage unit accessible by software executed on the application server108. A database implemented by the database server110may be a relational database management system (RDBMS), an object database, an XML database, a configuration management database (CMDB), a management information base (MIB), one or more flat files, other suitable non-transient storage mechanisms, or a combination thereof. The system100can include one or more database servers, in which each database server can include one, two, three, or another suitable number of databases configured as or comprising a suitable database type or combination thereof. In some implementations, one or more databases, tables, other suitable information sources, or portions or combinations thereof may be stored, managed, or otherwise provided by one or more of the elements of the system100other than the database server110, for example, the client104or the application server108. The telephony server112enables network-based telephony and web communications from and to clients of a customer, such as the clients104A through104B for the customer102A or the clients104C through104D for the customer102B. Some or all of the clients104A through104D may be voice over internet protocol (VOIP)-enabled devices configured to send and receive calls over a network114. In particular, the telephony server112includes a session initiation protocol (SIP) zone and a web zone. The SIP zone enables a client of a customer, such as the customer102A or102B, to send and receive calls over the network114using SIP requests and responses. The web zone integrates telephony data with the application server108to enable telephony-based traffic access to software services run by the application server108. Given the combined functionality of the SIP zone and the web zone, the telephony server112may be or include a cloud-based private branch exchange (PBX) system. The SIP zone receives telephony traffic from a client of a customer and directs same to a destination device. The SIP zone may include one or more call switches for routing the telephony traffic. For example, to route a VOIP call from a first VOIP-enabled client of a customer to a second VOIP-enabled client of the same customer, the telephony server112may initiate a SIP transaction between a first client and the second client using a PBX for the customer. However, in another example, to route a VOIP call from a VOIP-enabled client of a customer to a client or non-client device (e.g., a desktop phone which is not configured for VOIP communication) which is not VOIP-enabled, the telephony server112may initiate a SIP transaction via a VOIP gateway that transmits the SIP signal to a public switched telephone network (PSTN) system for outbound communication to the non-VOIP-enabled client or non-client phone. Hence, the telephony server112may include a PSTN system and may in some cases access an external PSTN system. The telephony server112includes one or more session border controllers (SBCs) for interfacing the SIP zone with one or more aspects external to the telephony server112. 
In particular, an SBC can act as an intermediary to transmit and receive SIP requests and responses between clients or non-client devices of a given customer with clients or non-client devices external to that customer. When incoming telephony traffic for delivery to a client of a customer, such as one of the clients104A through104D, originating from outside the telephony server112is received, a SBC receives the traffic and forwards it to a call switch for routing to the client. In some implementations, the telephony server112, via the SIP zone, may enable one or more forms of peering to a carrier or customer premise. For example, Internet peering to a customer premise may be enabled to ease the migration of the customer from a legacy provider to a service provider operating the telephony server112. In another example, private peering to a customer premise may be enabled to leverage a private connection terminating at one end at the telephony server112and at the other end at a computing aspect of the customer environment. In yet another example, carrier peering may be enabled to leverage a connection of a peered carrier to the telephony server112. In some such implementations, a SBC or telephony gateway within the customer environment may operate as an intermediary between the SBC of the telephony server112and a PSTN for a peered carrier. When an external SBC is first registered with the telephony server112, a call from a client can be routed through the SBC to a load balancer of the SIP zone, which directs the traffic to a call switch of the telephony server112. Thereafter, the SBC may be configured to communicate directly with the call switch. The web zone receives telephony traffic from a client of a customer, via the SIP zone, and directs same to the application server108via one or more Domain Name System (DNS) resolutions. For example, a first DNS within the web zone may process a request received via the SIP zone and then deliver the processed request to a web service which connects to a second DNS at or otherwise associated with the application server108. Once the second DNS resolves the request, it is delivered to the destination service at the application server108. The web zone may also include a database for authenticating access to a software application for telephony traffic processed within the SIP zone, for example, a softphone. The clients104A through104D communicate with the servers108through112of the datacenter106via the network114. The network114can be or include, for example, the Internet, a local area network (LAN), a wide area network (WAN), a virtual private network (VPN), or another public or private means of electronic computer communication capable of transferring data between a client and one or more servers. In some implementations, a client can connect to the network114via a communal connection point, link, or path, or using a distinct connection point, link, or path. For example, a connection point, link, or path can be wired, wireless, use other communications technologies, or a combination thereof. The network114, the datacenter106, or another element, or combination of elements, of the system100can include network hardware such as routers, switches, other network devices, or combinations thereof. For example, the datacenter106can include a load balancer116for routing traffic from the network114to various servers associated with the datacenter106. 
The load balancer116can route, or direct, computing communications traffic, such as signals or messages, to respective elements of the datacenter106. For example, the load balancer116can operate as a proxy, or reverse proxy, for a service, such as a service provided to one or more remote clients, such as one or more of the clients104A through104D, by the application server108, the telephony server112, and/or another server. Routing functions of the load balancer116can be configured directly or via a DNS. The load balancer116can coordinate requests from remote clients and can simplify client access by masking the internal configuration of the datacenter106from the remote clients. In some implementations, the load balancer116can operate as a firewall, allowing or preventing communications based on configuration settings. Although the load balancer116is depicted inFIG.1as being within the datacenter106, in some implementations, the load balancer116can instead be located outside of the datacenter106, for example, when providing global routing for multiple datacenters. In some implementations, load balancers can be included both within and outside of the datacenter106. In some implementations, the load balancer116can be omitted. FIG.2is a block diagram of an example internal configuration of a computing device200of an electronic computing and communications system. In one configuration, the computing device200may implement one or more of the client104, the application server108, the database server110, or the telephony server112of the system100shown inFIG.1. The computing device200includes components or units, such as a processor202, a memory204, a bus206, a power source208, peripherals210, a user interface212, a network interface214, other suitable components, or a combination thereof. One or more of the memory204, the power source208, the peripherals210, the user interface212, or the network interface214can communicate with the processor202via the bus206. The processor202is a central processing unit, such as a microprocessor, and can include single or multiple processors having single or multiple processing cores. Alternatively, the processor202can include another type of device, or multiple devices, configured for manipulating or processing information. For example, the processor202can include multiple processors interconnected in one or more manners, including hardwired or networked. The operations of the processor202can be distributed across multiple devices or units that can be coupled directly or across a local area or other suitable type of network. The processor202can include a cache, or cache memory, for local storage of operating data or instructions. The memory204includes one or more memory components, which may each be volatile memory or non-volatile memory. For example, the volatile memory can be random access memory (RAM) (e.g., a DRAM module, such as DDR SDRAM). In another example, the non-volatile memory of the memory204can be a disk drive, a solid state drive, flash memory, or phase-change memory. In some implementations, the memory204can be distributed across multiple devices. For example, the memory204can include network-based memory or memory in multiple clients or servers performing the operations of those multiple devices. The memory204can include data for immediate access by the processor202. For example, the memory204can include executable instructions216, application data218, and an operating system220. 
The executable instructions216can include one or more application programs, which can be loaded or copied, in whole or in part, from non-volatile memory to volatile memory to be executed by the processor202. For example, the executable instructions216can include instructions for performing some or all of the techniques of this disclosure. The application data218can include user data, database data (e.g., database catalogs or dictionaries), or the like. In some implementations, the application data218can include functional programs, such as a web browser, a web server, a database server, another program, or a combination thereof. The operating system220can be, for example, Microsoft Windows®, Mac OS X®, or Linux®, an operating system for a mobile device, such as a smartphone or tablet device; or an operating system for a non-mobile device, such as a mainframe computer. The power source208provides power to the computing device200. For example, the power source208can be an interface to an external power distribution system. In another example, the power source208can be a battery, such as where the computing device200is a mobile device or is otherwise configured to operate independently of an external power distribution system. In some implementations, the computing device200may include or otherwise use multiple power sources. In some such implementations, the power source208can be a backup battery. The peripherals210includes one or more sensors, detectors, or other devices configured for monitoring the computing device200or the environment around the computing device200. For example, the peripherals210can include a geolocation component, such as a global positioning system location unit. In another example, the peripherals can include a temperature sensor for measuring temperatures of components of the computing device200, such as the processor202. In some implementations, the computing device200can omit the peripherals210. The user interface212includes one or more input interfaces and/or output interfaces. An input interface may, for example, be a positional input device, such as a mouse, touchpad, touchscreen, or the like; a keyboard; or another suitable human or machine interface device. An output interface may, for example, be a display, such as a liquid crystal display, a cathode-ray tube, a light emitting diode display, or other suitable display. The network interface214provides a connection or link to a network (e.g., the network114shown inFIG.1). The network interface214can be a wired network interface or a wireless network interface. The computing device200can communicate with other devices via the network interface214using one or more network protocols, such as using Ethernet, transmission control protocol (TCP), internet protocol (IP), power line communication, an IEEE 802.X protocol (e.g., Wi-Fi, Bluetooth, or ZigBee), infrared, visible light, general packet radio service (GPRS), global system for mobile communications (GSM), code-division multiple access (CDMA), Z-Wave, another protocol, or a combination thereof. FIG.3is a block diagram of an example of a software platform300implemented by an electronic computing and communications system, for example, the system100shown inFIG.1. The software platform300is a UCaaS platform accessible by clients of a customer of a UCaaS platform provider, for example, the clients104A through104B of the customer102A or the clients104C through104D of the customer102B shown inFIG.1. 
The software platform300may be a multi-tenant platform instantiated using one or more servers at one or more datacenters including, for example, the application server108, the database server110, and the telephony server112of the datacenter106shown inFIG.1. The software platform300includes software services accessible using one or more clients. For example, a customer302as shown includes four clients—a desk phone304, a computer306, a mobile device308, and a shared device310. The desk phone304is a desktop unit configured to at least send and receive calls and includes an input device for receiving a telephone number or extension to dial to and an output device for outputting audio and/or video for a call in progress. The computer306is a desktop, laptop, or tablet computer including an input device for receiving some form of user input and an output device for outputting information in an audio and/or visual format. The mobile device308is a smartphone, wearable device, or other mobile computing aspect including an input device for receiving some form of user input and an output device for outputting information in an audio and/or visual format. The desk phone304, the computer306, and the mobile device308may generally be considered personal devices configured for use by a single user. The shared device310is a desk phone, a computer, a mobile device, or a different device which may instead be configured for use by multiple specified or unspecified users. Each of the clients304through310includes or runs on a computing device configured to access at least a portion of the software platform300. In some implementations, the customer302may include additional clients not shown. For example, the customer302may include multiple clients of one or more client types (e.g., multiple desk phones or multiple computers) and/or one or more clients of a client type not shown inFIG.3(e.g., wearable devices or televisions other than as shared devices). For example, the customer302may have tens or hundreds of desk phones, computers, mobile devices, and/or shared devices. The software services of the software platform300generally relate to communications tools, but are in no way limited in scope. As shown, the software services of the software platform300include telephony software312, conferencing software314, messaging software316, and other software318. Some or all of the software312through318uses customer configurations320specific to the customer302. The customer configurations320may, for example, be data stored within a database or other data store at a database server, such as the database server110shown inFIG.1. The telephony software312enables telephony traffic between ones of the clients304through310and other telephony-enabled devices, which may be other ones of the clients304through310, other VOIP-enabled clients of the customer302, non-VOIP-enabled devices of the customer302, VOIP-enabled clients of another customer, non-VOIP-enabled devices of another customer, or other VOIP-enabled clients or non-VOIP-enabled devices. Calls sent or received using the telephony software312may, for example, be sent or received using the desk phone304, a softphone running on the computer306, a mobile application running on the mobile device308, or using the shared device310that includes telephony features. The telephony software312further enables phones that do not include a client application to connect to other software services of the software platform300. 
For example, the telephony software312may receive and process calls from phones not associated with the customer302to route that telephony traffic to one or more of the conferencing software314, the messaging software316, or the other software318. The conferencing software314enables audio, video, and/or other forms of conferences between multiple participants, such as to facilitate a conference between those participants. In some cases, the participants may all be physically present within a single location, for example, a conference room, in which the conferencing software314may facilitate a conference between only those participants and using one or more clients within the conference room. In some cases, one or more participants may be physically present within a single location and one or more other participants may be remote, in which the conferencing software314may facilitate a conference between all of those participants using one or more clients within the conference room and one or more remote clients. In some cases, the participants may all be remote, in which the conferencing software314may facilitate a conference between the participants using different clients for the participants. The conferencing software314can include functionality for hosting, presenting scheduling, joining, or otherwise participating in a conference. The conferencing software314may further include functionality for recording some or all of a conference and/or documenting a transcript for the conference. The messaging software316enables instant messaging, unified messaging, and other types of messaging communications between multiple devices, such as to facilitate a chat or other virtual conversation between users of those devices. The unified messaging functionality of the messaging software316may, for example, refer to email messaging which includes a voicemail transcription service delivered in email format. The other software318enables other functionality of the software platform300. Examples of the other software318include, but are not limited to, device management software, resource provisioning and deployment software, administrative software, third party integration software, and the like. In one particular example, the other software318can include a secondary device management software that can be used by a conference participant to join a conference using a secondary device and to share content to the conference from the secondary device. The software312through318may be implemented using one or more servers, for example, of a datacenter such as the datacenter106shown inFIG.1. For example, one or more of the software312through318may be implemented using an application server, a database server, and/or a telephony server, such as the servers108through112shown inFIG.1. In another example, one or more of the software312through318may be implemented using servers not shown inFIG.1, for example, a meeting server, a web server, or another server. In yet another example, one or more of the software312through318may be implemented using one or more of the servers108through112and one or more other servers. The software312through318may be implemented by different servers or by the same server. Features of the software services of the software platform300may be integrated with one another to provide a unified experience for users. For example, the messaging software316may include a user interface element configured to initiate a call with another user of the customer302. 
In another example, the telephony software312may include functionality for elevating a telephone call to a conference. In yet another example, the conferencing software314may include functionality for sending and receiving instant messages between participants and/or other users of the customer302. In yet another example, the conferencing software314may include functionality for file sharing between participants and/or other users of the customer302. In some implementations, some or all of the software312through318may be combined into a single software application run on clients of the customer, such as one or more of the clients304through310. FIG.4is a block diagram of an example of a system400for sharing content to a conference using a secondary device. The system400includes a server402that enables users, inter alia, to participate in (e.g., virtually join) audio-visual conferences, also referred to as conferences. As shown, the server402implements or includes some or all of a software platform404and a data store406. The server402can be one or more servers implemented by or included in a datacenter, such as the datacenter106ofFIG.1. While a single server (i.e., the server402) is shown, in some cases, multiple servers may be used to implement the software platform404, for example, by different servers implementing different or redundant functionality or services of the software platform404. The software platform404, via the server402, provides conferencing services (e.g., capabilities or functionality) via a conferencing software408. The software platform404can be or can be part of the software platform300ofFIG.3. The conferencing software408can be variously implemented in connection with the software platform404. In some implementations, the conferencing software408can be or can be integrated in the conferencing software314ofFIG.3. A primary device412and a user device414of respective users are shown as being connected to the server402. The connections to the server402indicate that the primary device412and the user device414are connected to a conference. As can be appreciated, many more user devices may simultaneously connect to a conference. Similarly, the software platform404implemented using the server402can enable many conferences to be concurrently active. The primary device412and the user device414can be devices of users who are configured (e.g., enabled) to or otherwise can join a conference. Each of the primary device412and the user device414may, for example, be one of the clients304through310ofFIG.3. Alternatively, each of the primary device412and the user device414may be devices other than a client. Output images obtained (e.g., generated or composed) with respect to one conference participant (e.g., the conference participant associated with the primary device412) can be transmitted, such as by the server402, to devices of other conference participants (e.g., the user device414). A conferencing software (not shown) of the user device414can cause the output images to be displayed on a display of the user device414. As mentioned above, the output images can be displayed, at the user device414, in a tile associated with the conference participant of the primary device412. An output image of a conference participant can include a foreground segment and a background segment. The foreground segment includes a representation of the conference participant. 
A “representation” of a conference participant, as used herein, broadly refers to or includes a representation indicative of the conference participant, such as a portrait, an image, a likeness, a body definition, a contour, a textual identifier, or any such representation of the conference participant. In an example, the representation can be a likeness of the conference participant as obtained from a foreground segment of a camera image that is obtained from the user device of the conference participant. The background segment can be or include the background as captured in the camera image, a virtual background (e.g., a replacement of the background), or some other background. The conference participant is joined to the conference using the primary device412. A secondary device416of the conference participant includes content that the conference participant shares during the conference. The secondary device416can be one of the clients304through310ofFIG.3. The conference participant can connect the secondary device416to the conference for the purpose of sharing the content. A secondary device management software410of the software platform404can obtain the content from the secondary device416for transmission and display at devices of conference participants, such as at least one of the primary device412or the user device414. In an example, the secondary device management software410can be part of the conferencing software408. The secondary device management software410can obtain the content from a content sharing software420implemented or executing at the secondary device416. In an example, the content sharing software420may transmit (e.g., stream) content to the secondary device management software410. In an example, the secondary device management software410may receive a request for content from the primary device412. In response to the request, the secondary device management software410may transmit the request to the content sharing software420, which in turn may transmit the content to the secondary device management software410. In an example, the primary device412(e.g., the conferencing software418therein) may transmit the request for content directly to the content sharing software420, which in turn transmits the content to the secondary device management software410. In an example, a request for content may be received by the secondary device management software410from the user device414(e.g., a conferencing software therein). In response to the request, the secondary device management software410may transmit the request to the secondary device416, which in response transmits the content to the secondary device management software410. As already mentioned, content received from the secondary device416can be displayed at devices of at least some conference participants in user interfaces (such as graphical user interfaces) associated with the conferencing software408, such as further described with respect toFIG.6. The data store406can store data related to conferences and data related to users who have participated or may participate in one or more conferences. The data store406can be included in or implemented by a database server, such as the database server110ofFIG.1. The data store406can include data related to scheduled or ongoing conferences and data related to users of the software platform404. FIG.5illustrates examples of user interfaces502,510, and520displayed at a user device that can be a secondary device, such as the secondary device416ofFIG.4. 
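The request paths described above can be summarized in a small Python sketch. This is a hedged illustration only; the class and method names (ContentSharingClient, SecondaryDeviceManager, request_content) are invented for the example, and the in-memory objects stand in for the network hops between the devices and the server.

# Illustrative sketch of the content relay described above: a user device asks
# the secondary device management software for content, the request is forwarded
# to the content sharing software on the secondary device, and the returned
# content is relayed back for display.
from typing import Dict, List, Tuple

class ContentSharingClient:
    """Stands in for the content sharing software running on a secondary device."""
    def __init__(self, media: Dict[str, bytes]):
        self.media = media
    def fetch(self, item_id: str) -> bytes:
        return self.media[item_id]

class SecondaryDeviceManager:
    """Stands in for the secondary device management software at the server."""
    def __init__(self):
        self.secondary_clients: Dict[str, ContentSharingClient] = {}
        self.delivered: List[Tuple[str, str]] = []  # (device id, item id), for illustration
    def register_secondary(self, participant_id: str, client: ContentSharingClient):
        self.secondary_clients[participant_id] = client
    def request_content(self, requesting_device: str, participant_id: str, item_id: str) -> bytes:
        # Forward the request to the secondary device, then relay the content.
        content = self.secondary_clients[participant_id].fetch(item_id)
        self.delivered.append((requesting_device, item_id))
        return content

if __name__ == "__main__":
    manager = SecondaryDeviceManager()
    manager.register_secondary("alice", ContentSharingClient({"img-1": b"\x89PNG..."}))
    assert manager.request_content("alice-primary", "alice", "img-1").startswith(b"\x89PNG")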
The user interfaces502,510, and520can be displayed by a content sharing software, such as the content sharing software420ofFIG.4. The user interface502illustrates that a conference participant, using another device, such as the primary device412ofFIG.4, is already joined to a conference hosted by a conferencing software, such as the conferencing software408ofFIG.4. After joining the conference via the primary device, the conference participant attempts to connect the user device to the conference. To connect the user device to the conference, the conference participant may cause a request to join the conference to be transmitted from the user device to the conferencing software. In an example, the request to join the conference can indicate or include that the conferencing software is not to indicate to other conference participants that the conference participant is joined to the conference more than once, as further described with respect to FIG.6. Said another way, the request to join the conference transmitted from the user device can indicate or include that the conferencing software is to connect the user device from which the request is received as a secondary device. In an example, and as further described with respect toFIG.7, if the conference participant is not already joined to the conference using another device (i.e., a primary device), then the conferencing software may reject the request to connect the user device to the conference as a secondary device. In an example, if the conferencing software determines, in response to receiving the request from the user device, where the request includes an identity of the conference participant, that the conference participant is already joined to the conference using another device (i.e., the primary device), the conferencing software can cause the content sharing software to display options504-508at the secondary device. In response to the conference participant selecting (e.g., choosing) the option504(i.e., “CONNECT AS SECONDARY”), a confirmation request is transmitted from the user device to the conferencing software to connect the user device as a secondary device. In response to the conference participant selecting the option506(i.e., “DISCONNECT OTHER DEVICE”), a request is transmitted from the user device to the conferencing software to disconnect the primary device from the conference and to connect the user device as the primary device. In response to the request, the conferencing software disconnects the other device and connects the user device as the primary device. In response to the conference participant selecting the option508(i.e., “CONNECT AGAIN”), a request is transmitted from the user device to the conferencing software to connect the user device as another primary device. As such, the conference participants can be shown as being joined twice to the conference. When one of the options506or508is selected, then the content sharing software at the user device can act as (e.g., perform equivalent functions of) the conferencing software418ofFIG.4. In response to a selection of the option504, the content sharing software causes the user interface510to be displayed at the user device. By selecting an option512(i.e., “SHARE MEDIA QUEUE”), the conference participant can select to share, to the conference, one or more pictures, videos, or other media content (e.g., files) available (e.g., stored) at the user device. 
Accordingly, the content sharing software can transmit the selected media content to a secondary device management software, which can be the secondary device management software410ofFIG.4. By selecting an option514(i.e., "SHARE SCREEN"), the conference participant can select to share the display (i.e., what is displayed at the display) of the user device. Accordingly, the content sharing software can transmit images of the display of the user device to the secondary device management software. By selecting an option516(i.e., "SHARE CAMERA VIEW"), the conference participant can select to share image data in the field-of-view of a camera of the user device to the conference. Accordingly, the content sharing software can transmit images (e.g., a video) captured by a camera of the user device to the secondary device management software. If the user device includes more than one camera, then the conference participant can select one of the cameras for active use and may also switch cameras for streaming to the conference. In response to a selection of the option512, the content sharing software causes the user interface520to be displayed at the user device. Via a control522, the conference participant can select an image or a collection of images to share to the conference. An image list524illustrates that the conference participant selected to share the images included in a folder named "2023 PARKS VISITED" and all of its subfolders. A marker526marks the image (i.e., image530) that is currently shown in a preview window528. A preview control532enables the conference participant to preview an image that the conference participant selects. A previous control534, when invoked by the conference participant, causes the content sharing software to transmit an image that precedes the currently previewed image in the image list524to the secondary device management software. A next control536, when invoked by the conference participant, causes the content sharing software to transmit an image that follows the currently previewed image in the image list524to the secondary device management software. An auto-play control538causes the content sharing software to transmit an image from the image list524to the secondary device management software, pause for a pause duration, and repeat the process for a next image in the image list524. The pause duration can be a predefined pause duration or can be provided by the conference participant. Other controls (not shown) may be available. For example, a stop-sharing control may be available, which enables the conference participant to cause the content sharing software to stop transmitting media content to the secondary device management software. For example, a pause/resume control (e.g., toggle) may be available, which enables the conference participant to pause an auto-play and to resume a paused auto-play. 
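One hypothetical way to realize the auto-play, pause/resume, and stop-sharing controls described above is sketched below in Python; the transmit_to_manager callback, the threading.Event flags, and the default pause duration are assumptions made for the example rather than details taken from the embodiments.

# Minimal sketch of an auto-play loop at the content sharing software: transmit
# each image in the selected image list, pause for a pause duration, and honor
# pause/resume and stop-sharing controls.
import threading
import time
from typing import Callable, List, Optional

def auto_play(image_list: List[str],
              transmit_to_manager: Callable[[str], None],
              pause_duration: float = 5.0,
              paused: Optional[threading.Event] = None,
              stopped: Optional[threading.Event] = None) -> None:
    paused = paused or threading.Event()
    stopped = stopped or threading.Event()
    for image_path in image_list:
        if stopped.is_set():          # stop-sharing control was invoked
            break
        while paused.is_set():        # pause control was invoked; wait for resume
            if stopped.is_set():
                return
            time.sleep(0.1)
        transmit_to_manager(image_path)   # send to the secondary device management software
        time.sleep(pause_duration)

if __name__ == "__main__":
    sent = []
    auto_play(["parks/01.jpg", "parks/02.jpg"], sent.append, pause_duration=0.01)
    assert sent == ["parks/01.jpg", "parks/02.jpg"]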
For brevity and ease of description, statements such as “the <participant X> <verb>” should be understood to mean that the “participant represented by the participant X tile <verb>.” The user interface600can be displayed or caused to be displayed at a user device of a conference by a conferencing software. The conferencing software can be the conferencing software408ofFIG.4, a conferencing software implemented at a user device, such as the conferencing software418ofFIG.4, or a combination thereof. WhileFIG.6is mainly described with respect to sharing content from a secondary device, the disclosure herein is not so limited. For example, and as further described below, the content (e.g., images) to be shared by one of the conference participants can be available at and shared from the primary device of the conference participant and, as such, the conference participant may not use a secondary device. The user interface600illustrates that each of the participants602A and602C has turned on their respective cameras and, as such, respective output images of these conference participants are displayed in the corresponding tiles on respective devices of other participants. The participant602B has not turned on their camera. As such, the corresponding tile of the participant602B shows an identifier (e.g., “PARTICIPANT 2”) instead of an image of the participant602B. As such, the representation of the participant602B is the textual string “PARTICIPANT 2” displayed in a black-filled oval. The participants602A,602B, and602C are illustrated as being joined to the conference using respective primary devices. The user interface600also illustrates that each of the participants602A,602B, and602C has connected a respective secondary device to the conference and is sharing content from the respective secondary device to the conference. As such, respective companion tiles604A,604B, and604C are shown in the user interface600. Even though each of the participants602A,602B, and602C is connected via two respective devices (i.e., a primary device and a secondary device) to the conference, each of the participants602A,602B, and602C is shown as being joined only once. For example, the user interface600does not include two separate tiles showing representations for each of the participants602A,602B, and602C. The user interface600illustrates that the participant602A is sharing images via their secondary device. That is, the participant602A may have selected the option512in the user interface510ofFIG.5. The user interface600illustrates that the participant602B is streaming a camera view via their secondary device. That is, the participant602B may have selected the option516in the user interface510ofFIG.5. The user interface600illustrates that the participant602C is screensharing a view of the screen of their secondary device. That is, the participant602C may have selected the option514in the user interface510ofFIG.5. In an example, a companion tile may include an expand/collapse control, such as a control606, that a conference participant can use to expand (if not expanded) and to collapse (if expanded) a companion tile. Expanding a companion tile can mean growing the size of the companion tile so that it occupies a substantial portion of the user interface600. When a companion tile is expanded, at least the tiles showing the participants602A,602B, and602C tiles may be rearranged (such as reduced in size and moved to an edge of the user interface600). 
In an example, when a companion tile is expanded, any other companion tiles may become hidden. When a companion tile is collapsed, the arrangement of tiles in the user interface600is returned to the pre-expansion arrangement. In an example, a companion tile may include a hide/show control, such as a control608. If a companion tile is currently shown, the control608, when invoked with respect to a companion tile, causes the companion tile to become hidden (such as by animatedly sliding behind the corresponding participant tile). A state of the control608may be changed to indicate that the companion tile can be unhidden by invoking the control608. For example, as illustrated in a conference tile view610, a control608′ illustrates that the companion tile can be unhidden by invoking the control608′. In an example, a conference participant may enable other conference participants to control their companion tile. Controlling a companion tile includes controlling (e.g., modifying) the content displayed in the companion tile. For example, by invoking a toggle614, a conference participant enables other conference participants to control their companion tile. The user interface600illustrates that the participant602A has enabled (indicated by the state of toggle614being turned on) other conference participants to control the companion tile604A. Thus, if the user interface600is displayed on a device of the participant602B, then controls612are enabled for the participant602B with respect to the companion tile604A. The controls612can be used to show a previous image, to pause an auto-play mode, to resume an auto-play mode, and to show a next image in the image list being shared by the participant602A. When a control, such as a next image or a previous image command, is invoked, a request for the image is transmitted to a secondary device management software410ofFIG.4, which in turn transmits a request for the image to the secondary device, as further described with respect toFIG.8. A control616enables a conference participant to stop sharing content from their secondary device to the conference. In response to the control616being invoked, a request is transmitted to the secondary device management software to stop transmitting content received from the secondary device to devices of the conference participants. In an example, the secondary device management software transmits a command to the content sharing software to stop transmitting content to the secondary device management software. A control618enables a conference participant to indicate that other conference participants are allowed to download (e.g., save to their respective devices) images being displayed in their companion tile. In an example, to connect a secondary device to the conference, a conference participant may first obtain a key that the conference participant includes in the request to connect a secondary device to the conference. As such, a control620(e.g., “GET KEY” control) enables a conference participant to obtain the key by causing a get-key request to be transmitted to the secondary device management software410. In response to the get-key request, the secondary device management software transmits the key (e.g., a string of characters) for display in the user interface600. The conference participant can include the key in the request to connect the secondary device. 
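The get-key exchange described above could look roughly like the following Python sketch; the issue_key and validate_key helpers, the key length, the single-use behavior, and the expiry window are illustrative assumptions rather than details of the embodiments.

# Illustrative sketch of the "GET KEY" flow: the secondary device management
# software issues a short-lived key tied to the conference participant, the
# participant enters the key into the content sharing software, and the key is
# validated when the connect-as-secondary request arrives.
import secrets
import time
from typing import Optional

_KEYS = {}  # key -> (participant id, expiry timestamp); stands in for server memory

def issue_key(participant_id: str, ttl_seconds: int = 300) -> str:
    key = secrets.token_hex(4).upper()          # e.g. "9F3A61BC", shown in the user interface
    _KEYS[key] = (participant_id, time.time() + ttl_seconds)
    return key

def validate_key(key: str) -> Optional[str]:
    # Returns the participant identified by the key, or None if unknown or expired.
    entry = _KEYS.pop(key, None)
    if entry is None or entry[1] < time.time():
        return None
    return entry[0]

if __name__ == "__main__":
    k = issue_key("participant-2")
    assert validate_key(k) == "participant-2"
    assert validate_key(k) is None   # a key is single-use in this sketch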
In another example, the secondary device management software410may transmit the key or a connection string (i.e., a unique string that may include a key or other data uniquely identifying the conference participant) directly to the secondary device. For example, the conference participant may provide a telephone number of the secondary device or an email address accessible from the secondary device. The connection string, when invoked (e.g., clicked) at the secondary device, causes the content sharing software therein to transmit a request to connect the secondary device to the conference. The request to connect can include the unique data usable by the secondary device management software to identify the conference participant. The user interface600may include a chat window622. The chat window can be used by conference participants to send messages or other content with other participants. In an example, instead of sharing content (such as images) in a companion tile, a conference participant may configure content sharing to the chat window. In an example, a user interface, such as one of the user interfaces described with respect toFIG.5, may include an option that enables the conference participant to select whether to share content to a companion tile or to a shared tile (such as a chat window). FIG.7is an example of a technique700for connecting a user device to a conference as a secondary device. The technique700can be executed using computing devices, such as the systems, hardware, and software described with respect toFIGS.1-6. The technique700can be performed, for example, by executing a machine-readable program or other computer-executable instructions, such as routines, instructions, programs, or other code. The steps, or operations, of the technique700or another technique, method, process, or algorithm described in connection with the implementations disclosed herein can be implemented directly in hardware, firmware, software executed by hardware, circuitry, or a combination thereof. The technique700can be performed by a software platform, such as the software platform404ofFIG.4, and more specifically by a secondary device management software therein, such as the secondary device management software410ofFIG.4. At702, a request is received from a device to connect to a conference as a secondary device. The request may be received, for example, from the secondary device416ofFIG.4. More specifically, the request may be transmitted by the content sharing software420, such as in response to the user of the device causing the request to be transmitted. At704, the secondary device management software identifies a conference participant associated with the request. In an example, the request may include an identity (e.g., an identifier, such as a username) of the conference participant. For example, prior to submitting the request, the conference participant may be required to authenticate themselves via the content sharing software. In an example, the request can include data that can be used to identify the conference participant. For example, an invitation to join a conference may include a participant-specific key (e.g., a string of characters) that the conference participant uses to join the conference. Other ways of identifying the conference participant are possible, such as described with respect toFIG.6. At706, the secondary device management software determines whether the conference participant is already joined to the conference via another device. 
That is, the secondary device management software determines whether a primary device of the conference participant is already connected to the conference. If the conference participant is not already joined to the conference via a primary device, then the request to connect the secondary device is rejected at708. On the other hand, if the conference participant is already joined to the conference via a primary device, then, at710, the secondary device management software connects the device to the conference as a secondary device. FIG.8is an example of an interaction diagram800for sharing images to a conference from a secondary device. The interaction diagram800can be executed using computing devices, such as the systems, hardware, and software described with respect toFIGS.1-6. The interaction diagram800illustrates that a conference participant is joined to a conference using a primary device802and a secondary device804. The conference is hosted by a server806, which includes a conferencing software platform, such as the software platform404ofFIG.4. At least one other conference participant is also joined to the conference via a user device808. The conference participant shares images available at the secondary device804to the conference. Sharing images to the conference can mean that images available at the secondary device804are transmitted to the server (e.g., to the conferencing software therein) which in turn transmits the images for display at devices of the conference participants, such as at the primary device802and the user device808. At810, a request to connect the primary device802to the conference is transmitted from the primary device802to the server. Said another way, a user of the primary device802causes the request to be transmitted so that the user can be joined to the conference. In response to the request, the server806connects the primary device802to the conference. At812, a request to connect the secondary device804to the conference is transmitted from the secondary device804to the server. In response to the request, the secondary device is connected to the conference. At814, the conference participant enables a companion tile. As such, a request can be transmitted from the primary device to the server806indicating that a companion tile is to be shown, in association with the conference participant, in user interfaces associated with the conferencing software, such as the user interface600ofFIG.6. At816, the conference participant selects, at the secondary device804, an image for sharing via the content sharing software420. The image may be an image of an image queue (e.g., a list of images) that the conference participant intends to share in the conference. At818, the image is transmitted to the server806. At820, the server806in turn transmits the image for display at respective devices of at least some of the conference participants, such as at the primary device802and the user device808. At822, a request is received at the server806from the user device808for a next image of the image queue. For example, the conference participant associated with the user device808may use a control of the controls612to request the next image. At824, the server806(i.e., the secondary device management software therein) transmits the request to the secondary device804(i.e., to the content sharing software therein). At826, the secondary device804transmits the requested image to the server806. At828, the requested image is transmitted for display at the user device808.
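The decision made by the technique700can be sketched in a few lines of Python. The sketch below assumes a simple in-memory record of which participants are joined via a primary device; the Conference, identify_participant, and handle_secondary_connect names are illustrative assumptions rather than elements of the disclosure. Returning False corresponds to rejecting the request at708, and recording the device corresponds to connecting it at710.

```python
from dataclasses import dataclass, field

@dataclass
class Conference:
    conference_id: str
    # participant_id -> primary device_id, for participants joined via a primary device
    primary_devices: dict = field(default_factory=dict)
    # participant_id -> secondary device_id
    secondary_devices: dict = field(default_factory=dict)

def identify_participant(request):
    """704: identify the conference participant associated with the request.

    The request is assumed to carry an authenticated identity or a
    participant-specific key that maps to an identity.
    """
    return request["participant_id"]

def handle_secondary_connect(conference: Conference, request: dict) -> bool:
    """702-710: connect a device as a secondary device only if the participant
    is already joined to the conference via a primary device."""
    participant_id = identify_participant(request)                       # 704
    if participant_id not in conference.primary_devices:                 # 706
        return False                                                     # 708: reject
    conference.secondary_devices[participant_id] = request["device_id"]  # 710: connect
    return True

if __name__ == "__main__":
    conf = Conference("conf-1", primary_devices={"alice": "laptop-1"})
    print(handle_secondary_connect(conf, {"participant_id": "alice", "device_id": "phone-9"}))  # True
    print(handle_secondary_connect(conf, {"participant_id": "bob", "device_id": "phone-2"}))    # False
```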
As such, it is possible that the primary devices of different conference participants can request and display different images. To illustrate, at a first user device, an image may be obtained from the secondary device804and displayed within the companion tile and a different image may be simultaneously displayed in the companion tile at a second user device. In another example, and as shown at830, the server806can also transmit the image requested via the user device808to primary devices of the other conference participants, such as the primary device802. As such, regardless of the user device from which a request for an image is received, the contents of the companion tile are synchronized to show the same image. Other variations of the interaction diagram800, consistent with the disclosure herein, are possible. In a variant, the request to enable the companion tile, at814, may be received from the secondary device804. In a variant, the request to connect the secondary device804to the conference can be initiated from the primary device802. To illustrate, the conference participant may invoke a command at the primary device802to connect a secondary device. The secondary device can be identified in any number of ways, such as by explicit selection of the secondary device, via proximity detection, or in some other way. In the case of explicit selection, in response to the command, a user interface associated with the conferencing software and available at the primary device802may display devices (other than the primary device802) associated with the conference participant. Devices associated with the conference participant can be those devices via which the conference participant is logged in, at the time that the command is invoked, to the conferencing software using the same credentials as those that the conference participant used at the primary device802. In response to receiving a selection of a secondary device from conference participant, a request to connect the secondary device804to the conference may be initiated. In an example, the request may be transmitted from the primary device802to the secondary device804, which in turn transmits the request to connect at812. In another example, the request may be transmitted from the primary device802to the server806, which in turn connects the secondary device804to the conference. In an example, if the devices associated with the conference participant include only one device, then the user interface may not be displayed and the only one device can be automatically selected. In the proximity detection case, in response to the command, the primary device802can identify a proximal device associated with the conference participant. In an example, the primary device802may broadcast data, such as by transmitting beacon packets, that are specifically formatted to result in a response from a device associated with the conference participant. The beacon packets can be based on the Bluetooth Low Energy (BLE) beacon standard. However other communications protocols usable for communications between co-located devices can also be used, such as Infrared, Near-Field Communication (NFC), Li-Fi, low-power frequency modulation (FM), amplitude modulation (AM), or Single Side Band (SSB) radio signals, or the like. A device that receives the beacon packets can transmit a response. Based on the contents of the response, a responding device can be identified as a secondary device. 
In an example, the beacon packets may include data identifying the conference, which the secondary device can use to transmit the request to connect to the conference at812. FIG.9is an example of an interaction diagram900for sharing images to a conference. The interaction diagram900can be executed using computing devices, such as the systems, hardware, and software described with respect toFIGS.1-6. The interaction diagram900illustrates that a conference participant is joined to a conference using a conference participant device902. The conference is hosted by a server904, which includes a conferencing software platform, such as the software platform404ofFIG.4. At least one other conference participant is also joined to the conference via a user device906. The conference participant shares images available at the conference participant device902to the conference. Sharing images to the conference can mean that images available at the conference participant device902are transmitted to the server904(e.g., to the conferencing software therein) which in turn transmits the images for display at devices of at least some of the conference participants, such as at the user device906. A difference between the interaction diagram800and the interaction diagram900is that the images (such as shown in the image list524ofFIG.4) to be shared are available at the primary device itself. At908, a request to connect the conference participant device902to a conference is transmitted from the conference participant device902to the server904. At910, the server904connects the device to the conference. That is, the conferencing software of the server904joins the user of the conference participant device902to the conference. At912, a location of a set of images is received. The location of the set of images can be such that the interaction diagram900can be used, inter alia, to obtain a listing of the individual images of the set of images, an ordering of the images, and/or a next or previous image based on the ordering. In an example, the location of the set of images can be received before the conference participant device902is connected to the conference. The location of the set of images can be received in any number of ways. Receiving the location of the set of images can be similar to that described with respect to the user interface520ofFIG.5. Via a user interface (not shown), a conferencing software, such as the conferencing software418ofFIG.4, or a content sharing software associated therewith or included therein, may enable the conference participant to navigate to, select, and provide a pointer to the location of the set of images. The location of the set of images may be a folder location available at or accessible via the conference participant device902. The location of the set of images may be a network-based (e.g., cloud-based) location (e.g., repository). The location can be a hyperlink that provides access to the set of images. Other locations of sets of images are possible. At914, a first image of the images is transmitted from the conference participant device902to the server904. In an example, the conference participant can select the first image for transmission to the server904. In an example, a listing of the images may be displayed to the conference participant in user interface similar to that of the user interface502ofFIG.5and the conference participant can select one of the images for sharing (e.g., transmission). 
In an example, the conference participant can use a control, such as one of the controls612ofFIG.6to select the first image. At916, the first image is received at the server904, which in turn transmits the first image to devices of other conference participants, such as the user device906. At918, the first image is displayed at the user device906in a companion tile associated with the conference participant device902. The first image can be displayed in the companion tile as described with respect toFIG.6. At920, a request for a second image of the images is transmitted from the user device906. In an example, and as described with respect toFIG.6, the conference participant of the user device906may use a control of the companion tile associated with the conference participant device902to transmit the request for the second image. At922, the request is received at the server904, which in turn transmits the request for the second image to the conference participant device902. At924, the second image is transmitted from the conference participant device902to the server904in response to the request. At926, the server904receives the second image and in turn transmits it to the user device906. At928, the second image is displayed at the user device906in the companion tile. To further describe some implementations in greater detail, reference is next made to examples of techniques which may be performed for joining a conference using a secondary device and sharing content to the conference from the secondary device.FIG.10is a flowchart of an example of a technique1000for connecting two devices of a conference participant to a conference. The technique1000can be executed using computing devices, such as the systems, hardware, and software described with respect toFIGS.1-9. The technique1000can be performed, for example, by executing a machine-readable program or other computer-executable instructions, such as routines, instructions, programs, or other code. The steps, or operations, of the technique1000or another technique, method, process, or algorithm described in connection with the implementations disclosed herein can be implemented directly in hardware, firmware, software executed by hardware, circuitry, or a combination thereof. The technique1000can be performed, at least in part, by a secondary device management software, such as the secondary device management software410ofFIG.4. At1002, a request is received to connect a first device to a conference. The first device is associated with a conference participant who is joined to the conference via a second device. The first device can be the secondary device416and the second device can be the primary device412ofFIG.4. The request can be received at the software platform404ofFIG.4. As described above, the request can be received from the first device or from the second device. At1004, the first device is connected to the conference. The first device is connected to the conference in such a way, and as described above, such as with respect toFIG.6, that the conference participant is not displayed (e.g., listed) as being joined to the conference more than once in a user interface that lists conference participants. At1006, content is received from the first device based on a command. In an example, the command can be one of the options504,506, or508ofFIG.5. In an example, the command can be received from the first device. In an example, the command can be received from the second device. 
For example, the conference participant can use their primary device (i.e., the second device) to transmit a request to the secondary device management software410ofFIG.4to obtain a next image from the first device (i.e., the secondary device). In an example, the command can be received from a third device associated with another conference participant. For example, the command can be received from the user device414ofFIG.4. For example, the command can be received in response to one of the controls612being invoked at the third device. For example, the command can be received as described with respect to822ofFIG.8. In an example, the content can be an image stored at the first device. In an example, the content can be media data streamed from a camera of the first device. In an example, the content can be data displayed at the first device. At1008, the content is transmitted for display in the user interface associated with the conference participant. FIG.11is a flowchart of an example of a technique1100for displaying content received from a secondary device in a companion tile. The technique1100can be executed using computing devices, such as the systems, hardware, and software described with respect toFIGS.1-9. The technique1100can be performed, for example, by executing a machine-readable program or other computer-executable instructions, such as routines, instructions, programs, or other code. The steps, or operations, of the technique1100or another technique, method, process, or algorithm described in connection with the implementations disclosed herein can be implemented directly in hardware, firmware, software executed by hardware, circuitry, or a combination thereof. At1102, a request is received to connect a primary device of a conference participant to a conference. The request can be received from the primary device. At1104, a request is received to connect a secondary device of the conference participant to the conference. In an example, the request can be received from the primary device. In another example, the request can be received from the secondary device. At1106, content is received from the secondary device for display in a companion tile associated with the primary device. As described with respect toFIG.6, each conference participant is represented by or associated with a respective tile in a user interface associated with the conference. The tile is said to be associated with the primary device of the conference participant. That is, the tile is associated with the first device of the conference participant connected to the conference. Said another way, the tile is associated with the first device that the conference participant used to join the conference. When a conference participant is sharing content from the secondary device, a companion tile is associated with the primary device. Said another way, the companion tile does not indicate that the conference participant is joined twice to the conference. As described above, the content can be received by a secondary device management software, such as the secondary device management software410ofFIG.4. At1108, the content is displayed in the companion tile. Displaying the content in the companion tile can include transmitting the content to a user device of a conference participant, where a conferencing software therein causes the content to be displayed in the companion tile in a user interface associated with the conferencing software.
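To make the server-side relay of the technique1100more concrete, the following sketch models a conferencing server that fans content received from a secondary device out to connected devices for display in the companion tile, and that forwards next/previous-image commands to the secondary device only when the sharing participant has enabled remote control of the tile. All names (CompanionShare, ConferenceServer, allow_remote_control, and the callback dictionaries) are hypothetical, and the sketch omits authentication, transport, and error handling.

```python
from dataclasses import dataclass, field
from typing import Callable, Dict

@dataclass
class CompanionShare:
    participant_id: str
    primary_device: str
    secondary_device: str
    allow_remote_control: bool = False  # analogous to a per-participant remote-control toggle

@dataclass
class ConferenceServer:
    shares: Dict[str, CompanionShare] = field(default_factory=dict)
    # device_id -> callback used to "display" content in that device's companion tile
    displays: Dict[str, Callable[[str, bytes], None]] = field(default_factory=dict)
    # secondary device_id -> callback used to request content from that device
    fetchers: Dict[str, Callable[[str], bytes]] = field(default_factory=dict)

    def register_share(self, share: CompanionShare):
        self.shares[share.participant_id] = share

    def receive_content(self, participant_id: str, content: bytes):
        """1106/1108: content received from the secondary device is displayed in the
        companion tile associated with the sharing participant's primary device."""
        for device_id, show in self.displays.items():
            show(participant_id, content)

    def control_command(self, requesting_device: str, participant_id: str, command: str):
        """Forward a control command (e.g., 'next' or 'previous') to the secondary
        device, subject to the sharing participant's remote-control setting."""
        share = self.shares[participant_id]
        is_owner = requesting_device in (share.primary_device, share.secondary_device)
        if not (is_owner or share.allow_remote_control):
            raise PermissionError("remote control of this companion tile is not enabled")
        content = self.fetchers[share.secondary_device](command)
        self.displays[requesting_device](participant_id, content)

if __name__ == "__main__":
    server = ConferenceServer()
    server.register_share(CompanionShare("alice", "laptop-1", "phone-1", allow_remote_control=True))
    server.displays = {d: (lambda pid, c, d=d: print(f"{d}: tile[{pid}] <- {c!r}")) for d in ("laptop-1", "tablet-2")}
    server.fetchers = {"phone-1": lambda cmd: f"image-after-{cmd}".encode()}
    server.receive_content("alice", b"image-001")
    server.control_command("tablet-2", "alice", "next")
```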
In an example, a request for the content can be received from a user device that is connected to the conference and that is different from the primary device and the secondary device. The user device can be the user device414ofFIG.4. In an example, a command can be received from the primary device to enable another conference participant other than the conference participant to control the companion tile, such as described with respect to the toggle614ofFIG.6. In an example, the content can be a video stream obtained using a camera of the secondary device. In an example, the content can be screen content of the secondary device. In an example, the companion tile can be a chat window associated with the conference. In an example, where the content includes a first image and a second image, a request for the first image can be received from a first device of a first conference participant and a request for the second image can be received from a second device of a second conference participant. Concurrently, the first image can be displayed in the companion tile at the first device and the second image can be displayed in the companion tile at the second device. FIG.12is a flowchart of an example of a technique1200for displaying images in a companion tile. The images can be available to or accessible via a device (e.g., a first device) of a conference participant who selects to share the images to a conference. The technique1200can be executed using computing devices, such as the systems, hardware, and software described with respect toFIGS.1-9. The technique1200can be performed, for example, by executing a machine-readable program or other computer-executable instructions, such as routines, instructions, programs, or other code. The steps, or operations, of the technique1200or another technique, method, process, or algorithm described in connection with the implementations disclosed herein can be implemented directly in hardware, firmware, software executed by hardware, circuitry, or a combination thereof. At1202, an indication of images is received at the first device connected to the conference. The indication of the images can be a location of the images, as described above with respect toFIG.9. At1204, a first image of the images is transferred to a conferencing server based on a request to share the first image in the conference. For example, the conference participant can use one of the controls612or a control similar to one of the controls described with respect toFIG.5to select the first image or to cause the first image to be shared to the conference. At1206, the first image is displayed in a companion tile associated with the first device. The first image can be transmitted to the conferencing server, which in turn may transmit the first image to one or more other devices connected to the conference. As described with respect toFIG.6, the first image can be displayed at the one or more other devices in companion tiles associated with the first device. At1208, a request to share a second image of the images in the conference is received. In an example, the request to share the second image can be received from the first device. In another example, the request to share the second image can be received from another device, as described above. At1210, the first image is replaced in the companion tile with the second image. As described with respect to the toggle614ofFIG.6, a conference participant can enable other conference participants to control their companion tile.
Thus, the request to share the second image can be received from another device if so enabled by the conference participant. While not specifically described above, thumbnails of the images can be generated at the device via which the images are shared. As such, in an example, thumbnails can be generated at the first device of the conference participant. For example, the thumbnails can be generated in response to receiving the indication of the images. In another example, the thumbnails can be generated on demand (e.g., in response to a request for the thumbnails). The thumbnails are small representations of the images and can be used to provide previews of the images. A thumbnail typically has a reduced size as compared to the corresponding original image and, as such, can use less network bandwidth to transfer and less screen real estate to display. In an example, thumbnails of at least some of the images can be transmitted to the conferencing server. The thumbnails can be displayed in the companion tile at the first device and/or at one or more other devices connected to the conference. A conference participant can select a thumbnail of an image to cause the corresponding image to be shared or transmitted for display. In an example, a request can be received from a second device to download a third image to the second device. As described above with respect to the control618ofFIG.6, other conference participants can be allowed to download (e.g., save to their respective devices) images being displayed in their companion tile. As such, in an example, a configuration to enable other conference participants to download at least some of the images can be received, such as from the conference participant. In an example, other conference participants may be allowed (e.g., enabled or configured) to download images only for the duration of the conference. That is, other conference participants may not be permitted to retain downloaded images after the conference terminates or after they leave the conference. As such, in association with an image of the images transferred to a second device in response to a download command, a configuration that causes the image to be deleted from the second device when the second device disconnects from the conference may be transmitted to the second device. Said another way, when an image is transferred to another device, a configuration may be transferred with the image indicating to the other device (i.e., the conferencing software therein) to delete the image when the other device disconnects from the conference. In another example, an explicit command to delete a downloaded image may be transmitted to the other device. The command to delete may be transmitted in response to the conference participant invoking a control in a user interface that causes the command to delete to be transmitted. In an example, the conference participant can select an image that was downloaded and cause commands to delete to be transmitted to devices to which the image was downloaded. In an example, the conference participant can select one or more conference participants and cause commands to delete to be transmitted to the devices of the one or more conference participants. In response to receiving the command to delete at a device, the conferencing software therein deletes all images that were downloaded to the device during the conference.
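The download and deletion behavior described above might be modeled as in the following sketch, in which an image transferred in response to a download command carries a flag instructing the receiving device to delete the image when it disconnects from the conference, and an explicit delete command removes downloaded images immediately. The DownloadedImage and ConferenceClient names and the delete_on_disconnect field are assumptions made for illustration.

```python
from dataclasses import dataclass, field
from typing import Dict, List, Optional

@dataclass
class DownloadedImage:
    image_id: str
    data: bytes
    delete_on_disconnect: bool = False  # configuration transferred with the image

@dataclass
class ConferenceClient:
    """Minimal model of the conferencing software running on a receiving device."""
    device_id: str
    downloads: Dict[str, DownloadedImage] = field(default_factory=dict)

    def receive_download(self, image: DownloadedImage):
        self.downloads[image.image_id] = image

    def handle_delete_command(self, image_ids: Optional[List[str]] = None):
        """Delete the named downloaded images, or all downloaded images if none are named."""
        targets = image_ids if image_ids is not None else list(self.downloads)
        for image_id in targets:
            self.downloads.pop(image_id, None)

    def disconnect(self):
        """On disconnect, honor the delete-on-disconnect configuration."""
        self.downloads = {
            image_id: img
            for image_id, img in self.downloads.items()
            if not img.delete_on_disconnect
        }

if __name__ == "__main__":
    client = ConferenceClient("tablet-2")
    client.receive_download(DownloadedImage("img-1", b"...", delete_on_disconnect=True))
    client.receive_download(DownloadedImage("img-2", b"...", delete_on_disconnect=False))
    client.disconnect()
    print(sorted(client.downloads))  # ['img-2']
```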
A system of one or more computers can be configured to perform particular operations or actions by virtue of having software, firmware, hardware, or a combination of them installed on the system that in operation causes or cause the system to perform the actions. One or more computer programs can be configured to perform particular operations or actions by virtue of including instructions that, when executed by data processing apparatus, cause the apparatus to perform the actions. One general aspect includes a method. The method includes receiving a request to connect a primary device of a conference participant to a conference; receiving a request to connect a secondary device of the conference participant to the conference, receiving content from the secondary device for display in a companion tile associated with the primary device, and displaying the content in the companion tile. Other embodiments of this aspect include corresponding computer systems, apparatus, and computer programs recorded on one or more computer storage devices, each configured to perform the actions of the methods. Implementations may include one or more of the following features. The method may include receiving a request for the content from a user device that is connected to the conference and that is different from the primary device and the secondary device. The content may include a first image and a second image and receiving the content from the secondary device for display in the companion tile associated with the primary device may include receiving a request for the first image from a first device of a first conference participant; and receiving a request for the second image from a second device of a second conference participant; and where displaying the content in the companion tile may include concurrently displaying the first image in the companion tile at the first device and the second image in the companion tile at the second device. The content may include a video stream obtained using a camera of the secondary device. The content may include screen content of the secondary device. The method may include receiving, from the primary device, a command to enable another conference participant other than the conference participant to control content displayed in the companion tile. The companion tile can be a chat window associated with the conference. Implementations of the described techniques may include hardware, a method or process, or computer software on a computer-accessible medium. One general aspect includes a device. The device also includes a memory and a processor. The processor can be configured to execute instructions stored in the memory to receive a request to connect a primary device of a conference participant to a conference; receive a request to connect a secondary device of the conference participant to the conference; receive content from the secondary device for display in a companion tile associated with the primary device; and display the content in the companion tile. Other embodiments of this aspect include corresponding computer systems, apparatus, and computer programs recorded on one or more computer storage devices, each configured to perform the actions of the methods. Implementations may include one or more of the following features. 
The device where the processor can be further configured to execute instructions stored in the memory to receive a request for the content from a user device that is connected to the conference and that is different from the primary device and the secondary device. The content may include a first image and a second image; where the instructions to receive the content from the secondary device for display in the companion tile associated with the primary device may include instructions to receive a request for the first image from a first device of a first conference participant; and receive a request for the second image from a second device of a second conference participant; and where the instructions to display the content in the companion tile may include instructions to: concurrently display the first image in the companion tile at the first device and the second image in the companion tile at the second device. The content may include a video stream obtained using a camera of the secondary device. The content may include screen content of the secondary device. The processor can be further configured to execute instructions stored in the memory to receive, from the primary device, a command to enable another conference participant other than the conference participant to control the companion tile. The companion tile can be a chat window associated with the conference. Implementations of the described techniques may include hardware, a method or process, or computer software on a computer-accessible medium. One general aspect includes a non-transitory computer readable medium storing instructions operable to cause one or more processors to perform operations. The operations include receiving a request to connect a primary device of a conference participant to a conference; receiving a request to connect a secondary device of the conference participant to the conference, receiving content from the secondary device for display in a companion tile associated with the primary device, and displaying the content in the companion tile. Other embodiments of this aspect include corresponding computer systems, apparatus, and computer programs recorded on one or more computer storage devices, each configured to perform the actions of the methods. Implementations may include one or more of the following features. The non-transitory computer readable medium where the operations may further include receiving a request for the content from a user device that is connected to the conference and that is different from the primary device and the secondary device. The content may include a first image and a second image; where receiving the content from the secondary device for display in the companion tile associated with the primary device may include: receiving a request for the first image from a first device of a first conference participant; and receiving a request for the second image from a second device of a second conference participant; and where displaying the content in the companion tile may include: concurrently displaying the first image in the companion tile at the first device and the second image in the companion tile at the second device. The content may include a video stream obtained using a camera of the secondary device. The content may include screen content of the secondary device. The operations may further include receiving, from the primary device, a command to enable another conference participant other than the conference participant to control the companion tile. 
Implementations of the described techniques may include hardware, a method or process, or computer software on a computer-accessible medium. For simplicity of explanation, the techniques700,1000,1100, and1200ofFIGS.7,10,11, and12, respectively are depicted and described herein as respective series of steps or operations. However, the steps or operations in accordance with this disclosure can occur in various orders and/or concurrently. Additionally, other steps or operations not presented and described herein may be used. Furthermore, not all illustrated steps or operations may be required to implement a technique in accordance with the disclosed subject matter. The implementations of this disclosure can be described in terms of functional block components and various processing operations. Such functional block components can be realized by a number of hardware or software components that perform the specified functions. For example, the disclosed implementations can employ various integrated circuit components (e.g., memory elements, processing elements, logic elements, look-up tables, and the like), which can carry out a variety of functions under the control of one or more microprocessors or other control devices. Similarly, where the elements of the disclosed implementations are implemented using software programming or software elements, the systems and techniques can be implemented with a programming or scripting language, such as C, C++, Java, JavaScript, assembler, or the like, with the various algorithms being implemented with a combination of data structures, objects, processes, routines, or other programming elements. Functional aspects can be implemented in algorithms that execute on one or more processors. Furthermore, the implementations of the systems and techniques disclosed herein could employ a number of conventional techniques for electronics configuration, signal processing or control, data processing, and the like. The words “mechanism” and “component” are used broadly and are not limited to mechanical or physical implementations, but can include software routines in conjunction with processors, etc. Likewise, the terms “system” or “tool” as used herein and in the figures, but in any event based on their context, may be understood as corresponding to a functional unit implemented using software, hardware (e.g., an integrated circuit, such as an ASIC), or a combination of software and hardware. In certain contexts, such systems or mechanisms may be understood to be a processor-implemented software system or processor-implemented software mechanism that is part of or callable by an executable program, which may itself be wholly or partly composed of such linked systems or mechanisms. Implementations or portions of implementations of the above disclosure can take the form of a computer program product accessible from, for example, a computer-usable or computer-readable medium. A computer-usable or computer-readable medium can be a device that can, for example, tangibly contain, store, communicate, or transport a program or data structure for use by or in connection with a processor. The medium can be, for example, an electronic, magnetic, optical, electromagnetic, or semiconductor device. Other suitable mediums are also available. Such computer-usable or computer-readable media can be referred to as non-transitory memory or media, and can include volatile memory or non-volatile memory that can change over time. 
The quality of memory or media being non-transitory refers to such memory or media storing data for some period of time or otherwise based on device power or a device power cycle. A memory of an apparatus described herein, unless otherwise specified, does not have to be physically contained by the apparatus, but is one that can be accessed remotely by the apparatus, and does not have to be contiguous with other memory that might be physically contained by the apparatus. While the disclosure has been described in connection with certain implementations, it is to be understood that the disclosure is not to be limited to the disclosed implementations but, on the contrary, is intended to cover various modifications and equivalent arrangements included within the scope of the appended claims, which scope is to be accorded the broadest interpretation so as to encompass all such modifications and equivalent structures as is permitted under the law. | 91,197 |
11943268 | DETAILED DESCRIPTION The inventor has identified disadvantages with conventional approaches to determining various information related to individual or aggregated status reports regarding the actual delivery of scheduled interstitial media presentations via a multichannel media distribution platform. In particular, the inventor has recognized that conventional approaches to distribution and review of status information regarding the delivery and display of such scheduled media presentations (also termed herein as “Media Delivery Notifications” or “MDNs”) fail to allow timely and accurate assessment of the delivery of those scheduled media presentations, as well as additional information regarding the status of hardware devices and distribution channels associated with them. In response to the inventor's recognition of these disadvantages, the inventor has conceived and reduced to practice a software and/or hardware facility (“the facility”) for timely and accurately decompressing, decrypting, parsing, ingesting, and/or displaying status information regarding actual delivery of scheduled media presentations via a multichannel media distribution platform. In certain scenarios, one or more operations performed via the facility in accordance with techniques described herein may be performed by one or more embodiments of a Multichannel Media Distribution (MMD) System. In certain embodiments, a media programming broker service provides media timeslot data and/or other scheduling information regarding the scheduled future display of multiple media assets to an MMD platform (e.g., a satellite television service provider or cable television service provider) associated with a large plurality of media content users (interchangeably termed “subscribers” herein), with each of those media content users having one or more STB devices located at a customer premises location. Based on the provided scheduling information, the MMD platform provides (“spools”) one or more media asset files containing the multiple media assets to some or all of those STB devices in advance of the scheduled future display, such as via satellite transmission, wired transmission, one or more network connections, etc. For example, in an exemplary scenario and embodiment, the MMD platform may at various times spool up to four days' interstitial media assets (including but not necessarily limited to advertisement media assets) to a subset of STB devices (such as STB devices associated with some or all customers located in a particular geographical region) in advance of the scheduled display of those interstitial advertisements. In this manner, a particular STB device may store all such interstitial media assets scheduled for presentation within the next four days. In various embodiments, spooling of the interstitial media assets may occur at regular intervals (such as daily or semi-daily), in response to one or more events (such as responsive to receiving scheduling information regarding one or more new interstitial media assets, to receiving an indication that the STB device is in a low-activity state, or other event) or other time. Following the scheduled time for the presentation of one or more interstitial media assets, each of the plurality of STB devices may provide a status report regarding each of one or more scheduled presentations of each interstitial media asset. 
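As a rough, hypothetical illustration of the spooling window described above, the following sketch selects from provided scheduling information the interstitial media assets scheduled for display within an upcoming window (e.g., four days), producing the set of assets to transmit to STB devices ahead of their scheduled display. The schedule entry shape and the assets_to_spool name are assumptions for illustration only.

```python
from datetime import datetime, timedelta
from typing import Dict, List

def assets_to_spool(schedule: List[Dict], now: datetime, window_days: int = 4) -> List[str]:
    """Return the asset identifiers scheduled for display within the spooling window.

    Each schedule entry is assumed to look like:
        {"asset_id": "ad-123", "display_at": datetime(...)}
    """
    horizon = now + timedelta(days=window_days)
    return sorted({
        entry["asset_id"]
        for entry in schedule
        if now <= entry["display_at"] <= horizon
    })

if __name__ == "__main__":
    now = datetime(2024, 1, 1, 12, 0)
    schedule = [
        {"asset_id": "ad-001", "display_at": datetime(2024, 1, 2, 20, 15)},
        {"asset_id": "ad-002", "display_at": datetime(2024, 1, 9, 20, 15)},  # outside the window
        {"asset_id": "ad-001", "display_at": datetime(2024, 1, 3, 21, 45)},  # duplicate asset
    ]
    print(assets_to_spool(schedule, now))  # ['ad-001']
```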
In various embodiments and scenarios, such status reports may be provided at regular intervals (such as daily or semi-daily), in response to one or more events (such as responsive to an indication that the STB device is in a low-activity state, or other event) or other time. For example, an STB device may provide a report associated with multiple past scheduled presentations of an interstitial media asset, with each of the multiple scheduled presentations indicating one of a finite number of status codes. In at least one embodiment, such status codes may include (as non-limiting examples): Success (indicating that the interstitial media asset was displayed as scheduled); Error (indicating that the interstitial media asset was either unavailable for display or otherwise prevented from being displayed as scheduled); Warning (indicating one or more issues arising from the scheduled display of the interstitial media asset that did not prevent its scheduled display); Debug (indicating a response to one or more test codes provided to the STB device); or Invalid (indicating that the scheduling information identifying the interstitial media asset was determined to be incorrect). It will be appreciated that any number or arrangement of status codes or other indicative schema regarding the results of scheduled media presentations may be utilized in accordance with the techniques described herein. Status reports provided from one or more STB devices may in various scenarios be sent directly to the MMD platform, and/or to one or more media asset data service providers. In an exemplary embodiment, such status reports are provided to a media asset data service provider that aggregates, compresses, encrypts, and transmits one or more resulting media delivery notification (MDN) data files to the MMD platform. However, such MDN data files fail to provide substantive analysis, qualitative information, or various performance metrics for the MMD platform, the media asset data service providers, or the STB devices responsible for displaying the scheduled interstitial media assets. In one or more embodiments, a media asset data service provider transmits (such as via one or more computer networks or other transmission medium) one or more MDN data files to an MMD computing system (“MMD system”) associated with the MMD platform for analysis. In such embodiments, the MMD system (or other embodiment of the facility) receives the MDN data files and performs various operations to generate, visualize, and display additional information regarding the individual and aggregated status reports provided by the plurality of STB devices regarding the actual presentation of the scheduled interstitial media assets. 
As non-limiting examples, in various embodiments such operations may include one or more of the following: decompressing the one or more MDN data files (if, for example, the data files have been compressed in order to conserve network or other transmission bandwidth); decrypting the one or more MDN data files in accordance with one or more decryption protocols, such that the decrypted MDN data files may include (for each of the multiple status reports) at least a media presentation identifier and a presentation status indicator; generating one or more databases containing information related to the included status reports, as well as to the scheduled interstitial media assets and corresponding scheduling information; and parsing the decrypted MDN data files, such as to generate one or more database entries corresponding to each included status report, the associated STB device, and/or the scheduled interstitial media asset. In various embodiments, for example, generating such database entries may include generating information regarding multiple distinct success rates for the scheduled interstitial media asset presentations (e.g., a “raw” success rate, an “average” or other statistically calculated success rate, etc.); generating one or more visualizations of a success rate for at least some of the scheduled interstitial media asset presentations, including one or more visualizations of such success rates over an indicated time period; applying one or more “tags” to a subset of STB devices based at least in part on the included status reports, such as may be utilized by the MMD system, advertisers, or other entities to distinguish various targetable sets of STB devices based on characteristics of media content users associated with those STB devices; generating information based on one or more geographical locations or regions that include a subset of the corresponding STB devices; generating information regarding transmission times associated with providing the scheduled interstitial media assets to some or all of the corresponding STB devices; and other information. In various embodiments, the facility may provide various functionality to enable presentation of one or more aspects of data and/or databases generated by the facility based on the provided status reports. As non-limiting examples, in various embodiments such functionality may include one or more of the following: providing a user interface—such as a command-line query interface, a graphical user interface (“GUI”), or Application Program Interface (“API”)—to allow one or more users to execute queries based on the generated databases; to generate and display one or more reports regarding various subsets of data included in the generated databases; to generate and display, such as in real-time or with respect to recent subsets of data included in the generated databases, a graphical “dashboard” that includes the display of selected aspects of such data; to generate and display one or more visualizations of subsets of data included in the generated databases; etc. By performing these or other operations in accordance with techniques described herein, the facility enables users of the facility to timely and accurately determine individual and/or aggregated status reports regarding the delivery and/or display of scheduled media presentations, such as interstitial media programming. 
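One way to picture the decompression, decryption, parsing, and ingestion operations enumerated above is the stripped-down pipeline below. It assumes, purely for illustration, that an MDN data file is a gzip-compressed, newline-delimited set of JSON status reports, that decryption is performed by a callable supplied by the caller, and that parsed reports are ingested into a SQLite table from which a raw success rate can be computed; the actual file format, encryption scheme, and database schema would be defined by the MMD platform and the media asset data service provider. A statistically derived success rate, per-region breakdowns, or STB tags could be computed with additional queries over the same table.

```python
import gzip
import json
import sqlite3
from typing import Callable

STATUS_CODES = {"SUCCESS", "ERROR", "WARNING", "DEBUG", "INVALID"}

def ingest_mdn_file(path: str, decrypt: Callable[[bytes], bytes], db: sqlite3.Connection) -> None:
    """Decompress, decrypt, parse, and ingest one MDN data file."""
    with open(path, "rb") as fh:
        payload = decrypt(gzip.decompress(fh.read()))
    db.execute(
        "CREATE TABLE IF NOT EXISTS mdn_reports ("
        " presentation_id TEXT, stb_id TEXT, status TEXT, reported_at TEXT)"
    )
    rows = []
    for line in payload.decode("utf-8").splitlines():
        if not line.strip():
            continue
        report = json.loads(line)
        status = report.get("status", "INVALID").upper()
        if status not in STATUS_CODES:
            status = "INVALID"
        rows.append((report["presentation_id"], report["stb_id"], status, report.get("reported_at")))
    db.executemany("INSERT INTO mdn_reports VALUES (?, ?, ?, ?)", rows)
    db.commit()

def raw_success_rate(db: sqlite3.Connection) -> float:
    """Fraction of ingested status reports whose status code is SUCCESS."""
    total, = db.execute("SELECT COUNT(*) FROM mdn_reports").fetchone()
    ok, = db.execute("SELECT COUNT(*) FROM mdn_reports WHERE status = 'SUCCESS'").fetchone()
    return ok / total if total else 0.0

if __name__ == "__main__":
    # Build a tiny example file; identity "decryption" is used only for the demo.
    reports = [
        {"presentation_id": "p1", "stb_id": "stb-1", "status": "Success", "reported_at": "2024-01-02"},
        {"presentation_id": "p2", "stb_id": "stb-1", "status": "Error", "reported_at": "2024-01-02"},
    ]
    with open("mdn_example.gz", "wb") as fh:
        fh.write(gzip.compress("\n".join(json.dumps(r) for r in reports).encode("utf-8")))
    conn = sqlite3.connect(":memory:")
    ingest_mdn_file("mdn_example.gz", decrypt=lambda b: b, db=conn)
    print(raw_success_rate(conn))  # 0.5
```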
Also, the facility improves the functioning of computers or other hardware, such as by reducing the dynamic display area, processing, storage, and/or data transmission resources needed to perform various tasks, thereby enabling the tasks to be performed by less capable, capacious, and/or expensive hardware devices, and/or be performed with less latency, and/or preserving more of the conserved resources for use in performing other tasks or additional instances of the same task. The facility also prevents the expenditure of human and computing resources that would otherwise be utilized to overcome the limitations of raw data provided by a plurality of STB devices with respect to timely and accurate assessment of such data. FIG.1is an overview block diagram illustrating an exemplary networked environment100that includes a Multichannel Media Distribution (computing) System110, a media asset data service provider180, a media programming broker service190, and a plurality of media content users150that are each associated with at least one STB device151. For purposes of clarity, the exemplary networked environment100includes a single media asset data service provider180and a single media programming broker service190; it will be appreciated that in various scenarios and embodiments, multiple such entities may be communicatively connected with, and provide one or more services to, the exemplary MMD system110. In operation, the MMD system110provides interstitial media programming to the plurality of media content users150via one or more media asset data files in accordance with presentation schedules provided by the media programming broker service190. In the depicted networked environment100, the MMD system provides the media content users150with a multitude of video and/or data programming (herein, collectively "programming") via the associated STB device151. The programming may include any type of media content, including, but not limited to: television shows, news, movies, sporting events, advertisements, etc. In various embodiments, any of this programming may be provided as a type of programming referred to as streaming media content, which is generally digital multimedia data that is substantially constantly received by and presented to an end-user or presented on a device while being delivered by a provider from a stored file source. Its verb form, "to stream," refers to the process of delivering media in this manner. The term refers to how the media is delivered rather than the media itself. During operation, the media programming broker service190maintains media timeslot data192and media content data194, and based on that maintained data provides scheduling information for future interstitial media asset presentations to the MMD system, such as to provide indications of indicated timeslots for interstitial advertisements that have been purchased by advertisers or their representatives for display during "breaks" in other scheduled programming. Based at least in part on the provided scheduling information, the MMD system spools sets (or "packages") of multiple corresponding interstitial media assets to multiple STB devices151. In this manner, the MMD system may cause each of those STB devices151to store all interstitial media assets scheduled for display during a preselected time period (e.g., for multiple upcoming days at a time) as at least part of the media asset data stored via media asset data storage152.
During that preselected time period, and in accordance with scheduling information provided to each STB device151by the MMD system110via asset insertion manager112, each STB device151initiates the insertion of an indicated interstitial media asset into each of one or more such breaks occurring during programming being presented to one or more associated media content users150via a corresponding display device156. In at least the depicted embodiment, each STB device151additionally generates a status report message regarding each attempt to initiate insertion of an indicated interstitial media asset, including a status code reflecting one or more types of success or failure in displaying the interstitial media asset. The generated status report messages are provided to media asset data service provider180, which aggregates and packages the status report messages from one or more pluralities of STB devices151as described in greater detail elsewhere herein. In certain scenarios and embodiments, for example, the packaged MDN data files may be stored via one or more distinct formats, one or more distinct encryption schema, and/or one or more distinct compression algorithms. The resulting MDN data files may be stored or otherwise maintained by the media asset data service provider via MDN database188and/or media asset acquisition database189. The media asset data service provider180then provides the packaged MDN data files to the MMD system, such as via network(s)101and/or a dedicated data connection102a. After receiving the MDN data files from the media asset data service provider180, the MMD system110performs one or more operations (described in greater detail elsewhere herein) to decompress and/or decrypt (via decryption/decompression engine142), parse (via parsing engine144), and ingest (via ingestion engine146) the MDN data files in order to generate one or more database entries corresponding to each status report message included therein, as well as to generate, visualize, and display additional information related to those generated database entries, such as via one or more of report generator114, GUI122, Web application server118, and/or API120. In the depicted exemplary networked environment100, the media asset data service provider, media programming broker service190, and STB devices151are each communicatively coupled to the MMD system110via one or more intervening networks101, which may comprise one or more computer networks, one or more wired or wireless networks, satellite transmission media, one or more cellular networks, or some combination thereof. The network(s)101may include a publicly accessible network of linked networks, possibly operated by various distinct parties, such as the Internet. The network101may include other network types, such as one or more private networks (e.g., corporate or university networks that are wholly or partially inaccessible to non-privileged users), and may include combinations thereof, such that (for example) one or more of the private networks have access to and/or from one or more of the public networks. Furthermore, the network101may include various types of wired and/or wireless networks in various situations, including satellite transmission. 
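The status-report generation and packaging just described might look roughly like the following sketch, in which an STB device records one of a small set of status codes for each insertion attempt and a service-provider-side function aggregates a batch of reports into a single compressed payload for delivery to the MMD system. The field names, JSON record shape, and use of gzip are illustrative assumptions only; encryption of the packaged payload is omitted.

```python
import gzip
import json
from dataclasses import dataclass, asdict
from enum import Enum
from typing import List

class Status(str, Enum):
    SUCCESS = "SUCCESS"   # asset displayed as scheduled
    ERROR = "ERROR"       # asset unavailable or display prevented
    WARNING = "WARNING"   # displayed, but with one or more issues
    DEBUG = "DEBUG"       # response to a test code
    INVALID = "INVALID"   # scheduling information determined to be incorrect

@dataclass
class StatusReport:
    stb_id: str
    presentation_id: str
    status: Status
    scheduled_at: str

def record_insertion_attempt(stb_id: str, presentation_id: str, scheduled_at: str,
                             asset_available: bool, displayed: bool) -> StatusReport:
    """Generate a status report for one attempted interstitial insertion."""
    status = Status.SUCCESS if (asset_available and displayed) else Status.ERROR
    return StatusReport(stb_id, presentation_id, status, scheduled_at)

def package_reports(reports: List[StatusReport]) -> bytes:
    """Aggregate and compress a batch of reports into one MDN-style payload."""
    lines = "\n".join(json.dumps({**asdict(r), "status": r.status.value}) for r in reports)
    return gzip.compress(lines.encode("utf-8"))

if __name__ == "__main__":
    batch = [
        record_insertion_attempt("stb-1", "p1", "2024-01-02T20:15", asset_available=True, displayed=True),
        record_insertion_attempt("stb-1", "p2", "2024-01-02T20:45", asset_available=False, displayed=False),
    ]
    payload = package_reports(batch)
    print(len(payload), "bytes")
```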
In addition, users within the exemplary networked environment100may utilize additional client computing systems and/or other client devices (not shown) to interact with the MMD system110to obtain various described functionality via the network(s)101, and in doing so may provide various types of information to the MMD system110. In certain implementations, the various users and providers of the networked environment100may interact with the MMD system and/or one or more other users and providers using an optional private or dedicated connection, such as one of dedicated connections102. In the depicted embodiment, the MMD system110includes an asset insertion manager112, a report generator114, a media spool manager116, and a web application server118. In addition, the MMD system includes an Application Program Interface (“API”)120, a Graphical User Interface (“GUI”)122, and one or more database generators140, each of which includes a decryption/decompression engine142, a parsing engine144, and an ingestion engine146. The MMD system is communicatively coupled (locally or remotely) to storage facility130, which includes asset insertion information database132, subscriber information database134, and media asset information database136. In certain implementations, the storage facility130may be incorporated within or otherwise directly operated by the MMD system; in other implementations, some or all of the functionality provided by the storage facility may be provided by one or more third-party network-accessible storage service providers. The storage facility130may also comprise multiple separate storage facilities and streaming media content servers geographically separated from each other, each of which may provide stored media content to particular media content user locations based on a number of factors, such as geographical proximity, load balancing parameters, current demand on the storage facility130and/or the networks101, capacity of the storage facility and/or the network(s), etc. The interactions of the MMD system110with the media asset data service provider180, media programming broker service190, and media content users150may occur in various ways, such as in an interactive manner via a graphical user interface122that is provided by the MMD system to users and associated client devices (not shown) via at least some Web pages of a MMD system Web site, such as may be facilitated and supported by one or both of GUI122and web application server118. Information may also be provided in a programmatic manner by one or more client software applications via the Application Program Interface (“API”)120provided by the MMD system that allows computing systems and/or programs to invoke such functionality programmatically, such as using Web services or other network communication protocols. Similarly, interactions with the media asset data service provider may be provided in a programmatic manner by one or more client software applications via API184. Each STB device151interconnects to one or more communications media or sources. For example, the various media content may be delivered as data via a packet-switched network such as the Internet or other packet-switched network, via satellite transmission, or other manner. The underlying connection carrying such data may be via a cable head-end, satellite antenna, telephone company switch, cellular telephone system, Ethernet portal, off-air antenna, or the like. 
The STB device151may receive a plurality of programming by way of the communications media or sources, or may only receive programming via a particular channel or source. In some embodiments, based upon selection by a user, the STB device151processes and communicates the selected programming to the display device156. Also, in some embodiments, the display device156may also be a STB device151or have a STB device151integrated within it. In various embodiments, examples of an STB device151include, but are not limited to, one or a combination of the following: a “television converter,” “receiver,” “set-top box,” “television receiving device,” “television receiver,” “television,” “television recording device,” “satellite set-top box,” “satellite receiver,” “cable set-top box,” “cable receiver,” “media player,” “digital video recorder (DVR),” “digital versatile disk (DVD) Player,” “computer,” “mobile device,” “tablet computer,” “smart phone,” “MP3 Player,” “handheld computer,” and/or “television tuner,” etc. Accordingly, the STB device151may be any suitable converter device or electronic equipment that is operable to receive programming via a connection to a satellite or cable television service provider outside the media content user premises and communicate that programming to another device over a network. Further, the STB device151may itself include user interface devices, such as buttons or switches. In at least the depicted embodiment, the STB device151is configured via DRM-enabled interface154to receive and decrypt content received from the MMD system according to various digital rights management and other access control technologies and architectures. Furthermore, in at least some embodiments, the STB device151may include an API that provides programmatic access to one or more functions of the STB device151. For example, such an API may provide a programmatic interface to one or more functions that may be invoked by any other program, a remote control (not shown), one or more content providers and/or program distributors, one or more information providers, a local and/or remote content storage system, or some other module. In this manner, the API may facilitate the development of third-party software, such as various different on-demand service applications, user interfaces, plug-ins, adapters (e.g., for integrating functions of the STB device151into desktop applications), and other functionality. In at least the depicted embodiment, the DRM-enabled interface154may facilitate the receiving, decrypting, decoding, processing, selecting, recording, playback and displaying of programming, as well as the establishing of an Internet Layer end-to-end security connection, such as a secure IP tunnel. The DRM-enabled interface154may also facilitate on-demand media services (e.g., video-on-demand or “VOD” services), on-demand program ordering, processing, and DRM and key management and storage corresponding to processing received streaming media content and other programming. In some embodiments, recorded or buffered programming received by the STB devices151as spooled or streaming media content, or other types of programming, may reside within media asset data storage152, either in decrypted or encrypted form as applicable for securely storing, processing and displaying of the received media content according to any applicable DRM associated with the particular programming. 
The media asset data storage152may also store various program metadata associated with the recorded or buffered programming stored by the STB device151, such as that including, but not limited to, DRM data, tags, codes, identifiers, format indicators, timestamps, user identifications, authorization codes, digital signatures, etc. In addition, the media asset data storage152may include user profiles, preferences and configuration data, etc. In at least the depicted embodiment, the STB device151is configured to process media content (including media programming as well as interstitial media assets) and render the media content for display on the display device156. As part of such processing, the STB device151, in some embodiments working in conjunction with a media content decryption and encryption engine and/or a data transmission module, may encode, decode, encrypt, decrypt, compress, decompress, format, translate, perform digital signal processing, adjust data rate and/or complexity or perform other processing on the data representing received programming and other media content as applicable for presenting the received content in real time on the display device as it is being received by the STB device151. In various embodiments, examples of a display device156may include, but are not limited to, one or a combination of the following: a television (“TV”), a monitor, a personal computer (“PC”), a sound system receiver, a digital video recorder (“DVR”), a compact disk (“CD”) device, DVD Player, game system, tablet device, smart phone, mobile device or other computing device or media player, and the like. Each of the display devices156typically employs a display, one or more speakers, and/or other output devices to communicate video and/or audio content to a user. In many implementations, one or more display devices156reside in or near a media content user's premises and are communicatively coupled, directly or indirectly, to the STB device151. Further, the STB device151and the display device156may be integrated into a single device. Such a single device may have the above-described functionality of the STB device151and the display device156, or may even have additional functionality. In certain embodiments, the MMD system may receive at least some programming content, such as television content, via one or more third-party content providers or associated media distributors (not depicted for purposes of clarity). Exemplary content providers and associated media distributors include television stations, which provide local or national television programming; and special content providers, which provide premium-based programming, pay-per-view programming, and on-demand programming. Encryption and decryption described herein may be performed as applicable according to one or more of any number of currently available or subsequently developed encryption methods, processes, standards, protocols, and/or algorithms, including but not limited to: encryption processes utilizing a public-key infrastructure (PKI), encryption processes utilizing digital certificates, the Data Encryption Standard (DES), the Advanced Encryption Standard (AES128, AES192, AES256, etc.), the Common Scrambling Algorithm (CSA), encryption algorithms supporting Transport Layer Security 1.0, 1.1, and/or 1.2, encryption algorithms supporting the Extended Validation (EV) Certificate, etc. 
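As one concrete illustration of the encryption and decryption just listed, the sketch below uses AES-256 in Galois/Counter Mode via the third-party Python `cryptography` package; the package choice, key handling, and nonce handling are assumptions made only for this example and are not the facility's prescribed implementation.

```python
import os

from cryptography.hazmat.primitives.ciphers.aead import AESGCM


def encrypt_asset(key: bytes, nonce: bytes, plaintext: bytes, aad: bytes = b"") -> bytes:
    """Encrypt a media payload with AES-256-GCM (authenticated encryption)."""
    return AESGCM(key).encrypt(nonce, plaintext, aad)


def decrypt_asset(key: bytes, nonce: bytes, ciphertext: bytes, aad: bytes = b"") -> bytes:
    """Decrypt and authenticate a previously encrypted payload."""
    return AESGCM(key).decrypt(nonce, ciphertext, aad)


# Example usage with a freshly generated 256-bit key and a 96-bit random nonce.
key = AESGCM.generate_key(bit_length=256)
nonce = os.urandom(12)
ciphertext = encrypt_asset(key, nonce, b"spooled interstitial media asset bytes")
assert decrypt_asset(key, nonce, ciphertext) == b"spooled interstitial media asset bytes"
```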
The above description of the exemplary networked environment100and the various service providers, systems, networks, and devices therein is intended as a broad, non-limiting overview of an exemplary environment in which various embodiments of the facility may be implemented.FIG.1illustrates just one example of a multichannel media distribution system110, its users, and service providers communicatively coupled thereto, and the various embodiments discussed herein are not limited to such environments. In particular, exemplary networked environment100may contain other devices, systems and/or media not specifically described herein. FIG.2is a block diagram illustrating an embodiment of an MMD server computing system200that is suitable for performing at least some of the described techniques, such as by executing an embodiment of an MMD system. The MMD computing system200includes one or more central processing units (“CPU”) or other processors205, various input/output (“I/O”) components210, storage220, and memory250, with the illustrated I/O components210including a display211, a network connection212, a computer-readable media drive213, and other I/O devices215(e.g., keyboards, mice or other pointing devices, microphones, speakers, GPS receivers, etc.). The server computing system200and MMD system240may communicate with other computing systems via one or more networks299(which generally function as described with respect to network(s)101ofFIG.1), such as MMD client computing systems260, STB devices270, media asset data service provider (MADSP) computing systems280, media programming broker service (MPBS) computing systems290, and other computing systems295. Some or all of the other computing systems may similarly include some or all of the types of components illustrated for MMD computing system200(e.g., to have an MMD system client application269executing in memory267of a client computing system260in a manner analogous to MMD system240in memory250, with the client computing system260further including I/O components262and computer-readable storage264). In the illustrated embodiment, an embodiment of the MMD system240executes in memory250in order to perform at least some of the described techniques, such as by using the processor(s)205to execute software instructions of the system240in a manner that configures the processor(s)205and computing system200to perform automated operations that implement those described techniques. As part of such execution, the MMD system240operates in conjunction with multiple submodules to support the described techniques. In particular, in the depicted embodiment the MMD system240includes asset insertion manager module242; report generation manager module244; Web server245; subscriber manager module246; a media spool manager247; one or more interface manager modules247; one or more database generators248; and may further include one or more other modules249. As part of such automated operations, the system240, its depicted components modules, and/or other optional programs or modules249executing in memory230may store and/or retrieve various types of data, including in the exemplary database data structures of storage220. 
In this example, the data used may include various types of asset insertion information in database (“DB”)222, various types of media asset information in DB224, various types of subscriber information in DB226, and/or various types of other information in DB(s)228, such as various information related to one or more media asset data service providers and/or media programming broker services. It will be appreciated that computing system200and devices/systems260,270,280,290, and295are merely illustrative and are not intended to limit the scope of the present invention. The systems and/or devices may instead each include multiple interacting computing systems or devices, and may be connected to other devices that are not specifically illustrated, including through one or more networks such as the Internet, via the Web, via satellite transmission, or via private networks (e.g., mobile communication networks, etc.). More generally, a device or other computing system may comprise any combination of hardware that may interact and perform the described types of functionality, optionally when programmed or otherwise configured with particular software instructions and/or data structures, including without limitation desktop or other computers (e.g., tablets, slates, etc.), database servers, network storage devices and other network devices, smart phones and other cell phones, consumer electronics, wearable and other fitness tracking devices, biometric monitoring devices, digital music player devices, handheld gaming devices, PDAs, wireless phones, pagers, electronic organizers, Internet appliances, television systems, and various other consumer products that include appropriate communication capabilities. In addition, the functionality provided by the illustrated MMD system240may in some embodiments be distributed in various modules. Similarly, in some embodiments, some of the functionality of the MMD system240may not be provided and/or other additional functionality may be available. It will also be appreciated that, while various items are illustrated as being stored in memory or on storage while being used, these items or portions of them may be transferred between memory and other storage devices for purposes of memory management and data integrity. Alternatively, in other embodiments some or all of the software modules and/or systems may execute in memory on another device and communicate with the illustrated computing systems via inter-computer communication. Thus, in some embodiments, some or all of the described techniques may be performed by hardware means that include one or more processors and/or memory and/or storage when configured by one or more software programs (e.g., the MMD system240and/or MMD client software executing on devices260,270,280,290, and/or295) and/or data structures, such as by execution of software instructions of the one or more software programs and/or by storage of such software instructions and/or data structures. 
Furthermore, in some embodiments, some or all of the systems and/or modules may be implemented or provided in other manners, such as by consisting of one or more means that are implemented at least partially in firmware and/or hardware (e.g., rather than as a means implemented in whole or in part by software instructions that configure a particular CPU or other processor), including, but not limited to, one or more application-specific integrated circuits (ASICs), standard integrated circuits, controllers (e.g., by executing appropriate instructions, and including microcontrollers and/or embedded controllers), field-programmable gate arrays (FPGAs), complex programmable logic devices (CPLDs), etc. Some or all of the modules, systems and data structures may also be stored (e.g., as software instructions or structured data) on a non-transitory computer-readable storage mediums, such as a hard disk or flash drive or other non-volatile storage device, volatile or non-volatile memory (e.g., RAM or flash RAM), a network storage device, or a portable media article (e.g., a DVD disk, a CD disk, an optical disk, a flash memory device, etc.) to be read by an appropriate drive or via an appropriate connection. The systems, modules and data structures may also in some embodiments be transmitted via generated data signals (e.g., as part of a carrier wave or other analog or digital propagated signal) on a variety of computer-readable transmission mediums, including wireless-based and wired/cable-based mediums, and may take a variety of forms (e.g., as part of a single or multiplexed analog signal, or as multiple discrete digital packets or frames). Such computer program products may also take other forms in other embodiments. Accordingly, embodiments of the present disclosure may be practiced with other computer system configurations. In various embodiments, one or more components/modules of the facility, as well as various components/modules of the computing systems described herein, may be implemented using standard programming techniques. For example, such components/modules may be implemented as a “native” executable running on one or more processors (such as CPU(s)205and/or CPU(s)261ofFIG.2), along with one or more static or dynamic libraries. In other embodiments, such components/modules may be implemented as instructions processed by a virtual machine that executes as another program. In general, a range of programming languages known in the art may be employed for implementing such example embodiments, including representative implementations of various programming language paradigms, including but not limited to, object-oriented (e.g., Java, C++, C #, Visual Basic.NET, Smalltalk, and the like), functional (e.g., ML, Lisp, Scheme, and the like), procedural (e.g., C, Pascal, Ada, Modula, and the like), scripting (e.g., Perl, Ruby, Python, JavaScript, VBScript, PHP, jQuery, and the like), or declarative (e.g., SQL, Prolog, and the like). The embodiments described herein may also use well-known or other synchronous or asynchronous client-server computing techniques. 
However, the various components may be implemented using more monolithic programming techniques as well, for example, as an executable running on a single CPU computer system, or alternatively decomposed using a variety of structuring techniques known in the art, including but not limited to, multiprogramming, multithreading, client-server, or peer-to-peer, running on one or more computer systems each having one or more CPUs or other processors. Some embodiments may execute concurrently and asynchronously, and communicate using message passing techniques. Equivalent synchronous embodiments are also supported. Also, other functions could be implemented and/or performed by each component/module, and in different orders, and by different components/modules, yet still achieve desired functions. In addition, programming interfaces described herein may be available by standard mechanisms such as through C, C++, C #, and Java APIs; libraries for accessing files, databases, or other data repositories; scripting languages such as XML; or Web servers, FTP servers, NFS file servers, or other types of servers providing access to stored data. As non-limiting examples, storage facility130ofFIG.1and/or storage220ofFIG.2may be implemented as one or more database systems, file systems, or any other technique for storing such information, or any combination of the above, including implementations using distributed computing techniques. Different configurations and locations of programs and data are contemplated for use with techniques described herein. A variety of distributed computing techniques are appropriate for implementing the components of the illustrated embodiments in a distributed manner including but not limited to TCP/IP sockets, RPC, RMI, HTTP, and Web Services (XML-RPC, JAX-RPC, SOAP, and the like). Other variations are possible. Other functionality could also be provided by each component/module, or existing functionality could be distributed amongst the components/modules in different ways, yet still achieve desired functions. Various exemplary presentations for information generated by an MMD system will now be provided with respect to particular embodiments shown for illustrative purposes, although it will be appreciated that other embodiments may include more and/or less information, and that various types of illustrated information may be replaced with other information. It will be appreciated that throughout these figures, various numerical or identifying data may have been replaced by textual variable identifiers in the corresponding figures in order to maintain the confidentiality of certain specific data. FIG.3depicts a “dashboard” presentation300of various data generated by an embodiment of the facility (such as by the MMD system110ofFIG.1and/or the MMD computing system200ofFIG.2), such as responsive to the receipt, decompression, decryption, parsing, and ingestion of one or more MDN data files. In the depicted embodiment, the dashboard presentation300includes multiple distinct panels301a-301hof related information, which may be displayed on one or more display devices communicatively coupled to one or more client computing systems (e.g., MMD client computing systems260ofFIG.2). 
By displaying the generated information in this manner, the facility may significantly improve the timely depiction of relevant and/or selected status reports provided from one or more indicated pluralities of STB devices associated with the MMD system, as well as from one or more media asset data service providers and media programming broker services. In the depicted embodiment, the dashboard presentation300includes a first dashboard panel301a, comprising information related to various time information regarding the respective durations of multiple selected processes (colloquially termed “wrap times”), executed on the current date of presentation, for spooling interstitial media assets to one or more sets of STB devices in advance of the scheduled display of those interstitial media assets. Additional details regarding the generated information displayed by the first dashboard panel301ais provided below with respect toFIG.3A. The dashboard presentation300further includes a second dashboard panel301b, comprising information related to all media delivery notification data received for the current date from a plurality of STB devices, such as may have been provided via one or more MDN data files from one or more media asset data service providers. Additional details regarding the generated information displayed by the second dashboard panel301bis provided below with respect toFIG.3B. The dashboard presentation300further includes a third dashboard panel301c, comprising information related to interstitial media asset order information for the current day, with each order reflecting advertiser-placed or other orders for the scheduled presentation during the current day of a specified interstitial media asset. Additional details regarding the generated information displayed by the third dashboard panel301cis provided below with respect toFIG.3C. The dashboard presentation300further includes a fourth dashboard panel301d, comprising information related to one or more “tags” related to media content users, such as may be imported by an exemplary embodiment of an MMD system based on information received from one or more sources, including a media asset data service provider or other source. Such tags may be useful, for example, to distinguish various targetable sets of STB devices based on characteristics of media content users associated with those STB devices. Additional details regarding the generated information displayed by the fourth dashboard panel301dis provided below with respect toFIG.3D. The dashboard presentation300further includes a fifth dashboard panel301e, comprising information related to success rates for scheduled presentations during the current date of one or more interstitial media assets. Additional details regarding the generated information displayed by the fifth dashboard panel301eis provided below with respect toFIG.3E. The dashboard presentation300further includes a sixth dashboard panel301f, comprising information related to interstitial media assets being re-spooled (that is, spooled for storage despite nominally being considered previously stored) to a plurality of STB devices on the current date. Additional details regarding the generated information displayed by the sixth dashboard panel301fis provided below with respect toFIG.3F. 
The dashboard presentation300further includes a seventh dashboard panel301g, comprising information related to breaks during selected media programming for the current date, including information based on multiple geographical areas, during which interstitial media assets have been scheduled for presentation. Additional details regarding the generated information displayed by the seventh dashboard panel301gis provided below with respect toFIG.3G. The dashboard presentation300further includes an eighth dashboard panel301h, comprising information related to one or more “tags” related to media content users, such as may be imported by the MMD system based on information received from a specified source. In a manner similar to that described above with respect to the fourth dashboard panel301d, tags may be useful to (as a non-limiting example) distinguish various targetable sets of STB devices based on characteristics of media content users associated with those STB devices. Additional details regarding the generated information displayed by the eighth dashboard panel301his provided below with respect toFIG.3H. FIG.3Adepicts a detailed view of first dashboard panel301afromFIG.3, which includes information related to various time information regarding the respective durations of multiple selected processes (colloquially termed “wrap times”), executed on the current date of presentation, for spooling interstitial media assets to one or more sets of STB devices in advance of the scheduled display of those interstitial media assets. In particular, the first dashboard panel301aincludes a media format selection field303, indicating that the current display relates to those interstitial media assets formatted as MPEG4 files. In certain embodiments, the media format selection may allow a user to selectively view information related to a particular type of interstitial media content, such as to distinguish between standard-definition (“SD”), high-definition (“HD”), or other content type. The first dashboard panel301afurther includes a breakdown of four distinct collections of scheduled interstitial media assets that have been spooled to a quantity of STB devices, each with an indicated Type305a, indicating a respective timeframe during which the corresponding interstitial media assets have been spooled; an indicated Wrap Time305b, indicating the duration of the spooling processes required to transmit the corresponding interstitial media assets to the selected set of STB devices; and an indicated quantity of Assets305c, indicating the number of interstitial media assets that were spooled during those corresponding spooling processes. 
The current display of the first dashboard panel 301a indicates the following: collection type 307a delineates those interstitial media assets spooled during the current date, with a wrap time designated as variable identifier TIME_1day and a quantity of interstitial media assets designated as variable identifier ASST_1day; collection type 307b delineates those interstitial media assets spooled during the previous two days, with a collective wrap time designated as variable identifier TIME_2day and a quantity of interstitial media assets designated as variable identifier ASST_2day; collection type 307c delineates those interstitial media assets still indicated as stored by the corresponding STB devices but that were spooled prior to that two-day time period, with a wrap time designated as variable identifier TIME_aged and a quantity of interstitial media assets designated as variable identifier ASST_aged; and collection type 307d delineates those interstitial media assets that have been spooled during the current date and that were not spooled during the previous date, with a wrap time designated as variable identifier TIME_new and a quantity of interstitial media assets designated as variable identifier ASST_new.

FIG. 3B depicts a detailed view of second dashboard panel 301b from FIG. 3, which includes information related to all media delivery notification data received for the current date from a plurality of STB devices, such as may have been provided via one or more MDN data files from one or more media asset data service providers. In particular, the second dashboard panel 301b includes an STB device count 309, indicating a quantity of STB devices (designated as variable identifier STB_COUNT1) that have reported the status of one or more scheduled presentations of interstitial media assets for the current date or, in certain embodiments and scenarios, within the past 24 hours; an MDN file count 311, designated as variable identifier FILE_COUNT1 and indicating a quantity of MDN data files via which the indicated status reports were received by the MMD system; an MDN count 313, designated as variable identifier MDN_COUNT1 and indicating a quantity of individual status reports received for the current date; and status counts 315a-315e, each of which denotes a total count of MDN reports having the indicated status identifier (“Success,” “Errors,” “Warnings,” “Debug,” and “Invalid,” respectively), with the actual count designated as a respective variable (SUCC_COUNT1, ERR_COUNT1, WARN_COUNT1, DBUG_COUNT1, and INVAL_COUNT1, respectively). In addition, each of the indicated data fields 309, 311, 313, and 315a-315e is depicted with a corresponding comparative indicator 317, which provides the viewer with an indication of how the current depicted quantity respectively compares with that quantity from the prior date. For example, the MMD system has identified that the STB device count 309, corresponding to the current date and designated as variable identifier STB_COUNT1, is 1.05% higher than yesterday's corresponding STB device count. In contrast, the MMD system has identified that the quantity of MDN reports having the “Success” status identifier is 7.55% lower than the corresponding number from yesterday.

FIG. 3C depicts a detailed view of third dashboard panel 301c from FIG. 3, which includes information related to interstitial media asset order information for the current day, such as may reflect advertiser-placed or other orders for the scheduled presentation during the current day of individual interstitial media assets.
In particular, the third dashboard panel 301c includes an active order count 319a, designated as variable identifier ORD_ACT and indicating a quantity of active orders for the current date; a new order count 319b, designated as variable identifier ORD_NEW and indicating a quantity of those orders that are new for the current date (in contrast, for example, with recurring orders placed for the current date but also for one or more previous dates); an ending order count 319c, designated as variable identifier ORD_END and indicating a quantity of orders for the current date that are not currently scheduled for future dates; and an extended order count 319d, designated as variable identifier ORD_EXT and indicating a quantity of orders for the current date that have been extended to one or more future dates.

FIG. 3D depicts a detailed view of fourth dashboard panel 301d from FIG. 3, which includes information related to one or more “tags” related to media content users, such as may be imported by the MMD system based on information received from one or more sources. As non-limiting examples, such tags may be used to characterize STB devices associated with media content users having various characteristics, such as: program viewing habits, program recording habits, online activities, a quantity of household members, age or other demographic information associated with one or more household members, and other characteristics. In the depicted embodiment, the fourth dashboard panel 301d includes a current household tag count 321a, designated as variable identifier HTAG_COUNT and indicating a count of those tags imported during the current date; a new household tag count 321b, designated as variable identifier HTAG_NEW and indicating a count of those tags imported during the current date that were not imported for the previous date; a current STB device count 323a, designated as variable identifier STB_COUNT and indicating a quantity of STB devices to which the current household tags have been applied; a new STB device count 323b, designated as variable identifier STB_NEW and indicating a quantity of STB devices to which newly imported household tags have been applied; a process status indicator 325, currently indicating that the importation process for the indicated household tags was successfully completed; an error count 327a, currently indicating that zero errors have been reported with respect to the importation process for the indicated household tags; a run time indicator 327b, indicating that the importation process for the indicated household tags was completed in 1 hour, 26 minutes and 44 seconds; and an execution timestamp 329, designated as variable identifier EXEC_TIMESTAMP and indicating the time at which the importation process was initiated.

FIG. 3E depicts a detailed view of fifth dashboard panel 301e from FIG. 3, which includes information related to success rates for scheduled presentations during the current date of one or more interstitial media assets.
In particular, the fifth dashboard panel 301e includes an overall run rate indicator 331a, designated as variable identifier RUNRATE and indicating a percentage success rate for scheduled interstitial media asset presentations for the current date; a run rate comparison indicator 331b, indicating a comparative percentage of the current success rate with that identified from the previous date; a run count indicator 333a, designated as variable identifier RUN_COUNT and indicating a quantity of distinct interstitial media assets reportedly presented on the current date; a run count comparison indicator 333b, indicating a comparative percentage of the current run count with that identified from the previous date; run rate success count 335a, designated as variable identifier RR_SUCC and indicating a total number of distinct presentations of interstitial media assets performed on the current date; and run rate failure count 335b, designated as variable identifier RR_FAIL and indicating a total number of distinct failed instances of initiating the presentation of scheduled interstitial media assets on the current date.

FIG. 3F depicts a detailed view of sixth dashboard panel 301f from FIG. 3, which includes information related to all interstitial media assets being re-spooled (that is, spooled for storage despite nominally being considered previously stored) on the current date to a plurality of STB devices. In particular, the sixth dashboard panel 301f includes a total re-spool count 337, designated as variable identifier RSPL_CNT and indicating a total quantity of interstitial media assets being re-spooled on the current date; a unique re-spool count 339, designated as variable identifier RSPL_UNQ and indicating a quantity of unique scheduled presentations of interstitial media assets being re-spooled on the current date; unique asset count 341, designated as variable identifier ASST_UNQ and indicating a quantity of unique interstitial media assets being re-spooled on the current date; re-spool asset threshold 343, designated as variable identifier ASST_RSPL and indicating a threshold quantity of interstitial media assets to re-spool on the current date (such as may be configured by one or more administrators of the MMD system); asset file count 345, designated as variable identifier ASST_REN and indicating the quantity of asset package files via which the interstitial media assets being re-spooled on the current date are stored and transmitted; cutoff date 347, designated as variable identifier RSPL_CUT and indicating the earliest date (typically between 1 and 10 days prior to the current date) for which the interstitial media assets are being re-spooled; and re-spool timestamp 349, designated as variable identifier RSPL_TIMESTAMP and indicating the most recent time at which the indicated re-spooling has completed.

FIG. 3G depicts a detailed view of seventh dashboard panel 301g from FIG. 3, which includes information related to breaks during selected media programming for the current date, including information based on multiple geographical areas, during which interstitial media assets have been scheduled for presentation. In particular, the seventh dashboard panel 301g displays generated data indicators indicating a quantity of breaks associated with media programming for the current date as presented by a plurality of STB devices communicatively coupled to the MMD system.
In the depicted embodiment, the breaks have been delineated based on only two separate geographic areas (a “Western arc” and “Eastern arc,” respectively); it will be appreciated that in various other embodiments, any preferred schema of designated geographical areas may be used. In particular, in the current embodiment, the seventh dashboard panel 301g includes an overall quantity 351a of scheduled breaks, designated as variable identifier BRK_SCHO; a quantity 351b of such breaks that aired as scheduled, designated as variable identifier BRK_AIRO; and a quantity 351c of such breaks that were missed (such as breaks in one or more segments of scheduled media programming that did not occur for one or more reasons, such as programming preemption, programming cancellation, etc.), designated as variable identifier BRK_MISO. In addition, the seventh dashboard panel includes information regarding such scheduled breaks delineated according to the Western and Eastern designated geographic areas: an overall quantity 353a of scheduled breaks for the Western region, designated as variable identifier BRK_SCHW; a quantity 353b of such breaks that aired as scheduled for the Western region, designated as variable identifier BRK_AIRW; a quantity 353c of such breaks that were missed in the Western region, designated as variable identifier BRK_MISW; an overall quantity 355a of scheduled breaks for the Eastern region, designated as variable identifier BRK_SCHE; a quantity 355b of such breaks that aired as scheduled for the Eastern region, designated as variable identifier BRK_AIRE; a quantity 355c of such breaks that were missed in the Eastern region, designated as variable identifier BRK_MISE; and a recency indicator 357, designated as variable identifier RFSH_TIMESTAMP and indicating the most recent time at which the other fields of the dashboard panel 301g have been updated.

FIG. 3H depicts a detailed view of eighth dashboard panel 301h from FIG. 3, which includes information related to one or more “tags” related to media content users, such as may be imported by the MMD system based on information received from a specified source entity designated here simply as [Provider]. As non-limiting examples, specified source entities from which such tags may be imported include one or more financial entities, credit reporting agency entities, advertiser entities, affiliates or partners of an entity operating the MMD system, etc. It will be appreciated that in various scenarios and embodiments, tags imported via the process from which information was generated with respect to the fourth dashboard panel 301d, as well as tags imported from a specified source entity and presented in this eighth dashboard panel 301h, may both be commonly applied to one or more STB devices.
In the depicted embodiment, the eighth dashboard panel 301h includes a current provider tag count 359a, designated as variable identifier PTAG_COUNT and indicating a count of those tags imported from the designated source entity during the current date; a new provider tag count 359b, designated as variable identifier PTAG_NEW and indicating a count of those tags imported from the designated source entity during the current date that were not similarly imported for the previous date; a current STB device count 361a, designated as variable identifier STB_COUNT and indicating a quantity of STB devices to which the current tags imported from the designated source entity have been applied; a new STB device count 361b, designated as variable identifier STB_NEW and indicating a quantity of STB devices to which provider tags newly imported from the designated source entity have been applied; a process status indicator 363, currently indicating that the importation process for the indicated provider tags was successfully completed; an error count 365a, currently indicating that zero errors have been reported with respect to the process for importing the indicated tags from the designated source entity; a run time indicator 365b, indicating that the importation process for the indicated tags was completed in 2 hours, 43 minutes and 30 seconds; and an execution timestamp 367, designated as variable identifier PROVTAG_TIMESTAMP and indicating the time at which the importation process was initiated.

FIGS. 4A through 4C illustrate examples of interactive reporting functionality provided by an exemplary Multichannel Media Distribution system. Such reporting functionality may be provided, for example, by MMD system 110 of FIG. 1 via one or more of report generator 114, GUI 122, and web application server 118; and/or by MMD computing system 200 of FIG. 2 via one or more of report generation manager module 244, Web server 245, and interface manager modules 247.

FIG. 4A depicts an exemplary interactive reporting facility 400a, which displays selected information generated by the MMD system in accordance with the various techniques described herein. In particular, in the depicted embodiment the interactive reporting facility 400a provides multiple run rate data across a time span of eight days (Date1 through Date8, identified via date identifiers 403) for a plurality of STB devices located in six distinct geographic regions 405 (respectively denoted by identifiers REG1, REG2, . . . , REG8). As a non-limiting example, in certain embodiments each of the distinct geographic regions 405 may represent one or more distinct television media markets. For each respective date and geographic region, three distinct run rate success data types 407 are provided: a “Raw” success rate, an “MPBS” success rate (such as may be calculated in accordance with requirements or reporting parameters specified by or for a particular media programming broker service), and an “Adjusted MPBS” success rate (such as may be calculated in accordance with MPBS-specific reporting parameters as adjusted by the MMD system per additional specified parameters). As one example, the raw success rate for all scheduled presentations of interstitial media assets on Date8 for region REG3 is provided by data segment 413a as being 97.66%; the corresponding MPBS success rate is provided by data segment 413b as being 98.65%; and the corresponding Adjusted MPBS success rate is provided by data segment 413c as also being 98.65%.
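The raw success rates and day-over-day comparative indicators surfaced in the panels and reports above reduce to simple arithmetic over counts carried in the ingested status reports. A minimal sketch follows; the function and parameter names are illustrative, and the MPBS and Adjusted MPBS variants would additionally apply broker-specific reporting parameters not detailed here.

```python
def raw_success_rate(success_count: int, failure_count: int) -> float:
    """Percentage of attempted presentations that succeeded, e.g. 97.66 for REG3 on Date8."""
    attempts = success_count + failure_count
    return 0.0 if attempts == 0 else 100.0 * success_count / attempts


def day_over_day_change(today: int, yesterday: int) -> float:
    """Comparative indicator (as in 317): percent change versus the prior date, e.g. +1.05 or -7.55."""
    if yesterday == 0:
        return float("inf") if today else 0.0
    return 100.0 * (today - yesterday) / yesterday


# Example usage with made-up counts.
print(round(raw_success_rate(2510, 60), 2))         # roughly 97.67
print(round(day_over_day_change(96500, 95497), 2))  # roughly +1.05
```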
In the depicted embodiment, a user viewing the interactive reporting facility400amay specify the particular date range for the reporting facility via starting date selection control409a, ending date selection control409b, and date selection submission control411. It will be appreciated that in certain embodiments, other interactivity may be provided by the interactive reporting facility generated by the MMD system; alternatively, in certain embodiments the MMD system may generate similar visualizations for such data in one or more non-interactive formats. FIG.4Bdepicts an exemplary interactive reporting facility400b, detailing run rate success data per interstitial media asset for region REG3 on date DATE8. In certain embodiments, the interactive reporting facility400bmay be displayed in response to one or more user actions, such as if a user viewing the interactive reporting facility400aofFIG.4Aselected any of data segments413a,413b, or413cin order to view additional details regarding the selected data segment. It will be appreciated that the term “selects,” “selected,” or “selecting,” when used herein in relation to one or more elements of a graphical user interface or other electronic display, may include a variety of user actions taken with respect to various input control devices available depending on the client computing device used to interact with the display, such as one or more clicks using a mouse or other pointing device, one or more tapping interactions using a touch screen of a client device, etc. In the depicted embodiment, the interactive reporting facility400bincludes a listing of interstitial media assets respectively identified by a unique interstitial media asset identifier415, with various generated information being presented in association with each of those listed interstitial media assets and with respect to the indicated region REG3 and indicated date DATE8. In particular, each identified interstitial media asset is respectively associated with a raw success rate percentage417; an MPBS success rate419; an adjusted MPBS success rate421; a successful presentation count423; a failed presentation count425; a channel count427, indicating a quantity of distinct media channels on which each interstitial media asset was scheduled for display; a run count429; an invalid status report count431; a “no data” status report count433; a missed break count435; an MPEG2 processed date437, indicating the date on which the associated interstitial media asset was processed in accordance with the indicated MPEG2 media format; an MPEG4 processed date field439, indicating the date on which the associated interstitial media asset was processed in accordance with the indicated MPEG4 media format; and a “Last Spooled” date field441, indicating the most recent date on which the associated interstitial media asset was spooled to one or more STB devices in the given region. It will be appreciated that in various scenarios and embodiments, a wide variety of information may be presented in the exemplified manner other than those specifically depicted in the embodiment ofFIG.4Bwithout deviating from the techniques described herein. FIG.4Cdepicts an exemplary interactive reporting facility400c, detailing run rate success data for a specific interstitial media asset associated with identifier ASST77837 in region REG3 on date DATE8. 
In certain embodiments, the interactive reporting facility400cmay be displayed in response to one or more user actions, such as if a user viewing the interactive reporting facility400bofFIG.4bselected interstitial media asset identifier415fin order to view additional details regarding scheduled presentations of the selected interstitial media asset. In the depicted embodiment, the interactive reporting facility400cincludes a listing of four distinct programming breaks in which one or more STB devices attempted to initiate presentation of the interstitial media asset ASST77837. For each such programming break, the interactive reporting facility provides a break status445, indicating whether the corresponding programming break occurred as scheduled; break validity status447, indicating whether the identified programming break accepted initiation of an interstitial media asset; run code449, indicating whether any STB devices attempted to initiate presentation of the scheduled interstitial media asset during the identified programming break; break identifier451, indicating a unique identifier for the corresponding programming break; order line identifier453, indicating a unique identifier for the order (such as an advertising order) that specified the interstitial media asset was to be presented during the corresponding programming break; service identifier455, typically indicating a television channel and/or type (e.g., “ABCH” to indicate a high-definition television channel carrying content provided by the ABC television network) on which the interstitial media asset was to be presented; success rate457; success count459; failure count461; window end time463, indicating the time by which the indicated order specified that the interstitial media asset was to be presented; and airtime indicator465, indicating the time at which the interstitial media asset was actually presented. FIG.5depicts an additional exemplary interactive reporting facility500as generated by an exemplary MMD system, detailing a visualization of all status codes received from a plurality of STB devices in an indicated date range from a first date Date1 to a second date Date10, inclusively. In various scenarios and embodiments, the interactive reporting facility may be displayed automatically based on one or more event criteria, or in response to one or more user actions or queries of a database of information generated by the MMD system based at least in part on one or more MDN data files received for the indicated date range. In the depicted embodiment, the interactive reporting facility500includes data selection controls501, allowing a user to select starting and ending dates for the date range, as well as to selectively review data based on a model type of the reporting STB devices and on one or more types of status reports received from those devices. The interactive reporting facility further includes visualized reporting region503, which provides a graphical display of millions of status reports for the selected date range based on the status types reported by millions of corresponding STB devices. As depicted inFIG.5, the two most common status types for those status reports are “Warning,” depicted via line graph504a, and “Success,” depicted via line graph504b; each of several additional, less commonly reported status types (“Error,” “Debug,” and “Invalid,” respectively) are similarly depicted via corresponding line graphs towards the bottom of the visualized reporting region503. 
In addition, the interactive reporting facility500includes a data count tabular display505, providing a table with delineated quantities of the various status types reported for each of the dates in the indicated date range, as well as a download selection control507to enable the user to download the provided data directly in a common “CSV” format. As with the other reporting facilities described with respect toFIGS.3A-3H and4A-4C, it will be appreciated that in certain embodiments, other interactivity may be provided, and that the MMD system may additionally generate similar visualizations for such data in one or more non-interactive formats. FIG.6is a flow diagram showing an exemplary routine600performed by the facility, such as via a multichannel media distribution system, to facilitate the timely and accurate decompression, decryption, parsing, ingestion, and display of status information regarding actual delivery of scheduled media presentations by multiple STB devices. The routine600begins at block602, in which one or more data files containing multiple media delivery notifications are received, such as from one or more media asset data service providers. The routine proceeds to block603to begin processing the received data files. At block603, the routine optionally decompresses the one or more MDN data files, such as in response to detecting that the data files have been compressed in order to conserve transmission bandwidth and/or other resources. The routine proceeds to block604. At block604, the routine decrypts the received (and potentially decompressed) one or more MDN data files in order to decipher the multiple interstitial media asset status reports contained therein. The routine proceeds to block606. At block606, the routine parses the decrypted multiple interstitial media asset status reports. In certain embodiments, each such report includes an indication of multiple aspects of a scheduled presentation of an interstitial media asset. As non-limiting examples, such aspects may include: an STB device identifier; an STB device type/model; a current software and/or firmware version associated with the STB device; one or more geographic area identifiers associated with the STB device; a presentation status type, such as to identify whether the corresponding scheduled presentation was successful, resulted in a warning, or associated with one or more other presentation status types; a media asset identifier; an order identifier associated with the corresponding scheduled interstitial media asset presentation; a channel or service type associated with the corresponding scheduled presentation; a programming break identifier associated with the corresponding scheduled presentation; a presentation initiation time; a presentation completion time; a viewing mode or type associated with the corresponding scheduled presentation; or other aspects. The routine then proceeds to block608. At block608, the routine initiates ingestion of the parsed media asset presentation data from the one or more received MDN data files, such as to generate additional information related to the multiple parsed status reports. Details and non-limiting examples regarding such additional information that may be generated in the course of ingesting the parsed media asset presentation data are described in greater detail elsewhere herein. The routine then proceeds to block610. 
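Blocks 602 through 608, together with the database generation of block 610 described next, can be pictured as a small processing pipeline. The sketch below is illustrative only: it assumes that each MDN data file arrives gzip-compressed and, once decrypted (the decryption callable is passed in rather than specified here), contains newline-delimited JSON status reports; the field names and the SQLite schema are likewise assumptions rather than the facility's actual format. The routine description resumes at block 610 below.

```python
import gzip
import json
import sqlite3
from dataclasses import dataclass


@dataclass
class StatusReport:
    stb_id: str    # STB device identifier
    status: str    # "Success", "Errors", "Warnings", "Debug", or "Invalid"
    asset_id: str  # interstitial media asset identifier
    break_id: str  # programming break identifier
    region: str    # geographic area identifier
    aired_at: str  # presentation initiation time


def parse_mdn_file(compressed: bytes, decrypt) -> list[StatusReport]:
    """Decompress (block 603), decrypt (block 604), and parse (block 606) one MDN data file."""
    plaintext = decrypt(gzip.decompress(compressed))
    reports = []
    for line in plaintext.decode("utf-8").splitlines():
        if not line.strip():
            continue
        rec = json.loads(line)
        reports.append(StatusReport(
            stb_id=rec["stb_id"], status=rec["status"], asset_id=rec["asset_id"],
            break_id=rec["break_id"], region=rec["region"], aired_at=rec["aired_at"],
        ))
    return reports


def ingest(db: sqlite3.Connection, reports: list[StatusReport]) -> None:
    """Blocks 608/610: ingest parsed reports and persist them as database entries."""
    db.execute("""CREATE TABLE IF NOT EXISTS mdn_reports
                  (stb_id TEXT, status TEXT, asset_id TEXT,
                   break_id TEXT, region TEXT, aired_at TEXT)""")
    db.executemany("INSERT INTO mdn_reports VALUES (?, ?, ?, ?, ?, ?)",
                   [(r.stb_id, r.status, r.asset_id, r.break_id, r.region, r.aired_at)
                    for r in reports])
    db.commit()


def status_counts(db: sqlite3.Connection) -> dict[str, int]:
    """Counts by status type, of the kind displayed in dashboard panel 301b and facility 500."""
    rows = db.execute("SELECT status, COUNT(*) FROM mdn_reports GROUP BY status")
    return dict(rows.fetchall())
```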
At block610, the routine initiates generation of one or more database entries regarding the parsed media asset presentation data, such as may include one or more aspects of the parsed presentation data itself, as well as additional information generated by the facility. In certain scenarios and embodiments, the generation of the one or more database entries may include generating one or more databases; alternatively, the facility may generate database entries for one or more existing databases, such as the facility may have previously caused to be stored and/or maintained. The routine proceeds to block612. At block612, the routine receives one or more reporting requests for specified data related to the generated database entries. In various scenarios and embodiments, such reporting requests may be manually initiated by one or more users of the facility, such as via one or more database query parameters; may be automatically generated in response to one or more events, such as in response to the expiration of one or more regular intervals, or based on one or more scheduling parameters for the MMD system; and/or may be received from one or more remote computing systems communicatively coupled to the MMD system. The routine proceeds to block614. At block614, the routine determines whether the received reporting request is for one or more graphical displays, such as to create or update one or more segments of a graphical dashboard display regarding the scheduled media asset presentation data and/or other information, or to generate one or more interactive reporting facilities based on such information. If so, the routine proceeds to block616; otherwise, the routine proceeds to block618. At block616, the routine provides graphical data visualization output of specified media asset presentation data based on a type and/or contents of the received reporting request. The routine proceeds to block618. At block618, the routine provides specified media asset presentation data or generated information based on contents of one or more databases maintained by the facility, either in conjunction with the provided graphical data visualization output discussed with respect to block616or (such as if no graphical output was requested) separately. The routine proceeds to block690. At block690, the routine determines whether to continue, such as in response to an express request to terminate. If the routine is to continue, it returns to block602to await additional MDN data files, or to block612in order to handle any additional reporting requests that have been received related to the existing media asset presentation databases. Otherwise, the routine proceeds to block699and ends. Those skilled in the art will appreciate that the various operations depicted viaFIG.6, as well as those described elsewhere herein, may be altered in a variety of ways. For example, the particular order of the operations may be rearranged; some operations may be performed in parallel; shown operations may be omitted, or other operations may be included; a shown operation may be divided into one or more component operations, or multiple shown operations may be combined into a single operation, etc. The various embodiments described above can be combined to provide further embodiments. All of the U.S. patents, U.S. patent application publications, U.S. 
patent applications, foreign patents, foreign patent applications and non-patent publications referred to in this specification and/or listed in the Application Data Sheet are incorporated herein by reference, in their entirety. Aspects of the embodiments can be modified, if necessary to employ concepts of the various patents, applications and publications to provide yet further embodiments. These and other changes can be made to the embodiments in light of the above-detailed description. In general, in the following claims, the terms used should not be construed to limit the claims to the specific embodiments disclosed in the specification and the claims, but should be construed to include all possible embodiments along with the full scope of equivalents to which such claims are entitled. Accordingly, the claims are not limited by the disclosure. | 75,820 |
11943269 | DETAILED DESCRIPTION Specific embodiments of the invention will now be described in detail with reference to the accompanying figures. Like elements in the various figures are denoted by like reference numerals for consistency. In the following detailed description of embodiments of the invention, numerous specific details are set forth in order to provide a more thorough understanding of the invention. However, it will be apparent to one of ordinary skill in the art that the invention may be practiced without these specific details. In other instances, well-known features have not been described in detail to avoid unnecessarily complicating the description. Throughout the application, ordinal numbers (e.g., first, second, third, etc.) may be used as an adjective for an element (i.e., any noun in the application). The use of ordinal numbers is not to imply or create any particular ordering of the elements nor to limit any element to being only a single element unless expressly disclosed, such as by the use of the terms “before”, “after”, “single”, and other such terminology. Rather, the use of ordinal numbers is to distinguish between the elements. By way of an example, a first element is distinct from a second element, and the first element may encompass more than one element and succeed (or precede) the second element in an ordering of elements. In general, embodiments of the disclosure initiate transactions over live media. A host user and a guest user may communicate with each other using a host application and a guest application executing on their respective computing devices. The host application and the guest application include videotelephony applications for live video and audio communication between the host user and the guest user with a multimedia stream. The host user and a guest user may discuss the transaction and then use the host application and the guest application to initiate the transaction, which may be executed with a separate transaction application. A transaction is a transfer of a numerical value from one data record to another data record. Transaction data is the metadata related to a transaction. For example, transaction data may include a numerical amount that identifies the numerical value being transferred, a first identifier for a first data record, a second identifier for a second data record, and multiple text descriptions that describe the transaction and may be related to the first and second identifiers. Host data may include text descriptions for the first identifier (name, contact information, etc.). Guest data may include text descriptions for the second identifier (name, contact information, etc.). A multimedia stream is a group of media streams that transfer media data (audio and/or video) between two computing devices, which may be done live on a real-time basis. The media stream may include metadata that includes stream definitions that define the types and parameters for individual streams. As an example, a multimedia stream between a smartphone and a personal computer may include the audio and video streams captured with the cameras and microphones of both devices and transferred between the devices so that each devices may display the audio and video captured with the other device. FIG.1AandFIG.1Bshow diagrams of embodiments that are in accordance with the disclosure.FIG.1Ashows an example of a client server implementation.FIG.1Bshows an example of a peer to peer implementation. 
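Before turning to FIG. 1A and FIG. 1B, note that the transaction data and multimedia stream metadata defined above map naturally onto simple record types. The sketch below is illustrative only; its field names mirror the definitions given above (a numerical amount, two record identifiers, text descriptions, and per-stream definitions) rather than anything drawn from the figures.

```python
from dataclasses import dataclass, field


@dataclass
class TransactionData:
    """Metadata for a transfer of a numerical value from one data record to another."""
    amount: float                  # numerical value being transferred
    source_record_id: str          # first identifier (e.g., associated with the host user)
    destination_record_id: str     # second identifier (e.g., associated with the guest user)
    descriptions: list[str] = field(default_factory=list)  # text descriptions of the transaction


@dataclass
class StreamDefinition:
    """Type and parameters for one media stream within a multimedia stream."""
    kind: str           # "audio" or "video"
    codec: str
    bitrate_kbps: int


@dataclass
class MultimediaStream:
    """A group of live media streams exchanged between two computing devices."""
    streams: list[StreamDefinition] = field(default_factory=list)
```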
The embodiments of FIG. 1A and FIG. 1B may be combined and may include or be included within the features and embodiments described in the other figures of the application. The features and elements of FIG. 1A and FIG. 1B are, individually and as a combination, improvements to the technology of machine learning. The various elements, systems, and components shown in FIG. 1A and FIG. 1B may be omitted, repeated, combined, and/or altered as shown from FIG. 1A and FIG. 1B. Accordingly, the scope of the present disclosure should not be considered limited to the specific arrangements shown in FIG. 1A and FIG. 1B.

Turning to FIG. 1A, the system (100) may process transactions over live streaming media. The system (100) includes the host client (102), the guest client (122), the intermediary server (142) and the transaction server (162). Users of the host client (102) and the guest client (122) establish a streaming media connection (a voice call, a conference call, a video call, etc.) and initiate a transaction. Details of the transaction may be overlaid onto streaming media in the user interfaces presented by the host client (102) and the guest client (122). As an example, the host user of the host client (102) may be a retail services provider and the guest user of the guest client (122) may be a customer of the host user. After discussing the services to be provided over the streaming media connection, the host user may initiate a transaction that is displayed in the user interfaces of both the host client (102) and the guest client (122). The intermediary application (144) records the state of the transaction with the intermediary state machine (148). After the transaction details are finalized, the guest client uses provider data generated by the intermediary server to execute the transaction with the transaction server (162). The provider data includes host data used by the guest client (122) to perform a transaction utilizing the transaction server (162). The provider data may include an identifier linked to the host user or host client, which may include a uniform resource locator (URL), a token, a digital signature, a certificate, parameter values, etc. that are used by the transaction application (164) to process a transaction.

The host client (102) is a computing system in accordance with those described in FIGS. 5A and 5B and, for example, may be a smartphone or a desktop computer. The host client (102) receives streaming media from the guest client (122), receives transaction data from the intermediary server (142), and transmits host data to the intermediary server (142). The host client (102) may initiate a transaction with the transaction server (162) that is completed with the guest client (122). The host client (102) includes the host capture device (104), the host display device (106) and the host application (108). In one embodiment, the host user of the host client (102) may be the seller involved in the transaction.

The host capture device (104) is a media capture device that records the video and audio data that form the host streaming media sent from the host client (102) to the guest client (122). The host capture device (104) may include multiple cameras and microphones incorporated into or attached to the host client (102). As examples, the host capture device (104) may include the cameras and microphones of a smartphone and may include the cameras and microphones of a webcam attached to a personal computer.
The host display device (106) is a media presentation device that presents audio and video information to the host user. The host display device (106) may include multiple monitors, displays, and speakers that output audio and/or video information. For example, the host display device (106) may include the touchscreen display and speaker output of a smartphone and the monitor and speakers of a personal computer. The host application (108) is a set of programs stored in the memory of the host client (102) and executing on the processors of the host client (102) that handles the multimedia streaming between the host client (102) and the guest client (122), receives host data for a transaction, and communicates with the intermediary application (144) to initiate the transaction. The host application (108) includes the host user interface (110) and the host local cache (112). The host user interface (110) is the interface displayed on the host display device (106) that the host user uses to interact with the host application (108). The host user interface (110) may inject the transaction data into the presentation of media by the host client (102). For example, the host user interface (110) may combine streaming video from the guest client (122) with host video captured with the host capture device (104) into a single video stream onto which transaction data is overlaid and then displayed on the host display device (106). The host local cache (112) is a memory that stores a local copy of the transaction data, which is primarily stored in the primary data store (150) of the intermediary application (144). Changes to the transaction data in the host local cache (112) may be pushed to the host user interface (110) in real time so that the display of information on the host display device (106) is continuously updated with the most recent information. The guest client (122) is a computing system in accordance with those described in FIGS. 5A and 5B and, for example, may be a smartphone or a desktop computer. The guest client (122) receives streaming media from the host client (102), receives transaction data from the intermediary server (142), transmits guest data to the intermediary server (142), and may execute transactions with the transaction server (162). The guest client (122) includes the guest capture device (124), the guest display device (126), the guest application (128), the guest user interface (130), and the guest local cache (132), which are comparable to the similarly named components from the host client (102) and may operate in an analogous fashion. The host application (108) and the guest application (128) may be different instances of the same application installed on different devices, i.e., the host client (102) and the guest client (122). The intermediary server (142) is a computing system in accordance with those described in FIGS. 5A and 5B and, for example, may be a server hosted in a cloud computing environment. The intermediary server (142) receives host data from the host application (108), receives guest data from the guest application (128), processes the host data and guest data to update the intermediary state machine (148) and generate transaction data, and propagates the transaction data to the host application (108) and the guest application (128). The intermediary server (142) may send provider data to the guest application (128) for the guest client (122) to complete a transaction.
The intermediary server (142) may initiate a transaction with the transaction server (162) that is completed with the guest client (122). The intermediary server (142) includes the intermediary application (144). The intermediary application (144) is a set of programs stored in the memory of the intermediary server (142) and executing on the processors of the intermediary server (142) that handle the propagation of transaction data between the host application (108) and the guest application (128), that update the intermediary state machine (148), and that store transaction data to the primary data store (150). The intermediary application (144) includes the intermediary application programming interface (146), the intermediary state machine (148), and the primary data store (150). The intermediary application programming interface (146) is the interface used by the host application (108) and the guest application (128) to transfer transaction data and update the intermediary state machine (148). The intermediary application programming interface (146) may be a web application programming interface (API) using interactions based on representational state transfer (REST) or simple object access protocol (SOAP) standards. A RESTful web API may use HTTP methods to access resources or services using uniform resource locator (URL)-encoded parameters with the use of JavaScript object notation (JSON) or extensible markup language (XML) formatted text to transmit data and invoke the methods of the web API. The intermediary state machine (148) is a data structure that includes multiple states that may be transitioned between based on inputs received by the intermediary application (144). The transitions may identify data that is used to proceed from one state to a different state. As an example, one embodiment of the intermediary state machine (148) may include the states in the table below.

TABLE 1
State    Description
State 1  Streaming connection established: the streaming media connection has been established between the host client (102) and the guest client (122) but the transaction has not been defined.
State 2  Numerical value identified: a numerical value for the transaction has been defined but the host and guest information (i.e., host data and guest data) has not been updated.
State 3  Host and guest information completed: the host and guest information has been received, processed, and completed, but the transaction has not been performed and verified.
State 4  Transaction verified: the transaction has been processed and verified.

The primary data store (150) is a memory that stores the primary copy of the transaction data and may store the state of the intermediary state machine (148). The primary data store (150) is updated with the host data and guest data received from the host application (108) and the guest application (128) by the intermediary application (144). After transaction data is stored in the primary data store (150), the transaction data may be pushed out to the host local cache (112) and the guest local cache (132).
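Referring back to Table 1, the intermediary state machine (148) may be sketched, for illustration only, as a small transition table. The sketch below is written in Python; the event names (e.g., "value_received", "info_completed", "transaction_verified") are hypothetical placeholders for the host data, guest data, and verification inputs described above and are not prescribed by this disclosure.

from enum import Enum, auto

class State(Enum):
    STREAMING_CONNECTION_ESTABLISHED = auto()  # State 1 in Table 1
    NUMERICAL_VALUE_IDENTIFIED = auto()        # State 2 in Table 1
    HOST_AND_GUEST_INFO_COMPLETED = auto()     # State 3 in Table 1
    TRANSACTION_VERIFIED = auto()              # State 4 in Table 1

# Transitions keyed by (current state, event); the events stand in for the
# inputs that the intermediary application (144) may receive.
TRANSITIONS = {
    (State.STREAMING_CONNECTION_ESTABLISHED, "value_received"): State.NUMERICAL_VALUE_IDENTIFIED,
    (State.NUMERICAL_VALUE_IDENTIFIED, "info_completed"): State.HOST_AND_GUEST_INFO_COMPLETED,
    (State.HOST_AND_GUEST_INFO_COMPLETED, "transaction_verified"): State.TRANSACTION_VERIFIED,
}

def advance(state: State, event: str) -> State:
    # Remain in the current state when the event does not trigger a transition.
    return TRANSITIONS.get((state, event), state)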
The transaction server (162) is a computing system in accordance with those described in FIGS. 5A and 5B and, for example, may be a server hosted in a cloud computing environment. The transaction server hosts the transaction application (164). The transaction application (164) executes transactions that have been discussed by the host user and the guest user over live streaming media using the transaction application programming interface (166). The transaction application programming interface (166) may be a web API using interactions based on representational state transfer (REST) or simple object access protocol (SOAP) standards with methods invoked using HTTP with data in JSON or XML format. In one embodiment, the transaction application (164) may include a web-based payment processing service that processes the transaction as a payment transaction. For example, the host user of the host client (102) may be a retailer that sells goods (products or services) to the guest user of the guest client (122). After discussing the transaction, the users agree to an amount and the guest user executes a payment transaction with the transaction server (162) using the guest client (122). In one embodiment, the transaction application (164) may include a cloud services portal that processes the transaction as a cloud services request. For example, the host user of the host client (102) may be a programmer and the guest user is looking to grant access for the host user to the cloud-based servers and data maintained by the guest user. After discussing the transaction, e.g., the limits of access and scope of work for the host user, the users agree to the transaction (e.g., granting a certain number of days of access to the cloud-based servers) and the guest user executes the transaction to give the host user access to the cloud-based servers. Turning to FIG. 1B, the system (170) may process transactions over live streaming media. The system (170) operates in a peer-to-peer fashion as compared to the system (100) of FIG. 1A, which operates in a client-server fashion. The system (170) includes the host client (172), the guest client (122), and the transaction server (162). The guest client (122) and the transaction server (162) and related components operate as described in FIG. 1A. The host client (172) includes the host capture device (104), the host display device (106), the host application (178), and the intermediary application (184). The host capture device (104) and the host display device (106) operate as described with respect to FIG. 1A. The host application (178) and the intermediary application (184) operate similarly to the host application (108) and the intermediary application (144) from FIG. 1A but without the use of the intermediary server (142). FIG. 2 shows a flowchart of the process (200) in accordance with the disclosure. The process (200) of FIG. 2 initiates transactions over live media. The embodiment of FIG. 2 may be combined and may include or be included within the features and embodiments described in the other figures of the application. The features of FIG. 2 are, individually and as an ordered combination, improvements to the technology of computing systems and streaming media systems. While the various steps in the flowcharts are presented and described sequentially, one of ordinary skill will appreciate that at least some of the steps may be executed in different orders, may be combined or omitted, and at least some of the steps may be executed in parallel. Furthermore, the steps may be performed actively or passively. For example, some steps may be performed using polling or be interrupt driven. By way of an example, determination steps may not have a processor process an instruction unless an interrupt is received to signify that a condition exists. As another example, determinations may be performed by performing a test, such as checking a data value to test whether the value is consistent with the tested condition.
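As a concrete, non-limiting illustration of the RESTful, JSON-based exchanges described above for the intermediary application programming interface (146), a client might submit host data over HTTP as follows. The endpoint URL, the field names, and the use of the third-party Python requests library are assumptions made only for illustration.

import requests  # third-party HTTP client, used here only for illustration

# Hypothetical endpoint exposed by the intermediary application programming interface (146).
INTERMEDIARY_API = "https://intermediary.example.com/api/v1/transactions"

host_data = {
    "amount": 100,                        # numerical value for the transaction
    "host_account": "host-account-001",   # identifier linked to the host user
    "contact": "host@example.com",
}

# POST the host data as JSON; the intermediary application may respond with the
# updated transaction data and the current state of the intermediary state machine (148).
response = requests.post(INTERMEDIARY_API, json=host_data, timeout=10)
transaction_data = response.json()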
Turning to FIG. 2, in Step 202, host data is received from a host application and a state machine is updated using the host data. The host data is received by an intermediary application that manages the state machine for initiating the transaction between the host user and the guest user. In one embodiment, the host data includes a numerical value for the transaction and may include host identification information, such as a host account identifier, a host name, host contact information, etc. In one embodiment, the state machine may be updated by transitioning between different states based on the host data. As an example, when the host data includes a numerical value, the state machine may transition from State 1 (“streaming connection established”) to State 2 (“numerical value identified”). The host application may capture host media. The host media may include audio and video captured with the cameras and microphones of the host client. The host media may be transmitted to the guest client. Combinations of the host media, the guest media, and the transaction data may be displayed live in real-time by the host application on the host client. In Step 204, guest data is received from a guest application and the state machine is updated using the guest data. The guest data is received by the intermediary application that manages the state machine. In one embodiment, the guest data includes contact information of the guest user. In one embodiment, the state machine may be updated by transitioning from State 2 (“numerical value identified”) to State 3 (“host and guest information completed”). The guest application may capture guest media. The guest media may include audio and video captured with the cameras and microphones of the guest client. The guest media may be transmitted to the host client. Combinations of the guest media, the host media, and the transaction data may be displayed live in real-time by the guest application on the guest client. In Step 206, transaction data is propagated between the host application and the guest application. The transaction data includes the host data and the guest data. Each update to the state machine and each update to the primary data store may trigger the intermediary application to send the updated information to the other applications. For example, after receiving the host data, the intermediary application may update the state machine using the host data, store the host data in the primary data store, and then send transaction data that includes the host data to the guest application. Similarly, after receiving the guest data, the intermediary application may update the state machine using the guest data, store the guest data in the primary data store, and then send transaction data that includes the guest data to the host application. To propagate the transaction data, the intermediary application may send the transaction data (updated with host data or guest data) to both the host application and the guest application, which update the host local cache and the guest local cache, respectively, with the updated transaction data. The transaction data may be presented with the multimedia stream on the host client and the guest client. The guest application may overlay the host data onto a display of the multimedia stream in real time on the guest client and the host application may overlay the guest data onto a display of the multimedia stream in real time on the host client.
For example, the host client may combine guest media from the guest client with host media from the host client and with an overlay of the transaction data into a single media stream that is presented with the host client (e.g., by displaying video and/or playing audio). Similarly, the guest client may combine host media from the host client with guest media from the guest client and with an overlay of the transaction data into a single media stream that is presented with the guest client. Additional methods of combining the transaction data with the multimedia stream include side-by-side or under-and-under configurations without overlaying the transaction data on the multimedia stream. With a side-by-side configuration, the transaction data may be shown on one side of a user interface (e.g., a left side, right side, top side, or bottom side) and the multimedia stream on another side (e.g., on the opposite side). With an under-and-under configuration, the transaction data may be shown below the multimedia stream. With each configuration, the transaction data is updated in real time during transmission of the multimedia stream. Updates to the transaction data may be propagated live in real-time between the components of the system. For example, the host user may update the numerical value using keyboard inputs and the updates to the numerical value are sent to the intermediary application. The intermediary application processes the updates and propagates the updated numerical value by sending the updated numerical value to the host application and the guest application, which respectively store the updated numerical value in the host local cache and the guest local cache. The update to the guest local cache may trigger an update to the display of the transaction data overlaid onto the multimedia stream by the guest application to reflect the updated transaction data stored in the guest local cache. In one embodiment, the state machine may be updated using multimedia stream data. For example, the host data or guest data may include a timestamp, a frame of video, a hash value generated from the multimedia stream, etc. that is stored to the primary data store and required by the state machine to transition to the next state. The multimedia stream data may be used to indicate that the multimedia stream is active between the host user and the guest user and verify that both the host user and the guest user participated in initiating the transaction. Updates to the state machine may be stored with or without using a hash chain. As an example using a hash chain, an update to the state machine may be stored in a block of the hash chain, which is an immutable block whose value cannot be changed without corrupting the hash values of subsequent blocks of the hash chain. The block includes a payload and a hash value. The payload includes transaction data, host data, guest data, an identifier for the present state of the state machine, an identifier for the next state of the state machine, etc. The payload and parts of the data within the payload may also be encrypted to protect the private information within the payload. The hash value is generated by hashing the payload and a previous hash value, which is the hash value from a previous block from the hash chain.
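The hash chain storage described above may be sketched, for illustration only, as follows. The use of SHA-256 hashing and JSON serialization of the payload is an assumption made to keep the sketch concrete; any collision-resistant hash and any serialization may be substituted.

import hashlib
import json

def append_block(chain: list, payload: dict) -> dict:
    # Hash of the previous block, or a fixed value for the first block of the chain.
    previous_hash = chain[-1]["hash"] if chain else "0" * 64
    # Hash the payload together with the previous hash so that altering any earlier
    # block corrupts the hash values of all subsequent blocks.
    digest = hashlib.sha256(
        (json.dumps(payload, sort_keys=True) + previous_hash).encode()
    ).hexdigest()
    block = {"payload": payload, "previous_hash": previous_hash, "hash": digest}
    chain.append(block)
    return block

chain: list = []
append_block(chain, {"present_state": "State 2", "next_state": "State 3", "guest": "guest-account-002"})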
In Step 208, provider data is generated responsive to updating the state machine with the host data and the guest data. After all of the information for initiating a transaction has been received by the intermediary application with the state machine being appropriately updated, the intermediary application may generate the provider data. The provider data may identify a data record of the host user that is used to perform a transaction. In one embodiment, the intermediary application may look up a uniform resource locator (URL) comprising a domain of the transaction server and further identifying an account of the host user. When the URL is opened by the guest client, the transaction application may conduct the transaction. In one embodiment, the intermediary application may generate a URL with one or more parameters appended to the URL. The URL is served by the transaction server and the transaction application initiates the transaction responsive to the guest client opening the URL that includes the parameters. As an example, the parameters may include tokens, certificates, signatures, etc. of the host user and the guest user for performing the transaction. In Step 210, the provider data is sent to the guest client. The provider data is presented with the multimedia stream by the guest application on the guest client. The provider data may be overlaid on top of the media (including the host media) displayed by the guest application on the guest client. The guest client may execute the transaction using the provider data. For example, the guest client may open a link of the provider data in a browser application that is separate from the guest application. The guest user may follow the steps used by the website hosted at the URL from the provider data to perform the transaction. As an example, when the transaction is a payment transaction, the guest user may be familiar with the website of a certain payment processing services provider and use a browser executing on the guest client to conduct the transaction with the familiar website. The guest client may execute the transaction while maintaining the multimedia stream. As an example, an external website of a transaction provider may be overlaid on top of the multimedia stream presented by the guest application on the guest client. In this manner, a single application may be presented to the user for initiating and executing the transaction.
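As a non-limiting sketch of the provider data generation described in Steps 208 and 210, the intermediary application may build a URL in the domain of the transaction server with parameters appended to it. The domain, path, and parameter names below are hypothetical.

from urllib.parse import urlencode

def build_provider_url(host_account: str, amount: float, token: str) -> str:
    # Hypothetical domain and path served by the transaction server (162).
    base = "https://transactions.example.com/pay"
    # Parameters identifying the account of the host user and carrying a token,
    # certificate, or signature used by the transaction application (164).
    params = urlencode({"account": host_account, "amount": amount, "token": token})
    return f"{base}?{params}"

provider_url = build_provider_url("host-account-001", 100.0, "signed-token-abc")
# Opening provider_url in a browser on the guest client may initiate the transaction.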
FIG. 3 and FIG. 4A through FIG. 4E show examples of sequences, systems, and interfaces in accordance with the disclosure. FIG. 3 shows a sequence diagram for performing transactions over live media. FIGS. 4A through 4E show an example of a user interface used to perform transactions over live media. The embodiments of FIG. 3 and FIG. 4A through FIG. 4E may be combined and may include or be included within the features and embodiments described in the other figures of the application. The features and elements of FIG. 3 and FIG. 4A through FIG. 4E are, individually and as a combination, improvements to the technology of computing systems and streaming media systems. The various features, elements, widgets, components, and interfaces shown in FIG. 3 and FIG. 4A through FIG. 4E may be omitted, repeated, combined, and/or altered as shown. Accordingly, the scope of the present disclosure should not be considered limited to the specific arrangements shown in FIG. 3 and FIG. 4A through FIG. 4E. Turning to FIG. 3, the sequence (300) is performed with the intermediary application (302), the host application (303), the guest application (304), and the transaction application (305). The applications may execute on computing systems in accordance with those described in FIGS. 5A and 5B, which, for example, may be a smartphone, a desktop computer, a cloud-based server computer, etc. In Step 322, a multimedia stream is established between the host application (303) and the guest application (304). The multimedia stream may be established using protocols and standards that include the session initiation protocol (SIP), H.323, real-time protocol (RTP), user datagram protocol (UDP), etc. The host user or guest user may share a uniform resource locator (URL) that, when opened, causes the host application (303) and the guest application (304) to set up the multimedia stream between the host client and the guest client running the host application (303) and the guest application (304). In Step 324, the host user interacts with the host user interface of the host application (303). The interaction may include providing host information (account data, contact information, numerical values for a transaction, etc.) to the host application (303). In Step 326, transaction data and host data are sent between the intermediary application (302) and the host application (303). The transaction data may include host data that was previously sent by the host application (303) and may include guest data that was sent to the intermediary application (302) by the guest application (304). The transaction data may be stored in a local cache of the host application (303). In Step 328, the guest user interacts with the guest user interface of the guest application (304). The interaction may include providing guest information (account data, contact information, numerical values for a transaction, etc.) to the guest application (304). In Step 330, transaction data and guest data are sent between the intermediary application (302) and the guest application (304). The transaction data may include guest data that was previously sent by the guest application (304) and may include host data that was sent to the intermediary application (302) by the host application (303). The transaction data may be stored in a local cache of the guest application (304). In Step 332, transaction data is processed by the intermediary application (302). The intermediary application (302) receives host data and guest data from the host application (303) and the guest application (304) and stores the host data and guest data in a primary data store as transaction data. The intermediary application (302) updates a state machine using the host data and guest data. The intermediary application (302) sends the transaction data back out to the host application (303) and the guest application (304) to update the local caches and the displays of the host application (303) and the guest application (304). In Step 334, transaction provider data (also referred to as provider data) is sent to the guest application (304). The transaction provider data includes an identifier for the transaction application (305) so that the guest application may execute a transaction with the transaction application (305). In Step 336, the transaction provider data is displayed by the guest application (304). The transaction provider data may include identifiers that reference multiple different transaction providers. The guest user may select one of the transaction providers (e.g., the transaction provider that maintains the transaction application (305)) with which to perform the transaction. At Step 338, a transaction is executed.
In one embodiment, the transaction may be executed using the guest application. In another embodiment, a different application may be used to execute the transaction with the transaction application. Execution of the transaction may involve sending identifying information about the transaction, the guest user, the host user, etc. to the transaction application (305). At Step 340, a transaction confirmation may be sent to the host application from the transaction application. As an example, the host user may receive an email that identifies that the transaction has been executed between the guest user and the host user. A notification of the confirmation may be displayed in the host application (303). Turning to FIG. 4A, the host application (400) and the guest application (450) are used to initiate a transaction between the host user and the guest user over a live multimedia stream. The host application (400) and the guest application (450) each include a web browser that has opened a videotelephony website to provide video and audio communication between the host user and the guest user for the host user and guest user to discuss a transaction. The video and audio communication is provided with a multimedia streaming connection between the host application (400) and the guest application (450). The host application (400) displays the guest media (452), which was captured with the guest client that executes the guest application (450). The guest media (452) includes video and audio of the guest user. In a corner of the display of the host application (400), the host media (402) is displayed with video of the host user. The host media (402) is captured with the host client that executes the host application (400). On top of the media being displayed, the host transaction overlay (408) is presented with the host data (404). The host transaction overlay (408) is a user interface with multiple user interface elements to collect host data and display transaction data that includes guest data. The host data (404) includes a numerical value (“(100)”) for the transaction being discussed between the host user and the guest user over the multimedia stream. The guest application (450) displays the host media (402) and the guest media (452). On top of the media being displayed, the guest transaction overlay (458) is presented with transaction data that includes the host data (“(100)”). The guest transaction overlay (458) is a user interface with multiple user interface elements to collect guest data and display transaction data that includes host data. Turning to FIG. 4B, the host media (402) and the guest media (452) continue to stream and be displayed between the host application (400) and the guest application (450). The host transaction overlay (408) and the guest transaction overlay (458) are updated after the guest user approved the numerical value by selecting a button from the guest transaction overlay (458). The state machine of the intermediary application is updated to indicate that the numerical amount is accepted, and the numerical value is stored in the primary data store of the intermediary application. The user interface elements of the host transaction overlay (408) and the guest transaction overlay (458) are updated in response to the updates to the state machine. The host transaction overlay (408) is updated to include a request button that, when selected, sends a request to the guest user to provide additional information.
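The picture-in-picture and overlay arrangement described for FIG. 4A may be illustrated, as a non-limiting sketch, with the Pillow imaging library; the frame sizes, the corner position of the host media, and the use of Pillow itself are assumptions rather than requirements of the disclosure.

from PIL import Image, ImageDraw

def compose_frame(guest_frame: Image.Image, host_frame: Image.Image, amount: str) -> Image.Image:
    # Start from the full-screen guest media (452).
    frame = guest_frame.copy()
    # Place a reduced copy of the host media (402) in a corner of the display.
    thumb = host_frame.resize((frame.width // 4, frame.height // 4))
    frame.paste(thumb, (frame.width - thumb.width - 16, 16))
    # Overlay transaction data, such as the numerical value, on top of the media.
    ImageDraw.Draw(frame).text((16, frame.height - 48), "Amount: " + amount, fill="white")
    return frame

guest = Image.new("RGB", (1280, 720), (96, 96, 96))   # stand-in for a decoded guest video frame
host = Image.new("RGB", (640, 360), (48, 48, 48))     # stand-in for a decoded host video frame
composited = compose_frame(guest, host, "(100)")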
Turning toFIG.4C, the host transaction overlay (408) and the guest transaction overlay (458) are updated after the request button (shown inFIG.4B) was selected by the host user. The state machine may be updated from an “accepted numerical amount” state to a “guest information requested” state. The guest user inputs guest data (454) (name, address, email address, phone number, etc.) into the “Buyer Info” section of the guest transaction overlay (458). As the guest data is entered, the guest data is transmitted to the intermediary application, processed by the intermediary application, sent to the host application (400), and displayed by the host application (400) in the host transaction overlay (408). Turning toFIG.4D, the host transaction overlay (408) and the guest transaction overlay (458) are updated in response to selection of the “Payment” section from either the host transaction overlay (408) or the guest transaction overlay (458). The “Payment” section of the guest transaction overlay (458) includes the transaction link (460) that identifies the transaction application for processing the transaction. The transaction link (460) may identify an account of the host user for initiating the transaction. Turning toFIG.4E, the browser of the guest client is updated to display the transaction application (470) in response to selection of the transaction link (460) (shown inFIG.4D). The transaction application (470) is presented in a separate tab of the browser application that presented the guest application. The guest user may interact with the transaction application (470) to execute the transaction that was discussed using the host application (400) and the guest application (450) (shown inFIG.4D). Embodiments of the invention may be implemented on a computing system. Any combination of a mobile, a desktop, a server, a router, a switch, an embedded device, or other types of hardware may be used. For example, as shown inFIG.5A, the computing system (500) may include one or more computer processor(s) (502), non-persistent storage (504) (e.g., volatile memory, such as a random access memory (RAM), cache memory), persistent storage (506) (e.g., a hard disk, an optical drive such as a compact disk (CD) drive or a digital versatile disk (DVD) drive, a flash memory, etc.), a communication interface (512) (e.g., Bluetooth interface, infrared interface, network interface, optical interface, etc.), and numerous other elements and functionalities. The computer processor(s) (502) may be an integrated circuit for processing instructions. For example, the computer processor(s) (502) may be one or more cores or micro-cores of a processor. The computing system (500) may also include one or more input device(s) (510), such as a touchscreen, a keyboard, a mouse, a microphone, a touchpad, an electronic pen, or any other type of input device. The communication interface (512) may include an integrated circuit for connecting the computing system (500) to a network (not shown) (e.g., a local area network (LAN), a wide area network (WAN) such as the Internet, a mobile network, or any other type of network) and/or to another device, such as another computing device. Further, the computing system (500) may include one or more output device(s) (508), such as a screen (e.g., a liquid crystal display (LCD), a plasma display, a touchscreen, a cathode ray tube (CRT) monitor, a projector, or other display device), a printer, an external storage, or any other output device. 
One or more of the output device(s) (508) may be the same or different from the input device(s) (510). The input and output device(s) (510and508) may be locally or remotely connected to the computer processor(s) (502), non-persistent storage (504), and persistent storage (506). Many different types of computing systems exist, and the aforementioned input and output device(s) (510and508) may take other forms. Software instructions in the form of computer readable program code to perform embodiments of the invention may be stored, in whole or in part, temporarily or permanently, on a non-transitory computer readable medium such as a CD, a DVD, a storage device, a diskette, a tape, flash memory, physical memory, or any other computer readable storage medium. Specifically, the software instructions may correspond to computer readable program code that, when executed by a processor(s), is configured to perform one or more embodiments of the invention. The computing system (500) inFIG.5Amay be connected to or be a part of a network. For example, as shown inFIG.5B, the network (520) may include multiple nodes (e.g., node X (522), node Y (524)). Each node may correspond to a computing system, such as the computing system (500) shown inFIG.5A, or a group of nodes combined may correspond to the computing system (500) shown inFIG.5A. By way of an example, embodiments of the invention may be implemented on a node of a distributed system that is connected to other nodes. By way of another example, embodiments of the invention may be implemented on a distributed computing system having multiple nodes, where each portion of the invention may be located on a different node within the distributed computing system. Further, one or more elements of the aforementioned computing system (500) may be located at a remote location and connected to the other elements over a network. Although not shown inFIG.5B, the node may correspond to a blade in a server chassis that is connected to other nodes via a backplane. By way of another example, the node may correspond to a server in a data center. By way of another example, the node may correspond to a computer processor or micro-core of a computer processor with shared memory and/or resources. The nodes (e.g., node X (522), node Y (524)) in the network (520) may be configured to provide services for a client device (526). For example, the nodes may be part of a cloud computing system. The nodes may include functionality to receive requests from the client device (526) and transmit responses to the client device (526). The client device (526) may be a computing system, such as the computing system (500) shown inFIG.5A. Further, the client device (526) may include and/or perform all or a portion of one or more embodiments of the invention. The computing system (500) or group of computing systems described inFIGS.5A and5Bmay include functionality to perform a variety of operations disclosed herein. For example, the computing system(s) may perform communication between processes on the same or different system. A variety of mechanisms, employing some form of active or passive communication, may facilitate the exchange of data between processes on the same device. Examples representative of these inter-process communications include, but are not limited to, the implementation of a file, a signal, a socket, a message queue, a pipeline, a semaphore, shared memory, message passing, and a memory-mapped file. 
Further details pertaining to a couple of these non-limiting examples are provided below. Based on the client-server networking model, sockets may serve as interfaces or communication channel end-points enabling bidirectional data transfer between processes on the same device. Foremost, following the client-server networking model, a server process (e.g., a process that provides data) may create a first socket object. Next, the server process binds the first socket object, thereby associating the first socket object with a unique name and/or address. After creating and binding the first socket object, the server process then waits and listens for incoming connection requests from one or more client processes (e.g., processes that seek data). At this point, when a client process wishes to obtain data from a server process, the client process starts by creating a second socket object. The client process then proceeds to generate a connection request that includes at least the second socket object and the unique name and/or address associated with the first socket object. The client process then transmits the connection request to the server process. Depending on availability, the server process may accept the connection request, establishing a communication channel with the client process, or the server process, busy in handling other operations, may queue the connection request in a buffer until server process is ready. An established connection informs the client process that communications may commence. In response, the client process may generate a data request specifying the data that the client process wishes to obtain. The data request is subsequently transmitted to the server process. Upon receiving the data request, the server process analyzes the request and gathers the requested data. Finally, the server process then generates a reply including at least the requested data and transmits the reply to the client process. The data may be transferred, more commonly, as datagrams or a stream of characters (e.g., bytes). Shared memory refers to the allocation of virtual memory space in order to substantiate a mechanism for which data may be communicated and/or accessed by multiple processes. In implementing shared memory, an initializing process first creates a shareable segment in persistent or non-persistent storage. Post creation, the initializing process then mounts the shareable segment, subsequently mapping the shareable segment into the address space associated with the initializing process. Following the mounting, the initializing process proceeds to identify and grant access permission to one or more authorized processes that may also write and read data to and from the shareable segment. Changes made to the data in the shareable segment by one process may immediately affect other processes, which are also linked to the shareable segment. Further, when one of the authorized processes accesses the shareable segment, the shareable segment maps to the address space of that authorized process. Often, only one authorized process may mount the shareable segment, other than the initializing process, at any given time. Other techniques may be used to share data, such as the various data described in the present application, between processes without departing from the scope of the invention. The processes may be part of the same or different application and may execute on the same or different computing system. 
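Referring back to the client-server socket exchange outlined above, the following minimal sketch uses Python's standard socket module; the loopback address, port number, and request/reply contents are arbitrary illustrations rather than requirements.

import socket

# Server process: create a first socket object, bind it to a unique name/address, and listen.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 50007))
server.listen(1)

# Client process: create a second socket object and transmit a connection request.
client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(("127.0.0.1", 50007))

# Server accepts the connection request, establishing a communication channel.
channel, _address = server.accept()

# Client generates a data request; the server gathers the requested data and replies.
client.sendall(b"data request")
request = channel.recv(1024)
channel.sendall(b"requested data for: " + request)
reply = client.recv(1024)

channel.close()
client.close()
server.close()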
Rather than or in addition to sharing data between processes, the computing system performing one or more embodiments of the invention may include functionality to receive data from a user. For example, in one or more embodiments, a user may submit data via a graphical user interface (GUI) on the user device. Data may be submitted via the graphical user interface by a user selecting one or more graphical user interface widgets or inserting text and other data into graphical user interface widgets using a touchpad, a keyboard, a mouse, or any other input device. In response to selecting a particular item, information regarding the particular item may be obtained from persistent or non-persistent storage by the computer processor. Upon selection of the item by the user, the contents of the obtained data regarding the particular item may be displayed on the user device in response to the user's selection. By way of another example, a request to obtain data regarding the particular item may be sent to a server operatively connected to the user device through a network. For example, the user may select a uniform resource locator (URL) link within a web client of the user device, thereby initiating a Hypertext Transfer Protocol (HTTP) or other protocol request being sent to the network host associated with the URL. In response to the request, the server may extract the data regarding the particular selected item and send the data to the device that initiated the request. Once the user device has received the data regarding the particular item, the contents of the received data regarding the particular item may be displayed on the user device in response to the user's selection. Further to the above example, the data received from the server after selecting the URL link may provide a web page in Hyper Text Markup Language (HTML) that may be rendered by the web client and displayed on the user device. Once data is obtained, such as by using techniques described above or from storage, the computing system, in performing one or more embodiments of the invention, may extract one or more data items from the obtained data. For example, the extraction may be performed as follows by the computing system (500) inFIG.5A. First, the organizing pattern (e.g., grammar, schema, layout) of the data is determined, which may be based on one or more of the following: position (e.g., bit or column position, Nth token in a data stream, etc.), attribute (where the attribute is associated with one or more values), or a hierarchical/tree structure (consisting of layers of nodes at different levels of detail-such as in nested packet headers or nested document sections). Then, the raw, unprocessed stream of data symbols is parsed, in the context of the organizing pattern, into a stream (or layered structure) of tokens (where each token may have an associated token “type”). Next, extraction criteria are used to extract one or more data items from the token stream or structure, where the extraction criteria are processed according to the organizing pattern to extract one or more tokens (or nodes from a layered structure). For position-based data, the token(s) at the position(s) identified by the extraction criteria are extracted. For attribute/value-based data, the token(s) and/or node(s) associated with the attribute(s) satisfying the extraction criteria are extracted. For hierarchical/layered data, the token(s) associated with the node(s) matching the extraction criteria are extracted. 
The extraction criteria may be as simple as an identifier string or may be a query presented to a structured data repository (where the data repository may be organized according to a database schema or data format, such as XML). The extracted data may be used for further processing by the computing system. For example, the computing system (500) ofFIG.5A, while performing one or more embodiments of the invention, may perform data comparison. Data comparison may be used to compare two or more data values (e.g., A, B). For example, one or more embodiments may determine whether A>B, A=B, A!=B, A<B, etc. The comparison may be performed by submitting A, B, and an opcode specifying an operation related to the comparison into an arithmetic logic unit (ALU) (i.e., circuitry that performs arithmetic and/or bitwise logical operations on the two data values). The ALU outputs the numerical result of the operation and/or one or more status flags related to the numerical result. For example, the status flags may indicate whether the numerical result is a positive number, a negative number, zero, etc. By selecting the proper opcode and then reading the numerical results and/or status flags, the comparison may be executed. For example, in order to determine if A>B, B may be subtracted from A (i.e., A−B), and the status flags may be read to determine if the result is positive (i.e., if A>B, then A—B>0). In one or more embodiments, B may be considered a threshold, and A is deemed to satisfy the threshold if A=B or if A>B, as determined using the ALU. In one or more embodiments of the invention, A and B may be vectors, and comparing A with B requires comparing the first element of vector A with the first element of vector B, the second element of vector A with the second element of vector B, etc. In one or more embodiments, if A and B are strings, the binary values of the strings may be compared. The computing system (500) inFIG.5Amay implement and/or be connected to a data repository. For example, one type of data repository is a database. A database is a collection of information configured for ease of data retrieval, modification, re-organization, and deletion. A Database Management System (DBMS) is a software application that provides an interface for users to define, create, query, update, or administer databases. The user, or software application, may submit a statement or query into the DBMS. Then the DBMS interprets the statement. The statement may be a select statement to request information, update statement, create statement, delete statement, etc. Moreover, the statement may include parameters that specify data, or data container (database, table, record, column, view, etc.), identifier(s), conditions (comparison operators), functions (e.g. join, full join, count, average, etc.), sort (e.g. ascending, descending), or others. The DBMS may execute the statement. For example, the DBMS may access a memory buffer, a reference or index a file for read, write, deletion, or any combination thereof, for responding to the statement. The DBMS may load the data from persistent or non-persistent storage and perform computations to respond to the query. The DBMS may return the result(s) to the user or software application. The computing system (500) ofFIG.5Amay include functionality to present raw and/or processed data, such as results of comparisons and other processing. For example, presenting data may be accomplished through various presenting methods. 
Specifically, data may be presented through a user interface provided by a computing device. The user interface may include a GUI that displays information on a display device, such as a computer monitor or a touchscreen on a handheld computer device. The GUI may include various GUI widgets that organize what data is shown as well as how data is presented to a user. Furthermore, the GUI may present data directly to the user, e.g., data presented as actual data values through text, or rendered by the computing device into a visual representation of the data, such as through visualizing a data model. For example, a GUI may first obtain a notification from a software application requesting that a particular data object be presented within the GUI. Next, the GUI may determine a data object type associated with the particular data object, e.g., by obtaining data from a data attribute within the data object that identifies the data object type. Then, the GUI may determine any rules designated for displaying that data object type, e.g., rules specified by a software framework for a data object class or according to any local parameters defined by the GUI for presenting that data object type. Finally, the GUI may obtain data values from the particular data object and render a visual representation of the data values within a display device according to the designated rules for that data object type. Data may also be presented through various audio methods. In particular, data may be rendered into an audio format and presented as sound through one or more speakers operably connected to a computing device. Data may also be presented to a user through haptic methods. For example, haptic methods may include vibrations or other physical signals generated by the computing system. For example, data may be presented to a user using a vibration generated by a handheld computer device with a predefined duration and intensity of the vibration to communicate the data. The above description of functions presents only a few examples of functions performed by the computing system (500) ofFIG.5Aand the nodes (e.g., node X (522), node Y (524)) and/or client device (526) inFIG.5B. Other functions may be performed using one or more embodiments of the invention. While the disclosure has described a limited number of embodiments, those skilled in the art, having benefit of this disclosure, will appreciate that other embodiments can be devised which do not depart from the scope of the disclosure. Accordingly, the scope of the subject matter should be limited only by the attached claims. | 54,999 |
11943270 | DETAILED DESCRIPTION Referring now to the drawings and in particular FIG. 1, a system overview of the operating components of a method and apparatus for coviewing is shown. A coviewing control system 10 is provided to perform server and administrator functions for a plurality of set top boxes 11 that perform client functions and provide user interfaces. The coviewing control system 10 is utilized to provide a call manager, which is implemented through a hardware and/or a software configuration. The call manager is used to provide phone call address information required for phone call setup and may work in conjunction with the processor of set top boxes 11 for setting up all calls, both incoming and outgoing, between set top boxes 11 or a set top box 11 and another device. It is contemplated that the call manager may be implemented through a dedicated system or by leveraging existing telephony or conferencing systems. The coviewing control system 10 and the set top boxes 11 are configured to maintain a network connection to the Internet 12. In this manner, the coviewing control system 10 and the set top boxes 11 are adapted to communicate data in real time with each other as well as access data from or transmit data to any specific location on the Internet 12. For example, the set top boxes 11 may utilize existing protocols to access provider video content through their connection to the Internet, whether at the direction of or independent of any action of the coviewing control system 10. Similarly, set top boxes 11 may connect phone calls through a VoIP interface to a public switched telephone network destination or an Internet-based destination. In this regard, it is contemplated that set top boxes 11 may be configured to handle incoming and outgoing phone calls or bridge such phone calls to services such as Google Voice, Skype, and Facebook. Referring now to FIG. 2, the primary components of a set top box 11 built in accordance with the present invention are defined as interface components and processing components. The interface components include a networking interface 100, an output interface 101, a content input interface 102, an audio input interface 103, and a video input interface 104. The network interface 100 provides the network connection between the set top box 11 and the Internet so as to enable the set top box 11 to communicate data to locations on the world wide web. The network interface 100 is defined as a single connection which provides Internet access to a plurality of components in the set top box 11 in one embodiment. In alternate embodiments, the network interface 100 defines a plurality of connections which include wired connections, wireless connections, or possibly wired and wireless connections. In other embodiments, connection protocols such as USB, Ethernet, WiFi, LTE, WiMax, 3G/4G/Cellular data may be utilized in addition or in the alternative by the network interface 100. It is contemplated that other categories of media players/receivers can be utilized in the alternative to, or in addition to, a set top box 11. Alternative media players/receivers, such as game consoles, Internet media players/receivers, and TVs configured to include media playing/receiving components may be configured in accordance with the present invention as described above.
The output interface 101 is configured to provide signals which would allow a television or other audio/video display device to display and otherwise output audio and video signals relating to the provider video content, phone call communications, or other content received from the Internet. In one embodiment, the output interface 101 is defined as an HDMI outlet. In alternate embodiments, audio and video outputs may be provided through the output interface 101 via a single or multiple connectors which may be wired, wireless, or a combination of wired and wireless. In other embodiments, connection protocols such as coaxial, USB, Ethernet, WiFi, WiMax, TOSLINK, SP/DIF, HDMI, component, and composite may be used by the output interface 101 to deliver audio and video signals to the desired audio/video display device. The content input interface 102 provides a supplementary input for provider video content which is not being received directly from the Internet 12 through the network interface 100. The content input interface 102 utilizes one or more connection ports such as coaxial, component, composite, HDMI, SCART, TOSLINK, SP/DIF to receive input signals from providers of provider video content, such as Cable TV signals, satellite signals, analog or digital antenna inputs. It is contemplated that a plurality of one or more of these connection ports may be provided. This signal can be used to supplement a provider video content signal which is being received from the Internet 12. In the alternative, the set top box 11 can directly use the broadcast video content signal from the content input interface 102 while supplementing that signal with data from the Internet 12. The audio input interface 103 allows for audio from the area surrounding the set top box 11 to be captured and utilized in the operations of the set top box 11. The audio input interface 103 defines an audio pick up device, which may be a single microphone or a microphone array to enable additional echo cancelling capability. In one embodiment, the audio pick up device is an internal component of the set top box 11 while in other embodiments, the audio pick up device is an external component connected to the set top box or a combination of internal and external components. The video input interface 104 captures video from the area surrounding the set top box 11 to be utilized in the operations of the set top box 11. The video input interface 104 is defined in one embodiment by an internally mounted video camera. In other embodiments, the video camera may be externally disposed and connected to the set top box 11 via a connection, which may be either wired or wireless, or a combination of both. The processing components include a central processing unit 105, a local video encoder 106, a local audio preamp 107, an echo canceller 108, a content decoder 109, a transmission encoder 110, an audio video decoder component 111, an audio video combiner 112, a combiner/mixer 113, a call in filter 114, a coviewing audio video filter 115, a graphics processor 116, and a third party app processor 117. In operation, the central processing unit 105 acts as a gatekeeper/taskmaster and regulates the system to ensure all interactions are taking place at the correct time and in the correct order. To handle the initial processing of locally generated inputs, the local video encoder 106 receives and encodes video signals from the video input interface 104 and then transmits the encoded local video signal to the audio video combiner 112.
In addition, the local audio preamp 107 is connected to the audio input interface 103 so as to receive audio signals from the audio input interface 103, amplify them, and transmit the amplified signals to the echo canceller 108. Furthermore, the content decoder 109 receives encoded provider video content from the content input interface 102, decodes it, and transmits the decoded signals to the echo canceller 108. To handle the initial processing of remotely generated inputs, the call in filter 114 filters out data relating to active, requested, or pending phone calls in the signals received from the Internet through the networking interface 100. This phone call data is then transmitted to the audio video decoder component 111 to be decoded and sent to the echo canceller 108 and the combiner/mixer 113. The coviewing audio video filter 115 filters out data relating to active, requested, or pending coviewing sessions to enable it to be directed to the audio video decoder component 111 to be decoded and sent to the echo canceller 108 and the combiner/mixer 113. The graphics processor 116 provides menu overlay information to the combiner/mixer 113 to enable the display of the same on the local video output. The third party app processor 117 enables the operation of third party apps which provide additional provider video content on the set top box 11. The third party app processor 117 transmits received provider video content to the audio video decoder component 111 to be decoded and sent to the echo canceller 108 and the combiner/mixer 113. It is contemplated that the audio video decoder component may be embodied as a single decoder or a plurality of audio/video decoders. Once the locally and remotely generated inputs are received and initially processed, the outputs of the set top box 11 are configured through the echo canceller 108, the transmission encoder 110, the audio/video combiner 112, and the combiner/mixer 113. The echo canceller 108 receives audio data from a plurality of sources relating to noise which is being generated around the set top box 11 and generates a signal which enables echo cancellation and unwanted noise suppression. When a point-to-point audio call, audio/visual call, or co-viewing call is made, the echo canceller 108 takes the audio signal that would emit from the television or display device, or audio system that is outputting an audio signal coming through the set top box 11, and generates a cancelling signal that would mix with the signal coming into the set top box 11 via the audio pick up device, thus cancelling the unwanted audio signal and preventing unwanted feedback. The transmission encoder 110 encodes data from the echo canceller 108 so as to prepare it to be transmitted to a desired networked target. The audio/video combiner 112 merges the audio and video outbound feeds together. This is accomplished by taking the audio signals from the echo canceller 108 and combining them with the video signal generated by the local video encoder 106. In other embodiments, the transmission encoder 110 is not included and the audio and video feeds are transmitted as separate signals. The combiner/mixer 113 combines different sources of the audio/video signals which are to be provided to the television 118 or desired display device and routes the combined signal to the output interface 101. It is contemplated that SMS, gaming audio/video, audio call, video call, video/audio call, interactive menu displays, co-viewing displays, and others may be provided to the output interface 101 by the combiner/mixer 113. In some embodiments, the combiner/mixer 113 includes an encoder, while in other embodiments, an encoder is provided in the output interface 101.
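The cancelling-signal behaviour of the echo canceller 108 described above may be illustrated, in a greatly simplified form, by subtracting an estimate of the television audio from the microphone signal. Practical echo cancellers use adaptive filtering and delay estimation; the fixed echo gain and the sample buffers below are assumptions made only to keep the sketch short.

import numpy as np

def cancel_echo(microphone: np.ndarray, tv_output: np.ndarray, echo_gain: float = 0.5) -> np.ndarray:
    # Estimate the portion of the television/display audio picked up by the audio
    # pick up device and subtract it, suppressing the unwanted signal and feedback.
    estimated_echo = echo_gain * tv_output
    return microphone - estimated_echo

tv_output = np.sin(np.linspace(0.0, 200.0, 48000))   # audio routed to the display device
near_speech = 0.2 * np.random.randn(48000)           # local talker captured by the microphone
microphone = near_speech + 0.5 * tv_output           # the microphone hears both
cleaned = cancel_echo(microphone, tv_output)         # echo component largely removed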
In some embodiments, the combiner/mixer113includes an encoder, while in other embodiments, an encoder is provided in the output interface101. Referring now toFIG.3, the process of initiating a coviewing session between two devices built in accordance with the present invention begins with a first user requesting a coviewing phone call connection with a second user at a specified or otherwise specific location (such as a phone number, email address, or other unique identifier) on the first user's device interface. The request is transmitted to the coviewing control system, which locates the second user's device interface through the provided unique identifier. The request is sent to the second user's device interface, and an option to accept or decline the request is provided to the second user's device interface. If the request is not accepted on the second user's device interface, the coviewing control system terminates the request and notifies the first user's device interface that the request has been unfulfilled. If the second user's device interface accepts the request, the coviewing control system begins provisioning the call request. Such provisioning enables a phone call signal, which will be exchanged in real time by the respective users' device interfaces, to be generated. During the provisioning of the phone call request, the coviewing control system notifies each user's device interface that a call is being provisioned and causes the display of a message advising that the call is connecting. Upon receiving this notification, each user's device interface transmits information relating to any provider video content currently being displayed by the respective user's device interface. This information typically includes the current program, the current channel, and a time stamp indicating the current playback position of the program on the user's device interface. The coviewing control system processes this information by generating a signal which synchronizes the provider video content being watched on the users' respective device interfaces. The synchronized provider video content signal is then integrated with the phone call signal to form an integrated coviewing signal. The coviewing signal is then transmitted to each user's device interface, which causes each user's device interface to actuate a coviewing session. In actuating a coviewing session, the users' device interfaces begin to display or otherwise play the provider video content and phone call communications as directed by the coviewing control system and transmit captured phone call related audio and video information to the coviewing control system. In one embodiment, the coviewing control system merely integrates a pointer to a source of provider video content in the step of integrating the provider video content and phone call signals. In such an embodiment, the provider video content is provided to each of the users' device interfaces directly from a provider or through a form of network address translation traversal. In one embodiment, the step of processing provider video content includes allowing the users' device interfaces to select a particular program or channel to watch during the coviewing session. Referring now toFIG.4, when a coviewing session is active, a user in a first location400and a user in a second location401can watch the same program and/or channel on a television screen402while having a video phone call, wherein the program and the video feed403a,403bfrom the other user are both visible on the user's respective television screen402.
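To illustrate the synchronization step described above, the following is a minimal Python sketch of how a coviewing control system might turn the program, channel, and time-stamp reports from the two device interfaces into a single synchronization target. The field names, the wall-clock extrapolation, and the choice to align both parties on the earlier playback position are assumptions made for illustration only; the patent does not prescribe this logic.

    from dataclasses import dataclass

    @dataclass
    class PlaybackStatus:
        program_id: str    # current program reported by a device interface
        channel: str       # current channel
        position_s: float  # seconds into the program when the report was made
        reported_at: float # wall-clock time of the report, in seconds

    def build_sync_signal(a: PlaybackStatus, b: PlaybackStatus, now: float) -> dict:
        """Derive a common playback target so both device interfaces present
        the same moment of the provider video content at the same time."""
        if (a.program_id, a.channel) != (b.program_id, b.channel):
            # The parties are tuned to different content; instruct both to
            # join the first user's program before synchronizing playback.
            return {"action": "tune", "program_id": a.program_id, "channel": a.channel}
        # Extrapolate each report to "now", then align on the earlier position
        # so the device that is ahead waits rather than skipping content.
        pos_a = a.position_s + (now - a.reported_at)
        pos_b = b.position_s + (now - b.reported_at)
        return {"action": "seek", "program_id": a.program_id,
                "channel": a.channel, "position_s": min(pos_a, pos_b)}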
In embodiments where video call is not provided for, users can still utilize coviewing to participate in a phone call while watching synchronized provider video content in remote locations. Referring now toFIG.5, in one embodiment, the video500from the phone call communication of a coviewing session is displayed as an overlay on top of the provider video content501being displayed on the display device502. In some embodiments, the coviewing session provides, in addition to the remotely generated phone call communication video500, the locally generated video503. Referring now toFIG.6, in another embodiment, during a coviewing session, the provider video content600is squeezed back on the display device601to provide a space in which the video602from the phone call communication of a coviewing session is displayed. It is contemplated that the video can be squeezed back to the edge of a display device so as to provide space on an opposing edge, or squeezed back to a more centrally located position on the display device to provide space along the perimeter of the provider video content600. In this embodiment, the coviewing session may again provide, in addition to the remotely generated phone call communication video602, the locally generated video603. In one embodiment, the provider video content and/or user interfaces and/or video from phone call communications may be selectively displayed on a selected device or a plurality of selected devices. For example, a coviewing system may enable the video from provider video content to be displayed on a first display device, such as a mobile electronic device (tablet, smartphone, etc.), while the video from the phone call communication of a coviewing session is displayed on a second display device, such as a TV. Similarly, a coviewing system may enable menus and other user interface elements to be displayed on a mobile device and video on a TV. Further, a coviewing system may enable all video and user interfaces to be displayed on a mobile device. The instant invention has been shown and described herein in what is considered to be the most practical and preferred embodiment. It is recognized, however, that departures may be made therefrom within the scope of the invention and that obvious modifications will occur to a person skilled in the art. | 16,117
11943271 | DETAILED DESCRIPTION Detailed embodiments of the claimed structures and methods are disclosed herein; however, it can be understood that the disclosed embodiments are merely illustrative of the claimed structures and methods that may be embodied in various forms. Those structures and methods may, however, be embodied in many different forms and should not be construed as limited to the exemplary embodiments set forth herein. Rather, these exemplary embodiments are provided so that this disclosure will be thorough and complete and will fully convey the scope to those skilled in the art. In the description, details of well-known features and techniques may be omitted to avoid unnecessarily obscuring the presented embodiments. Embodiments relate generally to the field of data processing, and more particularly to video coding. The techniques described herein allow for a 2D coded video stream to signal scene-specific neural network models in order for a network to ingest a 2D video source of media including one or more views (usually a small number) and adapt the source of 2D media into one or more streamable "distribution formats" to accommodate a variety of heterogeneous client end-point devices, their differing features and capabilities, and the requirements of the applications being used on the client end-points, prior to actually distributing the formatted media to the variety of client end-points. The network model may be embedded directly into the scene-specific coded video stream of a coded bitstream by means of an SEI structured field, or the SEI may signal the use of a specific model that is stored elsewhere on the distribution network, but available to the neural network process for access. The ability to reformat a 2D media source into a variety of streamable distribution formats enables a network to simultaneously service a variety of client end-points with various capabilities and available compute resources, and enables the support of emerging immersive client end-points such as holographic and light field displays in commercial networks. Moreover, the ability to adapt the scene-specific 2D media source based on a scene-specific neural network model improves the final visual quality. Such an ability to adapt a 2D media source is especially important when there is no immersive media source that is available, and when the client cannot support a distribution format that is based on 2D media. In this scenario, a neural network-based approach can be more optimally used on specific scenes that exist within the 2D media by carrying scene-specific neural network models that are trained with priors that are generally similar to the objects within the specific scene or for the context of the specific scene. This improves the network's ability to infer depth-based information about the specific scene so that it can adapt the 2D media into a scene-specific volumetric format suitable for the target client end-point. As previously described, "Immersive Media" generally refers to media that stimulates any or all human sensory systems (visual, auditory, somatosensory, olfactory, and possibly gustatory) to create or enhance the perception of the user being physically present in the experience of the media, i.e., beyond what is distributed over existing commercial networks for timed two-dimensional (2D) video and corresponding audio, which is known as "legacy media". Both immersive media and legacy media can be characterized as either timed or untimed.
Timed media refers to media that is structured and presented according to time. Examples include movie features, news reports, and episodic content, all of which are organized according to periods of time. Legacy video and audio are generally considered to be timed media. Untimed media is media that is not structured by time, but rather structured by logical, spatial, and/or temporal relationships. An example includes a video game where the user has control over the experience created by the gaming device. Another example of untimed media is a still image photograph taken by a camera. Untimed media may incorporate timed media, for example, in a continuously looped audio or video segment of a scene for a video game. Conversely, timed media may incorporate untimed media, for example a video with a fixed still image as background. Immersive media-capable devices may refer to devices equipped with abilities to access, interpret, and present immersive media. Such media and devices are heterogeneous in terms of the quantity and formats of the media, and the numbers and types of network resources required to distribute such media at scale, i.e., to achieve distribution equivalent to that of legacy video and audio media over networks. In contrast, legacy devices such as laptop displays, televisions, and mobile handset displays are homogenous in their capabilities since all of these devices are comprised of rectangular display screens, and consume 2D rectangular video or still images as their primary media formats. The distribution of any media over networks may employ media delivery systems and architectures that reformat the media from an input or network "ingest" format to a final distribution format where that distribution format is not only suitable for the targeted client device and its applications, but is also conducive to being streamed over the network. "Streaming" of media broadly refers to the fragmenting and packetizing of the source media so that it can be delivered over the network in consecutive smaller-sized "chunks" logically organized and sequenced according to either or both of the media's temporal and spatial structure. In such distribution architectures and systems, the media may undergo compression or layering processes so that only the most salient media information is delivered first to the client. In some cases, the client must receive all of the salient media information for some portion of the media before the client is able to present any of the same media portion to the end user. The process of reformatting an input media to match the capabilities of a target client end-point may employ a neural network process that takes a network model that may encapsulate some prior knowledge of the specific media being reformatted. For example, a specific model may be tuned to recognize outdoor park scenes (with trees, plants, grass, and other objects common to a park scene), whereas yet a different specific model may be tuned to recognize an indoor dinner scene (with a dinner table, serving utensils, persons seated at the table, and so on). Those skilled in the art will recognize that a neural network process equipped with a network model that is tuned to match the contents of a specific scene, e.g., tuned to recognize objects from a particular context such as park scene objects, will produce better visual results than one equipped with a network model that is not so tuned.
Hence, there is a benefit of providing scene-specific network models to a neural network process that is tasked with reformatting the input media to match the capabilities of a target client end-point. The mechanism to associate a neural network model to a specific scene for 2D media may be accomplished by optionally compressing the network model and inserting it directly into the 2D coded bitstream for a visual scene by means of a Supplemental Enhancement Information (SEI) structured field commonly used to attach metadata to coded video streams in H.264, H.265, and H.266 video compression formats. The presence of an SEI message containing a specific neural network model within the context of a portion of a coded video bitstream may be used to indicate that the network model is to be used to interpret and adapt the video contents within the portion of the bitstream in which the model is embedded. Alternatively, the SEI message may be used to signal, by means of an identifier for a network model, which neural network model(s) may be used in the absence of the actual model itself. The mechanism to associate an appropriate neural network for immersive media may be accomplished by the immersive media itself referencing the appropriate neural network model to use. This reference may be accomplished by directly embedding the network model and its parameters on an object by object basis, or scene by scene basis, or by some combination thereof. Alternatively, rather than embedding the one or more neural network models within the media, the media objects or scenes may reference the particular neural network models by identifiers. Yet another alternative mechanism to reference an appropriate neural network for adaptation of media for streaming to a client end-point is for the specific client end-point itself to provide at least one neural network model, and corresponding parameters, to the Adaptation process to use. Such a mechanism may be implemented by way of the client providing the neural network model(s) in a communication with the Adaptation process, for example, when the client attaches itself to the network. Following the adaptation of the video to the target client end-point, an Adaptation process within the network may then choose to apply a compression algorithm to the result. In addition, the compression algorithm may optionally separate the adapted video signal into layers that correspond to the most salient to the least salient portions of the visual signal. An example of a compression and layering process is the Progressive format of the JPEG standard (ISO/IEC 10918 Part 1) which separates the image into layers that cause the entire image to be presented first with only basic shapes and colors that are initially out of focus, i.e. from the lower-order DCT coefficients for the entire image scan, followed by additional layers of detail that cause the image to come into focus, i.e. from the higher-order DCT coefficients of the image scan. The process of breaking media into smaller portions, organizing them into the payload portions of consecutive network protocol packets, and distributing these protocol packets is referred to as “streaming” of the media whereas the process of converting the media into a format that is suitable for presentation on one of a variety of heterogenous client end-points that is operating one of a variety of heterogenous applications is known as “adapting” the media. 
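The disclosure leaves the exact SEI syntax open, so the sketch below only illustrates the general idea in Python: an SEI payload that either embeds an optionally compressed scene-specific network model or merely signals a model identifier for a model stored elsewhere on the distribution network. The payload type value, the flag/identifier layout, and the use of zlib compression are assumptions; NAL unit wrapping and emulation-prevention processing, which a real encoder would also have to apply, are omitted.

    import struct
    import zlib
    from typing import Optional

    # Placeholder payload type for a user-data / unregistered SEI message; the
    # value an actual deployment or standard would assign is not specified here.
    SEI_NN_MODEL_PAYLOAD_TYPE = 0xB0

    def build_nn_model_sei(model_id: int, model_bytes: Optional[bytes]) -> bytes:
        """Build an SEI payload that either embeds a compressed scene-specific
        network model (flag 1) or only signals a model identifier so the
        receiver fetches the model from the distribution network (flag 0)."""
        if model_bytes is None:
            return struct.pack(">BI", 0, model_id)
        blob = zlib.compress(model_bytes)                  # optional compression
        return struct.pack(">BII", 1, model_id, len(blob)) + blob

    def wrap_sei(payload_type: int, payload: bytes) -> bytes:
        """Apply the generic payload_type / payload_size byte coding used by
        SEI messages in H.264/H.265/H.266 (0xFF continuation bytes for values
        of 255 or more)."""
        out = bytearray()
        for value in (payload_type, len(payload)):
            while value >= 255:
                out.append(255)
                value -= 255
            out.append(value)
        return bytes(out) + payload

    # Example: signal model 17 by reference only, inside an SEI payload.
    sei_bytes = wrap_sei(SEI_NN_MODEL_PAYLOAD_TYPE, build_nn_model_sei(17, None))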
Definitions
Scene graph: general data structure commonly used by vector-based graphics editing applications and modern computer games, which arranges the logical and often (but not necessarily) spatial representation of a graphical scene; a collection of nodes and vertices in a graph structure.
Node: fundamental element of the scene graph comprised of information related to the logical or spatial or temporal representation of visual, audio, haptic, olfactory, gustatory, or related processing information; each node shall have at most one output edge, zero or more input edges, and at least one edge (either input or output) connected to it.
Base Layer: a nominal representation of an asset, usually formulated to minimize the compute resources or time needed to render the asset, or the time to transmit the asset over a network.
Enhancement Layer: a set of information that, when applied to the base layer representation of an asset, augments the base layer to include features or capabilities that are not supported in the base layer.
Attribute: metadata associated with a node used to describe a particular characteristic or feature of that node either in a canonical or more complex form (e.g. in terms of another node).
Container: a serialized format to store and exchange information to represent all natural, all synthetic, or a mixture of synthetic and natural scenes including a scene graph and all of the media resources that are required for rendering of the scene.
Serialization: the process of translating data structures or object state into a format that can be stored (for example, in a file or memory buffer) or transmitted (for example, across a network connection link) and reconstructed later (possibly in a different computer environment). When the resulting series of bits is reread according to the serialization format, it can be used to create a semantically identical clone of the original object.
Renderer: a (typically software-based) application or process, based on a selective mixture of disciplines related to: acoustic physics, light physics, visual perception, audio perception, mathematics, and software development, that, given an input scene graph and asset container, emits a typically visual and/or audio signal suitable for presentation on a targeted device or conforming to the desired properties as specified by attributes of a render target node in the scene graph. For visual-based media assets, a renderer may emit a visual signal suitable for a targeted display, or for storage as an intermediate asset (e.g. repackaged into another container, i.e. used in a series of rendering processes in a graphics pipeline); for audio-based media assets, a renderer may emit an audio signal for presentation in a multi-channel loudspeaker and/or binauralized headphones, or for repackaging into another (output) container. Popular examples of renderers include: Unity, Unreal.
Evaluate: produces a result (e.g. similar to evaluation of a Document Object Model for a webpage) that causes the output to move from an abstract to a concrete result.
Scripting language: An interpreted programming language that can be executed by a renderer at runtime to process dynamic input and variable state changes made to the scene graph nodes, which affect rendering and evaluation of spatial and temporal object topology (including physical forces, constraints, IK, deformation, collisions), and energy propagation and transport (light, sound).
Shader: a type of computer program that was originally used for shading (the production of appropriate levels of light, darkness, and color within an image) but which now performs a variety of specialized functions in various fields of computer graphics special effects, or does video post-processing unrelated to shading, or even performs functions unrelated to graphics at all.
Path Tracing: a computer graphics method of rendering three-dimensional scenes such that the illumination of the scene is faithful to reality.
Timed media: Media that is ordered by time; e.g., with a start and end time according to a particular clock.
Untimed media: Media that is organized by spatial, logical, or temporal relationships; e.g., as in an interactive experience that is realized according to the actions taken by the user(s).
Neural Network Model: a collection of parameters and tensors (e.g., matrices) that define weights (i.e., numerical values) used in well-defined mathematical operations applied to the visual signal to arrive at an improved visual output which may include the interpolation of new views for the visual signal that were not explicitly provided by the original signal.
Immersive media can be regarded as one or more types of media that, when presented to a human by an immersive media-capable device, stimulates any of the five senses of sight, sound, taste, touch, and smell, in a way that is more realistic and consistent with a human's understanding of experiences within the natural world, i.e., stimulation beyond that which would have otherwise been achieved with legacy media presented by legacy devices. In this context, the term "legacy media" refers to two-dimensional (2D) visual media, either still or moving picture frames, and/or corresponding audio for which the user's ability to interact is limited to pause, play, fast-forward, or rewind; "legacy devices" refers to televisions, laptops, displays, and mobile devices that are constrained in their capabilities to the presentation of only legacy media. In consumer-facing application scenarios, the presentation device for the immersive media (i.e., an immersive media-capable device) is a consumer-facing hardware device that is especially equipped with the capabilities to leverage specific information that is embodied by the immersive media such that the device can create a presentation that more closely approximates the human's understanding of, and interaction with, the physical world, i.e., beyond the capabilities of a legacy device to do so. Legacy devices are constrained in their abilities to present only legacy media, whereas immersive media devices are not likewise constrained. In the last decade, a number of immersive media-capable devices have been introduced into the consumer market, including head-mounted displays, augmented-reality glasses, hand-held controllers, haptic gloves, and game consoles. Likewise, holographic displays and other forms of volumetric displays are poised to emerge within the next decade. Despite the immediate or imminent availability of these devices, a coherent end-to-end ecosystem for the distribution of immersive media over commercial networks has failed to materialize for several reasons.
One of those reasons is the lack of a single standard representation for immersive media that can address the two major use cases relative to the current distribution of media at scale, over commercial networks: 1) real-time distribution for live action events, i.e., where the content is created and distributed to the client end-point in or near real-time, and 2) non-real-time distribution, where there is no requirement to distribute the content in real-time, i.e., as the content is being physically captured or created. These two use cases may be compared, respectively, to the "broadcast" and "on-demand" formats of distribution as they exist today. For real-time distribution, the content can be captured by one or more camera(s), or created using computer generation techniques. Content that is captured by camera(s) is herein referred to as "natural" content, whereas content that is created using computer generation techniques is herein referred to as "synthetic" content. The media formats to represent synthetic content can be formats used by the 3D modelling, visual effects, and CAD/CAM industries and can include object formats and tools such as meshes, textures, point clouds, structured volumes, amorphous volumes (e.g., for fire, smoke, and fog), shaders, procedurally generated geometry, materials, lighting, virtual camera definitions, and animations. While synthetic content is computer generated, synthetic media formats can be used for both natural and synthetic content; however, the process to convert natural content into synthetic media formats (e.g., into synthetic representations) can be a time and compute intensive process, and therefore may be impractical for real-time applications and use cases. For real-time distribution of natural content, camera-captured content can be distributed in a raster format, which is suitable for legacy display devices because many such devices are likewise designed to display raster formats. That is, given that legacy displays are designed homogenously to display raster formats, the distribution of raster formats is therefore optimally suitable for displays that are capable of displaying only raster formats. Immersive media-capable displays, however, are not necessarily constrained to the display of raster-based formats. Moreover, some immersive-media capable displays are unable to present media that is available only in raster-based formats. The availability of displays that are optimized to create immersive experiences based on formats other than raster-based formats is another significant reason why there is not yet a coherent end-to-end ecosystem for the distribution of immersive media. Yet another problem with creating a coherent distribution system for multiple different immersive media devices is that the current and emerging immersive media-capable devices themselves can vary significantly. For example, some immersive media devices are explicitly designed to be used by only one user at a time, e.g., head-mounted displays. Other immersive media devices are designed so that they can be used by more than one user simultaneously, e.g., the "Looking Glass Factory 8K display" (henceforth called "lenticular light field display") can display content that can be viewed by up to 12 users simultaneously, where each user is experiencing his or her own unique perspective (i.e., view) of the content that is being displayed.
Further complicating the development of a coherent distribution system is that the number of unique views that each display is capable of producing can vary greatly. In most cases, legacy displays can create only a single view of the content, whereas the lenticular light field display can support multiple users, with each user experiencing unique views of the same visual scene. To accomplish this creation of multiple views of the same scene, the lenticular light field display creates a specific volumetric viewing frustum in which 45 unique views of the same scene are required as input to the display. This means that 45 slightly different unique raster representations of the same scene need to be captured and distributed to the display in a format that is specific to this one particular display, i.e., its viewing frustum. In contrast, the viewing frustum of legacy displays is limited to a single two-dimensional plane, and hence there is no way to present more than one viewing perspective of the content via the display's viewing frustum regardless of the number of simultaneous viewers that are experiencing the display. In general, immersive media displays can vary significantly according to the following characteristics: the dimensions and volume of the viewing frustum, the number of viewers supported simultaneously, the optical technology used to fill the viewing frustum (which can be point-based, ray-based, or wave-based technologies), the density of the units-of-light (either points, rays, or waves) that occupy the viewing frustum, the availability of compute power and type of compute (CPU or GPU), the source and availability of power (battery or wire), the amount of local storage or cache, and access to auxiliary resources such as cloud-based compute and storage. These characteristics contribute to the heterogeneity of immersive media displays, which, in contrast to the homogeneity of legacy displays, complicates the development of a single distribution system that can support all of them, including both legacy and immersive types of displays. The disclosed subject matter addresses the development of a network-based media distribution system that can support both legacy and immersive media displays as client end-points within the context of a single network. Specifically, a mechanism to adapt an input immersive media source into a format that is suitable to the specific characteristics of a client end-point device, including the application that is currently executing on that client end-point device, is presented herein. Such a mechanism of adapting an input immersive media source includes reconciling the characteristics of the input immersive media with the characteristics of the target end-point client device, including the application that is executing on the client device, and then adapting the input immersive media into a format suitable for the target end-point and its application. Moreover, the adaptation process may include interpolating additional views, e.g., novel views, from the input media to create additional views that are required by the client end-point. Such interpolation may be performed with the aid of a neural network process.
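As a hedged illustration of the reconciliation described here, the following Python sketch pairs a simplified client capability record (drawn from the display characteristics listed above) with a function that decides what the network must produce before streaming. The field names, thresholds, and returned plan are assumptions for illustration rather than a normative part of the disclosed system.

    from dataclasses import dataclass

    @dataclass
    class ClientCapabilities:
        """Simplified capability record; the field names are illustrative,
        not a normative schema from the disclosure."""
        supports_raster_only: bool
        required_views: int        # e.g., 1 for a legacy 2D display, 45 for the lenticular panel
        optical_technology: str    # "point", "ray", or "wave"
        has_gpu: bool
        local_storage_mb: int

    def plan_adaptation(source_views: int, client: ClientCapabilities) -> dict:
        """Reconcile the ingest media with the client's characteristics and
        decide what the network must produce before streaming."""
        if client.supports_raster_only or client.required_views <= 1:
            return {"format": "2d_raster", "views_to_synthesize": 0}
        missing = max(client.required_views - source_views, 0)
        return {
            "format": "volumetric" if client.optical_technology == "wave" else "multi_view",
            # Views not present in the ingest media would be interpolated,
            # e.g., by the neural network process mentioned above.
            "views_to_synthesize": missing,
            "prefer_gpu_pipeline": client.has_gpu,
        }

    # Example: a lenticular light field display needing 45 views from a 5-view ingest.
    plan = plan_adaptation(source_views=5,
                           client=ClientCapabilities(False, 45, "ray", True, 2048))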
Note that the remainder of the disclosed subject matter assumes, without loss of generality, that the process of adapting an input immersive media source to a specific end-point client device is the same as, or similar to, the process of adapting the same input immersive media source to the specific application that is being executed on the specific client end-point device. That is, the problem of adapting an input media source to the characteristics of an end-point device is of the same complexity as the problem of adapting a specific input media source to the characteristics of a specific application. Legacy devices, supported by legacy media, have achieved wide-scale consumer adoption because they are likewise supported by an ecosystem of legacy media content providers that produce standards-based representations of legacy media, and commercial network service providers that provide network infrastructure to connect legacy devices to sources of standard legacy content. Beyond the role of distributing legacy media over networks, commercial network service providers may also facilitate the pairing of legacy client devices with access to legacy content on content distribution networks (CDNs). Once paired with access to suitable forms of content, the legacy client device can then request, or "pull," the legacy content from the content server to the device for presentation to the end user. Nevertheless, an architecture where the network server "pushes" the appropriate media to the appropriate client is equally relevant without incurring additional complexity to the overall architecture and solution design. Aspects are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer readable media according to the various embodiments. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer readable program instructions. The following described exemplary embodiments relate to architectures, structures, and components for systems and networks that distribute media, including video, audio, geometric (3D) objects, haptics, associated metadata, or other content for a client device. Particular embodiments are directed to systems, structures, and architectures for distribution of media content to heterogenous immersive and interactive client devices. FIG.1is an example illustration of the end-to-end process of timed legacy media distribution. InFIG.1, timed audio-visual content is either captured by a camera or microphone in101A or generated by a computer in101B, creating a sequence102of 2D images and associated audio that are input to a Preparation Module103. The output of103is edited content (e.g. for post-production including language translations, subtitles, and other editing functions), referred to as a Master Format, that is ready to be converted to a standard Mezzanine Format, e.g., for on-demand media, or to a standard Contribution Format, e.g., for live events, by a Converter Module104. The media is "ingested" by the Commercial Network Service Provider, and an Adaptation Module105packages the media into various bitrates, temporal resolutions (frame rates), or spatial resolutions (frame sizes) that are packaged into a standard Distribution Format.
The resulting adaptations are stored onto a Content Distribution Network106from which various clients108make pull-requests107to fetch and present the media to the end user. It is important to note that the Master Format may consist of a hybrid of media from both101A and101B, and that the format101A may be obtained in real-time, e.g., media that is obtained from a live sporting event. Furthermore, clients108are responsible for choosing the specific adaptations107that are best suited for the client's configuration and/or for the current network conditions, but it is equally possible that the network server (not shown inFIG.1) could determine and subsequently "push" the appropriate content to the clients108. FIG.2is an example of a standard media format used for distribution of legacy timed media, e.g., video, audio, and supporting metadata (including timed text such as used for subtitles). As noted in item106inFIG.1, the media is stored in a standards-based distribution format onto CDNs201. The standards-based format is shown as MPD202, which consists of multiple sections encompassing timed Periods203with a start and end time corresponding to a clock. Each Period203refers to one or more Adaptation Sets204. Each Adaptation Set204is generally used for a single type of media, e.g. video, audio, or timed text. For any given Period203, multiple Adaptation Sets204may be provided, e.g., one for video, and multiple for audio such as used for translations into various languages. Each Adaptation Set204refers to one or more Representations205that provide information about the frame resolution (for video), frame-rate, and bitrate of the media. Multiple Representations205may be used to provide access to, for example, a Representation205each for Ultra-High-Definition, High Definition, or Standard Definition video. Each Representation205refers to one or more Segment Files206where the media is actually stored for fetching by the client (as shown as108inFIG.1) or for distribution (in a "push-based" architecture) by the network media server (not shown inFIG.1). FIG.3is an example representation of a streamable format for heterogenous immersive media that is timed.FIG.4is an example representation of a streamable format for heterogeneous immersive media that is untimed. Both figures refer to a Scene;FIG.3refers to Scene301for timed media andFIG.4refers to Scene401for untimed media. For both cases, the Scene may be embodied by various scene representations, or scene descriptions. For example, in some immersive media designs, a scene may be embodied by a Scene Graph, or as a Multi-Plane Image (MPI), or as a Multi-Spherical Image (MSI). Both the MPI and MSI techniques are examples of technologies that aid in the creation of display-agnostic scene representations for natural content, i.e., images of the real world captured simultaneously from one or more cameras. Scene Graph technologies, on the other hand, may be employed to represent both natural and computer-generated imagery in the form of synthetic representations; however, such representations are especially compute-intensive to create for the case when the content is captured as natural scenes by one or more cameras.
That is, scene graph representations of naturally-captured content are both time and compute-intensive to create, requiring complex analysis of natural images with techniques of photogrammetry or deep learning or both, in order to create synthetic representations that can subsequently be used to interpolate sufficient numbers of views to fill a target immersive client display's viewing frustum. As a result, such synthetic representations are presently impractical to consider as candidates for representing natural content, because they cannot practically be created in real-time for consideration of use cases that require real-time distribution. Nevertheless, at present, the best candidate representation for computer generated imagery is to employ a scene graph with synthetic models, as computer generated imagery is created using 3D modeling processes and tools. Such a dichotomy in optimal representations of both natural and computer generated content suggests that the optimal ingest format for naturally-captured content is different from the optimal ingest format for computer generated content or for natural content that is not essential for real-time distribution applications. Therefore, the disclosed subject matter aims to be robust enough to support multiple ingest formats for visually immersive media, whether they are created naturally or by computer. The following are example technologies that embody scene graphs as a format suitable for representing visual immersive media that is created using computer generated techniques, or naturally captured content for which deep learning or photogrammetry techniques are employed to create the corresponding synthetic representations of a natural scene, i.e., not essential for real-time distribution applications. 1. ORBX® by OTOY ORBX by OTOY is one of several scene graph technologies that is able to support any type of visual media, timed or untimed, including ray-traceable, legacy (frame-based), volumetric, and other types of synthetic or vector-based visual formats. ORBX is unique from other scene graphs because ORBX provides native support for freely available and/or open source formats for meshes, point clouds, and textures. ORBX is a scene graph that has been intentionally designed with the goal of facilitating interchange across multiple vendor technologies that operate on scene graphs. Moreover, ORBX provides a rich materials system, support for Open Shader Language, a robust camera system, and support for Lua Scripts. ORBX is also the basis of the Immersive Technologies Media Format published for license under royalty-free terms by the Immersive Digital Experiences Alliance (IDEA). In the context of real time distribution of media, the ability to create and distribute an ORBX representation of a natural scene is a function of the availability of compute resources to perform a complex analysis of the camera-captured data and synthesis of the same data into synthetic representations. To date, the availability of sufficient compute for real-time distribution is not practical, but nevertheless, not impossible. 2. Universal Scene Description by Pixar Universal Scene Description (USD) by Pixar is another well-known and mature scene graph that is popular in the VFX and professional content production communities. USD is integrated into Nvidia's Omniverse platform, which is a set of tools for developers for 3D model creation and rendering with Nvidia's GPUs. A subset of USD was published by Apple and Pixar as USDZ.
USDZ is supported by Apple's ARKit. 3. glTF2.0 by Khronos glTF2.0 is the most recent version of the "Graphics Language Transmission Format" specification written by the Khronos 3D Group. This format supports a simple scene graph format that is generally capable of supporting static (untimed) objects in scenes, including "png" and "jpeg" image formats. glTF2.0 supports simple animations, including support for translate, rotate, and scale, of basic shapes described using the glTF primitives, i.e. for geometric objects. glTF2.0 does not support timed media, and hence does not support video or audio. These known designs for scene representations of immersive visual media are provided for example only, and do not limit the disclosed subject matter in its ability to specify a process to adapt an input immersive media source into a format that is suitable to the specific characteristics of a client end-point device. Moreover, any or all of the above example media representations either currently employ or may employ deep learning techniques to train and create a neural network model that enables or facilitates the selection of specific views to fill a particular display's viewing frustum based on the specific dimensions of the frustum. The views that are chosen for the particular display's viewing frustum may be interpolated from existing views that are explicitly provided in the scene representation, e.g., from the MSI or MPI techniques, or they may be directly rendered from render engines based on specific virtual camera locations, filters, or descriptions of virtual cameras for these render engines. The disclosed subject matter is therefore robust enough to consider that there is a relatively small but well known set of immersive media ingest formats that is sufficiently capable to satisfy requirements both for real-time and "on-demand" (e.g., non-real-time) distribution of media that is either captured naturally (e.g., with one or more cameras) or created using computer generated techniques. Interpolation of views from an immersive media ingest format by use of either neural network models or network-based render engines is further facilitated as advanced network technologies such as 5G for mobile networks, and fibre optic cable for fixed networks, are deployed. That is, these advanced network technologies increase the capacity and capabilities of commercial networks because such advanced network infrastructures can support transport and delivery of increasingly larger amounts of visual information. Network infrastructure management technologies such as Multi-access Edge Computing (MEC), Software Defined Networks (SDN), and Network Functions Virtualization (NFV), enable commercial network service providers to flexibly configure their network infrastructure to adapt to changes in demand for certain network resources, e.g., to respond to dynamic increases or decreases in demand for network throughputs, network speeds, roundtrip latency, and compute resources. Moreover, this inherent ability to adapt to dynamic network requirements likewise facilitates the ability of networks to adapt immersive media ingest formats to suitable distribution formats in order to support a variety of immersive media applications with potentially heterogenous visual media formats for heterogenous client end-points.
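A rough Python sketch of the view interpolation idea follows: given the views explicitly present in the ingest format and the viewing angles a target display's frustum requires, existing views are reused where possible and a neural network model (or a render engine) is asked to synthesize the rest. The callable signature, the angle-based matching, and the 0.5-degree reuse threshold are assumptions made for illustration; they are not specified by the disclosure.

    from typing import Callable, List, Sequence

    # `ViewSynthesizer` stands in for a scene-specific neural network model
    # (e.g., one signalled via an SEI message or referenced by identifier) or
    # for a network-based render engine; the call signature is illustrative.
    ViewSynthesizer = Callable[[Sequence[bytes], float, float], bytes]

    def fill_viewing_frustum(available_views: Sequence[bytes],
                             available_angles: Sequence[float],
                             target_angles: Sequence[float],
                             synthesize: ViewSynthesizer) -> List[bytes]:
        """Return one image per target viewing angle: reuse an ingest view when
        one is close enough, otherwise ask the model/render engine to
        interpolate a novel view."""
        out = []
        for angle in target_angles:
            # Index of the nearest explicitly provided view.
            nearest = min(range(len(available_angles)),
                          key=lambda i: abs(available_angles[i] - angle))
            if abs(available_angles[nearest] - angle) < 0.5:  # degrees; arbitrary
                out.append(available_views[nearest])
            else:
                out.append(synthesize(available_views, available_angles[nearest], angle))
        return out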
Immersive Media applications themselves may also have varying requirements for network resources, including gaming applications which require significantly lower network latencies to respond to real-time updates in the state of the game, telepresence applications which have symmetric throughput requirements for both the uplink and downlink portions of the network, and passive viewing applications that may have increased demand for downlink resources depending on the type of client end-point display that is consuming the data. In general, any consumer-facing application may be supported by a variety of client end-points with various onboard-client capabilities for storage, compute, and power, and likewise various requirements for particular media representations. The disclosed subject matter therefore enables a sufficiently equipped network, i.e., a network that employs some or all of the characteristics of a modern network, to simultaneously support a plurality of legacy and immersive media-capable devices, according to features that: 1. Provide flexibility to leverage media ingest formats that are practical for both real-time and "on demand" use cases for the distribution of media. 2. Provide flexibility to support both natural and computer generated content for both legacy and immersive-media capable client end-points. 3. Support both timed and untimed media. 4. Provide a process for dynamically adapting a source media ingest format to a suitable distribution format based on the features and capabilities of the client end-point, as well as based on the requirements of the application. 5. Ensure that the distribution format is streamable over IP-based networks. 6. Enable the network to simultaneously serve a plurality of heterogenous client end-points that may include both legacy and immersive media-capable devices. 7. Provide an exemplary media representation framework that facilitates the organization of the distribution media along scene boundaries. An end-to-end embodiment of the improvements enabled by the disclosed subject matter is achieved according to the processing and components described in the detailed description ofFIGS.3through16as follows. FIG.3andFIG.4both employ a single exemplary encompassing distribution format that has been adapted from an ingest source format to match the capabilities of a specific client end-point. As described above, the media that is shown inFIG.3is timed and the media that is shown inFIG.4is untimed. The specific encompassing format is robust enough in its structure to accommodate a large variety of media attributes that each may be layered based on the amount of salient information that each layer contributes to the presentation of the media. Note that such a layering process is already a well-known technique in the current state-of-the-art, as demonstrated with Progressive JPEG and scalable video architectures such as those specified in ISO/IEC 14496-10 (Scalable Advanced Video Coding). 1. The media that is streamed according to the encompassing media format is not limited to legacy visual and audio media, but may include any type of media information that is capable of producing a signal that interacts with machines to stimulate the human senses for sight, sound, taste, touch, and smell. 2. The media that is streamed according to the encompassing media format can be either timed or untimed media, or a mixture of both. 3.
The encompassing media format is furthermore streamable by enabling a layered representation for media objects by use of a base layer and enhancement layer architecture. In one example, the separate base layer and enhancement layers are computed by application of multi-resolution or multi-tessellation analysis techniques for media objects in each scene. This is analogous to the progressively rendered image formats specified in ISO/IEC 10918-1 (JPEG), and ISO/IEC 15444-1 (JPEG2000), but not limited to raster-based visual formats. In an example embodiment, a progressive representation for a geometric object could be a multi-resolution representation of the object computed using wavelet analysis. In another example of the layered representation of the media format, the enhancement layers apply different attributes to the base layer, such as refining the material properties of the surface of a visual object that is represented by the base layer. In yet another example, the attributes may refine the texture of the surface of the base layer object, such as changing the surface from a smooth to a porous texture, or from a matte surface to a glossy surface. In yet another example of the layered representation, the surfaces of one or more visual objects in the scene may be altered from being Lambertian to being ray-traceable. In yet another example of the layered representation, the network will distribute the base-layer representation to the client so that the client may create a nominal presentation of the scene while the client awaits the transmission of additional enhancement layers to refine the resolution or other characteristics of the base representation. 4. The resolution of the attributes or refining information in the enhancement layers is not explicitly coupled with the resolution of the object in the base layer as it is today in existing MPEG video and JPEG image standards. 5. The encompassing media format supports any type of information media that can be presented or actuated by a presentation device or machine, thereby enabling the support of heterogenous media formats for heterogenous client end-points. In one embodiment of a network that distributes the media format, the network will first query the client end-point to determine the client's capabilities, and if the client is not capable of meaningfully ingesting the media representation then the network will either remove the layers of attributes that are not supported by the client, or adapt the media from its current format into a format that is suitable for the client end-point. In one example of such adaptation, the network would convert a volumetric visual media asset into a 2D representation of the same visual asset, by use of a Network-Based Media Processing protocol. In another example of such adaptation, the network may employ a neural network process to reformat the media to an appropriate format or optionally synthesize views that are needed by the client end-point. 6. The manifest for a complete or partially-complete immersive experience (live streaming event, game, or playback of an on-demand asset) is organized by scenes, a scene being the minimal amount of information that rendering and game engines can currently ingest in order to create a presentation. The manifest includes a list of the individual scenes that are to be rendered for the entirety of the immersive experience requested by the client.
Associated with each scene are one or more representations of the geometric objects within the scene corresponding to streamable versions of the scene geometry. One embodiment of a scene representation refers to a low resolution version of the geometric objects for the scene. Another embodiment of the same scene refers to an enhancement layer for the low resolution representation of the scene to add additional detail, or increase tessellation, to the geometric objects of the same scene. As described above, each scene may have more than one enhancement layer to increase the detail of the geometric objects of the scene in a progressive manner. 7. Each layer of the media objects that are referenced within a scene is associated with a token (e.g., URI) that points to the address of where the resource can be accessed within the network. Such resources are analogous to CDN's where the content may be fetched by the client. 8. The token for a representation of a geometric object may point to a location within the network or to a location within the client. That is, the client may signal to the network that its resources are available to the network for network-based media processing. FIG.3describes an embodiment of the encompassing media format for timed media as follows. The Timed Scene Manifest includes a list of Scene information301. The Scene301refers to a list of Components302that separately describe processing information and types of media assets that comprise Scene301. Components302refer to Assets303that further refer to Base Layers304and Attribute Enhancement Layers305. FIG.4describes an embodiment of the encompassing media format for untimed media as follows. The Scene Information401is not associated with a start and end duration according to a clock. Scene Information401refers to a list of Components402that separately describe processing information and types of media assets that comprise Scene401. Components402refer to Assets403(e.g., visual, audio, and haptic assets) that further refer to Base Layers404and Attribute Enhancement Layers405. Furthermore, Scene401refers to other Scenes401that are for untimed media. Scene401also refers to a timed media scene. FIG.5illustrates an embodiment of Process500to synthesize an ingest format from natural content. Camera unit501uses a single camera lens to capture a scene of a person. Camera unit502captures a scene with five diverging fields of view by mounting five camera lenses around a ring-shaped object. The arrangement in502is an exemplary arrangement commonly used to capture omnidirectional content for VR applications. Camera unit503captures a scene with seven converging fields of view by mounting seven camera lenses on the inner diameter portion of a sphere. The arrangement503is an exemplary arrangement commonly used to capture light fields for light field or holographic immersive displays. Natural image content509is provided as input to Synthesis Module504that may optionally employ a Neural Network Training Module505using a collection of Training Images506to produce an optional Capture Neural Network Model508. Another process commonly used in lieu of training process505is Photogrammetry. If model508is created during process500depicted inFIG.5, then model508becomes one of the assets in the Ingest Format507for the natural content. Exemplary embodiments of the Ingest Format507include MPI and MSI. FIG.6illustrates an embodiment of a Process600to create an ingest format for synthetic media, e.g., computer-generated imagery. 
LIDAR Camera601captures Point Clouds602of a scene. CGI tools, 3D modelling tools, or other animation processes to create synthetic content are employed on Computer603to create604CGI Assets over a network. Motion Capture Suit with Sensors605A is worn by Actor605to capture a digital recording of the motion of actor605to produce animated MoCap Data606. Data602,604, and606are provided as input to Synthesis Module607which likewise may optionally use a neural network and training data to create a neural network model (not shown inFIG.6). The techniques for representing and streaming heterogeneous immersive media described above can be implemented as computer software using computer-readable instructions and physically stored in one or more computer-readable media. For example,FIG.7shows a computer system700suitable for implementing certain embodiments of the disclosed subject matter. The computer software can be coded using any suitable machine code or computer language that may be subject to assembly, compilation, linking, or like mechanisms to create code comprising instructions that can be executed directly, or through interpretation, micro-code execution, and the like, by computer central processing units (CPUs), Graphics Processing Units (GPUs), and the like. The instructions can be executed on various types of computers or components thereof, including, for example, personal computers, tablet computers, servers, smartphones, gaming devices, internet of things devices, and the like. The components shown inFIG.7for computer system700are exemplary in nature and are not intended to suggest any limitation as to the scope of use or functionality of the computer software implementing embodiments of the present disclosure. Neither should the configuration of components be interpreted as having any dependency or requirement relating to any one or combination of components illustrated in the exemplary embodiment of a computer system700. Computer system700may include certain human interface input devices. Such a human interface input device may be responsive to input by one or more human users through, for example, tactile input (such as: keystrokes, swipes, data glove movements), audio input (such as: voice, clapping), visual input (such as: gestures), or olfactory input (not depicted). The human interface devices can also be used to capture certain media not necessarily directly related to conscious input by a human, such as audio (such as: speech, music, ambient sound), images (such as: scanned images, photographic images obtained from a still image camera), and video (such as two-dimensional video, three-dimensional video including stereoscopic video). Input human interface devices may include one or more of (only one of each depicted): keyboard701, mouse702, trackpad703, touch screen710, data-glove (not depicted), joystick705, microphone706, scanner707, camera708. Computer system700may also include certain human interface output devices. Such human interface output devices may stimulate the senses of one or more human users through, for example, tactile output, sound, light, and smell/taste.
Such human interface output devices may include tactile output devices (for example tactile feedback by the touch-screen710, data-glove (not depicted), or joystick705, but there can also be tactile feedback devices that do not serve as input devices), audio output devices (such as: speakers709, headphones (not depicted)), visual output devices (such as screens710to include CRT screens, LCD screens, plasma screens, OLED screens, each with or without touch-screen input capability, each with or without tactile feedback capability, some of which may be capable of outputting two dimensional visual output or more than three dimensional output through means such as stereographic output; virtual-reality glasses (not depicted), holographic displays and smoke tanks (not depicted)), and printers (not depicted). Computer system700can also include human accessible storage devices and their associated media such as optical media including CD/DVD ROM/RW720with CD/DVD or the like media721, thumb-drive722, removable hard drive or solid state drive723, legacy magnetic media such as tape and floppy disc (not depicted), specialized ROM/ASIC/PLD based devices such as security dongles (not depicted), and the like. Those skilled in the art should also understand that the term "computer readable media" as used in connection with the presently disclosed subject matter does not encompass transmission media, carrier waves, or other transitory signals. Computer system700can also include an interface to one or more communication networks. Networks can for example be wireless, wireline, or optical. Networks can further be local, wide-area, metropolitan, vehicular and industrial, real-time, delay-tolerant, and so on. Examples of networks include local area networks such as Ethernet, wireless LANs, cellular networks to include GSM, 3G, 4G, 5G, LTE and the like, TV wireline or wireless wide area digital networks to include cable TV, satellite TV, and terrestrial broadcast TV, vehicular and industrial to include CANBus, and so forth. Certain networks commonly require external network interface adapters that attach to certain general purpose data ports or peripheral buses (749) (such as, for example, USB ports of the computer system700); others are commonly integrated into the core of the computer system700by attachment to a system bus as described below (for example an Ethernet interface into a PC computer system or a cellular network interface into a smartphone computer system). Using any of these networks, computer system700can communicate with other entities. Such communication can be uni-directional, receive only (for example, broadcast TV), uni-directional send-only (for example CANbus to certain CANbus devices), or bi-directional, for example to other computer systems using local or wide area digital networks. Certain protocols and protocol stacks can be used on each of those networks and network interfaces as described above. Aforementioned human interface devices, human-accessible storage devices, and network interfaces can be attached to a core740of the computer system700. The core740can include one or more Central Processing Units (CPU)741, Graphics Processing Units (GPU)742, specialized programmable processing units in the form of Field Programmable Gate Arrays (FPGA)743, hardware accelerators for certain tasks744, and so forth.
These devices, along with Read-only memory (ROM) 745, Random-access memory 746, and internal mass storage such as internal non-user accessible hard drives, SSDs, and the like 747, may be connected through a system bus 748. In some computer systems, the system bus 748 can be accessible in the form of one or more physical plugs to enable extensions by additional CPUs, GPUs, and the like. The peripheral devices can be attached either directly to the core's system bus 748, or through a peripheral bus 749. Architectures for a peripheral bus include PCI, USB, and the like. CPUs 741, GPUs 742, FPGAs 743, and accelerators 744 can execute certain instructions that, in combination, can make up the aforementioned computer code. That computer code can be stored in ROM 745 or RAM 746. Transitional data can also be stored in RAM 746, whereas permanent data can be stored, for example, in the internal mass storage 747. Fast storage and retrieval from any of the memory devices can be enabled through the use of cache memory, which can be closely associated with one or more of CPU 741, GPU 742, mass storage 747, ROM 745, RAM 746, and the like. The computer readable media can have computer code thereon for performing various computer-implemented operations. The media and computer code can be those specially designed and constructed for the purposes of the present disclosure, or they can be of the kind well known and available to those having skill in the computer software arts. As an example and not by way of limitation, the computer system having architecture 700, and specifically the core 740, can provide functionality as a result of processor(s) (including CPUs, GPUs, FPGAs, accelerators, and the like) executing software embodied in one or more tangible, computer-readable media. Such computer-readable media can be media associated with user-accessible mass storage as introduced above, as well as certain storage of the core 740 that is of a non-transitory nature, such as core-internal mass storage 747 or ROM 745. The software implementing various embodiments of the present disclosure can be stored in such devices and executed by core 740. A computer-readable medium can include one or more memory devices or chips, according to particular needs. The software can cause the core 740, and specifically the processors therein (including CPU, GPU, FPGA, and the like), to execute particular processes or particular parts of particular processes described herein, including defining data structures stored in RAM 746 and modifying such data structures according to the processes defined by the software. In addition or as an alternative, the computer system can provide functionality as a result of logic hardwired or otherwise embodied in a circuit (for example: accelerator 744), which can operate in place of or together with software to execute particular processes or particular parts of particular processes described herein. Reference to software can encompass logic, and vice versa, where appropriate. Reference to computer-readable media can encompass a circuit (such as an integrated circuit (IC)) storing software for execution, a circuit embodying logic for execution, or both, where appropriate. The present disclosure encompasses any suitable combination of hardware and software. FIG. 8 illustrates an exemplary Network Media Distribution System 800 that supports a variety of legacy and heterogeneous immersive-media capable displays as client end-points. Content Acquisition Module 801 captures or creates the media using example embodiments in FIG. 6 or FIG. 5.
Ingest formats are created in Content Preparation Module 802 and then are transmitted to one or more client end-points 804 in a network media distribution system using Transmission Module 803. Gateways may serve customer premise equipment to provide network access to various client end-points for the network. Set Top Boxes may also serve as customer premise equipment to provide access to aggregated content by the network service provider. Radio Demodulators may serve as mobile network access points for mobile devices (e.g., as with Mobile Handset and Displays). In one or more embodiments, Legacy 2D Televisions may be directly connected to gateways, set-top boxes, or WiFi routers. A computer laptop with a legacy 2D display may be a client end-point connected to a WiFi Router. A Head Mounted 2D (raster-based) Display may also be connected to a router. A Lenticular Light Field Display may be connected to a gateway. Such a display may be comprised of local Compute GPUs, storage devices, and a Visual Presentation Unit that creates multiple views using a ray-based lenticular optical technology. A Holographic Display may be connected to a set top box and may include local compute CPUs, GPUs, storage devices, and a Fresnel pattern, wave-based holographic Visualization Unit. An Augmented Reality Headset may be connected to a radio demodulator and may include a GPU, a storage device, a battery, and a volumetric Visual Presentation Component. A Dense Light Field Display may be connected to a WiFi router and may include multiple GPUs, CPUs, and storage devices; an Eye Tracking Device; a camera; and a dense ray-based light field panel. FIG. 9 illustrates an embodiment of an Immersive Media Distribution Module 900 that is capable of serving legacy and heterogeneous immersive media-capable displays as previously depicted in FIG. 8. Content is either created or acquired in Module 901, which is further embodied in FIG. 5 and FIG. 6 for natural and CGI content respectively. Content 901 is then converted into an ingest format using the Create Network Ingest Format Module 902. Module 902 is likewise further embodied in FIG. 5 and FIG. 6 for natural and CGI content respectively. The ingest media format is transmitted to the network and stored on Storage Device 903. Optionally, the Storage Device may reside in the immersive media content producer's network and be accessed remotely by the Immersive Media Network Distribution Module (not numbered), as depicted by the dashed line that bisects 903. Client and application specific information is optionally available on a remote Storage Device 904, which may optionally exist remotely in an alternate “cloud” network. As depicted in FIG. 9, a Client Interface Module 905 serves as the primary source and sink of information to execute the major tasks of the distribution network. In this particular embodiment, Module 905 may be implemented in a unified format with other components of the network. Nevertheless, the tasks depicted by Module 905 in FIG. 9 form essential elements of the disclosed subject matter. Module 905 receives information about the features and attributes of Client 908, and furthermore collects requirements regarding the application currently running on 908. This information may be obtained from Device 904, or, in an alternate embodiment, may be obtained by directly querying the client 908. In the case of a direct query to client 908, a bi-directional protocol (not shown in FIG. 9) is assumed to be present and operational so that the client may communicate directly with the interface module 905.
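The direct query path can be illustrated with a short sketch of one possible profile exchange between interface module 905 and client 908. This is a minimal sketch only: the message types, field names, and the ClientProfile structure are assumptions introduced for illustration, since the disclosure does not define a wire format for the bi-directional protocol.

```python
# Minimal sketch of the direct client query described for FIG. 9.
# All message and field names are hypothetical; no wire format is
# specified by the disclosure for the client/interface protocol.
from dataclasses import dataclass, field
from typing import List
import json

@dataclass
class ClientProfile:
    client_token: str
    application_token: str
    compute: str                 # e.g., "GPU" or "CPU-only"
    storage_mb: int
    battery_percent: int
    neural_network_model_tokens: List[str] = field(default_factory=list)

def build_profile_request() -> bytes:
    """Interface module 905 -> client 908: request features, attributes,
    current operating status, and any locally available neural network models."""
    return json.dumps({"type": "PROFILE_REQUEST"}).encode()

def build_profile_response(profile: ClientProfile) -> bytes:
    """Client 908 -> interface module 905: report the requested profile."""
    return json.dumps({"type": "PROFILE_RESPONSE", **profile.__dict__}).encode()

if __name__ == "__main__":
    request = build_profile_request()
    profile = ClientProfile("client-abc", "app-xyz", "GPU", 8192, 75, ["nn-model-1"])
    response = build_profile_response(profile)
    print(request)
    print(response)
```

In practice the same exchange could be carried over any bi-directional transport available between the network and the client; the sketch only fixes the information that FIG. 9 says the interface module collects.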
Interface module 905 also initiates and communicates with Media Adaptation and Fragmentation Module 910, which is described in FIG. 10. As ingest media is adapted and fragmented by Module 910, the media is optionally transferred to an intermediate storage device depicted as the Media Prepared for Distribution Storage Device 909. As the distribution media is prepared and stored in device 909, interface module 905 ensures that Immersive Client 908, via its Network Interface 908B, either receives the distribution media and corresponding descriptive information 906 through a “push” request, or Client 908 itself may initiate a “pull” request of the media 906 from Storage Device 909. Immersive Client 908 may optionally employ GPUs (or CPUs, not shown) 908C. The Distribution Format of the media is stored in Client 908's Storage Device or Storage Cache 908D. Finally, Client 908 visually presents the media via its Visualization Component 908A. Throughout the process of streaming the immersive media to Client 908, the Interface Module 905 will monitor the status of the Client's progress via the Client Progress and Status Feedback Channel 907. FIG. 10 depicts a particular embodiment of a Media Adaptation Process so that the ingested source media may be appropriately adapted to match the requirements of the Client 908. Media Adaptation Module 1001 is comprised of multiple components that facilitate the adaptation of the ingest media into an appropriate distribution format for Client 908. These components should be regarded as exemplary. In FIG. 10, Adaptation Module 1001 receives as input: Network Status 1005, to track the current traffic load on the network; Client 908 information, including the Attributes and Features Description, the Application Features and Description, and the Application Current Status; and a Client Neural Network Model (if available), to aid in mapping the geometry of the client's frustum to the interpolation capabilities of the ingest immersive media. Adaptation Module 1001 ensures that the adapted output, as it is created, is stored into a Client-Adapted Media Storage Device 1006. Adaptation Module 1001 employs a Renderer 1001B or a Neural Network Processor 1001C to adapt the specific ingest source media to a format that is suitable for the client. Neural Network Processor 1001C uses Neural Network Models in 1001A. Examples of such a Neural Network Processor 1001C include the Deepview neural network model generator as described in MPI and MSI. If the media is in a 2D format, but the client must have a 3D format, then the Neural Network Processor 1001C can invoke a process to use highly correlated images from a 2D video signal to derive a volumetric representation of the scene depicted in the video. An example of such a process could be the Neural Radiance Fields from One or Few Images process developed at the University of California, Berkeley. An example of a suitable Renderer 1001B could be a modified version of the OTOY Octane renderer (not shown), which would be modified to interact directly with the Adaptation Module 1001. Adaptation Module 1001 may optionally employ Media Compressors 1001D and Media Decompressors 1001E depending on the need for these tools with respect to the format of the ingest media and the format required by Client 908.
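The selection between Renderer 1001B and Neural Network Processor 1001C, together with the optional use of Media Decompressors 1001E and Media Compressors 1001D, can be summarized with the following sketch. The format tags, class, and function names are hypothetical and illustrate only one possible decision flow; the actual Adaptation Module 1001 is not limited to this structure.

```python
# Rough sketch of the adaptation decision described for Module 1001 (FIG. 10).
# Format tags, names, and the step labels are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class ClientRequirements:
    target_format: str          # e.g., "3D_VOLUMETRIC" or "2D_RASTER"
    has_neural_network_model: bool

def adapt_asset(ingest_format: str, compressed: bool, client: ClientRequirements) -> list:
    """Return an ordered list of processing steps for one ingest asset."""
    steps = []
    if compressed:
        steps.append("media_decompressor_1001E")   # decompress the ingest media first
    if ingest_format == "2D_RASTER" and client.target_format == "3D_VOLUMETRIC":
        # 2D ingest but a 3D client: infer a volumetric representation
        # from correlated 2D views (neural network processor 1001C).
        steps.append("neural_network_processor_1001C")
    else:
        # Otherwise, re-render the asset into the client's target format.
        steps.append("renderer_1001B")
    steps.append("media_compressor_1001D")          # recompress for distribution
    return steps

if __name__ == "__main__":
    client = ClientRequirements(target_format="3D_VOLUMETRIC", has_neural_network_model=True)
    print(adapt_asset("2D_RASTER", compressed=True, client=client))
```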
FIG. 11 depicts the Adapted Media Packaging Module 1103 that ultimately converts the Adapted Media from Media Adaptation Module 1101 of FIG. 10, now residing on Client Adapted Media Storage Device 1102. The Packaging Module 1103 formats the Adapted Media from Module 1101 into a robust distribution format, for example, the exemplary formats shown in FIG. 3 or FIG. 4. Manifest Information 1104A provides Client 908 with a list of the scene data that it can expect to receive and also provides a list of Visual Assets and Corresponding Metadata, and Audio Assets and Corresponding Metadata. FIG. 12 depicts a Packetizer Module 1202 that “fragments” the adapted media 1201 into individual Packets 1203 suitable for streaming to Client 908. The components and communications shown in FIG. 13 for sequence diagram 1300 are explained as follows: Client end-point 1301 initiates a Media Request 1308 to Network Distribution Interface 1302. The request 1308 includes information to identify the media that is requested by the client, either by URN or other standard nomenclature. The Network Distribution Interface 1302 responds to request 1308 with Profiles Request 1309, which requests that client 1301 provide information about its currently available resources (including compute, storage, percent battery charged, and other information to characterize the current operating status of the client). Profiles Request 1309 also requests that the client provide one or more neural network models that can be used by the network for neural network inferencing to extract or interpolate the correct media views to match the features of the client's presentation system, if such models are available at the client. Response 1311 from client 1301 to interface 1302 provides a client token, application token, and one or more neural network model tokens (if such neural network model tokens are available at the client). The interface 1302 then provides client 1301 with a Session ID token 1311. Interface 1302 then requests Ingest Media Server 1303 with Ingest Media Request 1312, which includes the URN or standard nomenclature name for the media identified in request 1308. Server 1303 replies to request 1312 with response 1313, which includes an ingest media token. Interface 1302 then provides the media token from response 1313 in a call 1314 to client 1301. Interface 1302 then initiates the adaptation process for the media requested in 1308 by providing the Adaptation Interface 1304 with the ingest media token, client token, application token, and neural network model tokens. Interface 1304 requests access to the ingest media by providing server 1303 with the ingest media token at call 1316 to request access to the ingest media assets. Server 1303 responds to request 1316 with an ingest media access token in response 1317 to interface 1304. Interface 1304 then requests that Media Adaptation Module 1305 adapt the ingest media located at the ingest media access token for the client, application, and neural network inference models corresponding to the session ID token created at 1313. Request 1318 from interface 1304 to module 1305 contains the required tokens and session ID. Module 1305 provides interface 1302 with the adapted media access token and session ID in update 1319. Interface 1302 provides Packaging Module 1306 with the adapted media access token and session ID in interface call 1320. Packaging Module 1306 provides response 1321 to interface 1302 with the Packaged Media Access Token and Session ID. Module 1306 provides packaged assets, URNs, and the Packaged Media Access Token for the Session ID to the Packaged Media Server 1307 in response 1322. Client 1301 executes Request 1323 to initiate the streaming of media assets corresponding to the Packaged Media Access Token received in message 1321. The client 1301 then executes other requests and provides status updates in message 1324 to the interface 1302.
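The fragmentation performed by Packetizer Module 1202 can likewise be illustrated with a minimal sketch that splits one adapted media asset into sequenced packets. The payload size, header layout, and helper names below are illustrative assumptions; the disclosure does not mandate a particular packet structure.

```python
# Minimal sketch of a packetizer in the spirit of Module 1202 (FIG. 12).
# The 1,400-byte payload size and 8-byte header are illustrative choices only.
import struct
from typing import Iterator, List, Tuple

PAYLOAD_SIZE = 1400  # hypothetical payload size per packet, in bytes

def packetize(adapted_media: bytes, asset_id: int) -> Iterator[bytes]:
    """Fragment one adapted media asset into sequenced packets."""
    total = (len(adapted_media) + PAYLOAD_SIZE - 1) // PAYLOAD_SIZE
    for seq in range(total):
        payload = adapted_media[seq * PAYLOAD_SIZE:(seq + 1) * PAYLOAD_SIZE]
        # Header: asset id, sequence number, total packet count (illustrative).
        header = struct.pack("!HHI", asset_id, seq, total)
        yield header + payload

def depacketize(packets: List[bytes]) -> Tuple[int, bytes]:
    """Reassemble packets (assumed complete) back into the original asset."""
    packets = sorted(packets, key=lambda p: struct.unpack("!HHI", p[:8])[1])
    asset_id = struct.unpack("!HHI", packets[0][:8])[0]
    return asset_id, b"".join(p[8:] for p in packets)

if __name__ == "__main__":
    asset = bytes(range(256)) * 20           # stand-in for an adapted media asset
    pkts = list(packetize(asset, asset_id=7))
    assert depacketize(pkts) == (7, asset)
```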
FIG. 14 depicts the ingest media format and assets 1002 of FIG. 10 as optionally consisting of two parts: Immersive Media and Assets in 3D Format 1401 and 2D Format 1402. The 2D Format 1402 may be a single-view coded video stream, e.g., ISO/IEC 14496 Part 10 Advanced Video Coding, or it may be a coded video stream that contains multiple views, e.g., the Multi-view Compression Amendment to ISO/IEC 14496 Part 10. FIG. 15 depicts the carriage of neural network model information along with a coded video stream. In this figure, coded video stream 1501 includes the neural network model and corresponding parameters directly carried by one or more SEI messages 1501A. In coded video stream 1502, by contrast, the one or more SEI messages carry an identifier for the neural network model and its corresponding parameters. In the scenario for 1502, the neural network model and parameters are stored outside of the coded video stream, for example, in 1001A of FIG. 10. FIG. 16 depicts the carriage of neural network model information in the ingested Immersive Media and Assets 3D Format 1601 (originally depicted as item 1401 in FIG. 14). Media 1601 refers to Scenes 1 through N, depicted as 1602. Each Scene 1602 refers to Geometry 1603 and Processing Parameters 1604. Geometry 1603 may contain references 1603A to Neural Network models. Processing Parameters 1604 may also contain references 1604A to a neural network model. Both 1604A and 1603A may refer to a neural network model directly stored with the Scene, or to identifiers that refer to neural network models that reside outside of the ingested media, for example, network models stored in 1001A of FIG. 10.
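The two carriage modes of FIG. 15, embedding the model parameters in-band versus carrying only an identifier that points to externally stored parameters, can be sketched as follows. The payload type values, field layout, and helper names are hypothetical assumptions made for illustration and are not taken from any published SEI syntax.

```python
# Illustrative sketch of the two carriage modes of FIG. 15: an SEI-like
# message that embeds a neural network model (stream 1501) versus one that
# carries only a model identifier (stream 1502). Payload types, field
# layout, and names are hypothetical and not taken from any standard.
import struct

EMBEDDED_MODEL = 0xA0      # hypothetical payload type: model carried in-band
MODEL_IDENTIFIER = 0xA1    # hypothetical payload type: model stored externally

def sei_with_embedded_model(model_bytes: bytes) -> bytes:
    """Build an SEI-like message carrying the model parameters directly."""
    return struct.pack("!BI", EMBEDDED_MODEL, len(model_bytes)) + model_bytes

def sei_with_model_identifier(model_id: str) -> bytes:
    """Build an SEI-like message carrying only an identifier; the model
    itself resides outside the coded video stream (e.g., storage 1001A)."""
    ident = model_id.encode("utf-8")
    return struct.pack("!BI", MODEL_IDENTIFIER, len(ident)) + ident

def parse_sei(message: bytes):
    """Return (payload_type, payload) for either message variant."""
    payload_type, length = struct.unpack("!BI", message[:5])
    return payload_type, message[5:5 + length]

if __name__ == "__main__":
    in_band = sei_with_embedded_model(b"\x00\x01fake-model-weights")
    by_reference = sei_with_model_identifier("volumetric-interpolation-model-v1")
    print(parse_sei(in_band)[0] == EMBEDDED_MODEL)
    print(parse_sei(by_reference))
```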
Some embodiments may relate to a system, a method, and/or a computer readable medium at any possible technical detail level of integration. The computer readable medium may include a computer-readable non-transitory storage medium (or media) having computer readable program instructions thereon for causing a processor to carry out operations. The computer readable storage medium can be a tangible device that can retain and store instructions for use by an instruction execution device. The computer readable storage medium may be, for example, but is not limited to, an electronic storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the foregoing. A non-exhaustive list of more specific examples of the computer readable storage medium includes the following: a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or Flash memory), a static random access memory (SRAM), a portable compact disc read-only memory (CD-ROM), a digital versatile disk (DVD), a memory stick, a floppy disk, a mechanically encoded device such as punch-cards or raised structures in a groove having instructions recorded thereon, and any suitable combination of the foregoing. A computer readable storage medium, as used herein, is not to be construed as being transitory signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through a waveguide or other transmission media (e.g., light pulses passing through a fiber-optic cable), or electrical signals transmitted through a wire. Computer readable program instructions described herein can be downloaded to respective computing/processing devices from a computer readable storage medium or to an external computer or external storage device via a network, for example, the Internet, a local area network, a wide area network, and/or a wireless network. The network may comprise copper transmission cables, optical transmission fibers, wireless transmission, routers, firewalls, switches, gateway computers, and/or edge servers. A network adapter card or network interface in each computing/processing device receives computer readable program instructions from the network and forwards the computer readable program instructions for storage in a computer readable storage medium within the respective computing/processing device. Computer readable program code/instructions for carrying out operations may be assembler instructions, instruction-set-architecture (ISA) instructions, machine instructions, machine dependent instructions, microcode, firmware instructions, state-setting data, configuration data for integrated circuitry, or either source code or object code written in any combination of one or more programming languages, including an object oriented programming language such as Smalltalk, C++, or the like, and procedural programming languages, such as the “C” programming language or similar programming languages. The computer readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the latter scenario, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider). In some embodiments, electronic circuitry including, for example, programmable logic circuitry, field-programmable gate arrays (FPGA), or programmable logic arrays (PLA) may execute the computer readable program instructions by utilizing state information of the computer readable program instructions to personalize the electronic circuitry, in order to perform aspects or operations. These computer readable program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. These computer readable program instructions may also be stored in a computer readable storage medium that can direct a computer, a programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer readable storage medium having instructions stored therein comprises an article of manufacture including instructions which implement aspects of the function/act specified in the flowchart and/or block diagram block or blocks.
The computer readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other device to cause a series of operational steps to be performed on the computer, other programmable apparatus or other device to produce a computer implemented process, such that the instructions which execute on the computer, other programmable apparatus, or other device implement the functions/acts specified in the flowchart and/or block diagram block or blocks. The flowchart and block diagrams in the Figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods, and computer readable media according to various embodiments. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). The method, computer system, and computer readable medium may include additional blocks, fewer blocks, different blocks, or differently arranged blocks than those depicted in the Figures. In some alternative implementations, the functions noted in the blocks may occur out of the order noted in the Figures. For example, two blocks shown in succession may, in fact, be executed concurrently or substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems that perform the specified functions or acts or carry out combinations of special purpose hardware and computer instructions. It will be apparent that systems and/or methods, described herein, may be implemented in different forms of hardware, firmware, or a combination of hardware and software. The actual specialized control hardware or software code used to implement these systems and/or methods is not limiting of the implementations. Thus, the operation and behavior of the systems and/or methods were described herein without reference to specific software code—it being understood that software and hardware may be designed to implement the systems and/or methods based on the description herein. No element, act, or instruction used herein should be construed as critical or essential unless explicitly described as such. Also, as used herein, the articles “a” and “an” are intended to include one or more items, and may be used interchangeably with “one or more.” Furthermore, as used herein, the term “set” is intended to include one or more items (e.g., related items, unrelated items, a combination of related and unrelated items, etc.), and may be used interchangeably with “one or more.” Where only one item is intended, the term “one” or similar language is used. Also, as used herein, the terms “has,” “have,” “having,” or the like are intended to be open-ended terms. Further, the phrase “based on” is intended to mean “based, at least in part, on” unless explicitly stated otherwise. The descriptions of the various aspects and embodiments have been presented for purposes of illustration, but are not intended to be exhaustive or limited to the embodiments disclosed. 
Even though combinations of features are recited in the claims and/or disclosed in the specification, these combinations are not intended to limit the disclosure of possible implementations. In fact, many of these features may be combined in ways not specifically recited in the claims and/or disclosed in the specification. Although each dependent claim listed below may directly depend on only one claim, the disclosure of possible implementations includes each dependent claim in combination with every other claim in the claim set. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope of the described embodiments. The terminology used herein was chosen to best explain the principles of the embodiments, the practical application or technical improvement over technologies found in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein.